Book Reviews as a Tool for Assessing Publisher Reputation

Matthew L. Jordy, Eileen L. McGrath, and John B. Rutledge

This article reports on the authors' efforts to develop a method of using book reviews to establish the reputations of publishers. The authors examined the quality of books published by de Gruyter, Greenwood, Doubleday, University of Georgia Press, and Louisiana State University Press as it is expressed in abstracts of book reviews published in the online version of Book Review Digest. The authors extracted a sample for each publisher from Book Review Digest, examined the sample, and compared each publisher sample against a control sample. Although it is true that most book reviews are positive, there are discernible variations in how reviewers express themselves about books. The study also looks at Choice as a source of book reviews, and briefly examines the relationship between price and quality. This study adds to the literature on the use of book reviews as a selection tool.

"And as for the publishers, it is they who build the fleet, plan the voyage, and sail on, facing wreck, till they find every possible harbor that will value their burden."—Clarence S. Day, The Story of the Yale University Press Told by a Friend (1920).

"Now Barabbas was a publisher."—Usually attributed to Byron.

Librarians like publishers—except when it comes time to pay the bill. Librarians use the reputation of the publisher as a prominent criterion in the selection of books. Indeed, selection criteria found in collection development policy statements place the reputation of the publisher high on the list.1 The ALA itself has issued a publication entitled Evaluating Information: A Basic Checklist that asks the question: What is the reputation of the publisher, producer, or distributor?2 Specialized studies of selection methodology also recommend the reputation of the publisher as a consideration. A study by John B. Rutledge and Luke Swindler cites "distinguished publisher" as a primary bibliographical consideration among other criteria for the selection of monographs.3 If a theorem can achieve creedal status in librarianship, surely this one has.

Matthew L. Jordy, former Technical Support Manager in Davis Library at the University of North Carolina at Chapel Hill, is a PDVC Lab Intern at Cisco Systems; e-mail: mljordy@email.unc.edu. Eileen L. McGrath is the Collection Management Librarian in the North Carolina Collection at the University of North Carolina at Chapel Hill; e-mail: levon@unc.edu. John B. Rutledge is the Bibliographer of West European Resources in Davis Library at the University of North Carolina at Chapel Hill; e-mail: jbr@email.unc.edu.

The Quest for Quality

Librarians affirm the importance of the publisher's reputation because they know how much the publisher can add to the quality of a published book, from the initial selection of manuscripts to the distribution for external review, the provision of editorial suggestions, and copyediting. A conscientious editor can significantly improve a manuscript in many ways.4 The reputation of the publisher serves as an indispensable shorthand in book selection.
Rarely is there enough time to assess each monograph for quality or to wait for reviews to appear. Indeed, the publisher's name often provides the only known quantity that selectors have to use in making the decision. It is a necessary shorthand because selection book-in-hand usually is not an option. Even book-in-hand selection might not work: librarians can ascertain that the book has a scholarly look to it, but assessing its relative value for its field among hundreds of competing titles is nearly impossible. Recent advances in Web technology now allow a selector to check the table of contents or to read a summary, but this is time-consuming. The author is, of course, another piece of information available to the selector, but the author's name may be completely unknown, as is the case with most first-time authors. One also can search to see whether the author has published other monographs, but the process soon becomes circular: with whom has he or she published, and what reputations do those publishers have?

Publisher reputation forms the basis for some approval plans. One of the most common types of plans is that designed to cover university presses; in effect, these plans give sanction to an entire class of publishers. But there are different "classes" of university presses. Reputations vary. Is the quality consistently high?

How Reputation Is Formed

If the ALA justly and necessarily approves using reputation as a selection category, how do book selectors go about forming an impression of a publisher's reputation? Estimates of reputation can come from the personal experience of research in a specific discipline. The authors concur with Paul Metz and John Stemmer that "most bibliographers' impressions of most publishers represent an amalgam of conscious conclusions and much more visceral impressions that have been gathered over years of academic training, personal reading, discussions with academic faculty and other librarians, inspection of library receipts, and use of book reviews."5 Most bibliographers will know a small number of publishing firms well, but a larger number much less well.

Because publisher reputation must be relied on for book selection, selectors should be familiar with a large number of publishers. But are there too many publishers for even a subject specialist to know them all through direct or personal experience? If this is the case (and the authors think it is), does this result in purchases based on mere brand-name recognition or, worse, ill-informed prejudices? The reputation of each press probably varies slightly among groups of people, with scholars holding one view, publishers another, and librarians yet another. Few librarians and still fewer scholars actually have the opportunity to examine hundreds of works by a single press with the intent of forming an opinion as to the quality of the product. As Metz and Stemmer point out, there is a "paucity of information" about publisher quality and reputation.6

A Trial Run

The high prices charged by certain publishers for their monographs strain our ability to see publishers as partners in the educational process. Selectors must make decisions about expensive items with the same paucity of information, although increasingly there are timely electronic sources of information about book content. Still, it is time-consuming to pursue this information.
With book prices for academic titles now averaging above $57.85, do expensive monographs warrant the cost?7 Initially, concerns about high prices for monographs led the authors to choose the publishing firm de Gruyter for study. With de Gruyter titles costing an average of $158.89, this imprint begged for investigation. The authors first undertook a crude test to see whether a promising method could be developed. They examined sixty book reviews of de Gruyter titles in the electronic version of Book Review Digest, one of the FirstSearch group of databases.8 Reading sixty mainly positive reviews suggested that a de Gruyter product will very likely be of high quality. More important, examination of a manageable number of book reviews persuaded the authors that developing a more rigorous method for using Book Review Digest as a tool to assess publisher quality would lead to significant results.

Goals and Method

The trial run encouraged the notion of developing a method for forming a reasonable opinion about the quality of the books published by several presses. The authors also wanted to test the hypothesis that certain publishers consistently produce high-quality monographs. Ideally, the method would allow for a comparison of the reputation of various types of presses publishing books suitable for research libraries. Finally, the authors wanted to examine the relationship between their results and those of earlier studies on book reviews.

Book Review Digest is truly a remarkable tool in that it allows users to gather many book reviews very quickly. The authors looked carefully at the list of journals from which reviews are culled. With few exceptions, these journals fit into the collecting profiles of larger academic libraries. Books reviewed by these sources are very likely to be acquired by academic libraries. Unfortunately, however, the tool itself put certain limits on the investigation. For example, Book Review Digest includes very few foreign-language titles. Thus, this study confines itself to English-language titles.

Many selectors make a mental list of presses thought to be "questionable," if not actually disreputable. It would have been interesting to examine the quality of certain controversial presses (which must remain nameless). Unfortunately, Book Review Digest contained too few reviews of their imprints to give a representative and statistically valid sample. This in itself is telling.

Previous Work on Book Reviews

A number of interesting library-focused studies of book reviews were identified before the study method was completed. Scholarly studies of book reviews have repeatedly raised troubling issues about relying on reviews for book selection. Important among them is the role of journal editors, who exercise a large measure of discretion (and power) in deciding which books are reviewed. There is never enough space to review all books published. One cannot exclude the possibility that both book review editors and reviewers are subtly and unconsciously influenced by the reputation of the publisher. In all likelihood, many "bad books" (however that might be defined) do not make it past these gatekeepers. A sad corollary to this rule is that some good books also are not reviewed because of inadequate marketing and promotion.

As mentioned earlier, book reviews tend to be positive.
Judith Serebnick discovered that "the great majority of reviews are favorable."9 "[M]ost reviews are found to be positive," Dana Watson reported.10 Even Choice, a tool for the library profession, recommended 75 percent of the books reviewed for purchase "with few or no reservations."11 In an editorial in Library Journal, Francine Fialkoff maintained that 85 to 90 percent of reviews are positive.12 Worse, in recent years, "grade inflation" has infiltrated the world of book reviews. Robert J. Greene and Charles D. Spornick found a decline in unfavorable book reviews from nine to five percent from the late 1980s to the early 1990s.13

Developing Objective Categories, Minimizing Subjectivity

Before settling on a workable ranking system to delineate quality, the authors experimented with various systems using several sample batches of reviews of works by diverse presses. Despite the fact that most books get positive reviews, it was possible to make some valid and useful distinctions. The authors devised a system that relies on broad distinctions they think are intuitive. The first category (rated 1) was reserved for the outstanding book. Such exemplary titles receive extremely positive or almost wholly laudatory reviews. Often a descriptive adjective such as "outstanding" or "magisterial" compels this categorization. Second, many books are perceived by reviewers as very good (rated 2), even though they may contain flaws in method, content, or style. However, more books fall into the average, adequate, or "pretty good" category (rated 3). Such works attract praise and criticism in about equal measure but can still be recommended by the reviewer. A fourth category was reserved for that small percentage of books that receive a mostly negative review (rated 4). Finally, a fifth category (rated 0) includes those reviews that are chiefly descriptive in nature or do not provide enough information to permit an assessment.14

Book reviews can be subjective and are not immune from politics. Early in the research, the authors had to learn to read the reviews in a way that neutralized the subjectivity they brought to the process as individuals. All three authors examined every review in each batch, and each made an independent assessment, marking 1, 2, 3, 4, or 0 on the reverse side of the review printout. When the assessments were completed, the authors met to resolve their differences. They first looked at those reviews where there were three different opinions and tried to reach a common understanding; after that, they looked at cases where two agreed and one disagreed. Avoiding subjective evaluations was not easy. The authors had to learn to focus on what the reviewer really thought about the book, rather than on their own opinions of it. Ironing out the differences through conversation allowed the authors to reach a common understanding of the categories and thus removed the arbitrariness that can creep into individual assessments.
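The five-point scale and the reconciliation step lend themselves to a simple tabulation. The short Python sketch below is illustrative only, not the authors' actual procedure (they worked from printouts and a spreadsheet): the category labels follow the scale described above, flag_disagreements merely surfaces the reviews the three readers would need to discuss (three-way splits first, then two-against-one cases), and the accession identifier in the example is hypothetical.

```python
from collections import Counter

# Rating scale described above:
# 1 = outstanding, 2 = very good, 3 = average/adequate ("pretty good"),
# 4 = mostly negative, 0 = chiefly descriptive / cannot be assessed.
CATEGORY_LABELS = {1: "outstanding", 2: "very good", 3: "average",
                   4: "mostly negative", 0: "descriptive only"}

def flag_disagreements(ratings_by_review):
    """Sort reviews by how far apart the three independent ratings are.

    `ratings_by_review` maps an accession number to the three readers'
    ratings, e.g. {"UGA-017": (2, 3, 3)} (hypothetical identifier).
    """
    three_way, two_vs_one, unanimous = [], [], []
    for accession, ratings in ratings_by_review.items():
        distinct = len(set(ratings))
        if distinct == 3:
            three_way.append(accession)    # discussed first
        elif distinct == 2:
            two_vs_one.append(accession)   # discussed second
        else:
            unanimous.append(accession)
    return three_way, two_vs_one, unanimous

def tally(final_ratings):
    """Counts and percentages per category for a batch of resolved ratings."""
    counts = Counter(final_ratings)
    total = len(final_ratings)
    return {cat: (counts[cat], round(100 * counts[cat] / total, 2))
            for cat in (1, 2, 3, 4, 0)}
```

Once the ratings for a batch are resolved, tally produces the kind of count-and-percentage breakdown reported in the tables that follow.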
Method

All the presses selected for this study publish books routinely purchased by university libraries. To some extent, they are in competition for libraries' scarce dollars. Although the authors' special interests influenced the selection of certain presses for the study, every attempt was made to apply the method to a range of presses. First, the authors developed a pool of publishers of professional interest to them. Next, they searched Book Review Digest by publisher. When a publisher had a large number of titles in the database, a representative sample was taken. The number of reviews per publisher for the firms of interest to the authors ranged from a low of 99 for de Gruyter to a high of 1,386 for Doubleday. Selecting eighty-one (in the case of de Gruyter, eighty-six) reviews per publisher produced a sample large enough to be representative of the whole. Individual reviews were extracted and printed out (one review per page), in reverse chronological order as presented in the database. Next, the authors examined the sample and discarded any duplicates. However, they did not exclude reviews of the same book by different reviewers. Each review received an accession number for purposes of identification. Then the authors read each review independently and evaluated it using the established criteria. With each batch, differences of opinion had to be sorted out and the rankings harmonized. After any differences were resolved, the data were entered into a spreadsheet. The ratings were stored in the spreadsheet along with the date of the review, the title of the book, the name of the publisher, the date of publication, and the journal in which the review first appeared.

Control Sample

It soon became evident that an objective standard of comparison was needed to put the assessment of the several types of presses in context. It was decided to draw a random sample of all the reviews in Book Review Digest to serve as a control batch. Using a table of random numbers, one hundred reviews were extracted to represent everything contained in the database without restriction by press. The control batch allowed the authors to think about the typical or average "grade" earned by books in Book Review Digest (see table 1).

TABLE 1. Control Sample*

Category        Count    Percentage
1                  10        10%
2                  22        22
3                  42        42
4                  10        10
0                  16        16
Totals:           100       100%

* Figures in some of the following tables may not total 100% due to rounding.

The control batch corroborated the findings of earlier studies that book reviews are overwhelmingly positive. Fully 74 percent of the books in the sample received a positive evaluation. What was most surprising was that 10 percent of the books in the sample were considered outstanding by the reviewers. This was taken to be strong evidence of "grade inflation" in book reviews. At the opposite end of the scale, the same percentage (10%) of books earned a mostly negative review. Many of the reviews (16%) simply could not be categorized because they concentrated on summarizing the book's content rather than critiquing it. Many book reviews do not, in fact, serve as critiques but, rather, simply announce the fact of publication. Nearly half (42%) of the control batch fell into the broad third category.
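Drawing the control batch and checking the figures above can be sketched in a few lines of Python. This is only an illustration under stated assumptions: random.sample stands in for the table of random numbers the authors actually used, all_review_ids is a hypothetical list of record identifiers, and the rating counts are simply those of table 1. The "positive" share sums categories 1 through 3, which is how the 74 percent figure cited above is obtained.

```python
import random
from collections import Counter

def draw_control_batch(all_review_ids, size=100, seed=None):
    """Stand-in for the table of random numbers: draw `size` review records
    at random, without restriction by press."""
    return random.Random(seed).sample(all_review_ids, size)

# Resolved ratings for the control batch, reconstructed from table 1
# (10, 22, 42, 10, and 16 reviews in categories 1, 2, 3, 4, and 0).
control = [1] * 10 + [2] * 22 + [3] * 42 + [4] * 10 + [0] * 16

counts = Counter(control)
positive_share = 100 * (counts[1] + counts[2] + counts[3]) / len(control)
print(positive_share)  # 74.0 -- the share of positively evaluated books
```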
University Presses

The university press has been defined as "an organization whose function [i]s to publish works which no one would read."15 (Precisely the books that academic libraries seek to acquire!) Because so many academic libraries routinely purchase university press titles, two university presses were included in the study. It is a fact of economic life that many university presses have developed specialties in regional publications, reasoning that they know and serve their local markets best. Natural curiosity and enlightened self-interest led the authors to choose two presses that have particular relevance for their own collections on the American South. The possibility that the results might contrast somewhat with those found for a large, international, academic publisher such as de Gruyter made the choice of two university presses attractive.

The University of Georgia Press and Louisiana State University Press have fairly large annual title productions, but not nearly as large as the behemoths Oxford and Cambridge, or Chicago, the largest American university press.16 Yankee Book Peddler reports that the average price of a University of Georgia Press title is $35.55; Louisiana State's books are relatively inexpensive at $26.14.17 Founded in 1935, LSU Press has a respectable output of seventy titles per year, with approximately one thousand titles in print. The press has "a special emphasis on southern history and literature."18

The University of Georgia Press was established in 1938 and admitted to the Association of American University Presses in 1940. In the past decade, it has published sixty to ninety new titles per year of literary criticism, American history, Southern studies, and scholarly monographs in related fields. Like many university presses, it has begun to broaden its offerings through the inclusion of memoirs, literary titles, and popular works on its home state. The authors selected this press for the study because of their interest in Southern studies and because of a sense that it might become as important a publisher in this field as LSU Press or their local favorite, the University of North Carolina Press.

Two University Presses Compared

Book Review Digest contained 429 reviews of University of Georgia Press publications (retrieval date: 11/25/97). Eighty-one reviews were selected for examination. Seven of these abstracts (8.6%) did not provide sufficient information to determine the reviewer's estimation of the book's quality. (These were either short reviews from Booklist and Choice or, surprisingly, longer reviews from the New York Times Book Review or the Times Literary Supplement.) The overwhelming majority of reviews (86.4%) were positive (see table 2). Four of the eighty-one reviews (almost 5%) ranked the book under consideration as outstanding or exceptional. The largest number of reviews (thirty-five), or 43 percent, fell into the second category, very good. Thirty-one reviews (38.3%) indicated that the book under consideration was of mixed quality, but still a worthwhile contribution. Only four reviews were mostly negative (almost 5%).

TABLE 2. University of Georgia Press Reviews

Category        Count    Percentage
1                   4       4.94%
2                  35      43.21
3                  31      38.27
4                   4       4.94
0                   7       8.64
Totals:            81     100%

The pattern for LSU Press titles reviewed differed slightly (see table 3). Eighty-one of 509 reviews in the database were selected (retrieval date: 11/09/97). Reviewers were clearer in stating their opinions about LSU books: only four of the eighty-one reviews selected for LSU did not contain sufficient information to allow for an assessment of the reviewer's opinion.
The overwhelming majority of reviews were positive, but only two of the titles (2.47 percent) merited a designation of outstanding—half as many as Georgia. The percentage of LSU reviews that fell into categories 1 and 2 was significantly lower than the comparable numbers for the University of Georgia Press. Many more titles from LSU fell into the third category. In fact, more than half of all the reviews considered the LSU books under review worthwhile, but flawed. Eight of LSU's reviews—twice the number for Georgia—received a mostly negative assessment. The negative reviews came from a broad range of publications, from The Economist to Booklist.

TABLE 3. Louisiana State University Press Reviews

Category        Count    Percentage
1                   2       2.47%
2                  24      29.63
3                  43      53.09
4                   8       9.88
0                   4       4.94
Totals:            81     100%

Editors struggle to produce high-quality books, and competition among university presses can be intense. Persons involved in scholarly publication probably will find these differences to be of greater significance than might appear at first glance. If a large number of titles from any university press fell into the fourth category, serious questions about that press's reputation and the quality of its list would have to be raised. Although the authors did not have time to examine a large number of university presses, it is probable that reviews of their publishing output would conform to one of these patterns. Publications of the "first-rank" university presses probably conform more closely to the de Gruyter pattern. Although this is admittedly speculation, the authors have provided a method that allows anyone to explore the reputation of a wide range of publishers within a short period of time.

Return to de Gruyter

Having refined the methodology, the authors reexamined more rigorously the reputation of de Gruyter as it is revealed by the reviews. The name de Gruyter refers to a family of publishers: Walter de Gruyter, which took over Mouton (now Mouton de Gruyter) in 1977, and Aldine Publishing Company (now Aldine de Gruyter) in 1978. It is the yearly output of these three publishers—approximately 350 titles per year—that is under discussion here. De Gruyter describes itself as an "international academic publishing house situated in Berlin," publishing in almost all fields of knowledge, primarily in English. Its titles tend to stay in print for long periods of time. Currently, the number of de Gruyter titles in print is about 12,000.19

A sample of eighty-six de Gruyter titles taken from Book Review Digest yielded the results shown in table 4.

TABLE 4. de Gruyter Reviews

Category        Count    Percentage
1                  13      15.12%
2                  23      26.74
3                  40      46.51
4                   7       8.14
0                   3       3.49
Totals:            86     100%

Fifteen percent of the titles in the sample fall into the outstanding category, a high percentage indeed! De Gruyter placed many more titles in the outstanding category than did either of the two university presses; no other publisher came close to this number. However, this high standard could not be maintained in the second category, in which reviewers adjudged only about 27 percent of de Gruyter publications to be very good. Although this is higher than the control batch, it is lower than the university presses and the academic trade presses. Just under half of de Gruyter's books (46.5%) merited only a middling ranking, whereas about 8 percent earned mostly negative comments from reviewers. Why do so many de Gruyter titles fall into the third category rather than the second?
In this regard, is de Gruyter an "ordinary publisher"? How does one evaluate this situation? Does de Gruyter lack sufficient editorial staff to shape a larger percentage of its titles into outstanding or very good works? What are the practical limits to a drive for quality? Perhaps the overall results have to do with the total volume of books published, excellence being harder to achieve in great quantities.

Greenwood

Greenwood Press is one of the five imprints of the Greenwood Publishing Group. Originally known as a reprint publisher, Greenwood Press now produces reference works and scholarly monographs in the humanities and the behavioral and social sciences. Well known to academic librarians, Greenwood was one of the publishers included in the Metz and Stemmer study. The average price of a Greenwood title, as reflected in Yankee's approval plan coverage, is $62.90.20

Table 5 shows the results for Greenwood.

TABLE 5. Greenwood Reviews

Category        Count    Percentage
1                   5       6.17%
2                  27      33.33
3                  36      44.44
4                  11      13.58
0                   2       2.47
Totals:            81     100%

Just over 6 percent of Greenwood's titles achieved a ranking of outstanding. Reviewers generally express themselves positively about Greenwood titles. Consequently, exactly one third (33.33%) of Greenwood's titles were rated in the very good (second) category, and 44.44 percent were rated to be of average quality (third category). Interestingly, a comparatively large share of Greenwood titles falls into the mostly negative category (13.6%). What editorial factors or economic considerations produce this kind of result? Is a lack of editorial oversight the cause, or is there a multiplicity of factors that could include such diverse matters as corporate profit margins or an insufficiently critical customer base?

Doubleday

Doubleday is an old, well-known commercial publisher of both popular and literary fiction, as well as general-interest nonfiction. It is now part of the Bantam Doubleday Dell Publishing Group, Inc., which is itself part of the international Bertelsmann AG. All the titles reviewed in this study carried the Doubleday imprint rather than the mass-market Bantam or popular-fiction Dell imprints.

On the whole, Doubleday titles receive good marks from reviewers (see table 6). Any editor could rejoice to see that about 74 percent of the firm's publications were positively reviewed. However, about 15 percent drew heavy fire from the critics. This is the highest percentage of fourth-category books reported in the study and higher than the control batch.

TABLE 6. Doubleday Reviews

Category        Count    Percentage
1                   2       2.47%
2                  19      23.46
3                  39      48.15
4                  12      14.81
0                   9      11.11
Totals:            81     100%

All Presses Viewed Together

When the various presses examined are compared with the control batch (which represents the generality of books), the differences tend to show up most strikingly at the extremes. Looking first at books that received a ranking of outstanding or very good (categories 1 and 2 taken together), one finds that 32 percent of the control batch fell into these two upper categories (see table 7). In this respect, Doubleday and LSU are typical, and Greenwood, de Gruyter, and Georgia perform much better. If the third and fourth categories are aggregated, Doubleday and LSU imprints have a higher concentration at the bottom than do Greenwood, de Gruyter, and Georgia. Looking at the fourth category alone, one notes that only the University of Georgia Press was able to achieve a significantly lower number here (4.94%) than the control sample (10%).

TABLE 7. All Presses in Study

Rating     Control   de Gruyter   Doubleday   Greenwood     LSU       UGA
1           10.00%     15.12%       2.47%       6.17%       2.47%     4.94%
2           22.00      26.74       23.46       33.33       29.63     43.21
3           42.00      46.51       48.15       44.44       53.09     38.27
4           10.00       8.14       14.81       13.58        9.88      4.94
0           16.00       3.49       11.11        2.47        4.94      8.64
Totals:    100.00     100.00      100.00      100.00      100.00    100.00
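The comparison above reduces to adding pairs of rows in table 7. The Python sketch below simply restates the published percentages; it computes nothing that is not already in the table, but it shows how quickly the aggregated comparison can be reproduced for any new press one cares to sample.

```python
# Percentages from table 7, keyed by press ({category: percent}).
table7 = {
    "Control":    {1: 10.00, 2: 22.00, 3: 42.00, 4: 10.00, 0: 16.00},
    "de Gruyter": {1: 15.12, 2: 26.74, 3: 46.51, 4: 8.14,  0: 3.49},
    "Doubleday":  {1: 2.47,  2: 23.46, 3: 48.15, 4: 14.81, 0: 11.11},
    "Greenwood":  {1: 6.17,  2: 33.33, 3: 44.44, 4: 13.58, 0: 2.47},
    "LSU":        {1: 2.47,  2: 29.63, 3: 53.09, 4: 9.88,  0: 4.94},
    "UGA":        {1: 4.94,  2: 43.21, 3: 38.27, 4: 4.94,  0: 8.64},
}

for press, pct in table7.items():
    top_two = pct[1] + pct[2]      # outstanding + very good
    bottom_two = pct[3] + pct[4]   # average + mostly negative
    print(f"{press:11s} top two: {top_two:6.2f}%   bottom two: {bottom_two:6.2f}%")
```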
Results for de Gruyter are complicated by the fact that most of the titles in the sample were in the social sciences. Social sciences titles tend to be reviewed slightly less favorably than humanities titles.21

The authors would very much like to see Book Review Digest expanded to include a larger number of review sources and more foreign-language titles. Librarians might also benefit from greater inclusion of titles in the hard sciences, where individual monographs can be even more expensive. Publications by some of the most expensive publishers in the world rarely appear in reviews covered by Book Review Digest.

Choice as a Source of Book Reviews

Because many of the reviews in the study came from Choice, it was possible to establish a profile of that journal as a source of book reviews. Table 8 shows the figures for Choice.

TABLE 8. Choice Reviews

Category        Count    Percentage
1                  11       8.8%
2                  41      32.8
3                  57      45.6
4                   9       7.2
0                   7       5.6
Totals:           125     100%

It is interesting to compare reviews in Choice with the reviews found in Book Review Digest as represented by the control batch. Choice reviewers are not far from the norm in most respects. They are no more lavish with praise than the typical reviewer: 8.8 percent of books in Choice were judged outstanding, compared to 10 percent in the control. However, Choice reviewers proved significantly more generous in praising a book as very good (32.8%) than the typical reviewer (22%). But even for Choice, most books are no more than pretty good (45.6%), just as they are for most reviewers (42%). It should be noted that few reviews in Choice fail to state an opinion.

Is There a Relationship between Price and Quality?

Is it a coincidence that the publisher with the highest percentage of outstanding books (de Gruyter) is also the most expensive publisher in the study? The authors do not pretend to be economists, but the high cost of books from northern Europe is surely due to several factors, not just to high standards for production and content. Although editorial attention to detail drives up production costs, European labor and distribution costs are other factors that must also be taken into account. The inelasticity of the market for books would have to figure in the discussion as well. If cost really did guarantee quality, many more of de Gruyter's titles should have fallen into the second category, yet three other presses (Greenwood, Georgia, and LSU) all "outscored" de Gruyter in this category. The "editorial manpower" at de Gruyter may not be adequate to produce a larger number of its publications in the first category. Likewise, is it a coincidence that the publisher with the least expensive books, LSU, had more books in the third category than the other presses? The lower average price may be due to the number of fiction titles in LSU's list.22 Because de Gruyter publishes no fiction or general-interest books, the higher cost for its books is for specialized scholarly materials.
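The question raised in this section can be eyeballed from figures already quoted: the Yankee Book Peddler average prices and the category shares in table 7. The Python sketch below only juxtaposes those published numbers (Doubleday is omitted because no average price is reported for it in the text); it is an illustration of the comparison, not an economic analysis.

```python
# Average prices quoted above (Yankee Book Peddler approval coverage)
# alongside category shares from table 7.
presses = [
    # (press, average price, % outstanding, % very good, % average)
    ("de Gruyter", 158.89, 15.12, 26.74, 46.51),
    ("Greenwood",   62.90,  6.17, 33.33, 44.44),
    ("UGA",         35.55,  4.94, 43.21, 38.27),
    ("LSU",         26.14,  2.47, 29.63, 53.09),
]

print(f"{'press':11s} {'price':>7s} {'cat 1':>7s} {'cat 2':>7s} {'cat 3':>7s}")
for name, price, cat1, cat2, cat3 in sorted(presses, key=lambda p: -p[1]):
    print(f"{name:11s} {price:7.2f} {cat1:6.2f}% {cat2:6.2f}% {cat3:6.2f}%")

# Price tracks the share of outstanding reviews (category 1) but not the
# share of very good reviews (category 2), where three cheaper presses
# outscore the most expensive one.
```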
Librarian Assessment of Quality versus Reviewer Assessment

Librarians queried by Metz and Stemmer collectively gave Greenwood a 3.50 on a five-point scale of perceived quality.23 Doubleday received 3.00. Is it revealing of librarians' attitudes toward publishing in general that the survey respondents were unwilling to label many presses as low in quality? (Only three presses in their list rated below 3.00—University Press of America, Haworth, and Mellen.) But librarians are likely right in their assessment. This is borne out by the reviews (most of which were surely written by nonlibrarians) in the study. Doubleday had the highest number of books in the fourth category and a low number in the top category. Reviews for Doubleday showed the highest percentage in the bottom two categories taken together.

Greenwood, at 3.50, is perceived by Metz and Stemmer's librarians to be slightly better than Doubleday (3.00). In the top two categories of the study taken together, Greenwood did better than Doubleday, better even than one university press. In this case, it seems that librarians ought to have ranked Greenwood somewhat higher than they did in the Metz and Stemmer survey. These mixed results (the librarians were right about Doubleday, but not quite so accurate about Greenwood) point to the need for greater familiarity with the presses and more accurate assessment capabilities on the part of librarians. The method presented in this study can provide that capability. It is hoped that more librarians will use this method to evaluate the quality of the output of publishers whose works they regularly purchase. Additional studies can lead to a more precise estimation of publisher reputation, given its inevitable, and important, role in book selection.

Notes

1. It has become easier to examine collection development policies now that libraries are mounting them on the World Wide Web. Almost one quarter of ARL libraries have some form of collection development statement on the Web, as do non-ARL academic libraries such as Mansfield University, and many public libraries.

2. Evaluating Information: A Basic Checklist (Chicago: ALA, 1994). Not paginated.

3. John B. Rutledge and Luke Swindler, "The Selection Decision—Defining Criteria and Establishing Priorities," College & Research Libraries 48 (Mar. 1987): 129.

4. For a warm testimonial on the work of a good editor, see Francis Paul Prucha, "Livia Appel and the Art of Copyediting," Wisconsin Magazine of History 79 (summer 1996): 364–80. Unfortunately, the number of in-house editors has declined and some authors are now seeking editorial help from other quarters. See "As Publishing Pressures Rise, So Do Errors," New York Times, June 29, 1998.

5. Paul Metz and John Stemmer, "A Reputational Study of Academic Publishers," College & Research Libraries 57 (May 1996): 235.

6. Ibid., 235–36.

7. Trialogue: Publishing News for Publishers, Vendors, and Librarians, no. 7 (spring 1998): 9. Trialogue is published by Yankee Book Peddler, Inc. Average cost figures are based on Yankee's U.S. approval plan coverage. The authors think that it thus accurately represents the average prices of books that academic libraries are likely to purchase.

8. Book Review Digest is one of the databases in OCLC's FirstSearch service. It covers a hundred periodicals, many of them scholarly and intellectual. The database contains approximately 403,000 records (as of August 28, 1998).
9. Judith Serebnick, "An Analysis of Publishers of Books Reviewed in Key Library Journals," Library and Information Science Research 6 (July–Sept. 1984): 301–2.

10. Dana Watson, "Reviewing: A Strategic Service," in A Service Profession, A Service Commitment: A Festschrift in Honor of Charles D. Patterson, ed. Connie Van Fleet and Danny P. Wallace (Metuchen, N.J.: Scarecrow Pr., 1992), 23.

11. Paula Wheeler Carlo and Allen Natowitz, "The Appearance of Praise in Choice Reviews of Outstanding and Favorably Assessed Books in American History, Geography, and Area Studies," Collection Management 20 (1996): 102.

12. Francine Fialkoff, "Too Many Positive Reviews?: Librarians/Publishers/Book Review Editors Disagree on the Answer," Library Journal 119 (Jan. 1994): 90.

13. Robert J. Greene and Charles D. Spornick, "Favorable and Unfavorable Book Reviews: A Quantitative Study," Journal of Academic Librarianship 21 (Nov. 1995): 252.

14. The reviews in Book Review Digest sometimes do not reveal how the reviewer really feels about the book being reviewed. The authors wondered if the full versions found in the source journals might contain better information. Some comparisons of the full version of the review with the summary presented in Book Review Digest convinced the authors that the full version usually did not offer much more evaluative information than the abstract. Tracking down the full version of hundreds of book reviews would have been beyond reasonable time limits for this project.

15. Cited by Sheldon Meyer in "University Press Publishing," in International Book Publishing: An Encyclopedia (New York: Garland, 1995), 355.

16. Ibid., 357.

17. Trialogue no. 7 (spring 1998): 6.

18. See "Bienvenue à LSU Press" at the LSU Press Web site: http://www.lsu.edu/guests/lsuprss/welcome.html. Site visited July 14, 1998.

19. See the publisher's own Web site at: http://www.degruyter.de/history.html.

20. Trialogue no. 7 (spring 1998): 9.

21. Greene and Spornick, "Favorable and Unfavorable Book Reviews," 252.

22. LSU Press has now discontinued regular publication of new fiction. See "Submissions" at: http://www.lsu.edu/guests/lsupress/index.html. Site visited July 14, 1998.

23. Metz and Stemmer, "A Reputational Study of Academic Publishers," 238, 240.