I was asked by the organizers of this great event to talk a bit today about peer review and quality, no doubt because discussions of open access often trigger an anxiety about quality -- because anyone could seemingly publish anything online! -- resulting in something of a retrenchment in traditional modes of prepublication evaluation. However, much of my recent work has focused on new modes of open review, and this is where my comments will largely be centered.
[[MediaCommons and NYU Press]] received a grant last year from the Andrew W. Mellon Foundation to conduct a study of open peer review practices. Our goal when we proposed this study was to work toward a set of technical specifications that would allow us to develop a platform to support open review.
However, in the process of conducting this study, we discovered that the real challenges that we face in thinking about open peer review practices in the humanities are far less technological in nature than they are [[social]].
I am not wholly surprised by this, in no small part because this conclusion bears much in common with the argument that I make about the future of scholarly communication more broadly in [[Planned Obsolescence]]; many of the challenges that we face with respect to communication in the academy today appear to require technological solutions, when in fact many of them are social or institutional in nature, requiring new ways of thinking, new ways of working together, and new ways of understanding ourselves and our work in order to create the change that we seek.
Where there was a bit more of a surprise for me was in the recognition of the [[complexity]] of that social landscape. Different communities of practice make very different uses of peer review, they have very different desires for its outcomes, and they bring very different values to its execution. Because of these critical differences, any platform that might be built to support open review would have to be extremely customizable, and thus extremely complex both to support and to maintain.
But let me back up a bit: the MediaCommons/NYU Press open review study was conducted by a stellar [[advisory group]] composed of established scholars from a range of humanities disciplines, with a range of investments in new modes of peer review, from advocacy to skepticism. The members of the group were Cheryl Ball, associate professor of new media studies at Illinois State; Dan Cohen, then associate professor of history and director of the Roy Rosenzweig Center for History and New Media at George Mason, and now director of the Digital Public Library of America; Cathy Davidson, the Ruth F. DeVarney Professor of English at Duke; Lisa Gitelman, professor of media and English at NYU; Nick Mirzoeff, professor of media, culture, and communication at NYU; and Sidonie Smith, professor of English and women's studies at Michigan. The meetings were facilitated by the grant leads: me and my MediaCommons co-founder, Avi Santo; Eric Zinner, NYU Press editor-in-chief; and Monica McCormick, NYU Digital Scholarly Publishing Officer.
We began our conversations intending to focus on the following [[issues]]: (a) the merits and pitfalls associated with open review; (b) the desirability of open review for certain types of communities and works; (c) the criteria and parameters needed to organize and conduct successful open reviews; (d) the technological requirements for meeting open review criteria; and (e) the technologies currently available that can help meet those requirements and criteria.
But we also started by asking a number of contextualizing [[questions]], which led our discussions in ways we didn't always expect:
The most central of these questions was [[What is peer review?]] This appears to be a very simple question with a very simple answer: peer review is the review of scholarship and other forms of scholarly activity by one's peers. Peer review plays a foundational role in the determination of scholarly authority, and it is relied upon in all of our major forms of assessment. Yet many scholars, across many fields, are today raising questions about peer review, about the purposes it serves, and about the degree to which those purposes, particularly with respect to new forms of digital scholarly communication, are being served as well as they might be.
Peer review is meant to accomplish a number of different things: for instance, it provides a means of critical [[feedback]] for scholars in the development of their work, and it provides a means of [[selection]] among the work of many scholars. At times review is meant to serve one or the other of those purposes, but most often it is meant to serve both. Peer review is in this sense meant to represent and further the best of scholarly values as they should be espoused, working rigorously to improve work, to determine the best work, and -- especially in the case of double-blind peer review -- to do so in the absence of biases based in rank, gender, class, race, institutional affiliation, and so forth.
However, a fair bit of [[criticism]] has been leveled at the existing peer review system, including concerns about the degree to which anonymous reviewers are granted "power without responsibility" and the potential failures of reviewer and inter-reviewer reliability. Moreover, some scholars have begun exploring the ways that the notion of the "peer" is defined in these processes, asking whether there might not be a better way.
Rather than limiting the category of the [["peer"]] to credentialed scholars, and even further, to scholars credentialed in a specific field or subfield -- a narrow and usually vertical community organization in which junior members must prove their worth to those who precede them, resulting in a tendency toward self-replication -- might we begin to understand the notion of the "peer" as one that is more horizontally organized, one based in affinity and, most importantly, in participation in community processes?
This is not to suggest that, in the age of open networks, a "peer" is becoming [["just anyone,"]] but rather to indicate that the status of peer might not pre-date participation in review processes. Instead, a scholar might have the potential to become a peer through the quality of that participation; as Peter Frishauf has noted, in this mode, peers can be selected on the basis of "experience and trustworthiness, not credentials." Such a change in our understanding of the "peer" points to the need to rethink our peer review practices, particularly with respect to scholarship that originates or is published online.
We began this study focused on the term [[peer-to-peer review]], intending to explore review practices and tools that would enable direct communication among a network of existing peers and publications, but this exploration of the shifting notion of the peer led us to think more about the ways that opening review practices to new kinds of peers might further some crucial values and goals in humanities-based scholarship. We aspire, in the humanities, to engage our students, our colleagues, and a range of broader publics in exploring aspects of our complex histories and cultures. Perhaps the crucial change in our engagements with one another lies in introducing new forms of openness --
But what do we mean when we talk about [[open peer review]], and what do we hope it will accomplish? Scholars already conduct much of their work in public; we present work at conferences, discuss it in workshops, share it with our colleagues, and so forth. Typically our publication review processes have operated off-stage, but in an era in which increasing numbers of scholars are sharing their work with the world via their blogs, new open publishing practices are challenging us to explore the possibilities that these practices present for our fields.
We recognize that many different understandings of the [["open"]] can apply in the scholarly context. Must everything be fully open to everyone, or are there degrees of openness that might be useful to different communities of practice at different times? Perhaps a frank discussion among a defined cluster of scholars would be particularly important at certain times, while a discussion opened to broader publics would be crucial at others. Perhaps we might imagine a review process that is open to volunteer participants while nonetheless still being conducted in private. Processes like these might require reviews to appear under their authors' real names, or there might be situations in which some degree of anonymity or pseudonymity remains useful. Moreover, these two forms of openness -- openness of access to the review process and openness of reviewer identity -- may be related, but they are not inseparable.
In thinking about the different valences of openness, we explored a range of existing experiments in the open review of humanities scholarship. The Institute for the Future of the Book worked with McKenzie Wark to post the draft of his book, [[Gamer Theory]], online in commentable form; while this experiment was not explicitly part of a peer review process, it nonetheless raised substantive feedback -- much of it from the gaming community -- that Wark employed in his revisions. The Institute generalized the platform they'd built for Gamer Theory into CommentPress, a WordPress plugin that allows a long text to be discussed paragraph by paragraph.
CommentPress was used, in its very early stages, by Cathy Davidson and David Theo Goldberg in the process of reviewing and revising their MacArthur report [["The Future of Learning Institutions in a Digital Age"]], as well as
by Noah Wardrip-Fruin, in seeking feedback on his manuscript for [[Expressive Processing]]. Both projects were greatly improved by the process, and comments from the open reviews influenced and were included in the revised final publications.
Further such experiments in open review have been conducted at [[MediaCommons Press,]] including the open review of my own book, Planned Obsolescence, as well as the two open review experiments conducted in collaboration with Shakespeare Quarterly. All of these texts were at the stage at which they would be submitted for traditional peer review -- and in fact my book was sent out for traditional review in addition to being opened for community discussion, while
the [[Shakespeare Quarterly]] reviews took place as the central part of a multi-stage process, involving editorial pre-selection and a final round of editorial board approval. In all of these cases, the locally targeted, threaded commenting facilitated by CommentPress, along with the underlying social features of WordPress, resulted in robust discussions aimed at helping the authors involved revise their work before final print publication.
Moreover, the CommentPress format allowed reviewers and authors not simply to respond to the text, but to respond to one another as well, and the authors have reported on the helpfulness of having a social context within which to understand and interpret reviewer comments.
Jack Dougherty and Kristen Nawrotzki similarly used CommentPress to facilitate the open review of the essays contained in their forthcoming volume, [[Writing History in the Digital Age,]] using the platform, as they say in their introduction, to help make "the normally behind-the-scenes development of the book more transparent."
Matt Gold likewise used CommentPress in the review process for the essays in [[Debates in the Digital Humanities]], as did
Louisa Stein and Kristina Busse for [[Sherlock and Transmedia Fandom]]. In these two cases, the review process was structured around a community working together, with essay drafts opened to the authors included in the collections for comment. Stein and Busse also invited two external, non-anonymous readers to participate in their review process, engaging directly with the community of authors as they discussed the volume's essays.
Other publications have used other means of opening their review processes; the journal [[postmedieval]] conducted a crowd review using a standard blog format for their special issue entitled "Becoming Media";
the journal [[Kairos]] uses an extensive multi-tiered editorial review process, which includes several phases of open communication among editorial board members and between editors and authors.
The site [[Digital Humanities Now]] uses PressForward's combination of crowd- and editorial-filtering methods to highlight some of the best work being done in digital humanities across the open web; those highlights are then reviewed for republication in the Journal of Digital Humanities.
These are just a few of the experiments we discussed. Assessing the [[success]] of review processes such as these presents certain challenges, which may highlight unspoken assumptions about traditional peer review: we assume, for instance, that a review process has been successful -- that reviewers responded to the texts under consideration in a forthright, scrupulous, critical manner, and that authors made use of this criticism in revision -- when good work results from it. In an open review, we have that same marker available -- is the work resulting from the process good? -- but we also have the history of the process itself available for examination. That availability raises several questions that we've never been able to ask before: How many comments would be "enough" in an open review? How many commenters? Are the commenters established or prestigious enough? Is the critical discussion in which those commenters engage sufficiently rigorous?
We believe that these questions of assessment will be addressed in part by projects such as the [[Open Annotation Collaboration]], which seeks to create technical standards and tools to enable the creation of web annotations that can be shared in multiple contexts, the [[Open Research and Contributor ID]] project (ORCID), which is working to develop a standard for the unique identification of scholarly authors, and [[Hypothes.is]], which is working to link open web annotation with reputation management; these projects together will enable open reviews to be linked to researcher IDs, creating a sense of those reviews’ context.
Similarly, a number of projects are seeking alternative means of accounting for the impact of scholarly research, including the work of the [[altmetrics]] group and projects such as [[ImpactStory]]. These projects might interact with a range of social reading platforms now in development to provide a suite of possibilities for articulating the results of open review.
And a suite of possibilities is what our advisory group finally decided we'll need -- a robust set of technologies that permit communities of practice to make crucial decisions about their values and policies and to find the best tools to support creating the kinds of participatory review processes they seek. As a result, our final report leans heavily toward providing a list of issues that communities of practice should consider, rather than specific recommendations that they should follow, as they establish and implement their open review processes.
For instance, communities of practice should articulate for themselves what the desired goals and outcomes of their review processes should be. How are works selected for evaluation? What is being evaluated -- in-process texts or finished texts; articles, monographs, or born-digital projects -- and for what purpose -- for development, for selection, to foster conversation, for credentialing, or for some combination of the above? What aspects of the work are to be evaluated, and at what levels -- from the sentence-level through questions of organization and structure to project design, methodology, and significance for the field -- and through what means -- commenting, rating, liking? Many of these questions seem obvious, and yet it is only in the prior determination of these standards that review communities can assess whether they have been met.
As I discussed earlier, openness can take several different forms. Options include public access to and participation in the review process; removing the anonymity amongst authors and reviewers; and establishing a means of greater back and forth between authors and reviewers and amongst reviewers. These options require careful consideration within communities of practice about the value of open representation of author and reviewer identities, the value of public participation, and the value of reciprocity in the review process.
Extending these considerations of openness, communities must similarly decide on the ground rules for collegial engagement: their expectations for civility, reciprocity, and response. Concerns raised about open review often suggest either that these processes will result in reviews that are insufficiently critical, or that they will devolve into the kinds of behavior we often see in online newspaper comments sections. In fact, neither of these things need be true, but creating an atmosphere conducive to collegial and yet serious engagement requires careful stewardship.
One of the largest problems cited in discussions of the traditional peer review process is the labor problem: first, that there is an ever-expanding quantity of this work that needs to be done, and second, that this work is radically unevenly distributed, with good citizens being called on again and again by editors desperate to get viable reviews in a timely fashion. In an open review process, the work done -- and not done -- by reviewers is visible. Even more, the work of review may also become the subject of review, as the community can evaluate the participation of its members not only as authors but also as reviewers. Communities, however, must decide how such review-of-the-reviewers will take place, how its results will be communicated, and what stakes it will have in the life of the community.
There are of course a variety of technologies that can help communities of practice meet their goals for open review, which we discuss in some detail in the full report. But we continue to believe that the most important systems with which such review practices engage are less technological than they are social. Perhaps most important among such social engagements for communities of practice considering open review processes will be figuring out how to articulate their values for themselves, and how their processes will support those values, in order that they might be further communicated and perhaps even defended to assessment bodies such as tenure committees and university administrations. Proponents of open review must find ways to situate their arguments about openness in relation to broader questions about the purposes of scholarly discourse, its potential for public impact, and the importance of visibility for the 21st century academic.
I've only been able to scratch the surface of this project in this talk, but I believe strongly that our most important conclusion is this: open review processes have a key role to play in modeling a conversational, collaborative discourse that not only harkens back to the humanities' long investment in critical dialogue as the essential core of intellectual labor, but also models a forward-looking approach to scholarly production in a networked era. Open review presents the possibility not only of getting traditional forms of scholarship into communication with broader audiences, but also of helping validate new kinds of scholarly output online. Making the process of assessment visible in a thoughtful and deliberate manner can only, we believe, help improve both the assessment and the work under evaluation.
Open Review and
Kathleen Fitzpatrick // @kfitz
kfitzpatrick at mla dot org
“Neon Open Sign,” Wikimedia Commons, 2005.
Wark, McKenzie. GAM3R 7H30RY. Institute for the Future of the Book, 2006.
Davidson, Cathy, and David Theo Goldberg. "The Future of Learning Institutions in a Digital Age." Institute for the Future of the Book, 2007.
Wardrip-Fruin, Noah. “EP 1.1: Media Machines.” Grand Text Auto, 2008.