Open@VT

Open Access, Open Data, and Open Educational Resources

Category Archives: Open Peer Review

Dr. Malte Elson on Peer Review in Science

Few areas in scholarly publishing are undergoing as much examination and change as peer review. Healthy debates continue on different models of peer review, on incentivizing peer reviewers, and on various shades of open peer review, among many other issues. Recently, the second annual Peer Review Week was held, with several webinars available to view.

Since peer review is currently such a dynamic topic, the University Libraries and the Department of Communication are especially pleased to host a talk about peer review in science by Dr. Malte Elson of Ruhr University Bochum. Dr. Elson is a behavioral psychologist with a strong interest in meta-science issues. He has created some innovative outreach projects related to open science, including FlexibleMeasures.com, a site that aggregates flexible and unstandardized uses of outcome measures in research, and JournalReviewer.org (in collaboration with Dr. James Ivory in Virginia Tech’s Department of Communication), a site that aggregates information about journal peer review processes. He is also a co-founder of the Society for the Improvement of Psychological Science, which held its first annual conference in Charlottesville in June. Details and a description of his talk, which is open to the public, are below. Please join us! (For faculty desiring NLI credit, please register.)

Wednesday, October 12, 2016, 4:00 pm
Newman Library 207A

Is Peer Review a Good Quality Management System for Science?

Through peer review, the gold standard of quality assessment in scientific publishing, peers exert reciprocal influence on academic career trajectories and on the production and dissemination of knowledge. Considering its importance, it is sobering to realize how little is known about peer review’s effectiveness. Beyond serving as a widely used cachet of credibility, peer review lacks a precise description of its aims and purposes, and of how they can best be achieved.

Conversely, what we do know about peer review suggests that it does not work well: Across disciplines, there is little agreement between reviewers on the quality of submissions. Theoretical fallacies and grievous methodological issues in submissions are frequently not identified. Further, there are characteristics other than scientific merit that can increase the chance of a recommendation to publish, e.g. conformity of results to popular paradigms, or statistical significance.

This talk proposes an empirical approach to peer review, aimed at making evaluation procedures in scientific publishing evidence-based. It will outline ideas for a programmatic study of all parties (authors, reviewers, editors) and materials (manuscripts, evaluations, review systems) involved to ensure that peer review becomes a fair process, rooted in science, to assess and improve research quality.

The Winnower: An Interview with Josh Nicholson

One participant in our faculty and graduate student panels during Open Access Week at Virginia Tech was Josh Nicholson, founder of a new open access journal, The Winnower. Josh is a PhD candidate at Virginia Tech studying the role of the karyotype in cancer initiation and progression in the lab of Dr. Daniela Cimini. The Winnower will serve the sciences, with sections for different disciplines as well as a science and society section. The new journal will launch in January or February, and is currently looking for beta testers. Some buzz has already been created through a post on the AAAS blog and a Q&A, and The Winnower is active on Twitter and Facebook.

Josh Nicholson


Our e-mail interview occurred over several weeks. The questions and answers below have been only lightly edited, to fix the occasional typo and add an occasional link. I have also grouped similar topics together, so the questions are no longer in their original order.

How did The Winnower come about?

Ever since I began publishing science articles I have asked: does the publication system make sense? The short answer has always been: no. I think most scientists who have published an article would agree with this but they are often too involved in playing the so-called “tenure games” to do anything about it. Well I don’t have to worry about tenure yet (if ever) so I can focus on the problem at hand and try and actually do something about it.

Why the name?

A winnower is a tool used to separate the good from the bad. This is a main objective of The Winnower: to distinguish good pieces of research from flawed ones based on open post-publication review.

Will the journal use a Creative Commons license or allow authors to choose?

Content published with The Winnower will be licensed under a CC BY license.

Will the journal be able to accommodate data as well?

The journal will accommodate data, but it should be presented in the context of a paper. The Winnower should not act as a forum for publishing data sets alone. It is our feeling that data in the absence of theory is hard to interpret and thus may add undue noise to the site.

Will there be any screening process before an article appears?

No, articles can be posted on The Winnower immediately. This should not be taken as an endorsement that they are correct, but rather as a signal that they need to be reviewed, much like the “preprint” system. Of course, articles that are reviewed will be easily distinguishable from those that are not. The site is designed to encourage reviews of papers; indeed, it is why we are called The Winnower: to separate the bad from the good. To limit possible spamming of the system, as well as to sustain The Winnower, there will be a charge of $100 per publication.

How will you accommodate the need for fast review in an open peer review system?

The Winnower will strictly utilize open review. This means that all publications will be open to review and all reviews will be open to read. Publication in The Winnower will occur immediately after submission, and reviews will be open for variable amounts of time so that authors can make edits based on the reviews. It should be noted that papers will always remain open for review, so that a paper can accumulate reviews throughout its lifetime. Reviews can be solicited from peers upon submission, and papers can also be reviewed by The Winnower community we hope to build. Based on the system we are building, we believe the number of reviews should reflect the number of times the work is read.

At what point can an author say that a paper has been peer-reviewed?

An author can say it has been peer-reviewed as soon as a paper receives a review. But we hesitate to say that this paper has passed peer review because doing this causes some problems. Indeed, as you may be aware all work published now has “passed” peer review but that has done nothing to limit the high rates of irreproducibility. In fact, it may be a cause of it. We want to change the conversation from “passing” peer review to what is the percent confidence scientists have in this paper. To accomplish this we will be implementing semi-structured reviews (i.e. turning reviews into a measurable quantity).

How will reviews be a “measurable quantity”?

Much like the reviews performed by the National Institutes of Health, scoring will be implemented for different criteria. Obviously there will be no way to score free-form reviews, but various questions can be assigned a numerical score. PLOS Labs is working on establishing structured reviews and we have talked with them about this. We think it would be great if there were an industry standard to use for structured reviews, but until then we will implement the best system that we can think of.
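As an illustration, the idea of turning semi-structured reviews into the “percent confidence” figure mentioned above might be sketched as follows. This is a minimal sketch under stated assumptions: the criteria names, the 1–5 scale, and the simple averaging are hypothetical, not The Winnower’s actual rubric.

```python
# Hypothetical sketch: aggregate structured review scores into a single
# "percent confidence" figure. The rubric below is an assumption for
# illustration only.

CRITERIA = ("methodology", "statistics", "clarity")  # hypothetical rubric
MAX_SCORE = 5  # assumed 1-5 scale per criterion

def percent_confidence(reviews):
    """Average all criterion scores across reviews, scaled to 0-100.

    `reviews` is a list of dicts mapping criterion name -> numeric score.
    Free-form comments would be ignored; only structured scores count.
    """
    scores = [r[c] for r in reviews for c in CRITERIA if c in r]
    if not scores:
        return None  # no structured scores yet: the paper is unreviewed
    return 100 * sum(scores) / (len(scores) * MAX_SCORE)

reviews = [
    {"methodology": 4, "statistics": 3, "clarity": 5},
    {"methodology": 5, "statistics": 4, "clarity": 4},
]
print(percent_confidence(reviews))  # about 83.3
```

The design choice mirrored here is the shift the interview describes: instead of a binary “passed peer review,” every paper carries a running numeric summary that grows as reviews accumulate.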

What do you mean when you say that peer review can be a cause of irreproducibility?

Peer review, as it stands now, is more or less a pass/fail system. So, if you design 4 experiments to test a hypothesis and only 3 confirm it, you are likely to leave out the 1 experiment that did not fit your hypothesis in order to pass peer review. The problem is that ultimately you can’t hide from nature; she will reveal her truth one way or another. If there is no system to pass or fail, and you wish your paper to stand the test of time, you will include all results, even those that contradict your hypothesis. Moreover, editors are effectively selecting for simple studies, but very often studies are not simple and results are not 100% clear. If you can’t publish your work because it is honest but poses some questions, then eventually you will have to mold your work to what an editor wants and not what the data is telling you. There is a significant correlation between impact factor and misconduct, and it is my opinion that much of this stems from researchers bending the truth, even if ever so slightly, to get into these career-advancing publications.

How can you ensure that each paper is reviewed, or receives enough reviews?

Authors when submitting their research will be encouraged to invite reviewers directly to review their paper. Some may argue this will allow authors to invite their friends and the reviews will be biased. We think the transparency of reviews will limit this from happening. In addition to authors driving reviews to the site each article will display a prominent “write a review” button.

Isn’t bias often hidden? For example, if a submitter invites friends to review, wouldn’t that relationship be invisible to readers, and reviewers could go easy on criticism and exaggerate praise?

This is certainly a possible problem that could arise but it is not anything new with our system. Currently, scientists are allowed to suggest those that should and should not review their papers. Indeed, you heard this blatantly revealed during Open Access Week by a researcher [Note: Josh is referring to the faculty panel during which Dr. Good said some journals prompt authors to suggest reviewers]. Arguably there is an editor to limit any bias but the editor themselves could be biased one way or another. While The Winnower won’t eliminate bias (we are humans, after all) the content of the reviews can be evaluated by all because they will be readily accessible. [Note: reviewers could list competing interests in the template suggested on The Winnower’s blog.]

You recently wrote a blog post “Sexism in Science” that cites an article advocating abandoning secrecy. But other research concludes that double-blind review is best, and since even the article you cite mentions other studies in which female representation is better when gender is unknown, wouldn’t double-blind review do a better job of eliminating sexism?

Double-blind review is indeed better than single-blind review with regard to eliminating sexism in science, but this does not mean that it is the best. As far as I am aware, there has been no test between open review and double-blind review. Any instances of sexism that do occur in open review can be addressed and fixed because they can be exposed, unlike in closed review. In the Sexism in Science blog post I discuss a few cases of blatant sexism in science. In the end, many have been remedied because of the open dialogue that occurs on the internet.

Does open peer review mean that all authors and reviewers must reveal their real names?

Yes.

How will you ensure that reviewers are using their real names?

This is not easy, but we think that with the system we are building, reviewers will want to use their real names. Reviews will be assigned DOIs, and over time we hope to put the reviews on the same level as the research. Indeed, I can imagine researchers that specialize in reviewing and are rewarded for doing so. Full-time Winnowers, if you will. But regardless of whether a reviewer uses their real name or not, the transparency of reviews will discourage personal/inappropriate reviews. It is the serious criticisms/reviews that will be difficult for authors to respond to. I strongly believe that if you’re scared of open peer review, then we should be scared of your results.

Do you plan to use altmetrics on the site?

Yes, we will use various metrics on the site, including altmetrics. We want to shift the focus from the journal to the article itself and we think employing various article-level metrics is the best way to do this.

Have you decided on an altmetrics service and will some revenue go toward that?

Yes, we will be using Altmetric and yes some of the revenue will indeed go towards that.

At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish?

Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish because it is. What makes The Winnower different is the price we charge. Our price is much, much lower than what other journals charge, and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after, does not charge anything for its preprint service, but I would argue that its sustainability based on grants is questionable. We believe that authors should buy into this system, and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached on The Winnower and other revenue sources can be generated, then we would love to make publishing free, but at this moment it is not possible.

From what funds do you think most scientists will pay the $100 fee?

I believe that most academic scientists will pay the $100 fee with grant money. If they do not currently have grant money the fees could theoretically be paid for by departmental funds or even personal funds.

Is The Winnower a for-profit or non-profit enterprise, and are you registered as such?

The Winnower is a for-profit limited liability company.

Is there a preservation plan for the content in case the journal does not continue?

Yes, we will be using the CLOCKSS program.

Is it possible for an author (or journal staff) to withdraw an article?

Yes, it is possible to withdraw an article and it is also possible for us to retract the article if necessary.

Since many scientists do need to play “tenure games”, wouldn’t The Winnower’s lack of indexing, impact factor, etc. serve as a disincentive to submit or review?

Yes, this is certainly an obstacle The Winnower will have to face, but it is not only an obstacle for The Winnower; rather, it is an obstacle for the entire scientific community. We think we need to get away from judging scientists based upon impact factor or other measures of prestige, and we are not alone. The San Francisco Declaration on Research Assessment (DORA), which has been signed by nearly 10,000 researchers and publishers in less than a year, calls for new ways to evaluate researchers. As the community moves away from journal-level metrics and toward article-level metrics, The Winnower should be well positioned to thrive. Indeed, we will utilize many article-level metrics as well as information from the reviews themselves.

With most journals, if I submit a paper that is rejected, that information is private and I can re-submit elsewhere. In open review, with a negative review one can publicly lose face as well as lose the possibility of re-submitting the paper. Won’t this be a significant disincentive to submit?

This is precisely what we are trying to change. Currently, scientists can submit a paper numerous times, receive numerous negative reviews and ultimately publish their paper somewhere else after having “passed” peer review. If scientists prefer this system then science is in a dangerous place. By choosing this model, we as scientists are basically saying we prefer nice neat stories that no one will criticize. This is silly though because science, more often than not, is not neat and perfect. The Winnower believes that transparency in publishing is of the utmost importance. Going from a closed anonymous system to an open system will be hard for many scientists but I believe that it is the right thing to do if we care about the truth.

Is there anything else you would like to add?

The Winnower will also feature two sections called “The Grain” and “The Chaff.” The Grain will be short essays by authors of papers that have received 1,000 citations or have surpassed a specific Altmetric score. In these essays authors will describe the work and the story behind it (i.e., was it initially rejected, was it funded, where did the idea come from, etc.). They will be very similar to the former Citation Classics series run by Dr. Eugene Garfield. Indeed, Dr. Garfield has expressed much enthusiasm for The Winnower pursuing this. In parallel, we will be launching a section called The Chaff that highlights retracted papers. These pieces will be written by authors of retracted papers in order to really find out why studies failed or what led the authors to fabricate data, etc. We want to position papers published in The Chaff in a non-accusatory manner so that we may learn from them. The Chaff will not be a forum to castigate authors of retracted papers.

OA Week Event: Faculty and Graduate Student Panels

There were some excellent discussions last night during our Open Access Week faculty and graduate student panels. Our faculty panelists were Dr. Zachary Dresser (Religion and Culture), Dr. Deborah Good (Human Nutrition, Foods, and Exercise), and Dr. Joseph S. Merola (Chemistry).

Faculty Panel (from left, Zach Dresser, Debby Good, Joe Merola)


Both Dr. Good and Dr. Merola have had positive and negative experiences with open access journals. Dr. Good has had positive interactions with PLoS One as an author and peer reviewer, but criticized some hybrid open access journals for asking whether she wanted to take the open option before the paper had been peer reviewed, which could lead to a real or perceived bias due to the fee involved. She has also been asked to become editor of a journal on Beall’s list of predatory journals.

Dr. Merola serves on the editorial board of an open access journal and has had good experiences with open access in general. But he has submitted to another open access journal that would not withdraw a paper or remove him from its editorial board. Dr. Merola also noted that hybrid journals are unlikely to reduce subscription prices as open access uptake grows. Both Dr. Merola and Dr. Good noted that abstracting and indexing can be a problem with open access journals.

Dr. Dresser primarily writes in the field of history, and noted that humanities journals have shown little movement toward open access. The monograph is the gold standard in these fields, and he referred to the AHA controversy that was the subject of Monday’s ETD Panel. Dr. Good asked why ETDs (electronic theses and dissertations) could not be broken into separate articles as happens in the sciences. Dr. Dresser responded that though it happens on occasion, history is a very traditional field that places value on a story or narrative as a whole (thus the focus on monographs). Interestingly, Dr. Dresser is participating in an open textbook effort in American history.

Our graduate student panelists were Stefanie Georgakis (Ph.D. candidate in Public and International Affairs), Jennifer Lawrence (Ph.D. candidate, ASPECT), and Joshua Nicholson (Ph.D. candidate in Biological Sciences). Stefanie and Jennifer are co-editors of the Public Knowledge Journal, an interdisciplinary open access journal for publishing work by graduate students (at any university). Josh Nicholson is co-founder of The Winnower, an open access journal in the sciences that will be starting in 2014.

Graduate Student Panel (from left, Stefanie Georgakis, Jennifer Lawrence, Josh Nicholson)


Stefanie and Jennifer are struggling with the sustainability of PKJ, though not in the way you might think. While the journal is hosted on campus, the challenge is finding editors, peer reviewers, and submissions from a constantly changing population. PKJ is seeking a formal partnership to ensure its sustainability. Stefanie and Jennifer are also hoping to increase readership and provide for the preservation of journal content. They felt that alternative perspectives are well suited to open access, and that enabling open discussion of articles on the journal site can combat the inward-looking culture of some traditional journals. PKJ can help graduate students become familiar with the publishing environment, a need also identified by Dr. Good earlier in the evening.

Josh is critical of traditional publishing, and especially of peer review. The Winnower will serve the sciences as a low-cost ($100 article processing charge) open access journal that will also employ open peer review (he noted that the NIH’s PubMed Central has just begun post-publication review). Articles under review could be revised in response to reviews for the first 3 months; then assignment of a DOI would signify publication, though further reviews could still be added.

Attracting reviewers could be a problem, and he is open to using a centralized service such as PubPeer. Reviews would be structured, avoiding a problem Stefanie and others brought up of short, insubstantial reviews. Reviewers themselves would be rated (similar to Amazon), with top reviewers perhaps receiving credit toward article publication. While there has been some concern about the potential for racism or sexism in an open environment, the session attendees seemed to agree that transparency was the best option, particularly in fields with single-blind peer review where bias could occur but not be revealed.

I asked whether The Winnower would try to become a member of OASPA (Open Access Scholarly Publishers Association), but Josh replied that the journal’s model would not fit their guidelines (such as having an editorial board) or PubMed’s listing criteria, echoing the abstracting and indexing concern mentioned by Dr. Good and Dr. Merola earlier.

Thanks again to all of our panelists for a great discussion, and to the event organizers, Kiri Goldbeck DeBose and Purdom Lindblad.

Thanks to the University Libraries’ Event Capture Service for the videos. [Edit 2/28/14]

