Reviewing scientific manuscripts.
Abstract: AIM: To provide guidance on reviewing scientific manuscripts for publication. REVIEW: Scientific peer review is possibly one of the most important tasks a scientist is asked to do. It carries a great responsibility and needs to be conscientiously and thoroughly carried out. It is most important that a reviewer decides very quickly whether to undertake a review and, if so, completes the task. A review must at all times be objective, as positive as possible and seen as contributing to the advancement of our knowledge. This review provides suggestions as to best practice in reviewing a scientific manuscript in dentistry. The key aspects (accepting or declining a review, objectivity, approaches to reading and taking notes, assessment of methods, validity and reproducibility of results, and evaluation of the discussion) are covered in detail, together with the standards required. Suggestions are made as to how a review should be reported.

Key words: Scientific papers, reviewing.
Authors: Curzon, M.E.J.
Cleaton-Jones, P.E.
Pub Date: 08/01/2011
Publication: European Archives of Paediatric Dentistry (European Academy of Paediatric Dentistry), ISSN 1818-6300
Issue: August 2011; Volume 12, Issue 4
Background

There has been a significant growth in the number of manuscripts submitted to journals concerned with paediatric dentistry, as in all other branches of medicine and the basic sciences. Not only is there pressure on editors to accommodate this increase, but it also means that many more people are needed to review manuscripts submitted for publication. As the modern process of peer review requires that at least two independent experts examine the merits of a manuscript in detail, editors are continually seeking reviewers.

An editor needs to receive from a scientific reviewer an objective, detailed comment on the quality and merit of a submitted manuscript. It is of little use to an editor to receive comments such as 'This paper is no good' or conversely 'This is a good paper'. Neither says why a paper is good or bad, nor, more importantly, whether the paper could be improved or, even if good, whether any changes should be made to make it better or more readable.

Most reviewers start off with no training in the techniques of reviewing. For many of us reviewing starts when, out of the blue, one receives a letter or e-mail asking if a paper can be reviewed. Usually this request comes as a result of an editor checking the current literature and identifying a previous paper by the potential reviewer. It is therefore assumed that the reviewer has pertinent knowledge of the main subject of the paper to be reviewed. But if you have never carried out a review before, how is it to be done? What is required and to what degree of detail should the review be completed?

The aim of this paper is therefore to give some guidance for reviewers as to what is expected and some direction to completing a useful and objective review. This paper is based on our combined personal experience of 80 years in dental research that includes publishing, reviewing, teaching postgraduate students to carry out reviews, and editing scientific manuscripts.

Accepting or declining the review

When the request for a review first arrives it is usually in the form of a letter or e-mail providing a copy of the paper concerned and a request with a time limit in which to complete the task. Traditionally the time allowed is three to four weeks. Therefore the potential reviewer must consider two things.

* 'Do I know enough about the subject of the paper to carry out the request?'

* 'Can I complete the review in the time proposed?'

It is essential to be honest here and write back promptly if not enough is known about the subject or if the review cannot be completed within the time allowed. One also has to decide whether there is any conflict of interest that might unfairly bias a review.

In some circumstances the potential reviewer does know enough about the topic for a review to be possible, but because of other pressing commitments a review cannot be carried out within three to four weeks. In that case it is perfectly proper to reply to the editor that you are willing to complete the review but not until a date some weeks ahead. It is then up to the editor to accept a longer time period for the review or to seek a different reviewer. What is not acceptable is to say that the review will be carried out and then forget about it for many weeks until the editor has to follow up to find out what is happening. If you say you will do it within a specified time then, of course, you should make sure that you provide a review by that date.

It should be appreciated that if a reviewer accepts the task and does nothing, it is only after 3-4 weeks that the editor realises the review has not been done. The editor then has to decide whether to chase the reviewer or find another, delaying the process even further. In these authors' experience as editors, there have been instances of repeated promises that a review will be carried out, with the process extended over many months. Indeed, in some cases up to six months can be lost to procrastination by reviewers while an editor tries to obtain reviews.

Objectivity

It should go without saying that a reviewer must be entirely objective in writing a review. There have been instances where a reviewer has delayed or obstructed the publication of a manuscript because they were writing a similar paper and did not want any competition. Coupled with this, the manuscript for review must be kept confidential, and under no circumstances may any of its content be plagiarised or used in some way to the benefit of the reviewer. All this may sound completely unprofessional, but regrettably it has happened and does happen.

The prime questions for a reviewer to answer are whether the scientific research is valid, valuable and properly carried out, whether the methods are appropriate and reproducible, and whether the results are reliable. Any personal bias by a reviewer must be rigorously set aside. The only criterion is the scientific integrity of the work.

Carrying out a review

Obviously the first step is to read quickly through the manuscript to get an overall view of the subject matter and an indication of the quality of the work and its presentation. At that point ask yourself the questions:

* Do I want to do a review?

* Am I an appropriate reviewer?

* Can I do the review in the time requested?

* Am I free from any personal bias?

Some journals attempt to make the process anonymous by removing the title page from submitted manuscripts. Even if the title page is present, the reviewer should of course not introduce a bias into the review. Conflicts between scientists do occur and can be acrimonious; if that is the case, a potential reviewer should decline the review.

The review

A systematic approach is invaluable in completing the review, starting with the main body of the paper. Before considering the title and abstract it is advisable to read through the whole manuscript, so that the appropriateness of the title and the details provided in the abstract can be judged against what has been reported. It is quite surprising how often the title of a paper is not related to the work actually carried out.

Introduction

After reading this section thoroughly the reviewer needs to assess:

* Is it up to date in its coverage of the literature?

* Does it state an aim and null hypothesis?

* Is it too short?

* Is it too long?

* Is it critical?

* Does it reach a proper conclusion?

An introduction should be simply that: an introduction to the subject of the research, giving evidence of why the research was carried out and why it was felt to be needed. Many introductions are far too long, and a reviewer might well feel that the authors are only trying to show how well they have completed a literature search. The reviewer, who is expected to know the literature well, should be able to evaluate what has been presented and make an assessment accordingly. These days it is useful for a reviewer to go to PubMed or a similar reference site, enter the key words listed on the paper's title page and see how many previous papers have been published on the subject. This will indicate whether the manuscript reports entirely new material, confirms existing knowledge, adds a new insight or is just what is known as a 'pot-boiler', simply repeating what has already been extensively published. If the reviewer thinks that the work being reported is but a repetition of well-known work, they can recommend rejection or suggest a considerable reduction in the size of the paper, perhaps to a 'short communication' merely confirming well-known results.
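This quick literature check can also be done programmatically. The following is a minimal sketch, assuming Python's standard library and the public NCBI E-utilities 'esearch' endpoint; the key words shown are hypothetical placeholders, not taken from any particular manuscript.

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch: count how many PubMed records match a manuscript's key words.
# The key words below are hypothetical placeholders, for illustration only.
KEY_WORDS = ["dental caries", "fluoride varnish", "children"]

def pubmed_count(terms):
    """Return the number of PubMed records matching all of the given terms."""
    query = " AND ".join(f'"{t}"' for t in terms)
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": 0,  # only the total count is needed, not the record IDs
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as response:
        result = json.load(response)
    return int(result["esearchresult"]["count"])

if __name__ == "__main__":
    print(f"Previous PubMed papers matching the key words: {pubmed_count(KEY_WORDS)}")
```

A large count suggests the topic is already well covered; a very small count may indicate genuinely new material, although the reviewer's own knowledge of the field remains the deciding factor.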

An introduction must be critical of the existing literature and say if previous work, if any, has been thoroughly carried out. The text should lead to a justification of why the research was undertaken. When appropriate, this should lead to a statement of the aim of the research. Ideally there should also be a statement of a null hypothesis.

Materials and methods

This section of a manuscript is crucial and should be written in great detail. The reviewer must assess the text and ask the question: "Is there sufficient detail that any other researcher could carry out exactly the same study?" Far too often the author(s) of a manuscript under review, having spent months or years carrying out the research, know it inside out and neglect to provide sufficient detail because they assume too much of the reader. Regrettably, a vital step may sometimes be deliberately omitted to prevent others from repeating the work; or, worse, such an omission may even indicate that the work was never actually done in the first place.

It is thus very important for the reviewer to take the time to critically look at what has been written:

* Have ethical requirements been adhered to, such as approval by a research ethics committee, and has informed consent been obtained where required?

* Are the sample sizes and population selection properly constructed? This applies whether the subjects are humans, animals or laboratory samples.

* Has there been a power calculation to determine an adequate sample size? (A minimal sketch of such a calculation, and of an examiner-agreement check, follows this list.)

* Are the methods appropriate and/or valid to the aims of the study?

* If examiners are involved have they been properly trained and calibrated?

* What methods have been used to assess reproducibility and reliability?

* What statistical methods have been used to assess the data that is to be collected and are they appropriate?
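To make the power-calculation and examiner-reliability items above more concrete, the following is a minimal sketch of how such checks are commonly performed: a sample-size calculation for a two-group comparison and a Cohen's kappa for inter-examiner agreement. It assumes Python with the statsmodels and scikit-learn packages; all figures and scores are invented for illustration and are not drawn from any particular study.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower
from sklearn.metrics import cohen_kappa_score

# --- Sample-size (power) calculation for a two-group comparison ---
# Illustrative assumptions: a standardised effect size of 0.5, a 5% significance
# level and 80% power. None of these figures come from any particular study.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {np.ceil(n_per_group):.0f}")

# --- Inter-examiner agreement (calibration) ---
# Hypothetical caries scores (0 = sound, 1 = carious) recorded independently
# by two examiners on the same 12 tooth surfaces.
examiner_a = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
examiner_b = [0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1]
print(f"Cohen's kappa between examiners: {cohen_kappa_score(examiner_a, examiner_b):.2f}")
```

A manuscript need not present code, of course, but the reviewer should expect the equivalent information to be reported: the assumed effect size, significance level and power behind the chosen sample size, and an agreement statistic such as kappa for calibrated examiners.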

If the methods are inadequate or inappropriate then the whole study is compromised, and the paper may well be rejected on this section alone. In addition the reviewer should consider whether the results are likely to be not merely statistically sound but also clinically meaningful.

Results

There must be a comprehensive report of the results. At the same time the reviewer must look critically to see whether the author(s) are swamping the reader with far too much data. Have the author(s) been critical in what they have selected to report? Papers are often seen with far too many tables of data; it appears that the researchers have looked at just about every variable they could think of and are determined to include every possible result and interaction. A reader or reviewer then has to wade through it all trying to find the significant results, if any. The reviewer should make this judgement. Mass reporting of every piece of data indicates a serious lack of critical judgement.

On the other hand, reviewers may have to consider papers where the results are so minimal as to be almost useless. A lack of statistical reporting may be a problem here; for example, means are provided without standard deviations or standard errors of the means. The reviewer must be able to assess whether the data spread is too large to give a meaningful result. At the same time a reviewer should cross-check that the tables within the paper are discussed in the text; it is surprising that this is sometimes not the case. In addition, is there 'double reporting', where exactly the same data are presented in the text as in the tables or figures?

At the same time, a reviewer should ask whether a result given as statistically significant, which it might be mathematically, is of any clinical significance. Examples of this error occur with intervention dental caries studies where, perhaps after 4-5 years of a clinical trial at enormous cost, a reduction in DMFS of half a surface is reported as highly statistically significant. Such a reduction would be clinically meaningless and of very low cost-benefit. In such a case a reviewer should clearly state that the research reported is of little clinical value.
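The distinction can be illustrated numerically. The following is a hedged sketch using simulated data, not figures from any real trial, assuming Python with NumPy and SciPy: with a large enough sample, a mean DMFS reduction of half a surface yields a very small p-value, yet the effect size remains small.

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated illustration only: two trial arms of 2,000 children each, with a
# true mean difference in DMFS increment of 0.5 surfaces (SD = 3 surfaces).
rng = np.random.default_rng(42)
control = rng.normal(loc=4.0, scale=3.0, size=2000)
test = rng.normal(loc=3.5, scale=3.0, size=2000)

t_stat, p_value = ttest_ind(control, test)

# Cohen's d: the mean difference expressed in pooled standard-deviation units.
pooled_sd = np.sqrt((control.var(ddof=1) + test.var(ddof=1)) / 2)
cohens_d = (control.mean() - test.mean()) / pooled_sd

print(f"Mean DMFS difference: {control.mean() - test.mean():.2f} surfaces")
print(f"p-value: {p_value:.1e}")     # far below 0.05, i.e. 'highly significant'
print(f"Cohen's d: {cohens_d:.2f}")  # a small effect in conventional terms
```

The statistical significance here reflects the large sample size; the half-surface difference itself remains of questionable clinical importance, and the reviewer should say so.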

If a null hypothesis has been stated as part of the aim of the research, then at the end of the results there should be an outcome: has the null hypothesis been rejected or not?

Discussion

The relevance of the findings of the research now needs to be discussed in the light of previous reports. A common mistake here is for the author(s) simply to repeat the introduction. Another deficiency is for authors not to evaluate their own results critically; such evaluation is essential.

Far too often one reads papers where the writers present their work as being perfect when there are glaring deficiencies in methods, statistics or interpretation. Related to this is where the data, and hence the findings, are extrapolated far beyond reliability or appropriateness of the study. The reviewer must look out for this.

Conclusion

A brief conclusion should not be a repetition or discussion of the results but a highlighting of the most important findings. It should be no more than a simple paragraph, and the reviewer should watch out for long-winded repetition of the results. Here again the reviewer should be alert to authors making political statements or recommending changes in clinical or research practice; those judgements must be left to the readers.

References

Each paper will have a reference list and the reviewer should make sure they look at it. Are the references, within the reviewer's knowledge, appropriate? There may be too few, and they may be many years out of date. On the other hand there may be far too many where, again, an author tries to show just how many papers have been found in a literature search and has not critically culled them to only those of merit. Is the reference style used that requested by the journal? References should be double-checked for style.

Overall presentation

How has the paper been written and presented? Does it conform to the guidelines of the journal concerned? Is the reference style correct? These points are not as important as the content and in many instances they are the responsibility of the editor. However, if the English, spelling and grammar are very poor then they compromise the intelligibility of the work. It is then appropriate for the reviewer to say, "This paper is so badly written and presented that I cannot understand what the authors are trying to say." At that juncture the reviewer must decide either that the work is of no inherent value and not worth rewriting, or that it should be completely revised and resubmitted when properly presented.

A final assessment

Having now read the paper thoroughly, a final task is to go back and look at the title: is it appropriate for the research? Then re-read the abstract: is it accurate and does it tell enough about the study? Are the data in the abstract the same as in the results? It is surprising how often they are not.

Writing the review

Having read the manuscript thoroughly, and made extensive notes and comments, the reviewer now writes a report. Many journals provide a review form and some even specify sections/boxes to be 'ticked', especially when the review is online, which is becoming common nowadays. These help, but a good reviewer should not be constrained by them and should write a full report. Online review systems generally have one section for the editor's eyes only and another for the report that is sent to the researcher who submitted the manuscript.

A good approach is to comment briefly in each section making both positive and negative comments as appropriate. The reviewer should focus on the most important points and not descend into nit-picking on trivial matters. The focus must be on the science.

The reviewer must eschew personal bias in preparing this report, particularly if the research contradicts what the reviewer's own studies have found. Pejorative, aggressive statements must be avoided. Of course, if the research is clearly deficient and not reliable then there are grounds for criticism, but criticism must not be based on personal bias.

Once the reviewer has written their report it is suggested that it be put aside for 48 hours and then looked at again. Oftentimes a reviewer may feel that their comments have been too harsh or too laudatory, and there is then the opportunity to modify them accordingly. A useful method is to consider the report as if one were the recipient. Only then should the review be sent off to the journal concerned.

Final comments

Scientific peer review is possibly one of the most important tasks a scientist is asked to do. It carries a great responsibility and needs to be conscientiously and thoroughly carried out. It is most important that a reviewer decides very quickly whether to undertake a review and, if so, completes the task within a short period of time. A review must at all times be objective, as positive as possible and seen as contributing to the advancement of our knowledge.

Further reading

Black N, van Rooyen S, Godlee F, Smith R, Evans S. What makes a good reviewer and a good review for a general medical journal? JAMA, 1998; 15:231-233.

Callaham ML and Tercier J. The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med, 2007; 4:e40.

Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev, 2007; 18:MR000016.

Pyke DA. Referee a paper. In: Lock S, ed. How to do it. London: Brit Med J, 1979; 143-146.

Roberts LW, Coverdale J, Edenharder K, Louie A. How to review a manuscript: a 'down to earth' approach. Acad Psychiatry, 2004; 28:81-87.

Schroter S, Tite L, Hutchings A, Black N. Differences in review quality and recommendations for publication between peer reviewers suggested by authors or by editors. JAMA, 2006; 295:314-317.

Wager E, Godlee F, Jefferson T. How to survive peer review. London: Brit Med J Books, 2002; 13-19.

Emeritus Professors: M.E.J. Curzon, Leeds Dental Institute, Leeds, England; P.E. Cleaton-Jones, University of the Witwatersrand, Johannesburg, South Africa.

Postal address: Prof. M.E.J. Curzon. Foxgloves, Galphay, Nr Ripon, North Riding, England, HG4 3NJ.

Email: curzongalphay@aol.com