Study Protocol

Exploring Discrepancies between Protocols and Published Scoping Reviews in Implementation Science: Protocol for a Methodological Study

[version 1; peer review: awaiting peer review]
PUBLISHED 28 Aug 2025

Abstract

Background

Discrepancies appear to be common between systematic reviews and their protocols, potentially undermining the credibility of their findings if those discrepancies are not transparently reported. However, it is unclear to what extent such discrepancies also exist within scoping reviews, which may be more prone to such changes due to their greater flexibility. Scoping reviews are increasingly common within implementation science; biases in their conduct may therefore have detrimental effects on the real-world settings in which evidence is applied, in addition to undermining the scientific validity of the reviews in this growing discipline. This study aims to investigate discrepancies between scoping reviews and their protocols, using reviews in the field of implementation science as an exemplar. In particular, the study will examine how common such discrepancies are, why they occur, and how they are reported in the literature.

Methods

This is a methodological study of completed scoping reviews on implementation science topics which will be gathered from five key journals: Implementation Science, Implementation Research and Practice, Implementation Science Communications, BMJ Quality and Safety, and JBI Evidence Implementation. Those with available protocols will be examined for discrepancies between their earliest protocol and their final report. Methodological details will be extracted from the protocols and reviews. These data will be coded to ascertain whether discrepancies are found, what aspect of the review these relate to, the extent of this change (e.g. major vs. minor), whether discrepancies are acknowledged and where this occurs in the paper, and any justification given for this change. The data extraction tool is in development, informed by relevant guidelines for conducting and reporting scoping reviews.

Conclusions

By understanding the extent, nature, and reasons for discrepancies in scoping reviews, findings can inform guidance for conducting such reviews, particularly when planning review protocols, and when reporting methodological discrepancies.

Keywords

scoping reviews, discrepancies, methodological discrepancies, reporting quality, implementation science

Introduction

The availability of evidence synthesis protocols has become commonplace, increasing transparency regarding review methodology and potential sources of bias in the review process (Higgins et al., 2023). Preparing a protocol ensures careful planning and documentation of the intended method, enables a more consistent approach within the review team, and reduces arbitrary decision making later in the process (Moher et al., 2015; The PLoS Medicine Editors, 2011). Making review protocols publicly available also enables readers to identify changes made to the planned approach, allowing them to consider whether such deviations may have biased the results obtained and the interpretations drawn from them.

Multiple studies demonstrate that discrepancies exist between systematic reviews and their protocols (Kirkham et al., 2010; Koensgen et al., 2019; Pandis et al., 2015; Parsons et al., 2019; Silagy et al., 2002; Tricco et al., 2016). Various terms have been used in previous studies, such as ‘difference’ (Koensgen et al., 2019) or ‘change’ (Silagy et al., 2002), but ‘discrepancy’ appears to be the most commonly used term across studies. This is the case both in studies of systematic reviews (Kirkham et al., 2010; Pandis et al., 2015; Parsons et al., 2019; Tricco et al., 2016) and in studies of reporting discrepancies between clinical trials and their trial registry records (e.g. He et al., 2025; Hudson et al., 2020; Matvienko-Sikar et al., 2024; Rivero-de-Aguilar et al., 2024). While explicit definitions of the term are generally lacking in previous studies, a discrepancy has previously been defined as any incongruity between the content of a manuscript and its associated protocol (TARG Meta-Research Group and Collaborators, 2022, p2). This could, for example, involve the addition, removal, or modification of methodological details. Such classifications of discrepancies as additions, removals or modifications have been used in previous studies investigating protocol-review discrepancies in outcome reporting specifically (Chan et al., 2004; Kirkham et al., 2010; Pandis et al., 2015).

While some changes to the review plan may be required for logistical reasons or because of unexpected challenges or complexity, changes could also be influenced by reporting bias or publication bias, which may undermine a review’s credibility. When undisclosed, this increased risk of bias is disguised from the reader, hindering their ability to judge the validity of the study (TARG Meta-Research Group and Collaborators, 2022). Several studies have demonstrated such discrepancies within systematic reviews, despite these being widely considered a gold standard of evidence synthesis. For example, Tricco et al. (2016) report that one-third of systematic reviews contained a discrepancy in their primary outcome, while Koensgen et al. (2019) report that almost all systematic reviews they investigated had at least one methodological difference between their protocol and review. Furthermore, Silagy et al. (2002) argue that even when changes may be beneficial to a study, the changes and the rationale for them should still be clearly documented in the report of the final review.

No evidence appears to exist regarding discrepancies in scoping reviews. Scoping reviews are defined as “a type of evidence synthesis that aims to systematically identify and map the breadth of evidence available on a particular topic, field, concept, or issue, often irrespective of source (i.e., primary research, reviews, non-empirical evidence) within or across particular contexts. Scoping reviews can clarify key concepts/definitions in the literature and identify key characteristics or factors related to a concept, including those related to methodological research” (Munn et al., 2022, p950). Therefore, while such reviews should still be conducted systematically, they tend to be more exploratory and more inclusive of different types of evidence, compared to approaches such as systematic reviews. However, it is possible to set narrower boundaries for scoping reviews, including through the specificity of the predefined eligibility criteria such as focusing on particular subgroups, study designs, or time periods (Arksey & O'Malley, 2005; Levac et al., 2010).

Although scoping reviews continue to be a popular form of evidence synthesis, they are often criticised for their poor conduct (Khalil et al., 2021; Xue et al., 2024). Tricco et al. (2016) found that key steps recommended by JBI, such as screening studies in duplicate or use of a study flow diagram, were commonly not used or reported. Additionally, they report that protocols were apparent for only 13% of the scoping reviews they examined. Khalil et al. (2021) and Xue et al. (2024) also report that availability of such protocols appears low. This is perhaps unsurprising, since the use of protocols in scoping reviews has only been advocated since JBI released its conduct guidance on scoping reviews in 2020 (Peters et al., 2020). The fact that PROSPERO does not publish scoping review protocols may also discourage some reviewers from sharing their protocols, even if they are accustomed to doing so for systematic reviews, although other repositories such as the Open Science Framework (OSF) remain available to facilitate this. As scoping reviews are more flexible and iterative than systematic reviews (Mak & Thomas, 2022; Peters et al., 2020; Peters et al., 2021; Peters et al., 2022), deviations from planned approaches may be more common in scoping reviews than in other review types. For example, adjustments to their questions, inclusion/exclusion criteria and searches can be made during the review process (Levac et al., 2010). Furthermore, the complexity of the concepts and information being considered (e.g. Thomas et al., 2020), as well as the large body of literature and wide range of study designs typically included in scoping reviews (Pawliuk et al., 2021), may lead to unforeseen challenges in time and resources, necessitating methodological changes. Authors may struggle to decide when changes are required, how to navigate them, and how to report deviations transparently. Reviewers and editors may also experience challenges in evaluating reviews that include such deviations, particularly as it has been suggested that many reviewers and editors lack the necessary expertise to evaluate scoping review methodology in general (Khalil et al., 2021). There is therefore a need for clearer guidance on navigating and reporting such methodological changes during the scoping review process, to aid not only authors but also those evaluating such reviews.

Scoping reviews are commonly used in implementation science as they allow researchers to map the available literature, define concepts and identify research gaps. Their popularity within this discipline may be due to the approach’s versatility. Implementation science is applicable across many different settings, populations and topics, and is a highly multidisciplinary field which draws on concepts, theories and methods from other disciplines. Scoping reviews’ suitability for exploring broad topics across large volumes of literature therefore enables them to accommodate the wide and diverse scope of literature that may be relevant to implementation. Furthermore, the holistic overview provided by scoping reviews may allow researchers to better capture the complexity of the topics which implementation science investigates. As implementation science is intended to be a practical and applied discipline, any biases in the evidence base may be detrimental to the real-world settings in which the insights gained from scoping reviews are used. It is therefore important to understand the extent to which methodological discrepancies occur within scoping reviews in implementation science in particular, and to promote transparent reporting of such discrepancies.

Aim & objectives

This study aims to investigate discrepancies between scoping reviews and their protocols, using the field of implementation science as an exemplar due to the popularity of this methodology within this discipline.

Specific objectives are to identify:

  • The prevalence of discrepancies between protocols and final review publications

  • The nature and extent of discrepancies

  • How commonly discrepancies are acknowledged, where such acknowledgements are reported in the paper, and what justifications are given for these changes.

Methods

Design

This study was conceived as a study within a review (SWAR; Devane et al., 2022) situated in the field of implementation science. Conceptual, methodological and logistical challenges encountered in that scoping review (Riordan et al., 2022) necessitated deviations from the review’s original protocol, thereby inspiring the focus of the current study. The proposed design for the SWAR is a methodological study of scoping reviews in implementation science, using a retrospective cohort design.

Sample

A selection of five key journals publishing scoping reviews related to implementation science topics will be searched for scoping reviews. Implementation Science, Implementation Research and Practice, Implementation Science Communications and JBI Evidence Implementation were selected for inclusion as they are leading journals in the field of implementation science. BMJ Quality and Safety will also be searched for completed scoping reviews conducted on implementation topics, due to the relatively high volume of relevant submissions that it publishes. Inclusion is restricted to these five leading journals to keep the sample size feasible for the extraction and coding team. Each journal’s website will be searched via its main search engine using the term “scoping review”. No filters will be applied, and the results will be exported initially to Mendeley reference management software before being uploaded to PICO Portal for screening.

Studies from these journals will be eligible for inclusion if they report the results of a scoping review of an implementation science topic. Included scoping reviews must have reported using a relevant framework for conducting scoping reviews. Such frameworks may include, but are not restricted to, Arksey and O’Malley (2005), Levac et al. (2010), or JBI guidance (Peters et al., 2020). Citing only frameworks for reporting scoping reviews, such as PRISMA-ScR (Tricco et al., 2018), rather than frameworks for their conduct, will not be sufficient for inclusion. Included reviews will be restricted to those published in English, due to the team’s language capabilities. Inclusion will not be restricted by publication year. The resulting eligible scoping reviews will be examined for an available review protocol. This process will include reading the article for mentions of relevant terms (including but not limited to protocol, registration, registered, preregistered, and preregistration) and repositories (such as OSF, Zenodo or AsPredicted, among others), and searching for any references to published protocols. Those with available protocols will be assessed for discrepancies between their protocol and final review. If multiple versions of a protocol exist, the earliest version will be used for this assessment, regardless of whether it is published in a peer-reviewed journal or shared in a repository. Although the earliest version will be used, changes made throughout different versions will also be recorded in the data extraction tool.
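As an illustration only (not a prescribed part of the planned workflow), the term-based checking described above could be supported by a simple script. The minimal Python sketch below flags sentences mentioning the registration terms and repositories listed in this section; the function name and implementation are hypothetical.

```python
import re

# Terms and repositories taken from the protocol text above; the helper
# itself is a hypothetical illustration, not the study's actual tooling.
REGISTRATION_TERMS = [
    "protocol", "registration", "registered",
    "preregistered", "preregistration",
]
REPOSITORIES = ["OSF", "Open Science Framework", "Zenodo", "AsPredicted"]

def flag_protocol_mentions(article_text: str) -> list[str]:
    """Return sentences mentioning registration terms or repositories,
    as prompts for a human reviewer to check manually."""
    pattern = re.compile(
        "|".join(re.escape(t) for t in REGISTRATION_TERMS + REPOSITORIES),
        flags=re.IGNORECASE,
    )
    # Naive sentence split; adequate for flagging, not for full parsing.
    sentences = re.split(r"(?<=[.!?])\s+", article_text)
    return [s for s in sentences if pattern.search(s)]

# Example:
# flag_protocol_mentions("The review was preregistered on OSF. Methods follow.")
# -> ["The review was preregistered on OSF."]
```

Any sentences flagged this way would still require manual verification, since mentions of these terms do not guarantee that a protocol is actually available.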

Data extraction and coding

A two-part assessment tool is in development to extract relevant characteristics of the protocol and the final manuscript, and to code any discrepancies between them. For each paper, the extraction and coding steps will be conducted concurrently by the same reviewer. The tool is informed by relevant guidance for conducting and reporting scoping reviews (e.g. Peters et al., 2022; Tricco et al., 2018). Previous studies of discrepancies in systematic reviews (e.g. Koensgen et al., 2019; Pandis et al., 2015; Parsons et al., 2019; Tricco et al., 2016) were also considered while developing the data extraction tool. Due to the level of detail provided in these previous studies, it has not been necessary to contact their authors to request further information about the measures used. Characteristics to be extracted will include details such as the stated aim/objectives, the scoping review framework applied, the stated search approach (terms, databases), the screening approach (e.g. single vs. duplicate screening), the approach to resolving screening disagreements, the data extraction approach (e.g. single vs. duplicate vs. extracting a subset in duplicate), the outcomes stated, primary vs. secondary outcomes, the analysis approach, and the software used. A free-text section will be included to gather any other important details not initially accounted for. The first part of this tool will capture details for each of the above characteristics from both the protocol and the manuscript, to help identify whether there is a discrepancy in that characteristic between the two sources.

Using the second part of this tool, the same reviewers will code their judgements on: whether there is a discrepancy between the two sources (yes/no); which characteristic or stage of the review it relates to (as determined by the characteristics listed in the extraction tool); the extent of the change (e.g. major, minor or negligible); the nature of the change (omission, addition or modification); and whether any acknowledgement or justification for the change has been given by the review authors (yes, no, partially, or unclear). Where an acknowledgement is given, the section of the paper in which it is reported will be recorded and open text will be extracted for the justification. The total number of discrepancies identified for each review will also be recorded. This tool is currently being piloted on a small sample of reviews and will be further refined as needed. As the tool is still in development, the details and precise range of codes specified above reflect the current version but remain subject to change pending further refinement.
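To make the coding structure concrete, the sketch below shows one way a single discrepancy judgement could be represented as a record. The field names and enumerated codes mirror the draft tool described above but are illustrative placeholders, since the tool itself is still in development.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Extent(Enum):          # extent of the change, per the draft tool
    MAJOR = "major"
    MINOR = "minor"
    NEGLIGIBLE = "negligible"

class Nature(Enum):          # nature of the change
    OMISSION = "omission"
    ADDITION = "addition"
    MODIFICATION = "modification"

class Acknowledged(Enum):    # whether the change is acknowledged/justified
    YES = "yes"
    NO = "no"
    PARTIALLY = "partially"
    UNCLEAR = "unclear"

@dataclass
class DiscrepancyRecord:
    review_id: str                 # identifier for the included review
    characteristic: str            # e.g. "screening approach", "primary outcomes"
    extent: Extent
    nature: Nature
    acknowledged: Acknowledged
    acknowledgement_section: Optional[str] = None  # e.g. "Methods", "Limitations"
    justification_text: Optional[str] = None       # open text, if provided
```

Representing each discrepancy as one record of this kind would also make the per-review totals and proportions described under Analysis straightforward to compute.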

The full two-part extraction and coding process will be conducted in duplicate on at least 10% of the reviews to calibrate team members’ approaches and assess interrater reliability (IRR) and agreement. The remaining reviews will be assessed by individual reviewers, with any particularly challenging papers or details discussed with the wider team. If team discussion is not sufficient to reach consensus, the authors of the reviews may be contacted for clarification, though it is anticipated that particularly challenging decisions are more likely to relate to subjective coding judgements about the discrepancies than to a lack of clarity in the methodological details themselves.

Analysis

The main outcomes examined will include:

  • the proportion (%) of potentially eligible reviews that have available protocols;

  • the number and proportion (%) of included reviews with discrepancies;

  • the total and median number of discrepancies between protocols and reviews;

  • the nature of the discrepancies (% of discrepancies that are an omission, addition, or modification);

  • the most common aspects of the review that the discrepancies pertain to (e.g. screening, data extraction, primary vs. secondary outcomes);

  • the extent of discrepancies (proportions of major vs. minor discrepancies);

  • the proportion of discrepancies that are acknowledged in the published study (% for yes/no/partially/unclear) and the proportion of reviews that acknowledge these discrepancies;

  • the section of the paper in which these acknowledgements are reported; and

  • whether justifications were provided for the changes (% for yes/no/partially/unclear, plus open-ended responses for the justifications provided).

Descriptive statistics will be used to summarise each of the numerical and categorical outcomes listed above, as sketched below. Inductive content analysis (Kyngäs, 2020; Vears & Gillam, 2022) will be used to explore the justifications provided.
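As a minimal sketch of these descriptive summaries, assuming discrepancies are tabulated as one row per discrepancy (the column names and example values below are placeholders, not study data):

```python
import pandas as pd

# Hypothetical tabulation: one row per identified discrepancy.
# Column names are placeholders, not the study's actual extraction fields.
df = pd.DataFrame({
    "review_id": ["r1", "r1", "r2", "r3"],
    "nature": ["omission", "modification", "addition", "modification"],
    "extent": ["major", "minor", "minor", "major"],
    "acknowledged": ["yes", "no", "unclear", "partially"],
})

n_included_reviews = 5  # total reviews assessed (example value)

# Number and proportion of reviews with at least one discrepancy
reviews_with_discrepancies = df["review_id"].nunique()
prop_with_discrepancies = reviews_with_discrepancies / n_included_reviews

# Total and median number of discrepancies per review
per_review_counts = df.groupby("review_id").size()
total_discrepancies = per_review_counts.sum()
median_discrepancies = per_review_counts.median()

# Nature, extent, and acknowledgement as percentages of all discrepancies
nature_pct = df["nature"].value_counts(normalize=True) * 100
extent_pct = df["extent"].value_counts(normalize=True) * 100
acknowledged_pct = df["acknowledged"].value_counts(normalize=True) * 100

print(prop_with_discrepancies, total_discrepancies, median_discrepancies)
print(nature_pct, extent_pct, acknowledged_pct, sep="\n")
```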

For the subset of reviews extracted in duplicate, interrater agreement (IRA) will be assessed using overall percentage agreement between the two raters for each study, and IRR will be determined using intraclass correlation.
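For illustration, percentage agreement and intraclass correlation could be computed along the following lines. This sketch assumes numeric ratings in long format and uses the pingouin library; the specific ICC model to be reported has not been fixed in this protocol.

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Hypothetical long-format ratings: one row per (review, rater) pair,
# with a numeric rating (e.g. the count of discrepancies coded).
ratings = pd.DataFrame({
    "review": ["r1", "r1", "r2", "r2", "r3", "r3"],
    "rater":  ["A", "B", "A", "B", "A", "B"],
    "score":  [2, 2, 0, 1, 3, 3],
})

# Overall percentage agreement between the two raters
wide = ratings.pivot(index="review", columns="rater", values="score")
pct_agreement = (wide["A"] == wide["B"]).mean() * 100

# Intraclass correlation (pingouin reports several ICC models;
# the choice of model is not specified in this protocol)
icc = pg.intraclass_corr(
    data=ratings, targets="review", raters="rater", ratings="score"
)

print(f"Percentage agreement: {pct_agreement:.1f}%")
print(icc[["Type", "ICC"]])
```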

Dissemination plan

The report of the completed study will be submitted to a suitable peer-reviewed journal. Submissions will be made to relevant conferences with interests related to evidence synthesis, implementation science, open science or research integrity. The published results will be disseminated through the JBI Scoping Review Network newsletter and via social media sites such as Bluesky and LinkedIn.

Study status

At the time of submission of this protocol, development of the screening criteria and of the data extraction/coding tool has begun. The search for this study has also been conducted and screening has commenced, although neither of these steps had begun when this protocol was initially written.

Discussion

To the best of our knowledge, this is the first study examining discrepancies in the conduct or reporting of scoping reviews in any discipline. It is expected that the findings of this study will reveal the extent to which published scoping reviews adhere to the approach planned in their publicly available protocols, whether any discrepancies are acknowledged in the review, and, if so, the reasons reported for the changes made and the justifications for the alternative approaches chosen. This may help to demonstrate the need for more careful planning and conduct of scoping reviews, and support calls for scoping reviews to have an associated protocol as an important standard of rigour (e.g. Khalil et al., 2022; Peters et al., 2022).

The interpretations that can be drawn from the study’s results will be limited by the study’s observational nature and its reliance on the clarity and detail provided in the sampled publications. Due to the dearth of literature on this topic, however, this study serves as an important first step in exploring the extent of, and solutions to, this issue.

By understanding the extent and nature of discrepancies and the reasons for these, this study can inform future guidance for researchers conducting scoping reviews, particularly when writing protocols, enabling them to better anticipate and mitigate common challenges. The findings could also inform the development of recommendations for reporting protocol-review discrepancies. Such guidance could in turn help to inform reviewers and aid their decision-making when handling scoping reviews or other forms of evidence synthesis containing changes to the methodological approach between the protocol and final publication stages. Additionally, the insights gained from the findings could be used to inform future scoping review training. Future research, informed by the current study’s findings, could also contribute towards each of these longer-term goals.
