Pre-screening and Approval for Review
NREPP identifies programs for review in three ways:
- Nominations from the field: SAMHSA announces an open submission period, generally lasting several months, during which developers, researchers, practitioners, and other interested parties may submit programs for review.
- Environmental scans: SAMHSA and NREPP contractor staff conduct environmental scans (including literature searches, focus groups, public input, and interviews) to identify potential interventions for review.
- Agency nomination: SAMHSA identifies programs and practices addressing specific agency priorities.
Programs identified through the open submission process are prioritized for review.
Programs are pre-screened to ensure that at least one evaluation study meets the minimum criteria for review. Programs are submitted to SAMHSA for approval to move into the review process. Applicants are notified whether their intervention has been accepted for review or rejected.
Literature Search and Screening
NREPP contractor staff contact the developer to request any additional evaluation studies and information about resources for dissemination and implementation (RFDI). Applicants are asked to complete the RFDI checklist. Although SAMHSA has determined that programs no longer need RFDI materials to be reviewed, programs with such materials are prioritized for review.
To ensure a comprehensive report, a literature search is conducted to identify other relevant evaluation studies. All evaluation materials are screened and NREPP contractor staff determine which studies and outcomes are eligible for review. SAMHSA has determined that all eligible outcomes will be reviewed, but that programs with positive impacts on outcomes and populations of interest will be prioritized over programs without positive impacts.
To be eligible for review, an evaluation study must meet the minimum criteria, including publication within the past 25 years (1990 or later), and fall within a 10-year time frame defined by the study's most recent eligible article.
NREPP contractor staff identify two certified reviewers to conduct the review (Note: re-reviews of programs posted on NREPP before September 2015 may be completed by one reviewer). Reviewers must complete a Conflict of Interest form to confirm no conflict exists that would require recusal.
Review packets are sent to reviewers to assess the rigor of the study and the magnitude and direction of the program’s impact on eligible outcomes.
Reviewers independently review the studies provided and calculate ratings using the NREPP Outcome Rating Instrument.
Outcomes are assessed on four dimensions, each comprising elements such as design/assignment and attrition.
Study reviewers assign numerical values to each dimension in the NREPP Outcome Rating Instrument (with the exception of effect size). To support consistency across reviews, the dimensions include definitions, and the NREPP Outcome Rating Instrument provides other guidance that reviewers consider when rating the elements. Reviewers also make note of any other information that should be highlighted as being of particular importance.
The study reviewer is responsible for making a reasonable determination as to the strength of the methodology, fidelity, and program effect, based on the provided documentation and their specialized knowledge of program evaluation and the subject matter. If the reviewers' ratings differ by a significant margin, a consensus conference may be held to discuss and resolve the differences.
In addition to this review by certified reviewers, NREPP staff also assess programs’ conceptual frameworks.
Evidence Classes and Outcome Ratings
The outcome rating is based on the evidence class and the strength of the conceptual framework. The graphic below summarizes all of the components of the outcome rating.
Components of the Final Outcome Rating
The evidence class for each reported effect is based on a combination of evidence score and effect class.
- Evidence score is based on the rigor and fidelity dimensions and is rated as strong, sufficient, or insufficient.
- Effect class is based on the confidence interval of the effect size:
  - Favorable: Confidence interval lies completely within the favorable range
  - Probably favorable: Confidence interval spans both the favorable and trivial ranges
  - Trivial: Confidence interval lies completely within the trivial range
  - Possibly harmful: Confidence interval spans both the harmful and trivial ranges
  - Harmful: Confidence interval lies completely within the harmful range
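The effect-class rule above can be sketched as a small classification function. This is an illustration, not NREPP's actual tooling: the numeric bound separating the trivial range from the favorable and harmful ranges is an assumed placeholder, and the fully-harmful case (implied by evidence Class D below) is included alongside the four classes listed.

```python
def effect_class(ci_low, ci_high, trivial_bound=0.1):
    """Classify an effect by where its confidence interval falls.

    Assumes effect sizes above +trivial_bound are favorable, below
    -trivial_bound harmful, and in between trivial. The 0.1 bound is
    illustrative only, not an NREPP-specified value.
    """
    if ci_low > trivial_bound:
        return "Favorable"            # CI entirely in the favorable range
    if ci_high < -trivial_bound:
        return "Harmful"              # CI entirely in the harmful range
    if ci_low >= -trivial_bound and ci_high <= trivial_bound:
        return "Trivial"              # CI entirely in the trivial range
    if ci_low >= -trivial_bound:
        return "Probably favorable"   # CI spans trivial and favorable ranges
    if ci_high <= trivial_bound:
        return "Possibly harmful"     # CI spans harmful and trivial ranges
    return "Unclassified"             # very wide CI spanning all three ranges
```

A wide confidence interval that spans the harmful, trivial, and favorable ranges at once does not fit any of the listed classes, so the sketch returns a sentinel value for that case.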
The conceptual framework is based on whether a program has clear goals, activities, and a theory of change.
These two dimensions are then combined to categorize programs into one of five evidence classes as depicted below.
Description of Evidence Classes

| Evidence Class | Evidence Description |
| --- | --- |
| Class A | Highest-quality evidence with a confidence interval completely within the favorable range |
| Class B | At least sufficient-quality evidence with a confidence interval completely within the favorable range OR spanning both the favorable and trivial ranges |
| Class C | At least sufficient-quality evidence with a confidence interval completely within the trivial range |
| Class D | At least sufficient-quality evidence with a confidence interval completely within the harmful range OR spanning both the harmful and trivial ranges |
| Class E | Limitations in the study design preclude reporting further on the outcome |
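The combination of evidence score and effect class described above can be sketched as a lookup rule. This is a minimal reading of the table, under the assumption that an "insufficient" evidence score always yields Class E regardless of effect class:

```python
def evidence_class(evidence_score, effect_cls):
    """Map an evidence score ('strong', 'sufficient', 'insufficient')
    and an effect class to an evidence class A-E, following the
    Description of Evidence Classes table. A sketch, not NREPP's
    actual scoring software.
    """
    if evidence_score == "insufficient":
        return "E"  # study limitations preclude reporting on the outcome
    if evidence_score == "strong" and effect_cls == "Favorable":
        return "A"  # highest-quality evidence, CI entirely favorable
    if effect_cls in ("Favorable", "Probably favorable"):
        return "B"  # at least sufficient, CI favorable or spanning trivial
    if effect_cls == "Trivial":
        return "C"  # at least sufficient, CI entirely trivial
    return "D"      # harmful or possibly harmful effect
```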
The evidence classes for each reported effect within an overall outcome are then pooled into an overall outcome score. Next, the overall outcome score is linked with the conceptual framework score to determine the final outcome rating for each outcome (see table below). To be rated effective for an outcome, a program must have a strong conceptual framework and strong evidence of a favorable program effect.
| Outcome Rating | Description | Strong Conceptual Framework |
| --- | --- | --- |
| Effective | The evidence base produced strong evidence of a favorable program effect. | Yes |
| Promising | The evidence base produced at least sufficient evidence of a favorable program effect. | No |
| Ineffective | The evidence base produced at least sufficient evidence of a trivial, possibly harmful, or wide-ranging program effect. | No |
| Inconclusive | Limitations in the study design or a lack of effect size information preclude reporting further on the program effect. | No |
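The final step, linking the pooled outcome score with the conceptual framework, can be sketched as follows. The pooling of per-effect evidence classes into one overall class is assumed to have already happened, and the overall class is assumed to reuse the A-E labels from the evidence-class table; both are simplifying assumptions for illustration:

```python
def outcome_rating(overall_class, strong_framework):
    """Combine an outcome's pooled evidence class (A-E) with its
    conceptual framework score to produce the final outcome rating.
    A sketch of the rating table, not NREPP's actual procedure.
    """
    if overall_class == "A" and strong_framework:
        return "Effective"    # strong favorable evidence + strong framework
    if overall_class in ("A", "B"):
        return "Promising"    # at least sufficient favorable evidence
    if overall_class in ("C", "D"):
        return "Ineffective"  # trivial or (possibly) harmful effect
    return "Inconclusive"     # Class E: limitations preclude a rating
```

Note that under this reading a Class A outcome without a strong conceptual framework falls to "Promising," consistent with the requirement that an "Effective" rating needs both strong evidence and a strong framework.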
Each outcome rating is depicted with an icon; outcomes rated "Inconclusive" are not depicted with an icon.
The ratings and descriptive information are compiled into a program profile.
A courtesy copy of the program profile is shared with the developer or submitter of the program for review, who may suggest revisions to the profile. All completed profiles will be published on the NREPP website except those for which outcome ratings could not be determined due to inconclusive evidence.*
The final program profile is submitted to SAMHSA for review, approval, and posting on the NREPP website.
* Programs that are reviewed, but for which there is only inconclusive evidence for outcomes, will be listed by name on the NREPP website.