The Joanna Briggs Institute's approach to evidence-based health care is unique. The JBI considers evidence-based health care to be reliant on the evidence, the context in which care is delivered, individual client preference and the professional judgement of the health professional.

The JBI regards evidence-based health care as a cyclical process. Global health care needs, as identified by clinicians or patients/consumers, are addressed through the generation of research evidence that is effective, but also feasible, appropriate and meaningful to specific populations, cultures and settings. This evidence is collated and the results are appraised, synthesised and transferred to service delivery settings and health professionals who utilise it and evaluate its impact on health outcomes, health systems and professional practice.

Therefore, in order to provide those who work in and use health systems globally with world class information and resources, the Joanna Briggs Institute:

  •  Considers international evidence related to the feasibility, appropriateness, meaningfulness and effectiveness of health care interventions (evidence generation)
  •  Includes these different forms of evidence in a formal assessment called a systematic review (evidence synthesis)
  •  Globally disseminates information in appropriate, relevant formats to inform health systems, health professionals and consumers (evidence transfer)
  •  Has designed programs to enable the effective implementation of evidence and evaluation of its impact on health care practice (evidence utilisation)
It is this unique approach that is encompassed in the JBI Model of Evidence-based Health Care.

For more information about the JBI Model:

     Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthc. 2005; 3(8): 207–215.

     Jordan Z. Decision, Decisions… PACEsetterS. 2008; 5(1): 24–25.

The sections that follow provide information about the history of evidence-based health care, how the Institute develops evidence to inform practice, and the methodology developed and followed by the Institute.

The JBI Model

The core of evidence synthesis is the systematic review of the literature on a particular intervention, condition or issue. The systematic review is essentially an analysis of the available literature (that is, the evidence) and a judgement, reached through a series of complex steps, of the effectiveness or otherwise of a practice.

What counts as evidence?

The nature of systematic reviews has changed over the years and significant progress has been made regarding what constitutes appropriate evidence for inclusion in a review. Increasingly, these reviews are used to answer a broad range of questions for health professionals. Traditionally, the evidence-based practice movement has focussed on the results of quantitative evidence (considering the RCT as the gold standard) to answer questions of effectiveness. However, the Joanna Briggs Institute (JBI) has as its central focus not only effectiveness, but also the appropriateness, meaningfulness and feasibility of health practices and delivery methods. These questions are often answered by considering other forms of research evidence. The JBI regards the results of well-designed research studies grounded in any methodological position as providing more credible evidence than anecdotes or personal opinion. However, when no research evidence exists, expert opinion can be seen to represent the "best available" evidence.

Overview of the JBI process

In line with this broader view of evidence, the Institute has developed theories, methodologies and rigorous processes for the critical appraisal and synthesis of these diverse forms of evidence in order to aid clinical decision-making in health care. These processes relate to the synthesis of quantitative evidence, qualitative evidence, the results of economic analyses, and expert opinion and text. JBI systematic reviews begin with the development of a proposal or protocol that is peer reviewed and approved by the Institute. A rigorous and extensive search of the international literature on a given topic is then undertaken; the studies identified are assessed for their applicability to the topic and appraised using standardised tools to ensure that only the results of the highest quality research are included. Two trained JBI reviewers complete this process and, where disagreements occur, a third reviewer is consulted. Once this process is complete, the results are combined and published in a report.

For more information about the JBI approach to conducting systematic reviews, download the JBI Reviewers' Manual or the JBI SUMARI User Guide.

Alternatively you may wish to enrol in our Comprehensive Systematic Review Training Program.

JBI Levels of Evidence

These levels are intended to be used alongside the supporting document outlining their use. Using Levels of Evidence does not preclude the need for careful reading, critical appraisal and clinical reasoning when applying evidence.

The JBI Levels of Evidence are:

The new JBI Levels of Evidence and Grades of Recommendation have been used for all JBI documents since 1 March 2014.
Levels of Evidence - Effectiveness
Level 1 – Experimental Designs
Level 1.a – Systematic review of Randomized Controlled Trials (RCTs)
Level 1.b – Systematic review of RCTs and other study designs
Level 1.c – RCT
Level 1.d – Pseudo-RCTs
Level 2 – Quasi-experimental Designs
Level 2.a – Systematic review of quasi-experimental studies
Level 2.b – Systematic review of quasi-experimental and other lower study designs
Level 2.c – Quasi-experimental prospectively controlled study
Level 2.d – Pre-test – post-test or historic/retrospective control group study
Level 3 – Observational – Analytic Designs
Level 3.a – Systematic review of comparable cohort studies
Level 3.b – Systematic review of comparable cohort and other lower study designs
Level 3.c – Cohort study with control group
Level 3.d – Case-control study
Level 3.e – Observational study without a control group
Level 4 – Observational – Descriptive Studies
Level 4.a – Systematic review of descriptive studies
Level 4.b – Cross-sectional study
Level 4.c – Case series
Level 4.d – Case study
Level 5 – Expert Opinion and Bench Research
Level 5.a – Systematic review of expert opinion
Level 5.b – Expert consensus
Level 5.c – Bench research/ single expert opinion
Levels of Evidence - Diagnosis
Level 1 – Studies of Test Accuracy among consecutive patients
Level 1.a – Systematic review of studies of test accuracy among consecutive patients
Level 1.b – Study of test accuracy among consecutive patients
Level 2 – Studies of Test Accuracy among non-consecutive patients
Level 2.a – Systematic review of studies of test accuracy among non-consecutive patients
Level 2.b – Study of test accuracy among non-consecutive patients
Level 3 – Diagnostic Case control studies
Level 3.a – Systematic review of diagnostic case control studies
Level 3.b – Diagnostic case-control study
Level 4 – Diagnostic yield studies
Level 4.a – Systematic review of diagnostic yield studies
Level 4.b – Individual diagnostic yield study
Level 5 – Expert Opinion and Bench Research
Level 5.a – Systematic review of expert opinion
Level 5.b – Expert consensus
Level 5.c – Bench research/ single expert opinion
Levels of Evidence - Prognosis
Level 1 – Inception Cohort Studies
Level 1.a – Systematic review of inception cohort studies
Level 1.b – Inception cohort study
Level 2 – Studies of All or none
Level 2.a – Systematic review of all or none studies
Level 2.b – All or none studies
Level 3 – Cohort studies
Level 3.a – Systematic review of cohort studies (or control arm of RCT)
Level 3.b – Cohort study (or control arm of RCT)
Level 4 – Case Series / Case-Controlled / Historically Controlled Studies
Level 4.a – Systematic review of case series / case-controlled / historically controlled studies
Level 4.b – Individual case series / case-controlled / historically controlled study
Level 5 – Expert Opinion and Bench Research
Level 5.a – Systematic review of expert opinion
Level 5.b – Expert consensus
Level 5.c – Bench research/ single expert opinion
Levels of Evidence - Economic Evaluations
Level 1 – Decision model with assumptions and variables informed by systematic review and tailored to fit the decision-making context
Level 2 – Systematic review of economic evaluations conducted in a setting similar to the decision maker's
Level 3 – Synthesis/review of economic evaluations of high quality (comprehensive and credible measurement of costs and health outcomes, sufficient time period covered, discounting, and sensitivity testing) undertaken in a setting similar to that in which the decision is to be made
Level 4 – Economic evaluation of high quality (comprehensive and credible measurement of costs and health outcomes, sufficient time period covered, discounting and sensitivity testing) conducted in a setting similar to the decision-making context
Level 5 – Synthesis/review of economic evaluations of moderate and/or poor quality (insufficient coverage of costs and health effects, no discounting, no sensitivity testing, insufficient time period covered)
Level 6 – Single economic evaluation of moderate or poor quality (see the Level 5 description of such studies)
Level 7 – Expert opinion on the incremental cost-effectiveness of intervention and comparator
Levels of Evidence - Meaningfulness
Level 1 – Qualitative or mixed-methods systematic review
Level 2 – Qualitative or mixed-methods synthesis
Level 3 – Single qualitative study
Level 4 – Systematic review of expert opinion
Level 5 – Expert opinion

History of Levels of Evidence

Until 2003, the Institute used the levels of evidence specified in the NHMRC 1995 Guidelines for the Development and Implementation of Clinical Practice Guidelines. From 2003 to 2004, it used the levels of evidence from the Australian National Health & Medical Research Council's Development, Implementation and Evaluation for Clinical Practice Guidelines, published in 1999. These levels assess the validity of recommendations for clinical guidelines and focus, understandably, on the effectiveness of treatment. As The Joanna Briggs Institute has a broader definition of what constitutes evidence, a more inclusive approach to the development and grading of levels of evidence and implications for practice was later developed. From 2003 to 2014, the levels of evidence below, structured according to the FAME approach, were used.


Levels of Evidence (2003–2014)

Level 1
  •  Feasibility F(1-4): Metasynthesis of research with unequivocal synthesised findings
  •  Appropriateness A(1-4): Metasynthesis of research with unequivocal synthesised findings
  •  Meaningfulness M(1-4): Metasynthesis of research with unequivocal synthesised findings
  •  Effectiveness E(1-4): Meta-analysis (with homogeneity) of experimental studies (e.g. RCT with concealed randomisation), or one or more large experimental studies with narrow confidence intervals
  •  Economic Evidence: Metasynthesis (with homogeneity) of evaluations of important alternative interventions comparing all clinically relevant outcomes against appropriate cost measurement, and including a clinically sensible sensitivity analysis

Level 2
  •  Feasibility F(1-4): Metasynthesis of research with credible synthesised findings
  •  Appropriateness A(1-4): Metasynthesis of research with credible synthesised findings
  •  Meaningfulness M(1-4): Metasynthesis of research with credible synthesised findings
  •  Effectiveness E(1-4): One or more smaller RCTs with wider confidence intervals, or quasi-experimental studies (without randomisation)
  •  Economic Evidence: Evaluations of important alternative interventions comparing all clinically relevant outcomes against appropriate cost measurement, and including a clinically sensible sensitivity analysis

Level 3
  •  Feasibility F(1-4): (a) Metasynthesis of text/opinion with credible synthesised findings; (b) one or more single research studies of high quality
  •  Appropriateness A(1-4): (a) Metasynthesis of text/opinion with credible synthesised findings; (b) one or more single research studies of high quality
  •  Meaningfulness M(1-4): (a) Metasynthesis of text/opinion with credible synthesised findings; (b) one or more single research studies of high quality
  •  Effectiveness E(1-4): (a) Cohort studies (with control group); (b) case-control studies; (c) observational studies (without control group)
  •  Economic Evidence: Evaluations of important alternative interventions comparing a limited number of appropriate cost measurements, without a clinically sensible sensitivity analysis

Level 4
  •  Feasibility F(1-4): Expert opinion
  •  Appropriateness A(1-4): Expert opinion
  •  Meaningfulness M(1-4): Expert opinion
  •  Effectiveness E(1-4): Expert opinion, or physiology bench research, or consensus
  •  Economic Evidence: Expert opinion, or based on economic theory
Grades of Recommendation are used to assist healthcare professionals when implementing evidence into practice.

The Joanna Briggs Institute and collaborating entities currently assign a Grade of Recommendation to all recommendations made in its resources, including Evidence Summaries, Systematic Reviews and Best Practice Information Sheets. These Grades are intended to be used alongside the supporting document outlining their use.


JBI currently uses the following Grades of Recommendation:

The new JBI Levels of Evidence and Grades of Recommendation have been used for all JBI documents since 1 March 2014.

JBI Grades of Recommendation
Grade A  

A ‘strong’ recommendation for a certain health management strategy where:

  1. it is clear that the desirable effects of the strategy outweigh the undesirable effects;
  2. there is evidence of adequate quality supporting its use;
  3. there is a benefit, or no impact, on resource use; and
  4. values, preferences and the patient experience have been taken into account.
Grade B

A ‘weak’ recommendation for a certain health management strategy where:

  1. the desirable effects of the strategy appear to outweigh the undesirable effects, although this is not as clear;
  2. there is evidence supporting its use, although it may not be of high quality;
  3. there is a benefit, no impact or minimal impact on resource use; and
  4. values, preferences and the patient experience may or may not have been taken into account.
The FAME (Feasibility, Appropriateness, Meaningfulness and Effectiveness) scale may help inform the wording and strength of a recommendation.

F – Feasibility; specifically:

  • What is the cost effectiveness of the practice?
  • Is the resource/practice available?
  • Is there sufficient experience/levels of competency available?

A – Appropriateness; specifically:

  • Is it culturally acceptable?
  • Is it transferable/applicable to the majority of the population?
  • Is it easily adaptable to a variety of circumstances?

M – Meaningfulness; specifically:

  • Is it associated with positive experiences?
  • Is it not associated with negative experiences?

E – Effectiveness; specifically:

  • Was there a beneficial effect?
  • Is it safe? (i.e. is there a lack of harm associated with the practice?)

History of Grades of Recommendation

From 2007 to 2014, the following Grades of Recommendation were used. The same grades applied across each of Feasibility F(1-4), Appropriateness A(1-4), Meaningfulness M(1-4) and Effectiveness E(1-4):

Grade A – Strong support that merits application
Grade B – Moderate support that warrants consideration of application
Grade C – Not supported
During 2007, the following Grades of Recommendation were used by the Institute:

Grade A – Strong support that merits application
Grade B – Moderate support that warrants consideration of application
Grade C – Not supported
From 2004 to 2006, the following Grades of Recommendation were developed and used for the Best Practice series:

Grade A
  •  Feasibility F(1-4): Immediately practicable
  •  Appropriateness A(1-4): Ethically acceptable and justifiable
  •  Meaningfulness M(1-4): Provides a strong rationale for practice change
  •  Effectiveness E(1-4): Effectiveness established to a degree that merits application

Grade B
  •  Feasibility F(1-4): Practicable with limited training and/or modest additional resources
  •  Appropriateness A(1-4): Ethical acceptance is unclear
  •  Meaningfulness M(1-4): Provides a moderate rationale for practice change
  •  Effectiveness E(1-4): Effectiveness established to a degree that suggests application

Grade C
  •  Feasibility F(1-4): Practicable with significant additional training and/or resources
  •  Appropriateness A(1-4): Conflicts to some extent with ethical principles
  •  Meaningfulness M(1-4): Provides a limited rationale for practice change
  •  Effectiveness E(1-4): Effectiveness established to a degree that warrants consideration of applying the findings

Grade D
  •  Feasibility F(1-4): Practicable with extensive additional training and/or resources
  •  Appropriateness A(1-4): Conflicts considerably with ethical principles
  •  Meaningfulness M(1-4): Provides minimal rationale for advocating change
  •  Effectiveness E(1-4): Effectiveness not established to a degree that supports application; established to a limited degree only

Grade E
  •  Feasibility F(1-4): Impracticable
  •  Appropriateness A(1-4): Ethically unacceptable
  •  Meaningfulness M(1-4): There is no rationale to support practice change
  •  Effectiveness E(1-4): Effectiveness not established