Development of Guidelines from Research: A Briefing Document

Clifton Chow, Ph.D., Consultant - Anthony Petrosino, Ph.D., WestEd

The move towards evidence-based policy has focused on generating trustworthy evidence upon which to base decisions. For example, a critical component of the evidence-based policy movement has been the support for more rigorous primary studies with strong "internal validity" such as randomized experiments and well-controlled quasi-experiments. Another critical component has been the increased importance of transparent and explicit methods for synthesizing research such as systematic reviews and meta-analyses.

But once the scientific community has generated trustworthy research findings, how can this research be used to generate policy and practice guidance? Processes for doing this in a systematic, defensible, and accessible way have presented a challenge across different disciplines and fields. This briefing summarizes examples from different fields of the development of such processes.

The Importance of Guidelines and Minimizing Bias

The importance of guidelines development was articulated by the U.S. Centers for Disease Control and Prevention (CDC):

Guidelines affect practice, policy, and the allocation of resources on a wide scale. Thus, it is critical that recommendations included in guidelines documents are based on an objective assessment of the best available evidence. Systematic literature reviews and rigorous methods used to rate the quality of evidence can assist in reducing scientific bias by increasing the probability that high‐quality, relevant evidence is considered. However, guideline development involves more than assessing scientific evidence. Developers also use expert opinion to interpret research and offer insights from practice. If not gathered carefully, expert opinion has the potential to bias the evidence synthesis and decision‐making process (see Appendix A for further elaboration by the CDC on selecting work group experts and the consensus approach).


Identifying Guidelines Development Documents

To rapidly identify some different approaches, we solicited information from over 50 colleagues across the fields of education, justice, public health, psychology, and sociology. We received 15 documents and several electronic exchanges from these colleagues identifying how research evidence has been used to create public policy and practice guidelines.  The approaches identified are common ones, but no systematic search of the literature was undertaken to identify all relevant approaches. Of the 15 documents,i seven provided sufficient information to understand the framework proposed for moving from research to policy recommendations.  We begin with a discussion of six prominent guidelines creation documents.

Six Approaches to Guidelines Development

In this section, we summarize six approaches to creating guidelines: (1) the National Academies Study Process; (2) the Institute of Education Sciences Practice Guides; (3) the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach; (4) the British Columbia Handbook on Developing Guidelines and Protocols; (5) International Standards for Clinical Policy Guidelines; and (6) the National Institute for Health and Care Excellence (NICE) guidelines process.

Social Sciences

National Academies Study Process    

National Academies reports are often viewed as being valuable and credible because of checks and balances that are applied at every step in the study process to protect the integrity of the reports and to maintain public confidence in them. Experts are selected to serve on a pro bono basis (travel expenses are reimbursed) on committees to produce study reports. The Academies provide independent advice; external sponsors have no control over the conduct of a study once the statement of task and budget are finalized. Study committees gather information from many sources in public meetings but they carry out their deliberations in private in order to avoid political, special interest, and sponsor influence.

The process for producing a study report involves the following:

Defining the study.  Academies' staff and members of their boards work with sponsors to determine the specific set of questions to be addressed by the study in a formal "statement of task," as well as the duration and cost of the study. The statement of task defines and bounds the scope of the study, and it serves as the basis for determining the expertise and the balance of perspectives needed on the committee.

Committee Selection and Approval. All committee members serve as individual experts, not as representatives of organizations or interest groups. Each member is expected to contribute to the project on the basis of his or her own expertise and good judgment. A committee is not finally approved until a thorough balance and conflict-of-interest discussion is held at the first meeting, and any issues raised in that discussion or by the public are investigated and addressed. Full details on how committee members are selected can be found in Appendix B.

Committee Meetings, Information Gathering, Deliberations, and Drafting the Report.  Study committees typically gather information through: 1) meetings that are open to the public and that are announced in advance through the Academies' website; 2) the submission of information by outside parties; 3) reviews of the scientific literature; and 4) the investigations of the committee members and staff. In all cases, efforts are made to solicit input from individuals who have been directly involved in, or who have special knowledge of, the problem under consideration.

Report Review. As a final check on the quality and objectivity of the study, all Academies reports, whether products of studies, summaries of workshop proceedings, or other documents, must undergo a rigorous, independent external review by experts whose comments are provided anonymously to the committee members. The Academies recruit independent experts with a range of views and perspectives to review and comment on the draft report prepared by the committee. The review process is structured to ensure that each report addresses its approved study charge and does not go beyond it, that the findings are supported by the scientific evidence and arguments presented, that the exposition and organization are effective, and that the report is impartial and objective.ii

Education

Institute of Education Sciences Practice Guides

A panel composed of research experts, policymakers, and practitioners is convened (generally around five persons). The panel works with staff from the IES What Works Clearinghouse (WWC) to cull through the research evidence. The WWC standards divide studies into those that meet minimum standards (either randomized controlled trials or quasi-experiments with strong evidence of group equivalence at baseline) and those that do not. The studies are ranked according to these WWC evidence standards. The panel meets several times (sometimes over several days) to weigh the evidence and structure recommendations. Each recommendation carries a weak-to-strong qualifier that signals the extent to which its conclusions rest on studies meeting WWC evidence standards. When a panel cannot reach consensus, WWC management becomes involved to resolve the issue and move the process forward.
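
As a rough illustration of the screening step described above, the sketch below (in Python) shows one way studies might be sorted into those meeting and not meeting minimum design standards. The class and field names are hypothetical; this is not the WWC's actual review logic, which applies far more detailed criteria.

```python
# Illustrative sketch only: a simplified stand-in for a WWC-style design screen.
from dataclasses import dataclass

@dataclass
class Study:
    design: str                 # e.g., "rct" or "quasi-experiment" (assumed labels)
    baseline_equivalence: bool  # strong evidence of group equivalence at baseline?

def meets_minimum_standards(study: Study) -> bool:
    """Return True if the study design clears the assumed minimum evidence bar."""
    if study.design == "rct":
        return True
    if study.design == "quasi-experiment" and study.baseline_equivalence:
        return True
    return False

studies = [
    Study("rct", baseline_equivalence=True),
    Study("quasi-experiment", baseline_equivalence=False),
]
eligible = [s for s in studies if meets_minimum_standards(s)]
print(f"{len(eligible)} of {len(studies)} studies meet minimum standards")
```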

Health Care

Grading of Recommendations Assessment, Development and Evaluation (GRADE)

In 2000, a working group began as an informal collaboration of people interested in addressing the shortcomings of grading evidence in health care. The working group developed an approach to grading the quality of evidence and the strength of recommendations. Many international organizations have provided input into the development of the approach and have started using it (e.g., the World Health Organization, the Cochrane Collaboration, and more than 25 other organizations).  GRADE was not developed to generate specific guidelines; rather, it is a process that any organization can use to create its own set of recommendations and standards. There are now dozens of articles on the GRADE process, and this briefing discusses only the overarching framework.

The first consideration in GRADE is a determination of whether the scientific evidence is of high quality (i.e., the evidence indicates that the chances of desirable effects outweigh the chances of adverse effects). The second consideration is to use this scientific evidence to produce a simple, transparent rating of the strength of each recommendation (e.g., “strong” or “weak”).

The GRADE approach can be summarized as follows (a schematic sketch in code follows the list below):

  • The overall quality of evidence should be assessed for each important outcome and expressed using four categories (high, moderate, low, very low) or, if justified, three (high, moderate, and low, with very low and low combined into low). These are defined as follows:
    • High quality— Further research is very unlikely to change our confidence in the estimate of effect.
    • Moderate quality— Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
    • Low quality— Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
    • Very low quality— Any estimate of effect is very uncertain.
  • Evidence summaries (narrative or in table format) should be used as the basis for judgments about the quality of evidence and the strength of recommendations. Ideally, these should be based on systematic reviews. At a minimum, the evidence that was assessed and the methods that were used to identify and appraise that evidence should be clearly described.
  • Explicit consideration should be given to each of the GRADE criteria for assessing the strength of a recommendation (the balance of desirable and undesirable consequences, quality of evidence, values and preferences, and resource use) and a general approach should be reported (e.g. if and how costs were considered, whose values and preferences were assumed, etc.).
  • The strength of recommendations should be expressed using two categories (weak/conditional and strong) for or against a treatment option and the definitions for each category should be consistent with those used by GRADE. Different terminology to express weak/conditional and strong recommendations may be used, although the interpretation and implications should be preserved. These decisions should be explicitly reported.
    • Strong: Based on the available evidence, if clinicians are very certain that benefits do, or do not, outweigh risks and burdens, they will make a strong recommendation.
    • Weak: Based on the available evidence, if clinicians believe that benefits, risks, and burdens are finely balanced, or that appreciable uncertainty exists about the magnitude of benefits and risks, they offer a weak recommendation. In addition, clinicians are becoming increasingly aware of the importance of patient values and preferences in clinical decision making. When, across the range of patient values, fully informed patients are liable to make different choices, guideline panels should offer weak recommendations.
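
The framework above can be read as a small data model. The sketch below is purely illustrative: it assumes a single outcome and reduces the strength decision to two inputs, whereas actual GRADE judgments weigh the full set of criteria (balance of consequences, quality of evidence, values and preferences, and resource use).

```python
# Minimal sketch of the GRADE categories as a data model; not the GRADE algorithm itself.
from enum import Enum

class EvidenceQuality(Enum):
    HIGH = "further research very unlikely to change confidence in the estimate"
    MODERATE = "further research likely to affect confidence; may change the estimate"
    LOW = "further research very likely to affect confidence; likely to change the estimate"
    VERY_LOW = "any estimate of effect is very uncertain"

class RecommendationStrength(Enum):
    STRONG = "benefits clearly do (or do not) outweigh risks and burdens"
    WEAK = "benefits and risks finely balanced, or magnitude appreciably uncertain"

def suggest_strength(quality: EvidenceQuality,
                     benefits_clearly_outweigh_harms: bool) -> RecommendationStrength:
    """Hypothetical helper: real panels weigh more than these two inputs."""
    if benefits_clearly_outweigh_harms and quality in (EvidenceQuality.HIGH, EvidenceQuality.MODERATE):
        return RecommendationStrength.STRONG
    return RecommendationStrength.WEAK

print(suggest_strength(EvidenceQuality.MODERATE, benefits_clearly_outweigh_harms=True).name)
```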

British Columbia Handbook on Developing Guidelines and Protocols

The Guidelines and Protocols Advisory Committee (GPAC) is charged with developing clinical policy guidelines and recommendations for British Columbia's Ministry of Health.  Its process for judging the state of the evidence begins by identifying evidence from meta-analyses or other quantitative systematic reviews.  If systematic reviews on the topic are not yet available, the Committee conducts its own literature searches for individual studies, preferably randomized controlled trials (RCTs).  If this evidence is also unavailable, recommendations are based on the "best available" evidence.
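
The evidence hierarchy in that paragraph amounts to a simple fallback rule, sketched below with hypothetical inputs. It illustrates the order of preference only; it is not GPAC's actual procedure.

```python
# Illustrative only: prefer systematic reviews, then RCTs, then best available evidence.
def select_evidence_base(systematic_reviews, rcts, other_studies):
    """Return the evidence tier used and the studies it rests on."""
    if systematic_reviews:
        return ("systematic reviews / meta-analyses", systematic_reviews)
    if rcts:
        return ("individual randomized controlled trials", rcts)
    return ("best available evidence", other_studies)

label, evidence = select_evidence_base(systematic_reviews=[], rcts=["trial A"], other_studies=["cohort B"])
print(f"Recommendations based on: {label} ({len(evidence)} studies)")
```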

One feature of GPAC is the way it assigns expert work groups.  A separate work group is formed for each clinical guideline and is composed of general practitioners, relevant medical specialists, and often a pharmacist from the Ministry of Health.  The work group is overseen by a research officer (who may or may not be the chair).  A work group may be disbanded after an initial guideline has been developed, with a new work group formed for subsequent revisions. This rotation addresses the problem of expert or “scientific bias” noted by many observers: the tendency for guidelines to rest overwhelmingly on the opinions of scientific experts to the exclusion of input from practitioners.

International Standards for Clinical Policy Guidelines

The American College of Physicians (ACP) outlined several problems in developing clinical guidelines, including variations in quality, limitations of systematic reviews, and a lack of transparency and adequate documentation of methods.  To address these shortcomings, the ACP created a set of recommendations for guideline creation and advocated for the use of a panel to form recommendations from research (i.e., the panel should include diverse and relevant stakeholders, such as health professionals, methodologists, topic experts, and patients).  The ACP also presented a set of guiding principles for the creation of guidelines, including:

1. A guideline should describe the process used to reach consensus among the panel members and, if applicable, approval by the sponsoring organization. This process should be established before the start of guideline development.
2. A guideline should clearly describe the methods used for the guideline development in detail.
3. Guideline developers should use systematic evidence review methods to identify and evaluate evidence related to the guideline topic.
4. A guideline recommendation should be clearly stated and based on scientific evidence of benefits; harms; and, if possible, costs.
5. A guideline should use a rating system to communicate the quality and reliability of both the evidence and the strength of its recommendations.
6. A guideline should include an expiration date and/or describe the process that the guideline groups will use to update recommendations.
7. A guideline should disclose financial support for the development of both the evidence review and the guideline recommendations.

The National Institute for Health and Care Excellence (NICE)

The National Institute for Health and Care Excellence (NICE) in the UK makes evidence-based recommendations on a wide range of topics: preventing and managing specific conditions, improving health, managing medicines in different settings, providing social care and support to adults and children, and planning broader services and interventions to improve the health of English communities. NICE promotes both individualized care and integrated care (for example, by covering transitions between children's and adult services and between health and social services).

NICE guidance is based on the best available evidence of what works and what it costs, and it is developed by committees of experts. NICE uses both scientific and other types of evidence from “multiple sources, extracted for different purposes and through different methods… within an ethical and theoretical framework.” Evidence is classified into:

Scientific evidence: defined as “explicit (codified and propositional), systemic (uses transparent and explicit methods for codifying), and replicable (using the same methods with the same samples will lead to the same results). It can be context-free (applicable generally) or context-sensitive (driven by geography, time and situation).”

Colloquial evidence: essentially derived from expert testimony and stakeholder opinion, and necessarily value-driven and subjective.

The evidence is then debated by a committee, and the guidance is developed and agreed upon.  One feature of NICE is that clinical evidence is augmented by economic evidence in forming judgments for guidelines.  There are many documents on NICE, including an extensive manual. Appendix C summarizes the core features of the NICE guideline creation process.

General Learnings from the Six Approaches

Table 1 provides a summary of some characteristics of the six approaches described here. These characteristics include the field/discipline in which the guidelines were developed, whether a deliberating body was used to develop the guidelines, and whether the evidence and the strength of the recommendation were rated.

Table 1. Characteristics of the six approaches

| Approach | Field/Discipline | Type of Deliberating Body | Rating of Evidence? | Strength of Recommendation? |
| National Academies Study Reports | Sciences, more broadly | Committees of experts | No | No |
| Institute of Education Sciences Practice Guides | Education | 5-person panel | Yes | Yes |
| GRADE | Health care | Organizations | Yes | Yes |
| British Columbia Handbook | Health care | Work group | Yes | Yes |
| American College of Physicians | Health care | Panel | Yes | Yes |
| UK National Institute for Health and Care Excellence (NICE) | Health care | Committees of experts | Yes | Yes |

These six approaches suggest some overarching characteristics to be considered when developing guidelines:

  1. A transparent and explicit process for developing guidelines from research and other evidence is optimal.
  2. Because research and other types of evidence vary in quality, a rating is needed. This does not mean that guidelines cannot be developed when a solid research base does not exist, but the strength of the evidence supporting the guidelines should be made explicit.
    • In several of these guidelines development approaches, systematic reviews of existing evidence are prioritized as the "best evidence" to consider.
  3. To ensure that the guidelines are relevant and believable to the practice community, an expert input process that includes non-researchers (practitioners and policymakers) should be part of the guidelines development process.
    • Several documents recommend that steps be taken to minimize potential conflicts of interest and biases of consensus or expert panels.

Conclusion

This briefing summarized examples from different disciplines and fields of processes for translating trustworthy research findings into policy and practice guidelines.  Drawing on electronic exchanges with colleagues and the documents they provided, we outlined six approaches to using research to create recommendations for public policy and practice.  All of these approaches rest on a transparent process at every stage, from the formation of deliberating bodies that are diverse in expertise to the discussion of the nature of the evidence and the judgments placed on its internal validity.  Studies relevant to the topic are not excluded simply because they are not randomized controlled trials.  Care is also taken to ensure that panel members are unbiased, including by rotating team members.  This care, together with the flexibility to include a variety of evidence, helps ensure that the resulting policies and guidelines are both trustworthy and practical.

Appendix A. U.S. Centers for Disease Control and Prevention Advice for Developing Guidelines

Selecting Work Group Experts

Scientific bias may enter into guideline development when important scientific perspectives are not adequately represented. Guideline developers should select work group members in such a way that all relevant disciplines and perspectives are included and that members of both the science and practice perspectives are represented. Having a multidisciplinary work group can help ensure the evidence is reviewed and interpreted by individuals with varying values, preferences, and perspectives and that the resulting recommendations are balanced.

Using Consensus Development Methods

Scientific bias may also arise when the opinions of work group experts are not adequately represented. The work group members may have differences in professional status or scientific knowledge. Some work group members dominate discussions more than others. Because of these differences and other social processes that emerge in group decision making, ensuring that information is shared and opinions are adequately represented can be challenging. Consensus development methods can help ensure that all expert perspectives are shared and that bias is counterbalanced. Consensus methods that might be considered include the Delphi method, the Nominal Group process, and the Glaser approach. These methods structure group interaction in ways that bring consensus on recommendation statements; for example, by using an iterative process to solicit views through questionnaires, note cards, or written documents, reflect views back to work group members systematically, and formulate final written recommendations. Regardless of the method used, systematic ways of gathering expert opinion, views, and preferences for recommendations can help to reduce bias.
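
As a rough illustration of the iterative structure of a Delphi-style process described above, the sketch below shows one way rounds of anonymous ratings might be summarized and fed back until they converge. The ratings, the summary statistics, and the convergence rule are all assumptions for illustration; actual consensus methods are facilitated processes, not a fixed algorithm.

```python
# Hypothetical sketch of a Delphi-style consensus loop: solicit ratings,
# feed the group summary back, and repeat until ratings converge.
import statistics

def delphi_round(ratings):
    """Return the group median and spread to feed back to panelists."""
    return statistics.median(ratings), max(ratings) - min(ratings)

rounds = [
    [3, 7, 8, 4, 6],   # round 1: initial anonymous ratings of a draft recommendation
    [5, 6, 7, 5, 6],   # round 2: revised ratings after seeing the group summary
]
for i, ratings in enumerate(rounds, start=1):
    median, spread = delphi_round(ratings)
    print(f"Round {i}: median agreement {median}, spread {spread}")
    if spread <= 2:    # assumed convergence criterion
        print("Consensus reached; formulate final written recommendation.")
        break
```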

Appendix B. National Academies Committee Selection Criteria

  • An appropriate range of expertise for the task. The committee must include experts with the specific expertise and experience needed to address the study's statement of task. One of the strengths of the Academies is the tradition of bringing together recognized experts from diverse disciplines and backgrounds who might not otherwise collaborate. These diverse groups are encouraged to conceive new ways of thinking about a problem.
  • A balance of perspectives. Having the right expertise is not sufficient for success. It is also essential to evaluate the overall composition of the committee in terms of different experiences and perspectives. The goal is to ensure that the relevant points of view are, in the Academies' judgment, reasonably balanced so that the committee can carry out its charge objectively and credibly.
  • Screened for conflicts of interest. All provisional committee members are screened in writing and in a confidential group discussion about possible conflicts of interest. For this purpose, a "conflict of interest" means any financial or other interest which conflicts with the service of the individual because it could significantly impair the individual's objectivity or could create an unfair competitive advantage for any person or organization. The term "conflict of interest" means something more than individual bias. There must be an interest, ordinarily financial, that could be directly affected by the work of the committee. Except for those rare situations in which the Academies determine that a conflict of interest is unavoidable and promptly and publicly disclose the conflict of interest, no individual can be appointed to serve (or continue to serve) on a committee of the institution used in the development of reports if the individual has a conflict of interest that is relevant to the functions to be performed.
  • Point of View is different from Conflict of Interest. A point of view or bias is not necessarily a conflict of interest. Committee members are expected to have points of view, and the Academies attempt to balance these points of view in a way deemed appropriate for the task. Committee members are asked to consider respectfully the viewpoints of other members, to reflect their own views rather than be a representative of any organization, and to base their scientific findings and conclusions on the evidence. Each committee member has the right to issue a dissenting opinion to the report if he or she disagrees with the consensus of the other members.
  • Other considerations. Membership in the NAS, NAE, or NAM and previous involvement in Academies studies are taken into account in committee selection. The inclusion of women, minorities, and young professionals is an additional consideration.

Specific steps in the committee selection and approval process are as follows:

  • Staff solicit an extensive number of suggestions for potential committee members from a wide range of sources, then recommend a slate of nominees.
  • Nominees are reviewed and approved at several levels within the Academies; a provisional slate is then approved by the president of the National Academy of Sciences, who is also the chair of the National Research Council.
  • The provisional committee list is posted for public comment in the Current Projects System on the Web.
  • The provisional committee members complete background information and conflict-of-interest disclosure forms.
  • The committee balance and conflict-of-interest discussion is held at the first committee meeting.
  • Any conflicts of interest or issues of committee balance and expertise are investigated; changes to the committee are proposed and finalized.
  • Committee is formally approved.
  • Committee members continue to be screened for conflict of interest throughout the life of the committee.

Appendix C. Relevant Features of NICE

I. Committee Membership

In terms of participation in committees, NICE also differs from other panels in that it includes lay members and the public at large.  Lay members are defined as those with personal experience of using health or care services, or from a community affected by an established or soon-to-be-considered guideline. In developing the guidelines, the Committee is the independent advisory group that considers the evidence and develops the recommendations, taking into account the views of stakeholders. It may be a standing Committee working on many guideline topics, or a topic-specific Committee assembled to work on a single guideline.  NICE also advocates flexibility in calling for participation in the Committee.  If needed for a topic, the Committee can co-opt members with specific expertise to contribute to developing some of the recommendations.  For example, members with experience of integrating delivery of services across service areas may be recruited, particularly where the development of a guideline requires more flexibility than “conventional organisational boundaries” permit.  If the guideline contains recommendations about services, NICE can call upon individuals with a commissioning or provider background in addition to members from practitioner networks or local authorities.

II. Evidence    

The NICE approach to evaluating clinical evidence differs from other approaches. In addition to clinical evidence, the committee is expected to take other factors into account, such as the need to prevent discrimination and to promote equity. Similarly, NICE recognizes that not all clinical research could or should result in implementation; NICE therefore adds an indication of whether a procedure should only be tested in further research or should be put forward for implementation.  A factor that might prevent research from being implemented in practice is evidence that the committee considers insufficient at the current time. A 'research only' recommendation is made if the evidence shows important uncertainties that may be resolved with additional evidence (presumably from clinical trials or real-world settings). The evidence may also indicate that an intervention is unsafe and/or not efficacious, in which case the committee will recommend against using the procedure.
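
The recommendation categories described above can be summarized as a simple decision sketch. The boolean inputs are hypothetical simplifications of what is, in practice, a committee judgment weighing much more context.

```python
# Illustrative decision sketch for the NICE-style categories described above.
def nice_style_recommendation(unsafe_or_not_efficacious: bool,
                              evidence_sufficient: bool,
                              uncertainties_resolvable_by_research: bool) -> str:
    if unsafe_or_not_efficacious:
        return "recommend against use"
    if evidence_sufficient:
        return "put forward for implementation"
    if uncertainties_resolvable_by_research:
        return "recommend for use in research only"
    return "no recommendation (committee judgment required)"  # case not covered above

print(nice_style_recommendation(unsafe_or_not_efficacious=False,
                                evidence_sufficient=False,
                                uncertainties_resolvable_by_research=True))
```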

III. Economic Evidence

An important feature of the NICE framework is its use of economic evidence in guidelines development.  There are two primary considerations in drawing conclusions from economic studies of a given intervention.  The first is that the methodology must be sufficiently strong to avoid the possibility of double-counting costs or benefits.  NICE recommends that the way consequences are implicitly weighted be recorded openly, transparently, and as accurately as possible. Cost-consequences analysis then requires the decision-maker to decide which interventions represent the best value, using a systematic and transparent process.  A related requirement is that an incremental cost-effectiveness ratio (ICER) threshold be used whenever possible and that interventions with an estimated negative net present value (NPV) not be recommended unless social values outweigh costs.
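
For readers unfamiliar with the ICER, the sketch below shows the arithmetic of the incremental comparison described above: the difference in cost divided by the difference in benefit, compared against a willingness-to-pay threshold. The costs, effects, and threshold are invented for illustration and do not reflect NICE's actual appraisal inputs.

```python
# Worked sketch of an incremental cost-effectiveness ratio (ICER) check.
def icer(cost_new: float, cost_old: float, effect_new: float, effect_old: float) -> float:
    """ICER = incremental cost / incremental benefit (e.g., cost per unit of health gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

threshold = 20_000.0  # assumed willingness-to-pay per unit of benefit (illustrative only)
ratio = icer(cost_new=12_000, cost_old=7_000, effect_new=1.5, effect_old=1.2)
print(f"ICER = {ratio:,.0f} per unit of benefit gained")
print("within threshold" if ratio <= threshold else "exceeds threshold")
```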

The second consideration NICE puts forward on using economic evidence in translating research to clinical practice and policy concerns cost-minimization procedures.  The committee takes care to avoid blindly choosing the lowest-cost intervention by stipulating that cost minimization can be used only when the difference in benefits between an intervention and its comparator is known to be small and the cost difference is large.  Given these criteria, NICE believes that cost-minimisation analysis is applicable in only a relatively small number of cases.

In sum, economic evidence estimating the value of an intervention should be considered alongside clinical evidence, but judgments about social values (policy) should also be taken into account to avoid choosing an intervention merely because it is offered at the lowest cost.

IV. Producing Guidelines from Evidence

The final step in translating research evidence into practice and policy guidelines is drafting recommendations.  Because many people read only the recommendations, the wording must be concise, unambiguous, and easy for the intended audience to translate into practice. As a general rule, the committee recommends that each recommendation, or bullet point within a recommendation, contain only one primary action and be as accessible as possible to a wide audience.

An important guideline explicitly stated by NICE is to indicate levels of uncertainty in the evidence.  It is the only institution among those reviewed to have created a "Research recommendations process and methods guide," which details the approach used to identify key uncertainties and associated research recommendations.  In considering which research, interventions, or evidence to put forward for recommendation, the committee established guidelines that include three levels of certainty:

1. Recommendations for activities or interventions that should (or should not) be used
2. Recommendations for activities or interventions that could be used
3. Recommendations for activities or interventions that must (or must not) be used.

Bibliography

  • CDC (2012).  Reducing Scientific Bias from Expert Opinion in Guidelines and Recommendations Development.
  • GRADE Working Group (2015).  From Evidence to Recommendations:  Transparent and Sensible.  Retrieved from:  http://www.gradeworkinggroup.org/.
  • Guyatt, G.H., et al. (2008).  GRADE: An Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations.  BMJ, 336.
  • Ministry of Health of British Columbia (2014).  Guidelines and Protocols Advisory Committee Handbook: Developing Clinical Practice Guidelines and Protocols for British Columbia.
  • National Academies of Sciences, Engineering & Medicine (2015).  Our Study Process: Ensuring Independent, Objective Advice.  Washington, D.C.  Retrieved from: http://www.nationalacademies.org/studyprocess/.
  • Qaseem, A.  Towards International Standards for Clinical Practice Guidelines.  Presentation delivered by the Chair of the Guidelines International Network, a Scottish charity.
  • UK National Institute for Health and Care Excellence (2015).  The Manual on Developing NICE Guidelines.
  • Institute of Education Sciences (2015).  What Works Clearinghouse.  Retrieved from:  http://ies.ed.gov/ncee/wwc/Publications_Reviews.aspx?f=All%20Publication%20and%20Product%20Types,3.

Clifton Chow, Ph.D., Consultant - Anthony Petrosino, Ph.D., WestEd

The move towards evidence-based policy has focused on generating trustworthy evidence upon which to base decisions. For example, a critical component of the evidence-based policy movement has been the support for more rigorous primary studies with strong "internal validity" such as randomized experiments and well-controlled quasi-experiments. Another critical component has been the increased importance of transparent and explicit methods for synthesizing research such as systematic reviews and meta-analyses.

But once the scientific community has generated trustworthy research findings, how can this research be used to generate policy and practice guidance? Processes for doing this in a systematic, defensible and accessible way has presented a challenge across different disciplines and fields. This briefing summarizes some of examples from different fields on the development of such processes.

The Importance of Guidelines and Minimizing Bias

The importance of guidelines development was articulated by the U.S. Centers for Disease Control:

Guidelines affect practice, policy, and the allocation of resources on a wide scale. Thus, it is critical that recommendations included in guidelines documents are based on an objective assessment of the best available evidence. Systematic literature reviews and rigorous methods used to rate the quality of evidence can assist in reducing scientific bias by increasing the probability that high‐quality, relevant evidence is considered. However, guideline development involves more than assessing scientific evidence. Developers also use expert opinion to interpret research and offer insights from practice. If not gathered carefully, expert opinion has the potential to bias the evidence synthesis and decision‐making process (see Appendix A for further elaboration by the CDC on selecting work group experts and the consensus approach).


Identifying Guidelines Development Documents

To rapidly identify some different approaches, we solicited information from over 50 colleagues across fields of education, justice, public health, psychology, and sociology. We received 15 documents and several electronic exchanges from these colleagues to identify how research evidence has been used to create public policy and practice guidelines.  These are common approaches, but no systematic search of the literature was undertaken to identify all relevant approaches. Of the 15 documents,i seven provided sufficient information to understand the framework proposed to move from research to policy recommendation.  We begin with a discussion of six prominent guidelines creation documents.

Six Approaches to Guidelines Development

In this section, we summarize six approaches to creating guidelines: (1) The National Academies Study Process; (2) The Institute for Education Sciences Practice Guide; (3) Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations (GRADE); (4) British Columbia Handbook on Developing Guidelines and Protocols; (5) International Standards for Clinical Policy Guidelines; and (6) The National Institute for Health and Care Excellence (NICE) Guidelines process.

Social Sciences

National Academies Study Process    

National Academies reports are often viewed as being valuable and credible because of checks and balances that are applied at every step in the study process to protect the integrity of the reports and to maintain public confidence in them. Experts are selected to serve on a pro bono basis (travel expenses are reimbursed) on committees to produce study reports. The Academies provide independent advice; external sponsors have no control over the conduct of a study once the statement of task and budget are finalized. Study committees gather information from many sources in public meetings but they carry out their deliberations in private in order to avoid political, special interest, and sponsor influence.

The process for producing a study report involves the following:

Defining the study.  Academies' staff and members of their boards work with sponsors to determine the specific set of questions to be addressed by the study in a formal "statement of task," as well as the duration and cost of the study. The statement of task defines and bounds the scope of the study, and it serves as the basis for determining the expertise and the balance of perspectives needed on the committee.

Committee Selection and Approval. All committee members serve as individual experts, not as representatives of organizations or interest groups. Each member is expected to contribute to the project on the basis of his or her own expertise and good judgment. A committee is not finally approved until a thorough balance and conflict-of-interest discussion is held at the first meeting, and any issues raised in that discussion or by the public are investigated and addressed. Full details on how committee members are selected can be found in Appendix B.

Committee Meetings, Information Gathering, Deliberations, and Drafting the Report.  Study committees typically gather information through: 1) meetings that are open to the public and that are announced in advance through the Academies' website; 2) the submission of information by outside parties; 3) reviews of the scientific literature; and 4) the investigations of the committee members and staff. In all cases, efforts are made to solicit input from individuals who have been directly involved in, or who have special knowledge of, the problem under consideration.

Report Review. As a final check on the quality and objectivity of the study, all Academies reports whether products of studies, summaries of workshop proceedings, or other documents must undergo a rigorous, independent external review by experts whose comments are provided anonymously to the committee members. The Academies recruit independent experts with a range of views and perspectives to review and comment on the draft report prepared by the committee. The review process is structured to ensure that each report addresses its approved study charge and does not go beyond it, that the findings are supported by the scientific evidence and arguments presented, that the exposition and organization are effective, and that the report is impartial and objective.ii

Education

Institute for Education Sciences Practice Guides

A panel comprised of research experts, policymakers and practitioners is convened (generally around 5 persons). The panel works with staff from the IES What Works Clearinghouse (WWC) to cull through the research evidence. The WWC standards divide studies into those that meet minimum standards (either randomized controlled trials or quasi-experiments with strong evidence of group equivalence at baseline). All other studies are considered to not meet minimum standards. The studies are ranked according to these WWC evidence standards. The panel meets several times (sometimes over several days) to weigh the evidence and structure recommendations. Each recommendation comes with a weak-to-strong qualifier that signals the extent to which conclusions were based on WWC-acceptable standards, or those that did not. When panels are not able to come to consensus, WWC management gets involved to determine resolutions and to push the process forward.

Health Care

Grading of Recommendations Assessment, Development and Evaluation (GRADE)

In 2000, a working group began as an informal collaboration of people with an interest in addressing the shortcomings of grading evidence in health care. The working group developed an approach to grading quality of evidence and strength of recommendations. Many international organizations have provided input into the development of the approach and have started using it (e.g., the World Health Organization, the Cochrane Collaboration and more than 25 other organizations).  GRADE was not developed for generating specific guidelines but is a process that was developed so that any organization can use it to create its own set of recommendations and standards. There are now literally dozens of articles on the GRADE process, and this briefing only discusses the overarching framework.

The first consideration in GRADE is a determination of whether the scientific evidence is of high quality (i.e., the evidence indicates that the chances for desirable effects outweighs the chances for adverse effects). The second consideration is to use this scientific evidence to produce a simple, transparent rating of the strength of the evidence supporting each recommendation (e.g., “strong” or “weak”).  

The GRADE approach can be summarized as follows:

  • The overall quality of evidence should be assessed for each important outcome and expressed using four (e.g. high, moderate, low, very low) or, if justified, three (e.g. high, moderate, and very low and low combined into low) categories. These are defined as follows:
    • High quality— Further research is very unlikely to change our confidence in the estimate of effect
    • Moderate quality— Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate
    • Low quality— Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
    • Very low quality— Any estimate of effect is very uncertain
  • Evidence summaries (narrative or in table format) should be used as the basis for judgments about the quality of evidence and the strength of recommendations. Ideally, these should be based on systematic reviews. At a minimum, the evidence that was assessed and the methods that were used to identify and appraise that evidence should be clearly described.
  • Explicit consideration should be given to each of the GRADE criteria for assessing the strength of a recommendation (the balance of desirable and undesirable consequences, quality of evidence, values and preferences, and resource use) and a general approach should be reported (e.g. if and how costs were considered, whose values and preferences were assumed, etc.).
  • The strength of recommendations should be expressed using two categories (weak/conditional and strong) for or against a treatment option and the definitions for each category should be consistent with those used by GRADE. Different terminology to express weak/conditional and strong recommendations may be used, although the interpretation and implications should be preserved. These decisions should be explicitly reported.
  • Strong: Based on the available evidence, if clinicians are very certain that benefits do, or do not, outweigh risks and burdens they will make a strong recommendation.
  • Weak: Based on the available evidence, if clinicians believe that benefits and risks and burdens are finely balanced, or appreciable uncertainty exists about the magnitude of benefits and risks, they must offer a weak recommendation. In addition, clinicians are becoming increasingly aware of the importance of patient values and preferences in clinical decision making. When, across the range of patient values, fully informed patients are liable to make different choices, guideline panels should offer weak recommendations.

British Columbia Handbook on Developing Guidelines and Protocols

The Canadian Guidelines and Protocols Advisory Committee (GPAC) is charged with developing clinical policy guidelines and recommendations for the British Columbia Province’s Ministry of Health.  Their process in judging the state of evidence involves identifying the evidence from meta-analyses or other quantitative systematic reviews.  If the systematic reviews on the topic are not yet available, the Committee conducts its own literature searches for individual studies, preferably randomized controlled trials (RCTs).  If this evidence is also unavailable, recommendations are based on the "best available" evidence.  

One feature of GPAC is the way it assigns expert work groups.  A separate workgroup is formed for a specific clinical guideline and is composed of general practitioners, relevant medical specialists and often a pharmacist from the ministry of health.  The workgroup is overseen by a research officer (who may or may not be the chair).  The workgroup could be disbanded after an initial guideline has been developed and a new workgroup formed for subsequent revision. This process addresses the problem of expert or “scientific bias” noted by many observers, the tendency for guidelines to be based overwhelmingly on the opinions of scientific experts to the exclusion of input by practitioners.

International Standards for Clinical Policy Guidelines

The American College of Physicians outlined some problems in developing clinical guidelines that included variations in quality, limitations of systematic reviews, and lack of transparency and adequate documentation of methods.  To address these short-comings, the ACP created a set of recommendations for guideline creation and advocated for a use of a panel to form recommendations from research (i.e. the panel should include diverse and relevant stakeholders, such as health professionals, methodologists, experts on a topic, and patients).  ACP also presented a set of ruling principles for the creation of guidelines including:

1. A guideline should describe the process used to reach consensus among the panel members and, if applicable, approval by the sponsoring organization. This process should be established before the start of guideline development.
2. A guideline should clearly describe the methods used for the guideline development in detail.
3. Guideline developers should use systematic evidence review methods to identify and evaluate evidence related to the guideline topic.
4. A guideline recommendation should be clearly stated and based on scientific evidence of benefits; harms; and, if possible, costs.
5. A guideline should use a rating system to communicate the quality and reliability of both the evidence and the strength of its recommendations.
6. A guideline should include an expiration date and/or describe the process that the guideline groups will use to update recommendations.
7. A guideline should disclose financial support for the development of both the evidence review as well as the guideline recommendations

The National Institute for Health and Care Excellence (NICE)

The National Institute for Health and Care Excellence (NICE) in the UK makes evidence-based recommendations on a wide range of topics, from preventing and managing specific conditions, improving health, managing medicines in different settings, to providing social care and support to adults and children, and planning broader services and interventions to improve the health of English communities. The NICE promotes both individualized care and integrated care (for example, by covering transitions between children's and adult services and between health and social services).

NICE guidance is based on the best available evidence of what works and what it costs--and it is developed by Committees of experts. The NICE uses both scientific and other types of evidence from “multiple sources, extracted for different purposes and through different methods… within an ethical and theoretical framework.” Evidence is classified into:

Scientific evidence: which is defined as “explicit (codified and propositional), systemic (uses transparent and explicit methods for codifying), and replicable (using the same methods with the same samples will lead to the same results).  It can be context-free (applicable generally) or context-sensitive (driven by geography, time and situation)”.

Colloquial evidence: essentially derived from expert testimony, stakeholder opinion and necessarily value driven and subjective.

The evidence is then debated by a committee and the guidance developed and agreed upon.  One feature of NICE is that clinical evidence is augmented by economic evidence in forming judgments for guidelines.  There are many documents on NICE and an extensive manual. Appendix C provides a diagram of the NICE guideline creation process, and a summary of its core features.

General Learnings from the Six Approaches

Table 1 provides a summary of some characteristics of the six approaches described here. These characteristics include the field/discipline in which the guidelines were developed, whether a deliberating body was used to develop the guidelines, and whether the evidence and the strength of the recommendation were rated.

Approach Field/Discipline Type of
Deliberating Body
Rating of
Evidence?
Strength of
Recommendation?
National
Academies Study
Reports
Sciences,
More Broadly
Committees of
Experts
No No
Institute for
Education
Sciences Practice
Guides
Education 5-person Panel Yes Yes
GRADE Health Care Organizations Yes Yes
British Columbia
Handbook
Health Care Work Group Yes Yes
American College
of Physicians
Health Care Panel Yes Yes
UK-National
Institute of
Clinical
Health Care Committees of
Experts
Yes Yes

These six approaches suggest some overarching characteristics to be considered when developing guidelines:

  1. transparent and explicit process for developing guidelines from research and other evidence is optimal.
  2. Because research and other types of evidence varies in terms of quality, a rating is needed. This does not mean that guidelines cannot be developed if a solid research base does not exist, but the strength of the evidence supporting the guidelines should be made explicit.
    • In several of these guidelines developments approaches, systematic reviews of existing evidence are prioritized as "best evidence" to consider.
  3. To ensure that the guidelines are relevant and believable to the practice community, an expert input process that includes non-researchers (practitioners and policymakers) should be part of the guidelines development process.
    • 3a. Several documents recommend that steps be taken to minimize potential conflicts of interest and biases of consensus or expert panels.

Conclusion

This briefing summarized some examples from different disciplines and fields on the development of processes for generating trustworthy research findings into policy and practice guidelines.  From electronic exchanges with colleagues and documents we obtained from them we outlined six approaches on how research has been used to create recommendations for public policy and practice guidelines.   All of these approaches rest on a transparent process at every stage, from the formation of deliberating bodies that are diverse in expertise to the discussion on the nature of the evidence and judgment place on their internal validity.  No study that are relevant to the topic are excluded, even if they were not from randomized-controlled studies.  Care was also given to ensure that panel members are unbias, including rotating team members.  The care taken and the flexibility of including a variety of evidence ensure that policy and guidelines developed can be trusted and practical.

Appendix A. U.S. Centers for Disease Control Advice for Developing Guidelines

Selecting Work Group Experts

Scientific bias may enter into guideline development when important scientific perspectives are not adequately represented. Guideline developers should select work group members in such a way that all relevant disciplines and perspectives are included and that members of both the science and practice perspectives are represented. Having a multidisciplinary work group can help ensure the evidence is reviewed and interpreted by individuals with varying values, preferences, and perspectives and that the resulting recommendations are balanced.

Using Consensus Development Methods

Scientific bias may also arise when the opinions of work group experts are not adequately represented. The work group members may have differences in professional status or scientific knowledge. Some work group members dominate discussions more than others. Because of these differences and other social processes that emerge in group decision making, ensuring that information is shared and opinions are adequately represented can be challenging. Consensus development methods can help ensure that all expert perspectives are shared and that bias is counterbalanced. Consensus methods that might be considered include the Delphi method, the Nominal Group process, and the Glaser approach. These methods structure group interaction in ways that bring consensus on recommendation statements; for example, by using an iterative process to solicit views through questionnaires, note cards, or written documents, reflect views back to work group members systematically, and formulate final written recommendations. Regardless of the method used, systematic ways of gathering expert opinion, views, and preferences for recommendations can help to reduce bias.

Appendix B. National Academies Committee Selection Criteria

  • An appropriate range of expertise for the task. The committee must include experts with the specific expertise and experience needed to address the study's statement of task. One of the strengths of the Academies is the tradition of bringing together recognized experts from diverse disciplines and backgrounds who might not otherwise collaborate. These diverse groups are encouraged to conceive new ways of thinking about a problem.
  • A balance of perspectives. Having the right expertise is not sufficient for success. It is also essential to evaluate the overall composition of the committee in terms of different experiences and perspectives. The goal is to ensure that the relevant points of view are, in the Academies' judgment, reasonably balanced so that the committee can carry out its charge objectively and credibly.
  • Screened for conflicts of interest. All provisional committee members are screened in writing and in a confidential group discussion about possible conflicts of interest. For this purpose, a "conflict of interest" means any financial or other interest which conflicts with the service of the individual because it could significantly impair the individual's objectivity or could create an unfair competitive advantage for any person or organization. The term "conflict of interest" means something more than individual bias. There must be an interest, ordinarily financial, that could be directly affected by the work of the committee. Except for those rare situations in which the Academies determine that a conflict of interest is unavoidable and promptly and publicly disclose the conflict of interest, no individual can be appointed to serve (or continue to serve) on a committee of the institution used in the development of reports if the individual has a conflict of interest that is relevant to the functions to be performed.
  • Point of View is different from Conflict of Interest. A point of view or bias is not necessarily a conflict of interest. Committee members are expected to have points of view, and the Academies attempt to balance these points of view in a way deemed appropriate for the task. Committee members are asked to consider respectfully the viewpoints of other members, to reflect their own views rather than be a representative of any organization, and to base their scientific findings and conclusions on the evidence. Each committee member has the right to issue a dissenting opinion to the report if he or she disagrees with the consensus of the other members.
  • Other considerations. Membership in the NAS, NAE, or NAM and previous involvement in Academies studies are taken into account in committee selection. The inclusion of women, minorities, and young professionals are additional considerations.

Specific steps in the committee selection and approval process are as follows:

  • Staff solicit an extensive number of suggestions for potential committee members from a wide range of sources, then recommend a slate of nominees.
  • Nominees are reviewed and approved at several levels within the Academies; a provisional slate is then approved by the president of the National Academy of Sciences, who is also the chair of the National Research Council.
  • The provisional committee list is posted for public comment in the Current Projects System on the Web.
  • The provisional committee members complete background information and conflict-of-interest disclosure forms.
  • The committee balance and conflict-of-interest discussion is held at the first committee meeting.
  • Any conflicts of interest or issues of committee balance and expertise are investigated; changes to the committee are proposed and finalized.
  • Committee is formally approved.
  • Committee members continue to be screened for conflict of interest throughout the life of the committee.

Relevant features of NICE

I. Committee Membership

In terms of participation in committees, NICE also differs from other panels in that it includes lay members and public at-large.  Lay members are defined as those with personal experience of using health or care services, or from a community affected by an established or soon to be considered guideline. In developing the guidelines, the Committee is the independent advisory group that considers the evidence and develops the recommendations, taking into account the views of stakeholders. It may be a standing Committee working on many guideline topics, or a topic-specific Committee put together to work on a specific guideline.  NICE also advocates flexibility in calling for participation in the Committee.  If needed for a topic, the Committee can co-opt members with specific expertise to contribute to developing some of the recommendations.  For example, members with experience of integrating delivery of services across service areas may also be recruited, particularly where the development of a guideline requires more flexibility than “conventional organisational boundaries” permit.  If the guideline contains recommendations about services, NICE could call upon individuals with a commissioning or provider background in addition to members from  practitioner networks or local authorities.

II. Evidence    

The NICE approach to evaluating clinical evidence differs from the other approaches. In addition to clinical evidence, the committee is expected to take into account other factors, such as the need to prevent discrimination and to promote equity. NICE also recognizes that not all clinical research can or should result in implementation; it therefore indicates whether a procedure should be used only in further research or put forward for implementation. A factor that might prevent research from being implemented in practice is evidence the committee considers insufficient at the current time. A 'research only' recommendation is made if the evidence shows important uncertainties that may be resolved with additional evidence (for example, from clinical trials or real-world settings). Evidence may also indicate that the intervention is unsafe and/or not efficacious, in which case the committee will recommend that the procedure not be used.
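The decision logic described in this passage can be summarized as a short sketch. The Python function below is illustrative only; the argument names and category labels are assumptions made for this briefing, not NICE terminology or tooling.

```python
def categorize_recommendation(evidence_sufficient: bool,
                              uncertainty_resolvable: bool,
                              safe_and_efficacious: bool) -> str:
    """Illustrative sketch of the decision logic described above.

    Argument names and return labels are assumptions for illustration,
    not terms drawn from the NICE manual.
    """
    if not safe_and_efficacious:
        # Evidence indicates the intervention is unsafe and/or not efficacious.
        return "recommend against use"
    if not evidence_sufficient and uncertainty_resolvable:
        # Important uncertainties that additional evidence may resolve.
        return "use only in research"
    if evidence_sufficient:
        return "put forward for implementation"
    return "no recommendation at this time"
```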

III. Economic Evidence

An important feature of the NICE framework is its use of economic evidence in guidelines development. There are two primary considerations in drawing conclusions from economic studies of a given intervention. The first is that the methodology must be strong enough to avoid double-counting costs or benefits. NICE recommends that the way consequences are implicitly weighted be recorded openly, transparently, and as accurately as possible; cost–consequences analysis then requires the decision-maker to determine which interventions represent the best value through a systematic and transparent process. A related practice is to apply an incremental cost-effectiveness ratio (ICER) threshold whenever possible, and not to recommend interventions with an estimated negative net present value (NPV) unless social values outweigh the costs.
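As a worked illustration of the ICER logic described above, the sketch below computes an incremental cost-effectiveness ratio and compares it to a willingness-to-pay threshold. The cost figures, effect estimates, and threshold are invented for this example and are not drawn from NICE guidance.

```python
def icer(cost_new: float, cost_comparator: float,
         effect_new: float, effect_comparator: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical figures for illustration only.
ratio = icer(cost_new=12_000, cost_comparator=8_000,
             effect_new=1.4, effect_comparator=1.1)   # about 13,333 per extra unit of effect
THRESHOLD = 20_000  # assumed willingness-to-pay threshold, not a NICE value
print("within threshold" if ratio <= THRESHOLD else "above threshold")
```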

The second consideration NICE puts forward on using economic evidence in translating research to clinical practice and policy concerns cost-minimization procedures. To avoid defaulting to the intervention with the lowest cost, the committee specifies that cost minimization can be used only when the difference in benefits between an intervention and its comparator is known to be small and the cost difference is large. Given these criteria, NICE believes that cost-minimization analysis is applicable in only a relatively small number of cases.
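A minimal sketch of the applicability test described above follows, assuming placeholder tolerance values chosen for illustration rather than thresholds defined by NICE.

```python
def cost_minimization_applicable(benefit_difference: float,
                                 cost_difference: float,
                                 benefit_tolerance: float = 0.05,
                                 cost_materiality: float = 1_000.0) -> bool:
    """True only when benefits are effectively equivalent and costs differ materially.

    The tolerance defaults are illustrative assumptions, not NICE criteria.
    """
    return (abs(benefit_difference) <= benefit_tolerance
            and abs(cost_difference) >= cost_materiality)
```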

In sum, economic evidence estimating the value of an intervention should be considered alongside clinical evidence, but judgments about social values (policy) should also be taken into account to avoid choosing an intervention merely because it is offered at the lowest cost.

IV. Producing Guidelines from Evidence

The final step in translating research evidence into practice and policy guidelines is drafting recommendations. Because many people read only the recommendations, the wording must be concise, unambiguous, and easy for the intended audience to translate into practice. As a general rule, the committee recommends that each recommendation, or bullet point within a recommendation, contain only one primary action and be as accessible as possible to a wide audience.

An important guideline explicitly stated by NICE is to indicate levels of uncertainty in the evidence. It is the only institution among those reviewed here to have created a "Research recommendations process and methods guide," which details the approach used to identify key uncertainties and associated research recommendations. In considering which interventions or evidence to put forward for recommendation, the committee established guidelines that include three levels of certainty (a minimal illustrative sketch follows the list):

1. Recommendations for activities or interventions that should (or should not) be used
2. Recommendations for activities or interventions that could be used
3. Recommendations for activities or interventions that must (or must not) be used.
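A minimal sketch of how these three categories might be encoded in a recommendation-drafting tool appears below; the level names and the dictionary structure are assumptions for illustration, not taken from the NICE manual.

```python
# Illustrative encoding of the three wording categories listed above;
# the level names are assumptions for this sketch, not NICE terminology.
WORDING_BY_LEVEL = {
    "level_1": "should (or should not) be used",
    "level_2": "could be used",
    "level_3": "must (or must not) be used",
}

def recommendation_wording(level: str) -> str:
    """Look up the recommendation wording for a hypothetical certainty level."""
    return WORDING_BY_LEVEL[level]
```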

Bibliography

  • CDC (2012). Reducing Scientific Bias from Expert Opinion in Guidelines and Recommendations Development.
  • GRADE Working Group (2015). From Evidence to Recommendations: Transparent and Sensible. Retrieved from http://www.gradeworkinggroup.org/.
  • Guyatt, G. H. (2008). GRADE: An Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations. BMJ, 336.
  • Institute of Education Sciences (2015). What Works Clearinghouse. Retrieved from http://ies.ed.gov/ncee/wwc/Publications_Reviews.aspx?f=All%20Publication%20and%20Product%20Types,3.
  • Ministry of Health of British Columbia (2014). Guidelines and Protocols Advisory Committee Handbook: Developing Clinical Practice Guidelines and Protocols for British Columbia.
  • National Academies of Sciences, Engineering & Medicine (2015). Our Study Process: Ensuring Independent, Objective Advice. Washington, D.C. Retrieved from http://www.nationalacademies.org/studyprocess/.
  • Qaseem, A. Towards International Standards for Clinical Practice Guidelines. Presentation delivered by the Chair of the Guidelines International Network, a Scottish charity.
  • UK National Institute for Health and Care Excellence (2015). The Manual on Developing NICE Guidelines.
