
Chapter 8 - Program Evaluation

Part 4 - Staff Services/Special Programs

Title                                        Section

Introduction                                 4-8-1
    Purpose                                  4-8-1A
    Scope                                    4-8-1B
    Authorities                              4-8-1C
    Policy                                   4-8-1D
    Principles                               4-8-1E
    Definitions                              4-8-1F
    Evaluation Training and Support          4-8-1G
    Evaluation and Regulatory Compliance     4-8-1H
    Meaningful Use of Evaluation Findings    4-8-1I

4-8.1  INTRODUCTION.

  1. Purpose.  This chapter establishes the policy and procedures for planning, funding, and using information from program evaluations to assess the impact of Indian Health Service (IHS) health care services, as well as functions related to the delivery of IHS health care services.  This policy applies to programs operated by IHS and by IHS grantees, if specified in a grant program’s funding announcement. 
  2. Scope.  A number of statutes, regulations, and memoranda direct IHS to use evaluative information (i.e., data, evidence, etc.) in the ongoing management of federal programs.  This chapter clarifies the definition and use of program evaluation to meet these requirements, including but not limited to: evaluation planning; evaluation funding and support; training and support for effective evaluations; and the appropriate use of evaluation findings and evidence in management decision making and in establishing or revising policies and strategic goals and objectives.
  3. Authorities.
    1. The Snyder Act of 1921 (25 United States Code (USC) § 13)
    2. The Transfer Act (42 USC § 2001)
    3. The Indian Health Care Improvement Act (25 USC § 1601, et seq.)
    4. The Government Performance and Results Act of 1993 (Public Law 103-62)
  4. Policy.  The IHS is committed to conducting and using well-designed, rigorous evaluations on a routine basis to enable programs to adhere to performance and accountability mandates, validate outcomes, and improve program effectiveness. It is IHS policy to use program evaluation to determine the accessibility and quality of the health care services it delivers.  The IHS also uses program evaluation to assess the manner and extent to which federal programs achieve intended objectives and use evaluative information to make management decisions.
  5. Principles.
    1. Rigor.  Evaluations should use the most rigorous methods that are appropriate and feasible within statutory, budget, and other constraints. Rigor is required for all types of evaluations, including impact and outcome evaluations, implementation and process evaluations, descriptive studies, and formative evaluations.  Rigor requires ensuring that inferences about cause and effect are well founded (internal validity); requires clarity about the populations, settings, or circumstances to which results can be generalized (external validity); and requires the use of measures that accurately capture the intended information (measurement reliability and validity).
    2. Relevance.  Evaluation priorities should take into account legislative requirements and Congressional interests, and should reflect the interests and needs of leadership; specific agencies and programs; program office staff and leadership; IHS partners, such as states, territories, tribes, and grantees; the populations served; researchers; and other stakeholders.  Evaluations should be designed to address IHS's diverse programs, customers, and stakeholders, and IHS should encourage diversity among those carrying out the evaluations.
    3. Transparency.  Unless otherwise prohibited by law, information about evaluations and findings from evaluations should be broadly available and accessible, typically on the Internet.  This includes identifying the evaluator, releasing study plans, and describing the evaluation methods.  The IHS will release the results of all evaluations that are not specifically focused on internal management, legal, or enforcement procedures and that are not otherwise prohibited from disclosure.  Evaluation reports will present all results, including favorable, unfavorable, and null findings.  The IHS will release evaluation results in a timely manner (usually within two months of a report's completion) and will archive evaluation data for secondary use by interested researchers (e.g., public use files with appropriate data security protections).
    4. Independence and Impartiality.  To ensure its credibility, the evaluation process will be independent from any process involving program policy making, management, or activity implementation.  The evaluation function will be located separately from other management functions so that it is free from undue influence and so that unbiased and transparent reporting is assured.
    5. Ethics.  IHS-sponsored evaluations will be conducted in an ethical manner and will safeguard the dignity, rights, safety, and privacy of participants.  Evaluations will comply with both the spirit and the letter of relevant requirements, such as regulations governing research involving human subjects.
  6. Definitions.  The following definitions are applicable to this Chapter.
    1. Accountability.  The responsibility of program managers and staff to provide evidence to stakeholders, as well as authorizing and funding agencies, that a program is effective and in conformance with its expectations and requirements.
    2. Activities.  The actual events or actions that take place as a part of the program.
    3. Data Collection Method.  The way facts about a program and its outcomes are amassed.  Data collection methods often used in program evaluations include literature search, file review, natural observations, surveys, expert opinion, and case studies.
    4. Evaluation (program evaluation).  The systematic collection of information about the activities, characteristics, and outcomes of programs (which may include interventions, policies, and specific projects) to make judgments about that program, improve program effectiveness, and/or inform decisions about future program development.
    5. Evaluation Design.  The logic model or conceptual framework used to depict a program’s theory of change and how program resources are expected to lead to the program’s intended outcomes.  The evaluation design drives evaluation planning by focusing on evaluation questions.
    6. Evaluation Plan.  A written document describing the overall approach that will be used to guide an evaluation, including why the evaluation is being conducted, how the findings will likely be used, and the design and data collection sources and methods.  The plan specifies what will be done, how it will be done, who will do it, and when it will be done.
    7. Experimental (or randomized) Designs.  Designs that aim to establish causal attribution by ensuring the initial equivalence of a control group and a treatment group through random assignment.  Some examples of experimental or randomized designs are randomized block designs, Latin square designs, fractional designs, and the Solomon four-group design.
    8. Funded Recipient.  Grantees or others receiving IHS program funding to carry out specific prevention or intervention activities.  
    9. Impact.  The effect that interventions or programs have on people, organizations, or systems to influence health.  Impact is often used to refer to effects of a program that occur in the medium or long term with an emphasis on ones that can be directly attributed to program efforts.
    10. Large Program.  Any program with a budget that exceeds $1 million within a given year or cohort. 
    11. Logic Model.  A visual representation showing the sequence of related events connecting the activities of a program with the program’s desired outcomes and results.
    12. Outputs.  The direct products of program activities; immediate measures of what the program did.
    13. Outcomes.  The results of program operations or activities; the effects triggered by the program.  (For example, increased knowledge, changed attitudes or beliefs, reduced tobacco use, reduced morbidity and mortality.)
    14. Program.  Any set of related activities, broadly defined, undertaken to achieve an intended outcome.  It encompasses environmental, system, and media initiatives; preparedness efforts; and research, capacity, and infrastructure efforts.
    15. Stakeholders.  People or organizations that are invested in the program or that are interested in the results of the evaluation or what will be done with results of the evaluation.
  7. Evaluation Training and Support.
    1. This chapter establishes an agency-wide evaluation work group.  The work group will be led by the Office of Public Health Support (OPHS), Division of Planning, Evaluation, and Research (DPER), with meeting support provided by DPER staff.
      1. Membership:
        1. Office of Public Health Support
        2. Office of Management Services
        3. Office of Clinical and Preventive Services
        4. Office of Tribal Self-Governance
        5. Office of Direct Service and Contracting Tribes
        6. Office of Urban Indian Health Programs
        7. Other Offices as determined by the IHS Director
      2. Responsibilities:
        1. Advise, support, and monitor program evaluation efforts at IHS.
        2. Review program evaluation plans to ensure they are appropriate for the program and follow established program evaluation ethics and best practices.
        3. Integrate program evaluation efforts into routine IHS practice:
          1. Develop standard processes to use evaluation results to improve program development, implementation, and monitoring.
          2. Incorporate evaluation criteria into the grant award process.
    2. The DPER will work with the Division of Regulatory Affairs, and others, as appropriate, in the creation of an expedited clearance process for evaluation-related data collections under the Paperwork Reduction Act.
    3. The DPER will create and maintain IHS evaluation internet/intranet sites that provide evaluation resources for IHS personnel, partners, and stakeholders, including but not limited to:
      1. Examples of position descriptions for evaluation positions;
      2. Evaluation training materials and opportunities;
      3. Links to key federal evaluation resources;
      4. Repository of internal evaluation activities and materials including webinars, podcasts, and planning/workgroup meeting notes; and
      5. Previous and existing evaluation projects (from IHS and related/similar public health programs).
  8. Evaluation and Regulatory Compliance.
    1. The IHS Offices and large programs will ensure that sufficient evaluation capacity and resources are made available to:  assess program effectiveness; identify opportunities for program improvement; and inform future management decision-making.  Where appropriate, IHS Offices and large programs will consult with Tribes to identify and prioritize programs for evaluation, consistent with the Tribal Consultation Policy.
    2. Beginning in FY 2019, a program falling within the threshold of this policy, either with or without an active evaluation component, shall initiate evaluation activities in compliance with this policy in preparation for its next funding cycle/cohort.
      1. Programs shall use program resources to cover the costs of evaluation planning, implementation, and analysis.  For planning purposes, the industry standard is, at the low end, 5-10 percent of program funding (an illustrative calculation follows the list below).  The cost, or range of costs, for any individual program evaluation will be determined by a variety of factors, including, but not limited to, the following:
        1. Resource availability:  Staff may be available or may need to be brought in via a contract or some other vehicle.
        2. Data availability and collection:  While program data is generally available, data specific to the program evaluation may or may not be available.  New data (types, sources, collection, etc.) increase time and costs.  Incorporating program evaluation data and collection requirements into the original program plan can minimize these costs.
        3. Nature of outcomes:  Measuring longer-term or more complex program impacts will require longer or more complex evaluation designs.
        4. Evaluation rigor/quality:  As evaluation plans increase rigor or move towards seeking causal relationships (i.e., research), the evaluation can become more complex.
        5. Purpose of evaluation:  Evaluations can be designed for program improvement or more complex internal/external accountability or cost-benefit models.  These differences can increase complexity and costs.
        6. Working with partners:  May involve multiple data systems or different chains of command.  Can also affect the language used for dissemination.
        7. Dissemination plans:  Different dissemination products for different audiences at different or variable frequencies can increase costs.
        8. Evaluator's familiarity with program:  Evaluators new to the program may require time to learn the program function, structure and goals.  This may also initially require more program input into the evaluation design.
        9. Paperwork Reduction Act:  Reviews require time and resources.
        10. Cybersecurity:  Can require dealing with data security, access, or systems-match issues.  All take time and may limit who can do the required work.
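        As an illustration only (the program size and dollar amounts below are hypothetical, not a prescribed budget), the low-end planning range can be applied directly to a program's annual funding level.  For a program funded at $2,000,000 per year:

            $2,000,000 x 0.05 = $100,000  (low end of the planning estimate)
            $2,000,000 x 0.10 = $200,000  (high end of the planning estimate)

        The factors listed above may place an individual evaluation's actual cost above or below this range.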
      2. Evaluation planning activities will be started as soon as reasonable during program planning and development.  Best practice suggests this planning should occur as soon as the program’s mission and goals are established. 
        1. Early planning defines the scope, data requirements, and intended use of the evaluation at the national level.  The program planning office can then identify and assess possible cost factors for the evaluation and identify the resources needed to complete it.
        2. This planning process also should include identifying and clarifying the program outcomes and evaluation expectations to include in the program’s funding announcement. 
          1. Prospective grantees should be aware of and agree to these expectations before applying for and receiving the award. 
          2. Evaluation expectations could include:  evaluation design, program logic model, baseline data needs, data collection plan and instruments, evaluation costs, technical assistance available, identity of the evaluation provider, and data use/dissemination plan.
      3. Programs will work with DPER on evaluation planning and implementation and consult as needed for regulatory and policy compliance, contract oversight, and guidance for grantees pre-/post-award.
      4. Programs may choose any mechanism to complete the evaluation, including:
        1. Partnering with DPER to use the Evaluation Services Indefinite Delivery/Indefinite Quantity contract
        2. An independent/separate contract
        3. Program resources/staff
        4. Other resources/staff
    3. The IHS Offices and large programs will use evaluative information to help inform the annual budget justification process, including addressing the following requirements:
      1. The OMB Circular A-11 budget submission process (i.e., a thorough discussion of the evidence and examples of innovation for a given program).
      2. The OMB Circular A-123 language regarding improvement of the accountability and effectiveness of Federal programs and operations.
      3. Performance measurement requirements:  Government Performance and Results Act, Government Performance and Results Modernization Act.
      4. Planning, Performance and Program Integrity management improvement and similar initiatives, specifically relating to:
        1. Ensuring that evaluation plans reflect the organization's strategic goals and objectives;
        2. Tracking data trends and illustrating evaluation findings and other evidence for use in decision-making and program improvement; and
        3. Identifying and addressing areas of risk that limit the impact of programs.
      5. Examining program efforts to achieve health impact, apart from and in addition to the evaluations of funded recipients’ performance.
      6. Ensuring evaluation findings are timely and relevant, so as to maximize their use in the organization’s planning, performance reporting, budgeting, and priority-setting processes.
      7. Ensuring that evaluation findings are easily accessible to users, major constituencies, and stakeholders.
      8. Ensuring that new public health programs or major health initiatives present an evaluation plan/approach that includes evaluations across the lifecycle of the effort so that findings can be deployed for program improvement even in early stages.
      9. Involving DPER evaluation staff early in the development of new Funding Announcements and large contracts to ensure that evaluation findings inform program improvement and accountability.
      10. Coordinating and communicating evaluation activities across the IHS organizational units with overlapping or complementary missions.
      11. Ensuring a process for tracking how evaluation findings are used to improve program planning, administration, implementation, and oversight, and outlining how evaluation findings will affect program decisions and modifications.
  9. Meaningful Use of Evaluation Findings.
    1. Headquarters Responsibilities:
      1. Identify how the funded effort contributes to agency and Departmental strategic plans and government-wide management priorities.
      2. Indicate which populations are disproportionately affected by the health issue and whether they are being addressed or targeted by the funded program, with special attention to vulnerable populations and people with disabilities.
      3. Match evaluation designs and methods to the size and scope of the funded initiative, purpose of the evaluation, and capability of the funded recipients.
      4. Use a logic model or other method of presentation to provide a uniform set of outputs and short, intermediate, and long-term outcomes within the funding announcement.
      5. Specify outcome and supporting measures within the funding announcement.
      6. Provide standards, definitions, and format(s) for reporting results.
    2. Funded Recipients’ Responsibilities:
      1. Be aware of, and agree to, program outcomes and evaluation expectations as described within a program’s funding announcement (if applicable).
      2. Provide clarity on resources, activities, outputs, and short, intermediate, and long-term outcomes of their health promotion effort (using a logic model or other method of presentation).
      3. Develop plans for dissemination and use of evaluation findings to maximize program improvement for health impact (including how the dissemination effort will be evaluated).
      4. Provide the number and capabilities of staff assigned to evaluation and performance measurement (although the number and type may vary with the level of grantee capacity).
      5. Develop plans for the engagement of stakeholders in helping shape evaluation and measurement design.
      6. Provide clarity on evaluation and measurement design and data collection sources and methods.