HRSA Health Workforce Training Program Evaluation Toolkit
Introduction
The goal of the HRSA Health Workforce Training Programs is to train clinicians to deliver high-quality care. This toolkit suggests ways to track trainee outcomes and your program's ability to meet the Three Part Aim goals of improving patient experience and access, lowering cost, and raising the quality of health care services. We believe evaluation is the key to sustainability. As we build the workforce of the future, it is important that programs construct evaluations that clearly measure long-term outcomes for trainees and patients.
Who should use this resource?
This toolkit should be used by the health workforce
grant evaluation planning and implementation team.
Evaluation is best done as a collaborative effort
among stakeholders, including those involved in data
collection and evaluation decisions.
When should it be used?
This toolkit is designed for grantees in the grant-planning phase and in the evaluation process after a program award. The toolkit can be accessed by:
1) Downloading the entire toolkit as a PDF file.
2) Accessing modules individually to address specific questions, depending on your phase of evaluation.
Addressing the Three Part Aim Plus Provider
Well Being
HRSA's funding announcement for the Primary Care
Training Enhancement program states the goal of
“working to develop primary care providers who are
well prepared to practice in and lead transforming
healthcare systems aimed at improving access, quality
of care and cost effectiveness.”
[Figure: Three Part Aim (Better Health, Better Care, Lower Cost Through Improvement, Reduced Health Disparities) and Three Part Aim Plus Provider Well Being (Population Health, Patient Experience, Reducing Costs, Provider Well Being).]
ADAPTED FROM: U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Office of the Director, Office of Strategy
and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention,
2011. Available at: http://www.cdc.gov/eval/framework/index.htm
The National Quality Strategy promoted by the Department of Health and Human Services is an overarching plan to align efforts to improve quality of care at the national, State, and local levels. Guiding this strategy is the Three Part Aim, which is to provide better care, better health/healthy communities, and more affordable care.[1] Recently, there has been discussion of adding a fourth aim, "provider well being," which adds improving the work life of clinicians and staff to the goals.[2] The 2014 Clinical Prevention and Population Health Curriculum Framework, developed through consensus of educators, created a framework for integration of the Three Part Aim into health professional education.[3] These guidelines acknowledge that going forward more educational content should focus on population health. Elements of population health have been integrated across accrediting bodies such as the American Association of Colleges of Nursing and the Association of American Medical Colleges.
The engagement of the health care workforce is of paramount importance in achieving the primary goal of the
Three Part Aim Plus Provider Well Being—improving population health. Health workforce programs should
assess the ways they are preparing future clinicians to provide services that improve patient experience,
population health, cost effectiveness, and provider well-being. This toolkit provides examples for health
workforce grantees to consider as they evaluate the ability of their programs to achieve the Three Part Aim Plus
Provider Well Being.
A note on language
HRSA health workforce programs support a variety of schools and health professionals. Funded programs serve
a range of health professional students and have a wide variety of designs. For this reason, we strive to use
terminology that applies across programs. Throughout this guide the term trainee will be used to apply to the
student or learner regardless of his/her profession or level of education.
1 https://www.amia.org/sites/amia.org/files/Report-Congress-National-Quality-Strategy.pdf
2 Bodenheimer T, Sinsky C. From Triple to Quadruple Aim: Care of the Patient Requires Care of the Provider. Annals of Family Medicine. 2014;12(6):573-576.
3 Paterson MA, Falir M, Cashman SB, Evans C, Garr D. Achieving the Triple Aim: A Curriculum Framework for Health Professions Education. Am J Prev Med. 2016;49(2):294-296.
MODULE 1
Engaging Stakeholders for your Health Workforce Training Program Evaluation
INTRODUCTION: Why is engaging stakeholders important to your health workforce training evaluation?
Stakeholders can help—or hinder—your health workforce training evaluation before it is conducted, while it is being conducted, and after the results are collected. Stakeholder roles include:
Implementing day-to-day health workforce training program activities.
Advocating for or approving changes to the health workforce training program that the evaluation may recommend.
Continuing, funding, or expanding the health workforce training program.
Generating support for the health workforce training program.
STEP 1: Who are the health workforce training
program evaluation stakeholders and how do
you identify them?
Stakeholders are all of the people who care about
the program and/or have an interest in what happens
with the program. There are three basic categories of stakeholders:
1. Those involved in program operations.
2. Those served or affected by the health workforce training program.
3. Those who will make decisions based on evaluation findings to improve, enhance, or sustain the health workforce training program.
To identify stakeholders, you need to ask:
Who cares about the health workforce training
program and what do they care about?
Which individuals or organizations support the
program?
Which individuals or organizations could be
involved that aren’t aware of the program?
Use the Identifying Key Stakeholders worksheet listed
in the resources section (example below).
Use the following checklist to involve key stakeholders
throughout the health workforce training program
evaluation process.
Identify stakeholders using the three broad categories
(those affected, those involved in operations, and those
who will use the evaluation results).
Identify any other stakeholders who can increase credibility, support implementation, advocate for the program, or make funding decisions.
Engage individual stakeholders and/or representatives of
stakeholder organizations.
Create a plan for stakeholder involvement and identify
areas for stakeholder input.
Target selected stakeholders for regular participation in
key activities, including writing the program description,
suggesting evaluation questions, choosing evaluation
questions, and disseminating evaluation results.
ADAPTED FROM: U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Office of the Director, Office of Strategy
and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention,
2011. Available at: http://www.cdc.gov/eval/framework/index.htm
PCTE Program example
INNOVATION
Team rounding in the nearby hospital and a special weekly clinic session with medical, pharmacy, and social service appointments for the recently discharged. The rounding interdisciplinary team will include trainees (medical students, residents, and social work students) as well as attending physicians/preceptors.
OBJECTIVE
Reduce readmissions for high-risk patients with multiple chronic diseases, thus decreasing Medicaid spending.
Identifying Key Stakeholders example
CATEGORY 1: Who is affected by the program?
Medical students; Residents; Health center administration; Social work students; Clinical preceptors; State Medicaid
CATEGORY 2: Who is involved in program operations?
Faculty directors and teaching staff; Alumni office; Health center administration; Junior faculty/fellows; Senior faculty; Health system leadership
CATEGORY 3: Who will use evaluation results?
Program leadership; Clinical training sites; Grants and development office; HRSA; Program partners (e.g., Schools of Social Work); Peers in the medical education field
Which of these key stakeholders do we need to:
Increase credibility of our evaluation: Alumni offices; Peers in the medical education field
Implement the interventions that are central to this evaluation: Clinical preceptors; Faculty; Medical students; Residents
Advocate for institutionalizing the evaluation findings: Clinical preceptors; State Medicaid office; Program leadership
Fund/authorize the continuation or expansion of the program: State Medicaid office; Health care system/hospital
STEP 2: What to ask stakeholders?
You must understand the perspectives and needs of your stakeholders to help design and implement the health
workforce training evaluation. Ask them the following questions:
Who do you represent and why are you interested in the health workforce training program?
What is important about the health workforce training program?
What would you like the health workforce training program to accomplish?
How much progress would you expect the health workforce training program to have made at this time?
What are critical evaluation questions at this time?
How will you use the results of this evaluation?
What resources (e.g., time, funds, evaluation expertise, access to respondents, and access to policymakers)
could you contribute to this evaluation effort?
The answers to these questions will help you synthesize and understand what program activities are most
important to measure, and which outcomes are of greatest interest. Use the What Matters to Stakeholders
worksheet listed in the resources section to identify activities and outcomes. An example is listed below.
What Matters to Stakeholders example
STAKEHOLDERS: What activities and/or outcomes of this program matter most to them?
Medical students/residents: Being prepared for residency/being prepared for practice
Alumni office: Retention and long-term engagement of medical students
Program leadership: Retention of medical students; Engaging students in selecting primary care; Exposure of all students to working in underserved settings
Health center administration: Reducing unnecessary readmissions
State Medicaid: Reducing spending due to unnecessary readmissions
TOOL 1.1
Identifying Key Stakeholders
CATEGORY STAKEHOLDERS
1 Who is affected by the program?
2 Who is involved in program operations?
3 Who will use evaluation results?
Which of these key stakeholders do we need to:
Increase credibility of our
evaluation
Implement the interventions
that are central to this
evaluation
Advocate for institutionalizing
the evaluation ndings
Fund/authorize the continuation
or expansion of the program
TOOL 1.2
What Matters to Stakeholders?
STAKEHOLDERS What activities and/or outcomes of this program matter most to them?
MODULE 2
Describe the Program
INTRODUCTION: Describe your health workforce training program
The purpose of this module is to fully describe your health workforce training program. You will want to clarify
all the components
and intended outcomes of the health workforce training program to help you focus your
evaluation on the most important questions.
STEP 1: Describe your health workforce training program and develop SMART objectives
Think about the following components of your health
workforce training program:
Need. What problem or issue are you trying to
solve with the health workforce training program?
Targets. Which groups or organizations need to
change or take action?
Outcomes. How and in what way do these targets
need to change? What specic actions do they need
to take?
Activities. What will the health workforce training
program do to move these target groups to
change and take action?
Outputs. What capacities or products will be
produced by your health workforce training
program’s activities?
Resources and inputs. What resources or inputs
are needed for the activities to succeed?
Relationship between activities and outcomes.
Which activities are being implemented to
produce progress on which outcomes?
Stage of development. Is the health workforce
training program just getting started, is it in the
implementation stage, or has it been underway
for a signicant period of time?
Using a logic model can help depict the program
components. Also known as a program model,
theory of change, or theory of action, a logic model
illustrates the relationship between a program’s
activities and its intended outcomes. The logic model
can serve as an “outcomes roadmap” and shows
how activities, if implemented as intended, should
lead to the desired outcomes.
A useful logic model:
Identies the short-, intermediate-, and long-
term outcomes of the program and the pathways
through which the intervention activities produce
those outcomes.
Shows the interrelationships among components
and recognizes the inuence of external
contextual factors on the program’s ability to
produce results.
Helps guide program developers, implementers,
and evaluators.
SMART objectives
As you think about developing objectives within
your logic model, the SMART objectives framework
can help you write objectives that are clear, easily
communicated, and measurable.
The acronym stands for:
S Specic: What exactly are we going to do?
M Measurable: How will we know we have
achieved it?
A Agreed upon: Do we have everyone engaged
to achieve it?
R Realistic: Is our objective reasonable with the
available resources and time?
T Time-bound: What is the time frame for
accomplishment?
ADAPTED FROM: U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Office of the Director, Office of Strategy
and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention,
2011. Available at: http://www.cdc.gov/eval/framework/index.htm
Example SMART objectives for
a health workforce training program:
The program will mentor ve primary care residents’ provision of team-based care over the course of a year. Their team-based
care competency will be measured by a self-assessment tool in months 1 and 12 of the program.
The program will expose all medical trainees to enhanced competency in social determinants of health including screening for
health literacy and barriers to care; participating in collaborative visits with pharmacists and behavioral health care providers;
and referring to social workers for non-medical barriers. Trainees will be exposed to these approaches in a four-week module
and knowledge of these approaches will be measured through participation in a minimum of ve screenings, ve collaborative
visits, and ve referrals.
STEP 2: Develop a logic model
A useful logic model is simple to develop if you have identified the following information for your health
workforce training program.
Inputs: Resources crucial to implementation of the health workforce training program.
Activities: Actual events or actions done by the health workforce training program.
Outputs: Direct products of the health workforce training program activities, often measured in countable
terms. For example, the number of trainees who participate in a complex care management team meeting
or the number of community providers who participate in population health forums.
Outcomes: The changes that result from the health workforce training program’s activities and outputs.
Consider including outcomes that measure your program’s success in stages (e.g., short-term: increased
number of trainees who have knowledge of population health management tools; intermediate-term:
increase in patients at clinical preceptor sites who have proactive patient education visits for chronic
disease management; long-term: number of graduates who opt to work in a primary care setting that uses
population health data for patient outreach and screening).
Stage of development: Programs can be categorized into three stages of development: planning,
implementation, and maintenance/outcomes achievement. The stage of development plays a central role in
setting a realistic evaluation focus in the next step. A program in the planning stage will focus its evaluation
differently than a program that has been in existence for several years.
Basic logic model components
INPUTS ACTIVITIES OUTPUTS
SHORT-TERM
EFFECTS/
OUTCOMES
INTERMEDIATE
EFFECTS/
OUTCOMES
LONG-TERM
EFFECTS/
OUTCOMES
Methodology for logic model development
To stimulate the creation of a comprehensive list of these components, use one of the three following methods.
1. Review any information available on the health workforce training program—whether from mission/vision statements,
strategic plans, or key informants—and extract items that meet the definition of activity (something the program and its staff
does) and of outcome (the change you hope will result from the activities).
2. Work backward from outcomes. This is called “reverse” logic modeling and is usually used when a program is given
responsibility for a new or large problem or is just getting started. There may be clarity about the “big change” (most distal
outcome) the program is to produce, but little else. Working backward from the distal outcome by asking “how to” will help
identify the factors, variables, and actors that will be involved in producing change.
3. Work forward from activities. This is called “forward” logic modeling and is helpful when there is clarity about activities but
not about why they are part of the program. Moving from activities to intended outcomes by asking, “So then what happens?”
helps elucidate downstream outcomes of the activities.
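To make the forward and reverse approaches concrete, the short Python sketch below (not part of the original toolkit) represents a small, hypothetical logic model and walks it in both directions; all component names are invented for illustration.

# Illustrative sketch only: a logic model as a simple chain, walked forward
# ("so then what happens?") and in reverse ("how do we get there?").
# The components below are hypothetical.
logic_model = {
    "interdisciplinary team rounding (activity)": ["trainees trained in team-based care (output)"],
    "trainees trained in team-based care (output)": ["improved care coordination for complex patients (short-term outcome)"],
    "improved care coordination for complex patients (short-term outcome)": ["reduced unnecessary readmissions (long-term outcome)"],
}

def forward(component):
    """Forward logic modeling: follow each component to its expected outcome."""
    chain = [component]
    while component in logic_model:
        component = logic_model[component][0]
        chain.append(component)
    return chain

def reverse(outcome):
    """Reverse logic modeling: work backward from the distal outcome."""
    parents = {child: parent for parent, children in logic_model.items() for child in children}
    chain = [outcome]
    while outcome in parents:
        outcome = parents[outcome]
        chain.append(outcome)
    return chain

print(" -> ".join(forward("interdisciplinary team rounding (activity)")))
print(" <- ".join(reverse("reduced unnecessary readmissions (long-term outcome)")))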
Use the identifying components worksheet listed in the resources section to help you develop a logic model for
your health workforce training program. An example from the University of South Alabama’s health workforce
training program is listed below.
Identifying components example
ACTIVITIES: What will the program and staff do?
OUTCOMES: What are the desired outcomes of the program?
SEQUENCING: When are these outcomes expected (short, intermediate, long term)?

1. ACTIVITY: Improve practice performance in caring for complex patients.
OUTCOMES: Increased number of complex patients under care management. Increased number of patients screened for substance abuse. Increased number of patients seen in a group office setting. Reduction in unnecessary admissions for the health system.
SEQUENCING:
Short-term: Increased number of complex patients under care management. Increased number of patients screened for substance abuse. Increased number of patients seen in a group office setting.
Intermediate-term: Reduced number of unnecessary admissions for the health system.
Long-term: Care delivered by graduates and learners measured by well-being and other markers above the 80th percentile.

2. ACTIVITY: Provide modular education for all learners on population health, care of complex patients, and improved patient engagement.
OUTCOMES: Increased number of residents who have knowledge of team-based care of complex patients. Increased number of physicians who have extensive team-based population health experience. Reduced number of ED visits. Care delivered by graduates and learners measured by well-being and other markers above the 80th percentile.
SEQUENCING:
Short-term: Increased number of residents who have knowledge of team-based care of complex patients. Increased number of physicians who have extensive team-based population health experience.
Intermediate-term: Reduced number of ED visits.
Long-term: Care delivered by graduates and learners measured by well-being and other markers above the 80th percentile.

3. ACTIVITY: Provide an intense educational opportunity for medical students regarding value-based care.
OUTCOMES: Increased number of students in the value-based care track. Increased number of students interested in value-based care. Residency graduates taking leadership positions in primary care.
SEQUENCING:
Short-term: Increased number of students in the value-based care track.
Intermediate-term: Increased number of students interested in value-based care.
Long-term: Increased number of residency graduates taking leadership positions in primary care.
Used with permission from the University of South Alabama
Once you have the information outlined in the table, you can develop a logic model for your program. The University of South Alabama's logic model is shown below as an example.
STEP 3: Using and updating your logic model
A logic model provides a critical framework for evaluators and implementers to monitor a program over time. It is not a static tool. Tracking indicators for each step in the logic model helps determine whether resources are sufficient and whether activities are being implemented according to plan. This process identifies areas for program refinement, mid-course corrections, and/or technical assistance to support ongoing program implementation.
Examples of the types of information that may provide mid-term feedback to change program implementation:
Student focus groups on experience in working with complex patients indicate that students want more experience to feel confident in their skills.
Patient surveys on the care coordination approach identify that patients would like a better introduction to, and understanding of, roles among their care team.
Clinical process tracking data on the number of patients screened for substance use shows improvement at one of the five clinical preceptor sites, and no change at the four remaining clinical sites.
Caring for the Complex Patient in the PCMH — University of South Alabama
SITUATION
Need: To improve poor health of
population through improved care
coordination and engagement
while better training physicians,
mental health providers, and
others to deliver team based care
Desired Result: High performing
care delivery and training
platform, modular educational
program focused on improving
social determinants through
improved patient engagement and
team based care
Enabling “protective” Factors:
Existing population based focus of
residency
Limiting “risk” factors:
Incorporation of medical students
and mental health students
Strategies and best practices:
Use of modular learning activities;
certication approach; pipeline
approach
INPUTS
What we invest (resources)
Clinical practice staff
Family medicine faculty
Mental health faculty
Pharmacy faculty
Patient time
Curriculum time
Medical student LEAP
experience
COM III and IV time
Residency population health
rotation
Mental health time
Post graduate physician and
pharmacy time
OUTPUTS
Activities
What we do
Improved practice performance
regarding complex patients
Modular education for all
learners on population
health, care of the complex
patient, and improved patient
engagement
Simulated team based care
delivery training
Intense educational opportunity
for medical students regarding
value-based care
Faculty development
Service delivery
Evidence of Program Delivery
# of complex patients under
care management
# of patients screened for
substance abuse
# of patients seen in group
ofce setting
# of team home visits made
# of residents and students
with training in population and
care of the complex patient
# of students in value-based
care track
# of students engaged in team
based care of complex patients
OUTCOMES-IMPACT
Short term results
(1-4 years)
Change in # of residents with
knowledge of team based care
of complex patients
Change in # of students with
experience in team based care
of complex patient
Change in # of patients
screened for substance abuse
Change in physicians with
extensive team based
population health experience
Long term results
(5-7 years)
Reduction in unnecessary
admissions for USA Health
System
Reduction in ED visits for USA
Health System
Decrease in admissions within
the last 2 weeks prior to death
in patients cared for by USA
Health System
Increase in non-RVU to family
physicians in lower Alabama
Increased student interest in
value based care
Ultimate impact
(8+ years)
Increased interest among entering
students who are seeking
training in value-based care
USA Residency graduates
successfully seeking
leadership positions in primary
care
Care delivered by graduates
and learners as measured by
wellbeing and other markers
above 80th percentile.
ASSUMPTIONS
Mental health care delivery in a primary care setting
will be accepted by patients and reimbursed by payers
Learners will nd simulations engaging and will
value improving resource utilization as an equivalent
clinical skill
Regional care organization will value improved clinical
outcomes over volume based metrics in local market
EXTERNAL FACTORS
Payment migrating to value on national level will
continue, sparking student interest
Need for enhanced primary care workforce, mental
health workforce, and team-based focus will be seen
by learners
EVALUATION
1. Learner satisfaction with the educational offerings
2. Learner acquisition of skills necessary to manage complex patients
3. Learner participation in team based activities
4. Graduates undertaking team based care in underserved environment
upon graduation
5. Mental health graduates seek opportunities in primary care setting
upon graduation
6. Reduction in hospitalizations for patients under the care of USAFM and
subsequently USA Health
7. Improvements in patient health attributable to improved primary care
Used with permission from the University of South Alabama
TOOL 2.1
Components of your logic model
ACTIVITIES
What will the program and staff do?
OUTCOMES
What are the desired outcomes of the
program?
SEQUENCING
When are these outcomes expected
(short, intermediate, long term)?
1
2
3
MODULE 3
Focus Evaluation Design
INTRODUCTION
The purpose of this module is to guide development of the evaluation purpose, questions, and findings. There may be evaluation questions that you will not have time or resources to answer in a single grant cycle. How do you prioritize? Now that you have developed your logic model and clearly defined your program, the next step is to focus the scope of your evaluation design.
STEP 1: Determine your health workforce training program stage of development
Identifying the stage of development of the program and/or its components will help you prioritize evaluation questions and approach. Health workforce training programs vary significantly in their stage of development and longevity. If your program is established, the emphasis of the evaluation might be to provide evidence of the program's contributions to its long-term goals. If you have a new program, you might prioritize improving or fine-tuning operations.
Program Development Stage Overview

PLANNING STAGE (first year of program)
Evaluation purpose: Determine best structure and design.
What to measure: Process questions on how consistently program components were implemented, and which practices facilitated implementation.

IMPLEMENTATION STAGE (approximately 2–5 years into program; some programs may be ready to assess maintenance in year 3, others later)
Evaluation purpose: Program is fully operational (i.e., no longer a pilot) and available to all intended trainees.
What to measure: Implementation process and outcomes.

MAINTENANCE STAGE (3 or more years into program)
Evaluation purpose: Measuring program results.
What to measure: Short- and long-term outcomes.
Depending on your program's development stage, you may want to include formative evaluation questions as part of your evaluation plan. For all Primary Care Training Enhancement (PCTE) evaluation plans, HRSA has asked grantees to measure long-term effects of the program, in particular on graduates' ability to support a transformed health care delivery system and the Three Part Aim plus provider well being (more information on using the Three Part Aim plus provider well being to frame your evaluation is in the Resources section of this module).
ADAPTED FROM: U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Office of the Director, Office of Strategy
and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention,
2011. Available at: http://www.cdc.gov/eval/framework/index.htm
Prioritizing evaluation questions by stage of program development
For example, let’s say as part of your health workforce training
you are building a mentorship program and quality improvement
project between community preceptors and trainees. Thinking
through three stages of program development—planning,
implementation, and maintenance—will help you prioritize your
evaluation questions.
In a new program planning stage, formative evaluation questions
may be process-oriented, e.g., “Was the preceptor orientation
sufcient? Is there a better way to structure collaboration
with and support of the preceptors? Should we require three
structured meetings between preceptor mentors and trainees, or
should they be allowed to create custom schedules?”
In the implementation stage, the key questions might be,
“How many quality improvement projects were completed?
How did trainees and preceptors rate the program? What
effects did the quality improvement projects have on clinical
performance in the preceptor sites?”
In the maintenance stage, the program can begin to look at
long-term outcomes of the projects. Include questions such as,
“Did trainees apply what they learned to their clinic work? Did
they take a leadership role in quality improvement in a primary
care setting?”
Approaches to measurement of long-term outcomes
Measuring the long-term effects of your program on graduates can be done with some creativity and
persistence. The graduate outcomes HRSA would like to see for the health workforce training program include
placement in underserved areas, working with vulnerable and underserved populations, and leadership of
graduates in supporting the transformation of the health care delivery system and achievement of the Three
Part Aim. Tools for measurement include surveys of graduates and use of publicly available datasets, and
for graduates who remain within your regional health system, locally available data. The following are some
approaches you can consider for measuring long-term outcomes.
1. Revising your post-graduate survey to include questions on primary care leadership and practicing in
reformed health care settings.
Sample questions:
Do you lead quality improvement efforts at your organization?
Is the practice you work in PCMH-certied?
Do you use a population health management or panel management tool to risk-stratify your patients?
Do you receive information on cost of care as a participant in an accountable care organization or
managed care plan?
2. Using publicly available data as a proxy for graduate outcomes. Public datasets can provide information
on whether graduates are working in a setting that has embraced elements of a reformed health care
system, and provide information on clinical quality and patient experience at that setting. Some of this
information may be provided at the practice level, and some at the provider level.
If the practice site of your graduate is known, you can find out if the practice is PCMH-certified through the NCQA site: http://reportcards.ncqa.org/#/practices/list.
In some states and regions, primary care practice quality information is publicly available. Examples
include the state of Massachusetts Health Compass (HealthCompassma.org) which publishes both
patient experience and clinical quality data at the practice level. GetBetterMaine.org publishes provider-
level data on clinical quality and patient experience. Because these data sources are not uniformly
available across states or providers, ease of use will depend on the geographic dispersion of your
graduates. Other public information may be available in your region based on state or regional health
reform efforts.
A sample tracking sheet for long-term outcomes is provided in Module 4: Gather Credible Evidence. For more guidance on long-term trainee tracking, see:
Morgan, P., Humeniuk, K. M., & Everett, C. M. (2015). Facilitating Research in Physician Assistant Programs:
Creating a Student-Level Longitudinal Database. The Journal of Physician Assistant Education: The Official
Journal Of The Physician Assistant Education Association, 26(3), 130–135.
STEP 2: Assess program intensity
Consider the depth of the program intervention and its potential effect on trainee or patient clinical outcomes.
A short-term shallow intervention is unlikely to affect results, trainee learning, or patient clinical outcomes,
regardless of stage and maturity of implementation. Questions to think about include: How many trainees will it
affect? Over what period of time? What is the level of exposure and intensity?
Consider the previous example of a preceptor program including a mentor and quality improvement project. The
health workforce training program has given trainees the option to choose a quality improvement project with a
four-month timeline. One trainee chooses adult diabetes management, one focuses on adolescent substance
use screening, one on healthy eating counseling for children, another on eating counseling for adults, and the
remaining two on child immunization rates. In this situation there is not a single clinical outcome that can assess
impact across all trainees, nor is four months likely an adequate time to see a clinical impact. However, the projects that are focused on counseling or screening could assess process-measure improvements in those areas.
STEP 3: Write priority evaluation questions
Consider the stage of development and intensity of the program. What outcomes are reasonable to expect and
measure? Write the three most important evaluation questions.
STEP 4: Assess constraints
The following questions will help you determine if the priority evaluation questions can be answered during your
grant period.
1. How long do we have to conduct the evaluation?
2. What data sources do we have access to already?
3. Will new data collection be required?
a. If yes, do we have people with skills and time to collect data?
b. Are there any technical, security, privacy, or logistical constraints to the data?
STEP 5: Finalize evaluation questions
Return to your logic model and nalize the evaluation questions for this grant cycle. You may have identied
questions that can be put aside for future evaluation cycles or grant opportunities.
RESOURCES
Evaluation frameworks
Evaluation frameworks can provide an overall structure and vision for your evaluation. Two frameworks to consider
in developing your evaluation are how to use the Three Part Aim to assess program elements in preparing trainees
for health system transformation, and the RE-AIM framework to understand the program implementation process
and context for replication and sustainability. More detail on these two frameworks is below.
Addressing the Three Part Aim plus Provider Well Being through evaluation
HRSA's funding announcement for the health workforce training program states the goal of "working to develop primary care providers who are well prepared to practice in and lead transforming healthcare systems aimed at improving access, quality of care and cost effectiveness."[1]
[Figure: Three Part Aim (Better Health, Better Care, Lower Cost Through Improvement, Reduced Health Disparities) and Three Part Aim Plus Provider Well Being (Population Health, Patient Experience, Reducing Costs, Provider Well Being).]
The National Quality Strategy promoted by the Department of Health and Human Services is an overarching plan to align efforts to improve quality of care at the national, State, and local levels. Guiding this strategy is the Three Part Aim, which is to provide better care, better health/healthy communities, and more affordable care.[1] Recently, there has been discussion of expanding to add provider well being, which incorporates improving the work life of clinicians and staff into the goals. PCTE programs should assess the ways that they are preparing future clinicians to provide services that improve patient experience, population health, cost effectiveness, and provider well-being.
The table at the end of this module includes examples of evaluation approaches. A focus on provider experience and provider resiliency has been added to these resources, based on health workforce training programs' feedback and interest. The next module (Module 4: Gather Credible Evidence) will provide examples of related measures and indicators to consider within your evaluation.
RE-AIM Framework
The RE-AIM framework is a structured approach to identify critical and contextual elements related to translating
evidence-based practices into real-world settings. It can provide a systematic approach for understanding how a
program is “translated” to the health workforce training program, to what extent the experience of your program
could be generalized to other primary care training programs, and how successes and challenges can inform
future projects and initiatives.
More information on RE-AIM can be found at www.re-aim.org.
1 Paterson MA, Falir M, Cashman SB, Evans C, Garr D. Achieving the Triple Aim: A Curriculum Framework for Health Professions Education. Am J Prev Med. 2016;49(2):294-296.
Health workforce training RE-AIM Example
The multi-disciplinary program includes primary care residents from pediatrics, internal medicine, and family
medicine. The program includes symposiums inviting community providers and is open to medical students
and other trainees to encourage networking across disciplines and cross-learning. Trainees participate in six-month quality improvement projects at a clinical site to enhance skills and apply knowledge on population health management and quality improvement.
In this example there are two separate activities within the grant period that could be looked at through the RE-AIM
Framework. Below are example questions that may be used to frame the evaluation.
Example: Health workforce training RE-AIM

R: Reach
SYMPOSIUM: Who participates in the primary care symposium? What types of interactions between trainees occur?
QUALITY IMPROVEMENT PROJECTS: Which patients are included in trainee quality improvement projects?

E: Efficacy/Effectiveness
SYMPOSIUM: Were the learning objectives for the primary care symposium met?
QUALITY IMPROVEMENT PROJECTS: What were the operational and/or clinical results of the trainee quality improvement projects? Were trainee skills to lead quality improvement projects enhanced?

A: Adoption
SYMPOSIUM AND QUALITY IMPROVEMENT PROJECTS: How representative were the trainee participants of all trainees in primary care?

I: Implementation
SYMPOSIUM: If the symposium model is used again, are there any changes to format or curriculum that should be considered?
QUALITY IMPROVEMENT PROJECTS: Were there differences in how trainees were supported on their quality improvement projects? Were there any adaptations to the trainee quality improvement program during the grant period? If yes, why? What was learned?

M: Maintenance
SYMPOSIUM: What resources or collaboration will be needed to sustain the symposium model in future years?
QUALITY IMPROVEMENT PROJECTS: What was the reception of the clinical preceptor sites to including trainees as quality improvement leaders? Is there clinical practice support to continue the program?
Summary of RE-AIM Framework Components
R: Reach. Characteristics of those reached by the program intervention and those who are not reached; how representative of the general population are they?
E: Efficacy/Effectiveness. Extent to which an intervention resulted in desirable outcomes (e.g., improved learning of key concepts, mastering of skills, patient improvement).
A: Adoption. Who is/is not participating in the intervention (trainees, faculty, etc.), and how representative of the program are they?
I: Implementation. How was it done? Fidelity to the model, changes, and why. Consistency and costs of implementation.
M: Maintenance. Sustainability and institutionalization of the model.
Addressing the Three Part Aim plus provider well being through evaluation

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Population health/reduced cost
APPROACH: Capitalize on health care enhancement initiatives in your state and region.
DESCRIPTION: Many states and regions are collecting data from practices as part of their health care enhancement initiatives. Consider how these efforts might provide data for your evaluation efforts.
EXAMPLES: State Innovation Model Grants (SIM); Delivery System Reform Incentive Payment Program (DSRIP); Transforming Clinical Practice Initiative (TCPI), also known as Practice Transformation Networks (PTN).
SAMPLE MEASURES: Data on clinical quality and cost of care (e.g., total cost of care for Medicaid enrollees by claims).

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Population health/reduced cost
APPROACH: Use clinical measures reported by precepting sites to funders.
DESCRIPTION: Are you working with clinics that are part of an ACO or FQHC? You might use their quality metrics to assess the clinical quality of your health workforce training program participants.
EXAMPLES: All FQHCs must report the UDS clinical quality measures. These measures are reported at the clinic level, but your health center partner may be able to share provider-level data. ACO participation may provide clinics with monthly data, including utilization from claims and clinical quality.
SAMPLE MEASURES: Clinical quality measures of immunizations, cancer screenings, and chronic disease care.

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Population health
APPROACH: Patient-centered medical home (PCMH) transformation efforts provide specific information on practice-level quality of care and an organizational assessment of the training environment.
DESCRIPTION: Programs might assess the number of clinical training sites that have achieved recognition status, or assess progress in attainment of specific core elements of PCMH recognition.
EXAMPLES: The NCQA PCMH recognition standards or, alternatively, the Safety Net Medical Home PCMH assessment. Note: NCQA PCMH standards are updated regularly; consider which will be used by your practice and evaluation process.
SAMPLE MEASURES: The NCQA PCMH program is divided into 6 standards that align with core components of primary care: PCMH 1: Enhance access and continuity; PCMH 2: Identify and manage patient populations; PCMH 3: Plan and manage care; PCMH 4: Provide self-care support and community resources; PCMH 5: Track and coordinate care; PCMH 6: Measure and improve performance.

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Patient experience
APPROACH: Use existing patient experience surveys whenever possible.
DESCRIPTION: Many practices use patient experience surveys; some can separate results by provider. This allows provider-specific results to compare trainee patient experience ratings to clinic averages and other benchmarks.
EXAMPLES: CAHPS (Consumer Assessment of Healthcare Providers and Systems); PAM (Patient Activation Measure).
SAMPLE MEASURES: Communication between provider and patient.
Addressing the Three Part Aim plus provider well being through evaluation, continued

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Patient experience/access
APPROACH: Clinic operational data can be abstracted from standard reports or designed for evaluation purposes.
DESCRIPTION: Improving patient access to acute care appointments. Use training logs to assess continuity of care with a single provider or team.
EXAMPLES: N/A
SAMPLE MEASURES: Wait time for third next available appointment; % of patient appointments with the assigned care team.

THREE PART AIM PLUS PROVIDER WELL BEING COMPONENT: Provider resiliency
APPROACH: Assessing student resiliency during the program can mark their preparedness for primary care and heighten awareness of resiliency for trainees and the program.
DESCRIPTION: Are you providing specific resiliency training, or are you interested in understanding trainee capacity for resiliency?
EXAMPLES: There is interest in measuring provider resilience in primary care, but there are no standard validated tools.[2] The Professional Quality of Life Scale (ProQOL) is the most commonly used measure of the negative and positive effects of helping those who experience suffering and trauma.
SAMPLE MEASURES: Job satisfaction, self-fulfillment, anxiety, stress, and compassion. As a 1-page assessment tool there is low burden in use and distribution. The sensitivity of such questions requires careful administrative structuring to protect respondent privacy.
2 Robertson HD, Elliott AM, Burton C, Iversen L, Murchie P, Porteous T, and Matheson C. Resilience of primary healthcare professionals: a systematic review. British Journal of General Practice. June 2016;66(647).
MODULE 4
Gather Credible Evidence
Now that you have developed a logic model for your health workforce training program, chosen an evaluation
focus, and selected your evaluation questions, your next task is to gather the evidence. You want credible
data to strengthen the evaluation judgments and the recommendations that follow. You should consider the
following questions:
What data will be collected? What are the data indicators that you will use for your evaluation?
Who will collect the data, or are there existing sources you can use? How will you collect and access
the data? What are the data collection methods and sources?
What are the logistics for your evaluation? When will you collect the data (i.e., what is the timeframe)?
How will the data be entered and stored? How will the security and confidentiality of the information be maintained? Will you collect data on all trainees, or only a sample?
How much data (quantity) do you need to collect to answer your evaluation questions?
What is the quality of your data? Are your data reliable, valid, and informative?
How often will data be analyzed? What is the data analysis plan?
STEP 1: Select your health workforce training
data indicators
Process indicators focus on the activities to be
completed in a specific time period. They enable accountability by setting specific activities to be completed by specific dates. They say what you
are doing and how you will do it. They describe
participants, interactions, and activities.
Outcome indicators express the intended results or
accomplishments of program or intervention activities
within a given time frame. They most often focus
on changes in policy, a system, the environment,
knowledge, attitudes, or behavior. Outcomes can be
short-, intermediate-, or long-term.
Consider the following when selecting indicators for
your health workforce training evaluation.
There can be more than one indicator for each
activity or outcome.
The indicator must be focused and measure an important dimension of the activity or outcome.
The indicator must be clear and specific about
what it will measure.
The change measured by the indicator should
represent progress toward implementing the
activity or achieving the outcome.
Example health workforce training program indicators

PROGRAM COMPONENT: Simulated team-based care delivery training
PROCESS indicator: Number of trainings
OUTCOME indicator: Increased trainee knowledge based on semi-annual survey assessment

PROGRAM COMPONENT: Faculty development on interdisciplinary learning
PROCESS indicator: Number of staff trained
OUTCOME indicator: Number of presentations and publications by faculty with research including interdisciplinary teams
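As a rough illustration of how the indicators above might be tallied, the Python sketch below (not from the toolkit; all records, scores, and field names are hypothetical) computes one process indicator and one outcome indicator for the simulated team-based care training component.

# Illustrative sketch only: tallying a process indicator and an outcome indicator.
# Hypothetical training log for the reporting period.
training_sessions = [
    {"date": "2023-01-15", "topic": "team-based care simulation", "attendees": 12},
    {"date": "2023-03-02", "topic": "team-based care simulation", "attendees": 9},
]

# Hypothetical semi-annual knowledge self-assessments (score 1-5), keyed by trainee ID.
survey_month_1 = {"T01": 2.4, "T02": 3.1, "T03": 2.8}
survey_month_12 = {"T01": 3.9, "T02": 4.2, "T03": 3.5}

# PROCESS indicator: number of trainings delivered in the reporting period.
process_indicator = len(training_sessions)

# OUTCOME indicator: mean change in self-assessed knowledge between the two surveys,
# computed only for trainees who completed both assessments.
paired_changes = [survey_month_12[t] - survey_month_1[t] for t in survey_month_1 if t in survey_month_12]
outcome_indicator = sum(paired_changes) / len(paired_changes)

print(f"Trainings delivered: {process_indicator}")
print(f"Mean knowledge change (month 1 to month 12): {outcome_indicator:.2f}")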
ADAPTED FROM: U.S. Department of Health and Human Services Centers for Disease Control and Prevention. Office of the Director, Office of Strategy
and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention,
2011. Available at: http://www.cdc.gov/eval/framework/index.htm
STEP 2: Select your data collection methods and sources
Now that you have determined the activities and
outcomes you want to measure and the indicators you
will use to measure progress on them, you need to
select data collection methods and sources.
Consider whether you can use existing data
sources (secondary data collection) to measure your
indicators, or if you will need to collect new data
(primary data collection).
Secondary Data Collection
Existing data collection is less time consuming and
human resource intensive than primary data collection.
Using data from existing systems has the advantages
of availability of routinely collected data that has
been vetted and checked for accuracy. However, you
will have less exibility in the type of data collected,
and accessing data from existing systems may be
costly. Examples of existing data sources that may be
relevant for health workforce training evaluation:
1. Student tracking systems such as eValue that
show demographics of patients that trainees have
seen, and the health conditions of those patients.
2. Traditional and non-traditional sources for
surveying graduates. Traditional surveys
distributed through the alumni ofce, or (non-
traditional) LinkedIn or Facebook groups.
3. Existing clinical data sources reported by
organization. For safety-net clinics, this could
be clinical performance measures reported
through the Uniform Data System (UDS) to
HRSA. These measures include chronic disease
management and preventive health indicators
for cancer screening, immunizations, behavioral,
and oral health, and are reported on an annual
basis for all patients within the health center
organization. Consider other secondary sources
available based on health care enhancement
and payment based on value. Examples include
measures being reported as part of participation
in an accountable care organization, or for
some organizations participating in CMS-funded practice transformation efforts, such as the Comprehensive Primary Care Initiative (CPCi).
Clinics that are part of a Medicaid Managed
Care organization may receive summary claims
data or clinical feedback on patient use of the
hospital and emergency room.
4. Patient satisfaction surveys from the Consumer
Assessment of Healthcare Providers and
Systems (CAHPS), or other sources such as the
Midwest Clinicians' Network's surveys specific
to behavioral health and employee satisfaction.
Primary Data Collection
The benet of primary data collection is that you can
tailor it to your health workforce training evaluation
questions. However, it is generally more time
consuming to collect primary data. Primary data
collection methods include:
Surveys: personal interviews, telephone
interviews, and instruments completed by the respondent and returned through regular mail or e-mail.
Group discussions/focus groups.
Observation.
Document review, such as medical records,
patient diaries, logs, minutes of meetings, etc.
Quantitative versus Qualitative Data
You will also want to consider whether you will collect
quantitative or qualitative data or a mix of both.
Quantitative data are numerical data or information
that can be converted into numbers. You can use
quantitative data to measure your SMART objectives
(for more on developing SMART objectives, see
Module 2). Examples:
Number of trainees.
Percent of trainees who have graduated.
Average number of trainees who pass boards on
first attempt.
Ratio of trainees to faculty.
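For illustration only, the short Python sketch below computes the quantitative measures listed above from a small, made-up trainee roster; the records, counts, and field names are hypothetical.

# Illustrative sketch only: quantitative measures from a hypothetical roster.
trainees = [
    {"id": "T01", "graduated": True,  "passed_boards_first_attempt": True},
    {"id": "T02", "graduated": True,  "passed_boards_first_attempt": False},
    {"id": "T03", "graduated": False, "passed_boards_first_attempt": None},  # still enrolled
]
faculty_count = 2

number_of_trainees = len(trainees)
percent_graduated = 100 * sum(t["graduated"] for t in trainees) / number_of_trainees

graduates = [t for t in trainees if t["graduated"]]
percent_first_attempt_pass = 100 * sum(t["passed_boards_first_attempt"] for t in graduates) / len(graduates)

trainee_to_faculty_ratio = number_of_trainees / faculty_count

print(f"Trainees: {number_of_trainees}")
print(f"Graduated: {percent_graduated:.0f}%")
print(f"Passed boards on first attempt (of graduates): {percent_first_attempt_pass:.0f}%")
print(f"Trainee-to-faculty ratio: {trainee_to_faculty_ratio:.1f}")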
Qualitative data are non-numerical data that can help
contextualize your quantitative data by giving you
information to help you understand why, how, and
what is happening with your health workforce training
program. For example, you may want to get the opinions
of faculty, trainees, and clinic staff on why something is
working well or not well. Examples include:
Meeting minutes to document program
implementation.
Interviews with trainees, providers, faculty,
or patients.
Open-ended questions on surveys.
Trainee writing, essays, or journal entries.
Focus groups with former or current students.
Mixed Methods
Sometimes a single method is not sufficient to measure an activity or outcome because what is being measured
is complex and/or the data method/source does not yield reliable or accurate data. A mixed-methods approach
will increase the accuracy of your measurement and the certainty of your health workforce training evaluation
conclusions when the various methods yield similar results. Mixed-methods data collection refers to gathering
both quantitative and qualitative data. Mixed methods can be used sequentially or concurrently. An example
of sequential use would be conducting focus groups (qualitative) to inform development of a survey instrument
(quantitative), and conducting personal interviews (qualitative) to investigate issues that arose during coding
or interpretation of survey data. An example of concurrent use of mixed methods would be conducting focus
groups or open-ended personal interviews to help affirm the response validity of a quantitative survey. For more
information on using mixed-methods approaches to evaluation, see “Recommendations for a Mixed-Methods
Approach to Evaluating the Patient-Centered Medical Home."[1]
Matrix of potential evaluation areas and publicly available tools/measures
TOPIC: Team-based care
TOOL NAME: Team Development Measure. Developed and distributed by PeaceHealth, a nonprofit health care system with medical centers, critical access hospitals, clinics, and laboratories in Alaska, Oregon, and Washington.
BRIEF DESCRIPTION: Measures a clinical team's development level. Can be used as a performance measure to promote quality improvement in team-based health care. Levels determined by measuring firmness of components on a team.
CONSIDERATIONS FOR USE: Appropriate for a variety of student types. Publicly available; authors request permission for use.

TOPIC: Population health
TOOL NAME: Patient Centered Medical Home Assessment (PCMH-A). Developed by the MacColl Center for Health Care Innovation at the Group Health Research Institute and Qualis Health for the Safety Net Medical Home Initiative.
BRIEF DESCRIPTION: Helps sites understand their current level of "medical homeness" and identifies opportunities for improvement. Helps sites track progress in practice transformation if completed at regular intervals.
CONSIDERATIONS FOR USE: Assess practice-level progress on providing a population health approach to primary care delivery.

TOPIC: Integration of primary care and behavioral health
TOOL NAME: Site Self-Assessment. Developed by the Maine Health Access Foundation.
BRIEF DESCRIPTION: Measures integration of behavioral health and primary care at the site level.
CONSIDERATIONS FOR USE: Could be used at the practice-site level.

TOPIC: Community health
TOOL NAME: Methods and Strategies for Community Partner Assessment. Developed for the Health Professions Schools in Service to the Nation program.
BRIEF DESCRIPTION: Assesses program engagement with community partners who provide service learning opportunities for trainees.
CONSIDERATIONS FOR USE: May be useful for PCTE programs that engage community health partners for student learning in community health programs (e.g., housing, food security, and legal advocacy).
1 Goldman R.E., Parker D., Brown J., Eaton C., Walker J., & Borkan J. Recommendations for a Mixed-Methods Approach to Evaluating the Patient-Centered Medical Home. Annals of Family Medicine, 2015;13(2):168-75.
Example Data Indicators and Data Sources Worksheet
Use the following worksheet to identify the indicators and the data methods/sources for each component of your evaluation.

1. LOGIC MODEL COMPONENT IN EVALUATION FOCUS: Enhanced trainee knowledge and confidence in addressing social determinants of health.
INDICATOR(S) OR EVALUATION QUESTIONS: Are trainees able to address social determinants of health? Is the patient experience improved as a result of provider training?
DATA METHOD(S)/SOURCE(S): Trainee journal reflections on ability to meet patient needs, before and after program implementation. Patient satisfaction surveys with questions on the ability of the care team to help them overcome housing/food/other barriers.

2. LOGIC MODEL COMPONENT IN EVALUATION FOCUS: Interdisciplinary training enhances communication between trainees and learning to work as a team caring for patients with chronic conditions.
INDICATOR(S) OR EVALUATION QUESTIONS: Do trainees opt to work in settings with interdisciplinary teams? Are the clinical outcomes improved for patients with chronic disease?
DATA METHOD(S)/SOURCE(S): Graduate survey incorporates questions on team-based care. Comparison of chronic disease indicators in interdisciplinary team patient panels with those at clinical sites without interdisciplinary teams.
RESOURCES
Patient Experience Surveys
CAHPS: Consumer Assessment of Healthcare Providers and Systems. Available through the Agency for
Healthcare Research and Quality
Midwest Clinicians Network: Surveys of patient experience in medical, behavioral, and oral health and staff
satisfaction.
Secondary Clinical Data Sources
Uniform Data System (UDS): Clinical quality measures collected and reported by health centers.
CMS Primary Care Transformation initiatives may be a source of data if your clinical sites are participating.
Consider information from the Primary Care Transformation Initiative and Multi-payer Advanced Primary
Care Practice Demonstration and the Transforming Clinical Practice Initiative, which is supporting more than
14,000 clinical practices through September 2019.
TOOL 4.1
Data Indicators and Data Sources Worksheet
Use the following worksheet to identify the indicators and the data methods/sources for each component of your
evaluation.
LOGIC MODEL COMPONENTS IN EVALUATION FOCUS | INDICATOR(S) OR EVALUATION QUESTIONS | DATA METHOD(S)/SOURCE(S)
1
2
3
TOOL 4.2
Data Collection Worksheet
Use the following worksheet to identify the data collection methods and sources, how data will be collected,
and by whom.
DATA COLLECTION METHOD/SOURCE | FROM WHOM WILL THESE DATA BE COLLECTED | BY WHOM WILL THESE DATA BE COLLECTED AND WHEN | SECURITY OR CONFIDENTIALITY STEPS
1
2
3
TOOL 4.3
Long-Term Trainee Tracking Worksheet
Sample trainee tracking template
Graduate name
Year
Program
Location
of practice
Gender
Practice Type (hospital
affiliate, FQHC, free
clinic, private practice)
Percent of patients
who are on Medicaid/
uninsured
Leadership roles
Clinical quality
information
Years of practice at
current site
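If your team keeps this tracking template electronically, a short script can turn it into summary measures for reporting. The sketch below assumes the template is saved as a CSV file; every file and column name is illustrative, not a required format.

```python
# Minimal sketch, assuming graduate records are kept in "trainee_tracking.csv"
# with columns mirroring the sample template above; all names are illustrative.
import pandas as pd

columns = [
    "graduate_name", "year", "program", "location_of_practice", "gender",
    "practice_type",           # hospital affiliate, FQHC, free clinic, private practice
    "pct_medicaid_uninsured",  # percent of patients on Medicaid or uninsured
    "leadership_roles", "clinical_quality_information", "years_at_current_site",
]

tracking = pd.read_csv("trainee_tracking.csv", usecols=columns)

# Example long-term outcome summary: share of graduates in each practice type,
# by graduation year.
by_type = tracking.groupby(["year", "practice_type"]).size()
share = (100 * by_type / by_type.groupby(level=0).transform("sum")).round(1)
print(share)
```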
MODULE 5
Justify Conclusions/Data Interpretation and Use
Why is this important?
Data interpretation is typically the role of the
“researcher/evaluator” but involving stakeholders can
lead to a deeper understanding of the findings, and
more effective use of the data. If stakeholders agree
that the conclusions are justified, they will be more
inclined to use the evaluation results for program
improvement. This module considers a process to
interpret health workforce training data in collaboration
with stakeholders.
STEP 1: Analyze and synthesize findings
Data analysis will be guided by the evaluation plan
developed from your logic model and evaluation
framework (detailed in Modules 2 and 3).
The analysis phase includes the following tasks:
Organize and classify the quantitative and
qualitative data collected. This includes the steps
of cleaning data and checking for errors.
Tabulate the data into counts and percentages for
each indicator.
Summarize data and include stratification if
appropriate (a brief analysis sketch follows this
list). At the trainee level you may stratify
by trainee type, cohort, or practice site. For
clinical data you may stratify by provider team,
practice site, or patient demographics.
Compare results with appropriate information.
Depending on your evaluation design you may
make comparisons over time using the same
indicator, or may compare locations, practices, or
cohorts of trainees. You may also compare results
to established targets or benchmarks.
If using mixed-methods analysis, take important
findings from one source and compare them to other
sources.
Present the results in an easily understandable
manner, and tailor them to your audience.
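If your team analyzes trainee-level data with a scripting language such as Python, the tabulation and stratification tasks above can be sketched in a few lines. The example below uses an illustrative file name and indicator; it is a starting point, not a prescribed method.

```python
# Minimal sketch of the tabulation and stratification tasks above, assuming
# trainee-level records in "trainee_outcomes.csv"; the columns "cohort" and
# "completed_qi_project" are illustrative.
import pandas as pd

df = pd.read_csv("trainee_outcomes.csv")

# Cleaning and error checks: drop exact duplicates, count missing indicator values.
df = df.drop_duplicates()
print("Missing indicator values:", df["completed_qi_project"].isna().sum())

# Counts and percentages for one indicator, stratified by cohort.
counts = df.groupby(["cohort", "completed_qi_project"]).size().unstack(fill_value=0)
percentages = counts.div(counts.sum(axis=1), axis=0).mul(100).round(1)
print(counts)
print(percentages)
```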
Mixed-Methods Example
If you were asking these questions: Do trainees feel prepared
to provide care to complex patients in a team-based
environment? Do patients feel care is coordinated across team
members?
A mixed-methods approach could pair the results from patient
focus groups with results from trainee surveys on providing
care in an interdisciplinary team-based environment. These
results might also be paired with clinical outcomes for
the patients such as patient blood pressure or depression
screening scores.
Mixed-Method Analysis Example within
health workforce training program:
Transformed Primary Care through
Addressing Social Determinants of Health
A health workforce training program has decided to focus
on preparing students to address social determinants of
health. The metric of interest for this program is assessing
improvement of housing status, as the safety-net clinic has
a large uninsured and transient population. As part of the
program, evaluators are collecting data through a patient
satisfaction survey, through focus groups with trainees at
the beginning and end of the program, and through chart
abstraction of the EHR. For the mixed-method analysis, they
planned a pre-post quantitative analysis of the number of
clinical training site patients who have “unstable housing”
status. The focus groups with trainees provided information
on resident experience in assessing and supporting patients
without housing by connecting them to social work staff
as part of the interdisciplinary team. This was combined with
data from surveys on patient experience accessing care and
services. The combination of data sources will inform the
quantitative data on “improved housing status.” If success
is not as high as expected, the data from the student focus
groups may indicate barriers, and the data from patients may
provide information on ways the patients received assistance
in improving access to housing. If housing status was not
improved, patient feedback might indicate if other support
was provided.
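For teams that script their quantitative work, the pre-post piece of this example might look like the sketch below. The counts and the choice of a two-proportion test are illustrative assumptions, not part of the example program’s actual analysis plan.

```python
# Minimal sketch of a pre-post comparison of "unstable housing" counts; the
# numbers are illustrative, and a two-proportion z-test stands in for whatever
# test your evaluation plan specifies.
from statsmodels.stats.proportion import proportions_ztest

unstable_pre, patients_pre = 140, 600    # chart abstraction before the program
unstable_post, patients_post = 110, 620  # chart abstraction after the program

stat, p_value = proportions_ztest(
    count=[unstable_pre, unstable_post],
    nobs=[patients_pre, patients_post],
)
print(f"Unstable housing: {100 * unstable_pre / patients_pre:.1f}% before vs. "
      f"{100 * unstable_post / patients_post:.1f}% after (p = {p_value:.3f})")
```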
STEP 2: Setting program standards
Articulate the values that will be used to consider a program “successful,” “adequate,” or “unsuccessful.”
Program standards are the metrics by which the evaluation results will be assessed after completion of program
data analysis. Using the example of a program that is addressing social determinants of health, consider whether
the result of a 5 percent or a 50 percent increase in patients who have stable housing is a meaningful result. The
purpose of including stakeholders in setting benchmarks is to understand what the users of evaluation findings
consider meaningful. A faculty member, student, and patient may have different interpretations of whether
increasing the percent of patients who have stable housing is successful at a 5 versus 50 percent level. Including
stakeholders in developing the benchmark at the outset of the evaluation will set the team up for consensus on
interpretation of findings at the end of the analysis.
Think about what informs the choice of
benchmarks. In addition to the value and
interpretation of results by stakeholders,
consider the external context that may inform the
development of the benchmark.
What is the average performance at similar
practices/organizations?
Are there standards that the clinic is being held
to by external funders?
Are there preset institutional goals for the metric?
What is realistic to achieve in the timeframe of
the evaluation?
What is the approach if there is no benchmark?
Not all evaluation metrics will have an external
benchmark or even a baseline against which to compare
results. In cases where there is no external benchmark,
consider whether data collected from multiple clinical
sites within the organization can be a reference point.
For example, if using a provider or trainee satisfaction
survey that was tailored to the organization,
comparison with other organizations may not be
available but comparison across departments or sub-groups may provide insights into the data. When benchmark
data are not available, conversation with stakeholders becomes a more important way to build consensus on what
is meaningful change during the project period, and what can be achieved with the time and resources available.
Example benchmarks per objective
OBJECTIVE 1: Develop skills to implement, evaluate, and
teach practice transformation and population health
among trainees.
Program standards:
100 percent of trainees will complete a practice
transformation or population health project.
Trainees will rate their satisfaction with the program
components an average of 7 on a 10 point Likert scale.
Trainees will have improved one clinical measure during
the population health project.
OBJECTIVE 2: Evaluate quality and cost of care within the
clinical training environments used by the trainees.
Program standards:
Improve practice-level measures for two clinical quality
measures over a 2-year period.
Review use and cost data for 20 percent of patients in
clinical training environments and include as part of
trainee data review for population health.
STEP 3: Interpretation of findings and making judgments/recommendations
Judgments are statements about a program’s merit, worth, or significance that are formed when you compare
findings against one or more selected program standards. As you interpret data and make recommendations, be
sure to:
Consider issues of context.
Assess results against available literature and results of similar programs.
If multiple methods have been employed, compare different methods for consistency in findings.
Consider alternative explanations.
Use existing standards as a starting point for comparisons.
Compare actual with intended outcomes.
Document potential biases.
Examine the limitations of the evaluation.
The interpretation process is also aided by review of findings with stakeholders. Presenting the summarized
data to stakeholders helps validate the conclusions and may offer new insights into the results. Most importantly, it
creates buy-in for the findings and any action steps that follow.
Engaging patients in interpretation
Patient perspectives and satisfaction are one component of assessing ability to meet the Three Part Aim. Not
all health workforce training programs will include patient experience data, but those that do may be curious
about how to involve patients in data interpretation. Sharing results with patients may be part of your project
plan to include diverse stakeholder perspectives. Information might be shared through live presentation at a
patient advisory group or patient advisory council meeting. Alternatively, summary results could be included in
an infographic and posted at clinics or included in a patient newsletter. Although the latter option would limit
direct feedback, it conveys that the organization values communication with patients, as well as its research
and quality improvement efforts. For more information on patient advisory groups see the Patient and Family
Advisory Council Getting Started Toolkit.
Interpretation Guide
Example: Transformed Primary Care through Addressing Social Determinants of Health
Outcome of interest: Assessing trainees’ role in addressing social determinants of health through improved housing status of patients.
POINTS TO CONSIDER IN
INTERPRETATION OF DATA
SPECIFIC EXAMPLE FROM A HEALTH WORKFORCE TRAINING PROGRAM
Consider limitations to the data.
Check data for errors.
The housing status data are pulled from an EHR. Consider limitations such as:
Are patients included if the status is left blank?
Are only those patients who saw a physician included? For example, if patients came
in for lab tests or immunizations only, were they excluded?
Was the analysis limited to subgroups (e.g., cases with complete data, patients receiving
medical services)?
Ensure that your findings and interpretation are limited to the data available and are not
overstated.
Consider issues of context when
interpreting data.
Were there changes in housing availability at local shelters or other policy changes that
would affect the ability to increase stable housing during the time period of the study?
Were there changes in the relationship with the local housing director, or in collaborative
meetings with community partners, that would affect how trainees interacted with the clinic
to support housing for patients during the program period?
Assess results against available
literature and results of similar
programs.
Are there studies on the ability of interdisciplinary primary care teams to address unstable
housing?
Is there related literature that might be useful for reference? For example, similar studies
conducted in other practice arrangements, other medical settings, etc.?
If multiple methods have been
employed, compare different
methods for consistency in findings.
How does patient reporting of housing status compare in the EHR to information collected
through a log maintained by practice social workers? To what extent do the results from
the EHR and provider logs tell a similar or different story about patient housing status?
(A brief sketch of this comparison follows this table.)
Consider alternative explanations. If findings are different between EHR and social work log, explore underlying reasons.
Use existing standards as a starting
point for comparisons.
Use standards for discussion but consider how the patient population or the program may
be different from the standard. In the case of housing, standards may not be available,
but the program may compare health outcomes between patients with unstable housing and
those who have achieved stable housing in the program.
Compare actual with intended
outcomes.
If the program goal was an improvement of 50 percent but only 10 percent was achieved, use
the mixed-method analysis of the patient survey and student focus groups to explain the
difference. Explore any unintended outcomes of the program.
Document potential biases. For example, noting that students only worked with women and children because of the
clinic hours.
Examine the limitations of the
evaluation.
Document the time frame, sample size, missing data, and resource constraints that may
limit data interpretation.
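As referenced in the interpretation guide above, a simple script can make the cross-source consistency check concrete. The sketch assumes illustrative file and column names for the EHR extract and the social work log; adapt it to whatever identifiers and status codes your sites actually use.

```python
# Minimal sketch of comparing two data sources for agreement, assuming both
# files share a patient identifier and a housing-status field; names are illustrative.
import pandas as pd

ehr = pd.read_csv("ehr_housing_status.csv")   # columns: patient_id, housing_status
log = pd.read_csv("social_work_log.csv")      # columns: patient_id, housing_status

merged = ehr.merge(log, on="patient_id", suffixes=("_ehr", "_log"))
merged["agree"] = merged["housing_status_ehr"] == merged["housing_status_log"]

print("Patients appearing in both sources:", len(merged))
print(f"Agreement on housing status: {merged['agree'].mean():.0%}")
print(pd.crosstab(merged["housing_status_ehr"], merged["housing_status_log"]))
```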
RESOURCES
Patient and Family Advisory Council: Getting Started Toolkit. Created by Meghan West and Laurie Brown, Skunks Team, BJC Healthcare. Available at: http://c.ymcdn.com/sites/www.theberylinstitute.org/resource/resmgr/webinar_pdf/pfac_toolkit_shared_version.pdf
AHRQ Health Information Technology Evaluation Toolkit. Available at: https://healthit.ahrq.gov/sites/default/files/docs/page/health-information-technology-evaluation-toolkit-2009-update.pdf
IDRE Statistical Consulting Group Web site. Available at: http://www.ats.ucla.edu/stat/
TOOL 5.1
Justify Conclusions Worksheet
Source: CDC Program Evaluation for Public Health Programs: Self-Study Guide
QUESTION RESPONSE
1 Who will analyze the data (and who will coordinate
this effort)?
2 How will data be analyzed and displayed?
3 Against what standards will you compare your
interpretations in forming your judgments?
4 Who will be involved in making interpretations and
judgments and what process will be employed?
5 How will you deal with conflicting interpretations
and judgments?
6 Are your results similar to what you expected? If
not, why do you think they are different?
7 Are there alternative explanations for your results?
8 How do your results compare with those of similar
programs?
9 What are the limitations of your data analysis
and interpretation process (e.g., potential biases,
generalizability of results, reliability, validity)?
10 If you used multiple indicators to answer the same
evaluation question, did you get similar results?
11 Will others interpret the findings in an appropriate
manner?
MODULE 6
Use and Share Lessons
The ultimate purpose of program evaluation is to
use the information to improve programs. Now that
you have analyzed your data, you want to use the
evaluation results to demonstrate the effectiveness of
your health workforce training program, identify ways
to improve the program, modify program planning,
demonstrate accountability, and justify funding.
Follow these ve steps to ensure that you are
using your program data results effectively and
communicating the lessons.
STEP 1: Make recommendations
Recommendations are actions that should be
considered in response to an evaluation. Your
recommendations will depend on the audience and the
purpose of the health workforce training evaluation.
If you have identied and engaged key audiences as
outlined in Module 1, you will maximize the chances
that your recommendations will be relevant and useful
to them.
STEP 2: Prepare recommendations
Thoughtful preparation of recommendations can help:
Strengthen your ability to translate new
knowledge about your health workforce training
program into appropriate action.
Discuss how potential findings might affect
decision making of the health workforce training
program.
Explore positive and negative implications of
potential results and identify different options for
program improvement.
STEP 3: Gather feedback
Gathering feedback on evaluation findings will create an
atmosphere of trust among all your stakeholders. At the
early stages in your evaluation, gathering and sharing
feedback will keep everyone informed about how the
program is being implemented and how the evaluation
is going. As the evaluation progresses and preliminary
results become available, sharing feedback will ensure
that all stakeholders can comment on evaluation
decisions. Valuable feedback can be obtained by
holding discussions and routinely sharing interim
findings, provisional interpretations, and draft reports.
Recommendations may be shared in preliminary
fashion and revised based on stakeholder feedback.
Uses of program evaluation data
Describe program performance and outcomes.
Compare outcomes to previous years.
Compare actual outcomes with intended outcomes.
Support realistic goal forming in the future.
Support program planning in the future.
Focus attention on important issues.
Application to health workforce
training programs
Better engage faculty in the program.
Justify use of resources to administration.
Engage and expand clinical preceptors and sites.
Identify grant or research opportunities.
Educate students on data use.
STEP 4: Follow-up
Follow-up refers to the support users need after
receiving evaluation results and beginning to reach
and justify their conclusions. Active follow-up with your
stakeholders can achieve the following:
Remind users of the intended purposes of the
health workforce training program evaluation.
Help to prevent misuse of results by ensuring
that evidence is applied to the intended
questions, not extrapolated to new questions
(unless appropriate).
Prevent lessons from becoming lost or ignored
in the process of making complex or political
decisions.
STEP 5: Disseminate results and lessons
Dissemination involves communicating the evaluation
processes, results, and lessons to relevant audiences
in a timely, unbiased, and consistent manner. You
should tailor your report timing, style, tone, message
source, vehicle, and format to your audiences.
Methods of getting the information to your audiences
include:
• Mailings.
• Web sites.
• Community forums.
• Personal contacts.
• Listserves.
• Organizational newsletters.
• Meetings and conferences.
• Scholarly and professional publications.
Use the Communications Plan Worksheet listed in the
resources section (example below) to help you identify
your health workforce training audience and the most
effective formats and channels for disseminating
results to them.
If you develop a formal evaluation report to discuss
your health workforce training evaluation findings, it
must clearly, succinctly, and impartially communicate
major components of the evaluation. The report should
be written so that it is easy to understand and not
lengthy or technical. You should also consider oral
presentations tailored to various audiences.
TIP: Consider using a data dashboard as a way to effectively
communicate evaluation results to leadership and your
stakeholders. A great dashboard can showcase actionable
information and focus a user’s attention on the most
important information on the page. This document provides
information on how to develop an effective data dashboard.
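If you script your analysis, the summary numbers feeding a dashboard can be refreshed with a few lines of code. The sketch below uses an illustrative visit-level extract and indicator names; the dashboard itself would be built in whatever tool your organization already uses.

```python
# Minimal sketch of preparing dashboard-ready summary numbers; file and column
# names ("clinic_visits.csv", "crc_screening_done", "unstable_housing") are illustrative.
import pandas as pd

visits = pd.read_csv("clinic_visits.csv", parse_dates=["visit_date"])

summary = (
    visits.assign(quarter=visits["visit_date"].dt.to_period("Q"))
    .groupby("quarter")
    .agg(
        patients_seen=("patient_id", "nunique"),
        crc_screening_rate=("crc_screening_done", "mean"),
        unstable_housing_rate=("unstable_housing", "mean"),
    )
)
print(summary)  # one row per quarter, ready to chart or paste into a dashboard
```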
Case example, by audience
Audience: health workforce training faculty
Purpose of evaluation: Assess mentorship program impact
on trainee prociency and skills to lead a population health
quality improvement project.
Recommendation: Trainees want to be able to discuss
population health quality improvement project results with
interdisciplinary team. Include a facilitation skill module in
next year’s mentorship program.
Audience: Preceptor clinical sites
Purpose of evaluation: Assess mentorship program and
trainee population health quality improvement project on
clinical outcome of increased colorectal cancer screening.
Recommendation: Implement process improvements
identied through trainee quality improvement project that
demonstrated higher colorectal cancer screening rates.
Evaluation report outline
• Executive Summary
• Background and Purpose
– Program background and rationale
– Program purpose and activities
– Key evaluation questions
• Evaluation Methods
– Design
– Sampling procedures
– Measures or indicators
– Data collection procedures
– Data processing procedures
– Analysis
• Results
• Discussion and Recommendations
• Limitations
• Conclusion
Example: Communication Plan Worksheet
I need to communicate to this AUDIENCE: | The FORMAT that would be most appropriate: | This CHANNEL would be most effective:
Faculty | Short PowerPoint presentation | Spring faculty meeting
TOOL 6.1
Communication Plan Worksheet
I need to communicate to this AUDIENCE: | The FORMAT that would be most appropriate: | This CHANNEL would be most effective:
MODULE 7
Special Topics of Health Workforce Training Programming
Overview
Health workforce training grantees focus their enhanced training programs in a range of areas. Some of the most
common are interdisciplinary training, integrated behavioral health, addressing social determinants of health, and
population health. Some examples of the types of programs and evaluation approaches to each are described
here, based on existing funded Primary Care Training Enhancement grantee programs. This module also
provides tools and resources within these areas that may be helpful. Finally, this module includes an evaluation
checklist to ensure you are ready for your health workforce training evaluation and provides corresponding
resources to support your evaluation.
Enhanced training topics and sample evaluation questions and methods
Interdisciplinary training
PROGRAM OBJECTIVE EVALUATION QUESTIONS EVALUATION APPROACHES
To prepare interdisciplinary teams of
health professionals to test PCMH program
innovations.
To what extent are trainees comfortable with
PCMH concepts?
Trainee focus groups or questionnaire
evaluations on PCMH core competency topics.
What elements of PCMH do preceptor sites
have in place?
Use of a PCMH self-assessment site level tool.
What are the clinical outcomes related to
PCMH at the preceptor sites?
Care coordination assessment from patient
CG-CAHPS survey.
To prepare trainees to practice in high
functioning multi-disciplinary teams.
Will trainees show an increased level of
knowledge, attitude, and skills in working
with team members of other disciplines?
Trainee assessments using a readiness scale
for interprofessional learning.
Do patients report higher satisfaction with
care from interdisciplinary team?
Care coordination assessment from patient
CG-CAHPS survey.
Integrated behavioral health
PROGRAM OBJECTIVE EVALUATION QUESTIONS EVALUATION APPROACHES
Expose trainees to models of integrated
behavioral health and primary care.
Do trainees trained in integrated behavioral
health models have greater interest in
practicing in primary care?
Trainee tracking of post-graduate training and
employment through graduate surveys.
Do preceptor sites of trainees advance in their
development of integrated care programs?
Organizational level practice/site assessment
of the components of integrated health using
the MeHAF Site Self-Assessment tool or the
Integrated Practice Assessment Tool (IPAT).
Do patients have increased access to
integrated care?
Practice level assessments of wait time for
behavioral health appointments.
What is the impact of integrated behavioral
health on cost?
Data from Medicaid managed care on patient
utilization of services.
Addressing social determinants of health
PROGRAM OBJECTIVE EVALUATION QUESTIONS EVALUATION APPROACHES
To prepare graduates to provide
health education at the appropriate
educational level.
How skilled are trainees in delivering
health education?
Trainee assessment of skills through
observation.
Patient report of their experience in care
and trainee skills through use of patient
survey such as CG-CAHPS.
Patient knowledge of medication risks
assessed through survey of patients.
Comparison of emergency department
utilization among patients with patient
education to those without patient
education.
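For the last approach above, a minimal comparison of emergency department use between groups might look like the following sketch; the data source and column names are illustrative, and the visit counts would typically come from claims or the EHR.

```python
# Minimal sketch comparing ED utilization between patients who did and did not
# receive trainee-delivered health education; column names are illustrative.
import pandas as pd

panel = pd.read_csv("patient_panel.csv")  # columns: patient_id, received_education, ed_visits_12mo

rates = panel.groupby("received_education")["ed_visits_12mo"].agg(["mean", "count"])
print(rates)  # mean ED visits per patient over 12 months, by education group
```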
Population health and quality improvement
PROGRAM OBJECTIVE EVALUATION QUESTIONS EVALUATION APPROACHES
Enhance skills of multi-disciplinary
trainees in population health and quality
improvement.
Do patients who receive care from trainees
and graduates of the program experience
higher levels of quality of care?
Select one to three clinical quality
measures to assess at the trainee/
preceptor level.
Track one to three clinical outcomes for
graduates who choose to work within
the medical center system, and compare
their clinical outcomes to those of non-graduates.
Compare emergency department
and inpatient utilization of patients
empaneled with trained graduates
with that of patients empaneled with
non-graduates of the program, using
Medicaid managed care data.
Provide trainees with the knowledge,
skills, and professional development
required to champion quality
improvement and patient safety
practices.
Are trainees exposed to, and do they gain
experience working in, a team-based
environment that focuses on quality
improvement?
Assessment of the trainee preceptor
environment for team-based training
using the Teamwork Perceptions
Questionnaire.
What improvements in quality are
achieved by health workforce training
trainee quality improvement projects?
Assessment of progress in trainee
projects through selection of clinical
measures appropriate to their project and
tracking these measures over the quality
improvement period.
Ensure trainees are trained on tools
leveraging health IT to support screening,
risk assessment, and use of patient
registries.
Are trainees more adept at using
population health management tools?
Focus groups with trainees on
their experience in leading quality
improvement projects.
Are trainees exposed to a preceptor site
utilizing data driven population health
approaches to care?
Practice level assessment using the
Analytics Capacity Assessment.
Matrix of interdisciplinary training and evaluation tools
Interprofessional Education
TITLE SOURCE DESCRIPTION
TRAINING TOOLS
National Center for Interprofessional
Practice and Education
National Center for Interprofessional
Practice and Education
The National Center supports evaluation, research,
data, and evidence that ignites the field of
interprofessional practice and education and
leads to better care, added value, and healthier
communities.
EVALUATION TOOLS
National Center for Interprofessional
Practice and Education-Assessment
and Evaluation
National Center for Interprofessional
Practice and Education
The National Center for Interprofessional Practice
and Education has a robust library of resources for
evaluation. A few of the resources are highlighted
here as examples, but please see their library for
more than 35 different instruments.
Assessing Health Care Team
Performance: A Review of Tools and
the Evidence Supporting Their Use
National Center for Interprofessional
Practice and Education
A review of tools to assess health team work
performance.
Authors: Marlow S, Lacerenza C, Iwig C, Salas E.
Teamwork Perceptions
Questionnaire (T-TPQ)
The Agency for Healthcare Research
and Quality
TeamSTEPPS perceptions questionnaire is
from the TeamSTEPPS® Instructor manual and
assesses team functioning, leadership, situation
monitoring, mutual support, and communication.
TeamSTEPPS® is a teamwork system designed
for health care professionals to address patient
safety and develop an evidence-based teamwork
system.
Authors: Department of Defense Patient Safety
Program in collaboration with the Agency for
Healthcare Research and Quality
Interprofessional Socialization and
Valuing Scale (ISVS-21)
The Agency for Healthcare Research
and Quality
The ISVS-21 is a self-report instrument designed
to measure interprofessional socialization among
students and health practitioners and their
readiness to function in interprofessional teams.
Items were developed to capture respondent
beliefs, attitudes, and behaviors at baseline and at
post-intervention time periods.
Authors: King G, Orchard C, Khalili H, Avery L.
Readiness for Interprofessional
Learning Scale (RIPLS)
National Center for Interprofessional
Practice and Education
This is a 19-item tool with a five-point scale to
assess interprofessional students’ attitudes towards
interprofessional learning. It is designed to capture
changes in perceptions and attitudes in the
domains of teamwork and collaboration, negative
and positive professional identity, and roles and
responsibilities.
Authors: Parsell G, Bligh J.
Behavioral Health Integration
TITLE SOURCE DESCRIPTION
TRAINING TOOLS
SAMHSA-HRSA Center for
Integrated Health Solutions
SAMHSA-HRSA Center for Integrated
Health Solutions
This center provides a range of resources for
the development of integrated primary care and
behavioral health (substance use and mental
health). This includes information on workflow,
Health IT, billing, and screening tools.
EVALUATION TOOLS
MeHAF Site Self-Assessment The Maine Health Access Foundation This tool was developed to assess levels of
integration achieved at the clinic or practice level.
It is based on the MacColl Institute ACIC. The tool
focuses on two domains: 1) integrated services
and patient and family services; and 2) practice/
organization. Each domain has nine characteristics
that you rate on a scale of 1 to 10 depending on
the level of integration or patient-centered care
achieved.
Author: Maine Health Access Foundation
The Integrated Practice Assessment
Tool (IPAT)
SAMHSA-HRSA Center for Integrated
Health Solutions
This tool is a practice level assessment of
integration based on the SAMHSA/HRSA Integrated
Solutions framework “A Standard Framework for
Levels of Integrated Healthcare”. The assessment
uses a decision tree rather than scored
assessment metric.
Author: Wasmonsky J, Auzier A, Romero PW, and
Heath B
Population Health
TITLE SOURCE DESCRIPTION
TRAINING TOOLS
Population Health Management:
Concepts for Health Centers
The HITEQ Center This is a 4-module PowerPoint presentation
intended as background to introduce the field
of population health management. It provides
an overview of population health concepts, and
discusses the role of the social determinants and
population health management within the general
population.
Authors: The HITEQ Center
Building a Data-Driven Culture The Center for Care Innovations
(CCI)
The Center for Care Innovations (CCI) offers
a series of videos to share how to guide the
development of a data-driven organization, where
staff at all levels embrace the use of data to
support population health.
Authors: The Center for Care Innovations
EVALUATION TOOLS
Safety Net Medical Home – Patient
Centered Medical Home Assessment
The Commonwealth Fund This publicly available self-assessment tool of
PCMH assesses progress at the clinic or practice
site level. It includes topics of importance for
safety-net providers such as interpretation and
covers six domains: Access and Communication,
Patient Tracking and Registry, Care Management,
Test and Referral Tracking, Quality Improvement,
and External Coordination.
Authors: University of Chicago and The
Commonwealth Fund
Analytics Capacity Assessment The Center for Care Innovations
(CCI)
This organizational level assessment helps a
practice/clinic understand its current capacity to
use data and analytics, a foundation for population
health. The tool scores organizations into four
domains: reactive, responsive, proactive, and
predictive.
Authors: Center for Care Innovation (CCI)
ACES: Ambulatory Care Experience
Survey
The Agency for Healthcare Research
and Quality
The ACES survey is distributed to patients and
families to assess their experience in care,
including experience with primary care provider
interactions and organizational features of
care. It includes questions on interpersonal
communication, creating proactive plan of care,
and information transfer across care settings.
Authors: Safran D, Karp M, Coltin K, Chang H, Li A,
Ogren J, Rogers W.
TOOL 7.1
Evaluation Capacity and Readiness checklist
The following checklist will support you in planning and preparing to begin your evaluation work. Please see the
related modules for tools, resources, and guidance to support you in each area of evaluation.
CHECKLIST RELATED MODULES AND RESOURCES
1. Do you have an evaluator on staff?
2. Do you have dedicated time for
evaluation activities?
3. Is a logic model in place and has it
been developed and vetted with the
evaluation team and other stakeholders?
Module 1: Engaging Stakeholders for Your
Primary Care Training and Enhancement
Evaluation
Module 2: Describe the Program
4. Have you dened your evaluation
questions?
Module 3: Focus Evaluation Design
5. Have you dened the methods and data
sources for each evaluation question?
Module 4: Gather Credible Evidence
6. Have you conrmed the tools for the
assessment of competency at trainee
level? Have you identied tools to assess
capacity at the organizational level?
Module 7: Special Topics
7. Have you developed a timeline and
assigned team roles and responsibilities
for data collection?
Module 4: Gather Credible Evidence
Modules 5 and 6 will support you in analysis of your evaluation findings and sharing your results with
stakeholders.
Supplemental Bibliography
The Health Workforce Training Program has assembled a list of peer-reviewed journal articles focusing on
health professional education and measurement of access, quality, and cost.
PROGRAM AND SOURCE (article author, year) | TRIPLE AIM ELEMENTS EVALUATED (Access/Patient Experience; Quality of Care; Cost/Utilization) | TARGET GROUPS & SETTING | FOCUS OF INNOVATION/INTERVENTION | RESULTS (evaluation/program impact discussed)
NY Hospital
Medical Home
Program
Angelotti, 2015 [1]
Residents (IM,FM,
Peds);
156 Outpatient
sites statewide; 118
residency programs
PCMH transformation
of residency clinics
(Plan-Do-Study-Act,
coaching, resources,
website) via state
Medicaid waiver
All sites achieved PCMH
recognition; Improved colorectal
and breast cancer screening
rates; 8/17 clinical measure
composite scores signicantly
improved.
I3 POP
Collaborative
(NC, SC, VA)
Donahue, 2015 [2]
Residents in 27 PC
residency programs
Pragmatic learning
collaborative
for practice
transformation
focused on Triple Aim
improvements
Baseline data; ability to report
core measures was associated
with having a patient registry
and having faculty involved in
data management; variance
between health care systems’
use of identical software
products; reporting very difficult
during EMR transitions; little
commonality in data acquisition
Northwestern U
Medical School
Henschen, 2015 [3]
Medical students
during clerkship
(n=69)
Education-centered
Medical Home
curriculum
ECMH students had more
continuity of care experiences,
higher satisfaction, more
condence in QI skills, higher
patient-centeredness.
Pennsylvania
Acad. of Family
Physicians
Residency
Collaborative
Losby, 2015 [4]
Residents of 24
programs over 3
years
PCMH/Chronic Care
Model learning
collaborative; RCQI
, peer-to-peer
guidance and TA via
faculty mentors
Signicant increases in PCMH
components, related to number
of live learning sessions
done; positively attributed
collaborative participation to
transformation efforts; process
measure increases (retinal &
foot exams; smoking cessation,
self-management)
Oregon Health &
Science University
White, 2014 [5]
Residents and staff
in FM clinic
Practice
transformation
with enhanced
care coordination,
care managers,
readmission reports
Reduced readmission rates
in transformed practice (27%
to 7%) compared to variable,
nonsignicant trend in control
practices; interaction between
groups showed signicant
difference.
Los Angeles
County/U Southern
California
Hochman, 2013 [6]
Residents in IM safety
net clinic
PCMH intervention
designed with patient/
staff input
PCMH clinic had increased
patient & resident satisfaction,
increased hospital admissions,
no difference in ED visits.
Northwestern U
Medical School
O’Neill, 2013 [7]
Medical students
(n=202) in 13 clinics
QI curriculum and
teams of students
in clinics adopting
PCMH principles;
panels of “high risk”
patients
Students improved self-ratings
of multiple QI skills; Teams
used performance data for QI;
Students provided range of
PCMH services/roles (phone
outreach, care coordination,
health behavior coaching,
identication of quality measure
decit); Quality performance
high for many items; improved
for chlamydia screening, diabetic
eye exams, asthma care
Rockford Rural
Medical Education
(RMED) Program
MacDowell, 2013 [8]
Medical students
(13-20/yr) in RMED
curriculum
Selected students
(from rural areas)
trained with rural
primary care
preceptors and rural-
focused curriculum
RMED graduates more likely to
provide primary care, choose
FM and be practicing in rural
location
Free Clinics of
Henderson County,
NC (P4 site)
Crane, 2012 [9]
Rural-track FM
residents and
interprofessional
team
Drop in group medical
appointments with
residents and team
for low income,
uninsured patients
(high ED utilizers)
ED use decreased signicantly;
hospital charges reduced from
$116 to $23 per patient/month.
Assessing Care
of the Vulnerable
Elderly (ACOVE)
Holmboe, 2012 [10]
IM & FM residency
programs (41); 20
intervention
21 control
Multicomponent,
web-based QI tool
to improve care of
older adults; practice
improvement module
(PIM) of Am Board
of IM
Poor baseline levels of elderly
care measures;
Signicant improvement in
documenting surrogate decision
maker, end of life preferences
and fall risk assessment w/
intervention.
Preparing the
Personal Physician
for Practice (P4)
Carney, 2011 [11]
14 FM residency
programs nationwide
(334 residents, 24
clinics)
Various residency
transformation
innovations over 6
years (2007-2012)
Descriptive paper with high level
outline of overall P4 Project. (no
specic results) Appendix with
innovations, hypotheses and
study measures listed by site.
I3 Collaborative
(NC, SC)
Newton, 2011 [12]
Residents (N=252)
and faculty (n=92)
from 10 FM residency
programs
Regional QI
collaborative focused
on improving diabetes
and CHF care
Signicant improvement in
diabetic foot exams & HbA1c
testing; for CHF, signicant
improvement in beta blocker
and ACE use, self-management
rates; 38% reduction in
hospitalizations resulting in
estimated cost reduction of $3.6
million quarterly (156 fewer
admissions @ $23K/admission
average cost)
I3 PCMH
Collaborative
(NC, SC, VA)
Reid, 2011 [13]
Residents & faculty
in 25 primary care
teaching practices in
3 states
20-month learning
collaborative
focused on practice
transformation and
PCMH recognition
48% achieved PCMH recognition
or submitted applications;
overall positive responses
concerning role of collaborative
in transformation
Am. Osteopathic
Assoc. Clinical
Assessment
Program (AOA-CAP)
Shubrook, 2011 [14]
Osteopath. FM
residents from 52
programs
Standardized database
for measurement
and performance
improvement across
residency programs
Composite process of care
scores improved with repeated
participation but no signicant
change in intermediate clinical
measures
National Academic
Chronic Care
Collaborative
(ACCC) and
California ACCC
(CACCC)
Stevens, 2010 [15]
Residents (57 teams)
in safety net clinics,
41 were focused on
diabetes
Chronic Care Model
(CCM) Learning
Collaborative and
curriculum changes,
practice redesign,
RCQI involving
diabetes, COPD,
asthma, HCV
Substantial CCM-related
learning; inconsistent
improvement in clinical and
process measures
U of California San
Francisco
Janson, 2009 [16]
Residents (120 IM),
students (39 NP, 35
pharmacy)
Interprofessional
teams, Improving
Chronic Illness Care
(ICIC) Model for
patients with type 2
diabetes, group visits
Intervention patients had more
frequent process measures
(HbA1c, LDL, BP, microalbumin,
smoking, foot exams), more
planned GM visits, learners
rated themselves higher on ICIC
accomplishment, preparation
and success.
Maine Medical
Center Chronic
Care Collaborative
Greene, 2007 [17]
Pedi, IM, FM residents
(41)
Chronic Care Model
(CCM) training
for asthma care,
supported by RWJ
grant
Residents reported access to
CCM elements (ED use reduced
43% in CCM pts;
47% reduction in pediatric
asthma charges; 36% reduction
in adult asthma charges).
Healthy Steps for
Young Children
Niederman, 2007 [18]
Pediatric residents
Healthy Steps (HS)
practice model; home
visits, “specialist”
co-practitioner,
continuity of care
(COC) emphasis
HS had greater COC indices,
more health maintenance visits;
no difference in duration of care;
No difference in quality of
preventive services or diagnoses
of interest.
Trend toward better
documentation of diagnoses in
HS group.
U of Alabama
School of Medicine,
Birmingham
Houston, 2006 [19]
Residents (130 IM,
78 Peds) in continuity
clinics, urban safety
net
Public Health
Achievable
Benchmarks
Curriculum (ABC) with
multifaceted feedback
IM group: 4/6 measures
increased signicantly more
than controls
(pneumovax, screening for
CRC, lipids, smoking cessation
referral)
Peds group: 2/6 measures
increased signicantly more
than controls (parental smoking
cessation referral, car restraints)
New York Upstate
Medical U Rural
Medical Education
(RMED) Program
Smucny, 2005 [20]
Medical students
(n=132) who
graduated from NY
RMED curriculum
1990-2003
Rural-focused
curriculum with
36 week clinical
experience in rural
communities;
community programs
& projects involved;
local hospitals provide
housing; stipends
given pre-2001
RMED graduates were more
likely to be in rural location
(26% vs. 7% non-RMED)
and had signicantly higher
USMLE step 2 scores. 50%
characterized their practice
setting as “rural” and 67% were
very satised there (no plans to
move).
Hospital administrators
identied many benets of
RMED to their facility, staff
and community, including
recruitment, retention, quality of
care advantages.
1 Angelotti M, Bliss K, Schiffman D, et al. Transforming the Primary Care
Training Clinic: New York State’s Hospital Medical Home Demonstration
Pilot. J Grad Med Educ. 2015;7(2):247-252.
2 Donahue KE, Reid A, Lefebvre A, Stanek M, Newton WP. Tackling the
triple aim in primary care residencies: the I3 POP Collaborative. Fam Med.
2015;47(2):91-97.
3 Henschen BL, Bierman JA, Wayne DB, et al. Four-Year Educational and
Patient Care Outcomes of a Team-Based Primary Care Longitudinal
Clerkship. Acad Med J Assoc Am Med Coll. 2015;90(11 Suppl):S43-S49.
4 Losby JL, House MJ, Osuji T, et al. Initiatives to Enhance Primary Care
Delivery: Two Examples from the Field. Health Serv Res Manag Epidemiol.
2015;2.
5 White B, Carney PA, Flynn J, Marino M, Fields S. Reducing hospital
readmissions through primary care practice transformation. J Fam Pract.
2014;63(2):67-73.
6 Hochman ME, Asch S, Jibilian A, et al. Patient-centered medical home
intervention at an internal medicine resident safety-net clinic. JAMA Intern
Med. 2013;173(18):1694-1701.
7 O’Neill SM, Henschen BL, Unger ED, et al. Educating future physicians to
track health care quality: feasibility and perceived impact of a health care
quality report card for medical students. Acad Med J Assoc Am Med Coll.
2013;88(10):1564-1569.
8 MacDowell M, Glasser M, Hunsaker M. A decade of rural physician
workforce outcomes for the Rockford Rural Medical Education (RMED)
Program, University of Illinois. Acad Med J Assoc Am Med Coll.
2013;88(12):1941-1947.
9 Crane S, Collins L, Hall J, Rochester D, Patch S. Reducing utilization by
uninsured frequent users of the emergency department: combining case
management and drop-in group medical appointments. J Am Board Fam
Med JABFM. 2012;25(2):184-191.
10 Holmboe ES, Hess BJ, Conforti LN, Lynn LA. Comparative trial of a
web-based tool to improve the quality of care provided to older adults in
residency clinics: modest success and a tough road ahead. Acad Med J
Assoc Am Med Coll. 2012;87(5):627-634.
11 Carney PA, Eiff MP, Green LA, et al. Preparing the personal physician
for practice (p4): site-specic innovations, hypotheses, and measures at
baseline. Fam Med. 2011;43(7):464-471.
12 Newton W, Baxley E, Reid A, Stanek M, Robinson M, Weir S. Improving
chronic illness care in teaching practices: learnings from the I3
collaborative. Fam Med. 2011;43(7):495-502.
13 Reid A, Baxley E, Stanek M, Newton W. Lessons From the I3 PCMH
Collaborative. Fam Med. 2011;43(7):487-494.
14 Shubrook JH, Snow RJ, McGill SL. Effects of repeated use of the American
Osteopathic Association’s Clinical Assessment Program on measures
of care for patients with diabetes mellitus. J Am Osteopath Assoc.
2011;111(1):13-20.
15 Stevens DP, Bowen JL, Johnson JK, et al. A multi-institutional quality
improvement initiative to transform education for chronic illness care
in resident continuity practices. J Gen Intern Med. 2010;25 Suppl
4:S574-S580.
16 Janson SL, Cooke M, McGrath KW, Kroon LA, Robinson S, Baron RB.
Improving chronic care of type 2 diabetes using teams of interprofessional
learners. Acad Med J Assoc Am Med Coll. 2009;84(11):1540-1548.
17 Greene J, Rogers VW, Yedidia MJ. The impact of implementing a chronic
care residency training initiative on asthma outcomes. Acad Med J Assoc
Am Med Coll. 2007;82(2):161-167.
18 Niederman LG, Schwartz A, Connell KJ, Silverman K. Healthy Steps for
Young Children program in pediatric residency training: impact on primary
care outcomes. Pediatrics. 2007;120(3):e596-e603.
19 Houston TK, Wall T, Allison JJ, et al. Implementing achievable benchmarks
in preventive health: a controlled trial in residency education. Acad Med J
Assoc Am Med Coll. 2006;81(7):608-616.
20 Smucny J, Beatty P, Grant W, Dennison T, Wolff LT. An evaluation of the
Rural Medical Education Program of the State University Of New York
Upstate Medical University, 1990-2003. Acad Med J Assoc Am Med Coll.
2005;80(8):733-738.