Checklist Validation

Checklist Validation Process

    1. Solicit expert peer reviews of the draft checklist.
    2. Revise the checklist based on expert feedback.
    3. Solicit additional expert peer reviews if needed.
    4. Prepare the checklist for field testing.
    5. Post the checklist in the field test section of the checklist website.
    6. Recruit field testers.
    7. Revise the checklist based on field test results.

The validation process for the evaluation checklists has evolved over time. When the project first started in 1999, checklist authors received feedback on their checklists from the Evaluation Checklist Project director and obtained peer feedback at their own discretion. Beginning in 2003, checklists submitted to the site were subject to a blind peer-review process, similar to that used by academic journals. As part of the improvement and expansion of the project catalyzed by funding from the Faster Forward Fund in 2017, a field testing phase was added to ensure checklists are usable and have tangible benefits when used in real-world conditions.

Criteria for Evaluation Checklists

Checklists accepted for inclusion in the Evaluation Checklist Project collection should meet the following criteria. These criteria were derived in part from Bichelmeyer (2003), Centers for Disease Control and Prevention (2015), Degani and Weiner (2016), Scriven (2007), Stufflebeam (2000), and Willmore (2006). Complete citations can be found in the project charter.

  • Appropriateness of evaluation content
    • The checklist addresses one or more specific evaluation tasks (e.g., a discrete task or an activity that cuts across multiple tasks).
    • The checklist clarifies or simplifies complex content to guide the performance of evaluation tasks.
    • Content is based on credible sources, including the author’s experience.
    • Content is consistent with the Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011) and the American Evaluation Association’s Guiding Principles for Evaluators (2013) and Statement on Cultural Competence in Evaluation (2011).
    • Content does not overtly favor one evaluation approach over others unless the checklist is intended to support the application of a particular evaluation approach. 
  • Clarity of purpose
    • A succinct title clearly identifies what the checklist is about.
    • A brief introduction orients the user to the checklist’s purpose, including the following:
      • The circumstances in which it should be used
      • How it should be used (including caveats about how it should not be used if needed)
      • Intended users 
  • Completeness and relevance
    • All essential aspects of the evaluation task(s) are addressed.
    • All content is pertinent to what users need to do to complete the task(s). 
  • Organization
    • Content is presented in a logical order, whether conceptually or sequentially.
    • Content is organized in sections labeled with concise, descriptive headings.
    • Complex steps or components are broken down into multiple smaller parts. 
  • Clarity of writing
    • Content is focused on what users should do, rather than questions for them to ponder.
    • Everyday language is used, rather than jargon or highly technical terms.
    • Verbs are direct and action-oriented.
    • Terms are precise.
    • Terms are used consistently.
    • Definitions are provided for terms that users might not already know, at the point where those terms are used.
    • Sentences are concise.
  • References and sources
    • Sources used to develop the checklist’s content are cited.
    • Additional resources are listed for users who wish to learn more about the topic.
    • A preferred citation for the checklist is included (at the end or beginning of the checklist).
    • The author’s contact information is included.

Checklist Reviewers

The Evaluation Center at Western Michigan University relies on qualified individuals to review checklists that are offered for use in evaluation projects.

  • Reviewers (2003 to present)
    • Peter Airasian, Boston College
    • Marvin Alkin, University of California, Los Angeles
    • James Altschuld, Ohio State University
    • Barbara Bichelmeyer, Indiana University
    • Penny Billman, REGS Consulting
    • Katrina Bledsoe, College of New Jersey
    • Valerie Caracelli, U.S. Government Accountability Office
    • Scott Chaplowe, Children's Investment Fund Foundation
    • Donald Compton, Centers for Disease Control and Prevention
    • Leslie Cooksy, Sierra Health Foundation
    • Katharine Cummings, Western Michigan University
    • Sue Funnell, Performance Improvement Pty Ltd, Australia
    • Jennifer Greene, University of Illinois
    • Gary Henry, Georgia State University
    • Rodney Hopson, Duquesne University
    • Jerry Horn (emeritus), Western Michigan University
    • Kylie Hutchinson, Community Solutions
    • Steve Jurs (emeritus), University of Toledo
    • Jean King, University of Minnesota
    • Al Koller, Brevard Community College
    • Goldie MacDonald, Centers for Disease Control and Prevention
    • Krystin Martens, Western Michigan University
    • Wes Martz, Kadant, Inc.
    • Sandra Mathison, University of British Columbia
    • Howard Mzumara, Indiana University-Purdue University Indianapolis
    • Kathryn Newcomer, George Washington University
    • Cynthia Phillips, National Science Foundation
    • Hallie Preskill, University of New Mexico
    • Sheila Robinson, Custom Professional Learning
    • Todd Rogers, University of Alberta, Canada
    • Daniela Schroeter, Western Michigan University
    • Michael Scriven, Claremont Graduate University
    • M. F. Smith, Evaluator’s Institute
    • Nick Smith, Syracuse University
    • Wendy Tackett, iEval
    • Boris Volkov, University of Minnesota
    • Stephanie Wilkerson, Magnolia Consulting
    • Caroline Wylie, Educational Testing Service
  • Field Testers (2017 to present)
    • Christina Bierring, Independent Consultant
    • Martha Brown, RJAE Consulting
    • Courtney Coleman, Centers for Disease Control and Prevention
    • Fraser Dalgleish, Florida Atlantic University
    • Cassandra Davis, Centers for Disease Control and Prevention
    • Melissa Demetrikopoulos, Institute for Biomedical Philosophy
    • Nora Douglas, New Mexico Digital Inclusion Network
    • Pamela Eddy, William & Mary
    • Bolaji Fapohunda, Alliance for Population Health International
    • Ann Gillard, The Hole in the Wall Gang Camp
    • Andrea Gregg, Pennsylvania State University
    • Aric Gregg, University of Wisconsin Stout
    • Brittnee Hawkins, Centers for Disease Control and Prevention
    • Melissa Jennings, Centers for Disease Control and Prevention
    • Teresa Kinley, Centers for Disease Control and Prevention
    • Melissa Kovacs, FirstEval
    • Michelle Leander-Griffith, Centers for Disease Control and Prevention
    • Shelley Maberry, Maberry Consulting
    • Nancy Marker, University of Hawaii
    • Elizabeth Peery, Magnolia Consulting
    • Ben Reid, Impact Allies
    • Matthew Roberts, University of Minnesota
    • Mike Rudibaugh, Lake Land College
    • Karen Snyder, Free the Slaves
    • Leonard Sterry, Global Dynamics
    • Jessica Weitzel, Via Evaluation
    • Manjari Wijenaike, Independent Consultant

Checklist Field Testing

The Evaluation Checklist Project invites the review and field testing of the Checklist for Effective Communication in Interviews and Focus Groups.

The purpose of field testing is to ensure checklists are usable and have tangible benefits when used in real-world conditions. Feedback provided by field testers is used by checklist authors to improve their checklists before they are finalized.
To Participate in Field Testing:

  1. Email to let her know you would like to field test the checklist. She will send you a copy of the checklist.
  2. Use the checklist to prepare for conducting an interview or focus group.
  3. Submit your feedback using the link that will be provided to you with the checklist.