Evaluation Cafe 2005-06

The Evaluation Center at Western Michigan University maintains a directory of past Evaluation Cafe presentations for instructional purposes.


"Theory-Free Evaluation"

Date: Sept. 14, 2005
Presenter: Dr. Michael Scriven, professor of philosophy and associate director of The Evaluation Center, and director of the interdisciplinary Ph.D. in Evaluation program, WMU.
Abstract: Theories enter evaluation in two ways: once at the metalevel, via theories (a.k.a. models) of evaluation, and once at the program level, as program logic or program theories. Scriven critically examined both levels and argued for a new assessment of each on the grounds of performance.

"Building Capacity for Evaluation and Providing Evaluation Services in Changing Organizations: Lessons from the Field"

Date: Sept. 22, 2005
Presenters: Dr. Gary Miron, principal research associate and chief of staff, and Barbara Wygant, project manager, The Evaluation Center, WMU.
Abstract: The presentation included a brief review of contractual services provided by The Evaluation Center to diverse and changing organizations over the past few years. A number of issues and concerns arising from that experience were discussed, including:

  • Why and how flexible contractual arrangements can be made.
  • The importance of separating technical assistance and actual evaluation services.
  • Means and importance of ensuring client ownership.
  • Working in diverse communities.
  • Tools and tips regarding communication of findings to clients and key stakeholders.
  • "Closing the Deal: Tips for Working Effectively with Evaluation Clients."

"Closing the Deal: Tips for Working Effectively with Evaluation Clients"

Date: Sept. 29, 2005
Presenter: Dr. Jerry Horn, principal research associate, The Evaluation Center, WMU.
Abstract: As we initiate and continue discussions on prospective evaluation projects, it is important for us to understand the interests, concerns and issues from the client’s perspective. During this session, Horn presented some ideas and practices that have worked over the years and facilitated discussion among participants about how we can be more effective in the process of closing the deal for evaluation arrangements and contracts.

View the September 29 presentation slides

"Making a Causal Argument for Training Impact"

Date: Oct. 6, 2005
Presenter: Dr. Robert Brinkerhoff, professor and coordinator of graduate programs in human resources development, WMU.
Abstract: When training outcomes and value are defined as the results of improved performance (e.g., an increase in sales or productivity), a multitude of other causes enters the picture and clouds claims that training had impact. Yet clients for evaluations of training programs want proof that training did or did not have impact. The Success Case Method attempts to substantiate such claims by building a "beyond a reasonable doubt" argument.

Download the October 6 handout

"Market Pricing for Evaluation Contracting"

Date: Oct. 13, 2005
Presenter: Dr. Carl Hanssen, senior research associate, The Evaluation Center, WMU.
Abstract: This session presented a method for developing evaluation pricing using a market-based approach. The approach is predicated on the idea that university-based research units should seek, where possible, to over-recover their costs for conducting evaluation and research contracts. Over-recovered funds can then be used to fund research, scholarship and other development activities. Key components of the model include market-based billing rates, allowances for other direct costs, allowances for indirect costs and a parallel budget that reflects actual costs.
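
The arithmetic of such a model can be illustrated with a minimal Python sketch. All figures here (hours, billing rate, salary rate, indirect-cost percentage) are hypothetical, and the sketch is only one plausible reading of a market-based pricing approach, not the model Hanssen presented.

  # Sketch: market-priced contract budget vs. a parallel budget of actual costs.
  # All rates and figures are hypothetical, for illustration only.

  def market_budget(hours, market_rate, other_direct, indirect_pct):
      """Price quoted to the client using market-based billing rates."""
      direct = hours * market_rate + other_direct
      return direct + direct * indirect_pct  # add indirect-cost allowance

  def actual_budget(hours, salary_rate, other_direct, indirect_pct):
      """Parallel budget reflecting what the work actually costs the unit."""
      direct = hours * salary_rate + other_direct
      return direct + direct * indirect_pct

  price = market_budget(hours=400, market_rate=125.0, other_direct=5_000, indirect_pct=0.26)
  cost = actual_budget(hours=400, salary_rate=60.0, other_direct=5_000, indirect_pct=0.26)

  # Over-recovered funds available for research, scholarship and development.
  print(f"contract price: ${price:,.2f}")
  print(f"actual cost:    ${cost:,.2f}")
  print(f"over-recovery:  ${price - cost:,.2f}")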

View the October 13 presentation slides

"The Malcolm Baldrige National Quality Award Business Evaluation Process: Using the Evaluation Criteria to Improve Business Outcomes"

Date: Oct. 20, 2005
Presenter: Dr. Keith Ruckstuhl, organizational development consultant (internal), Pfizer Global Manufacturing, Kalamazoo, Michigan.
Abstract: The Malcolm Baldrige National Quality Award, established in 1987, was developed to promote practices that enhance the quality and competitiveness of American companies, and its evaluation process has continued to evolve over time. The evaluation process is not prescriptive; rather, it compares the applicant's self-reported critical characteristics against its business processes to determine alignment with the applicant's described needs. Finally, the applicant's business results are reviewed to determine how effectively those business processes achieve critical outcomes. Ruckstuhl reviewed the Baldrige evaluation framework and process, discussed methods for learning the framework, and described working with different business units at a prior employer to help them develop applications for a state quality award (which used the Baldrige evaluation process), as well as the impact this had on business results in different corporate cultures.

View the October 20 presentation slides

"Measuring the Impact of Electronic Business Systems at the Defense Logistics Agency: Lessons Learned From Three Evaluations"

Date: Nov. 3, 2005
Presenter: Dr. Jonathan A. Morell, senior policy analyst, Altarum, Ann Arbor, Michigan.
Abstract: The consequences of deploying three electronic business systems at the Defense Logistics Agency were evaluated: Central Contractor Registration, Electronic Document Access and DoD Emall. Findings were presented, with an emphasis on lessons learned about evaluating the impact of IT systems that are inserted into complex, changing organizations. Lessons fall into five categories: metrics, methodology, logic models, adaptive systems and realistic expectations. Interactions among these categories were also discussed.

Download the November 3 article

"Goal-Free Evaluation"

Date: Nov. 10, 2005
Presenter: Brandon Youker, student in the interdisciplinary Ph.D. in evaluation program, WMU.
Abstract: Youker conducted a goal-free evaluation (GFE) of a local middle school summer enrichment program. The GFE supplemented a goal-based evaluation (GBE) conducted by two other evaluators on the evaluation team. The combination of the GFE and GBE approaches resulted in a more comprehensive evaluation than either would have provided on its own. GFE methodology was discussed, along with the strengths, weaknesses and challenges of synthesizing and combining the two evaluation methodologies.

View the November 10 presentation slides

"Identifying Relevant Evaluative Criteria: Lessons Learned from the Evaluation of a Middle School Enrichment Program"

Date: Nov. 17, 2005
Presenters: Daniela Schroeter and Chris Coryn, students in the interdisciplinary Ph.D. in evaluation program, WMU.
Abstract: The evaluation of a local middle school summer enrichment program consisted of two simultaneous, independent evaluations: one goal-based and the other goal-free. Central to any credible evaluation is the process of identifying relevant evaluative criteria or dimensions of merit—the attributes by which evaluators determine how good, effective or valuable a program is. While the goal-based evaluation emphasized evaluative criteria centering on program goals and desired outcomes, a long list of value-oriented criteria was also identified using Scriven's (2005) Key Evaluation Checklist. The most important lesson learned from this evaluation was the usefulness of goal-free evaluation as a supplementary, albeit crucial, mechanism for supporting and informing goal-based evaluation, particularly in identifying criteria of merit related to program side effects, side impacts and unintended outcomes.

"The Comparative Method and the Schema of Cost-Effectiveness in the Evaluation of Foreign Aid"

Date: Dec. 1, 2005
Presenter: Dr. Paul Clements, associate professor of political science and director of the Masters of Development Administration program, WMU.
Abstract: The structural conditions of foreign aid present particularly severe challenges of accountability and learning. Due to the diversity and complexity of the tasks, the intense competition for resources and the real human costs of failure, effective management in this field is particularly difficult. The primary role for evaluation under these circumstances, Clements argued, is to sustain an effective orientation to cost-effectiveness in the allocation of resources, largely in the area of program design. While rigorous evaluation designs may be needed to generate valid data on program impacts, it is the comparative method and a consistent orientation to cost-effectiveness that are essential to support the judgments on which effective foreign aid depends.

"Reconsidering Qualitative Research Methods and Reinstating Evaluation as the Heart of the Social Sciences"

Date: Jan. 18, 2006
Presenter: Dr. Michael Scriven, professor of philosophy, associate director of The Evaluation Center, and director of the interdisciplinary Ph.D. in evaluation program, WMU.
Abstract: It's often said, with some justification, that economics is too important to be left to the economists, and it's clear that qualitative methods of inquiry are too important to be left to the authors of most books about them. Those texts load their treatment with epistemological larding that is not in fact implied and is certainly repellent to most scientists. Scriven commented briefly on the usual basic entries in the qualitative research stakes and discussed in more detail the three most notably crippled treatments: causation, comprehension and evaluation. The total impact of this reconsideration is what could be called the revaluing of the social sciences, making them both more valuable and more securely based.

"Evaluator Skills: What’s Taught vs. What’s Sought"

Date: Jan. 25, 2006
Presenters: Dr. Carolyn Sullins, senior research associate, The Evaluation Center; Daniela Schroeter, student in the interdisciplinary evaluation Ph.D. program; and Christine Ellis, master's student in educational studies, WMU.
Abstract: What roles, competencies and skills do employers look for when hiring evaluators? Are they in line with what is emphasized in graduate programs in evaluation? The presenters conducted a literature review, pilot surveys of both job candidates and employers, and an analysis of AEA Job Bank entries. The preliminary findings were discussed during a Think Tank presentation at the AEA/CES conference in October 2005. This presentation highlighted the findings from the various measures, presented perspectives from the Think Tank and discussed further directions for the ongoing study.

View the January 25 presentation slides

"Enhancing Disaster Resilience Through Evaluation"

Date: Feb. 1, 2006
Presenter: Dr. Liesel Ritchie, senior research associate, The Evaluation Center, WMU.
Abstract: National and international attention has been focused on events surrounding natural and technological disasters. This session presented various perspectives on social impacts of disasters, exploring ways in which the evaluation community might contribute to this substantial and growing body of research. Questions for discussion included:

  • What approaches, practices, concepts and theories from evaluation might be employed to enhance disaster preparedness, response, recovery and community resilience?
  • How might experiences from the field of evaluation improve understanding of and ability to address challenging contexts in which disaster-related evaluations are conducted, as well as use of evaluation findings in these settings?

View the February 1 presentation slides

"Strengthening Capacity for Evaluation in the Context of Developing Countries"

Date: Feb. 8, 2006
Presenter: Dr. Gary Miron, principal research associate and chief of staff, The Evaluation Center, WMU.
Abstract: Knowledge is said to be universal. The same could be said of the available stock of evaluation methods, techniques and experiences accumulated in the industrialized nations. Advocating indigenous evaluation and research does not necessarily mean disregarding the knowledge available. Yet a problem often arises in the interpretation of the experiences accrued in the industrialized nations and in the application of the methods and techniques utilized in drastically different climates. Most approaches, designs and methods are universally applicable; however, adjustments and adaptations must be made from country to country and from context to context. There are many directions and paths that the developing countries might take in their quest for more effective and appropriate means of evaluation and planning their educational programs. The promotion of South-South cooperation and the development of a better understanding of the practical experiences and genuine conditions and obstacles present in the context of developing countries are obvious first steps.

View the February 8 presentation slides

"Evaluating a Nonprofit Organization: Methodological and Management Strategies from the Foods Resource Bank Evaluation"

Date: Feb. 15, 2006
Presenters: Thomaz Chianca and John Risley, students in the interdisciplinary Ph.D. in evaluation program, WMU.
Abstract: The presenters discussed their experience conducting an organizational evaluation of Foods Resource Bank, a nonprofit dealing with U.S.-based donors (growing projects) and international recipients (food security programs). They discussed the Success Case Method component of the evaluation. The SCM addressed questions including:

  • What are the key factors affecting the performance of field coordinators supporting local projects?
  • What critical contextual aspects must be understood, and possibly leveraged, to help projects succeed?
  • Why are some projects more successful than others?

This presentation explored the advantages and limitations of integrating SCM into a broader evaluation design and reflected on how SCM can be adapted to an organizational evaluation context. The presentation also discussed management strategies and issues addressed during this evaluation, including:

  • How to increase participation.
  • Issues faced while reporting evaluation findings.
  • The use of evaluation advisory committees.
  • Working with organization staff.

Bev Abma, executive director for programming at Foods Resource Bank, discussed the institutional perspective on the evaluation.

View the February 15 presentation slides

"Keys to Global Executive Success"

Date: Feb. 22, 2006
Presenter: Dr. Jennifer Palthe, assistant professor of management, WMU.
Abstract: This study extended previous research on cross-cultural adjustment through a field study of 196 American business executives on assignment in Japan, the Netherlands and South Korea. The results demonstrated the relative importance of learning orientation, self-efficacy, parent- and host-company socialization, and work and non-work variables for three facets of cross-cultural adjustment (work, interaction and general). While past research has consistently shown that family adjustment is by far the strongest predictor of cross-cultural adjustment, this study revealed that socialization at the host company, previously proposed yet unmeasured, may be as strong a predictor. Implications for practice and directions for future research were offered.

"Standards for Educational Evaluation: The Case of Propriety"

Date: March 8, 2006
Presenter: Dr. Arlen Gullickson, director of The Evaluation Center, WMU.
Abstract: The Program, Personnel and Student Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation provide guidelines for ensuring that evaluations meet the standards of utility, feasibility, propriety and accuracy. Gullickson, chair of the Joint Committee on Standards for Educational Evaluation, presented a brief history and overview of the standards.

View the March 8 presentation slides
Download the March 8 handout

"Doing, Knowing and Being: Integrating Assumptions and Actions in Evaluation Practice"

Date: March 15, 2006
Presenter: Dr. Eileen Stryker, president, Stryker and Endias Inc., Research, Planning and Evaluation Services, Kalamazoo, Michigan.
Abstract: Evaluation methods (doing) are grounded in epistemological (knowing) and ontological (being) assumptions. This was a discussion of our journeys toward integrating our most deeply held beliefs about the nature of truth and reality with the ways we practice evaluation. How do we deepen conversations about method and design with stakeholders so as to reveal and negotiate shared and differing assumptions? Examples from Stryker's practice included:

  • If an educational program rests on a belief that learning requires positive emotional experiences, and the client wants an evaluation consistent with that belief:
    • What about negative findings?
    • What would the evaluation practice look like?
    • Would you take the contract?
  • If reality is essentially connected, and knowing is essentially grounded in love, what does evaluation practice look like?
  • When working across cultures (and when are we not working across cultures?) how do we discover and cross boundaries of language, religion and power, and negotiate evaluation issues across worldviews shaped by differing experiences?

"Project Versus Cluster Evaluation"

Date: March 22, 2006
Presenter: Dr. Teri Behrens, director of evaluation, W.K. Kellogg Foundation.
Abstract: The W. K. Kellogg Foundation has a long-standing view that the evaluation of its grantmaking is for the purpose of learning. After several years of funding intentional clusters of projects, foundation staff initiated a new approach to evaluating groups of related projects that came to be known as cluster evaluation. In more recent years, WKKF has begun funding strategic initiatives designed to create systems change. Behrens discussed the implications of this more strategic approach to grantmaking for evaluation and shared a preliminary typology of change efforts.

View the March 22 presentation slides

"Collaborative Evaluations: A Step-by-Step Model for the Evaluator"

Date: March 29, 2006
Presenter: Dr. Liliana Rodriguez-Campos, assistant professor of educational studies, WMU.
Abstract: Rodriguez-Campos's presentation was based on her book of the same title. She presented the Model for Collaborative Evaluations (MCE), which emerged from a wide range of collaboration efforts she conducted in the private sector, nonprofit organizations and institutions of higher education. The MCE has six major components:

  • Clarify the expectations.
  • Encourage best practices.
  • Ensure open communication.
  • Establish a shared commitment.
  • Follow specific guidelines.
  • Identify the situation.

Rodriguez-Campos outlined key concepts and methods to help master the mechanics of collaborative evaluations. Practical tips for real-life applications and step-by-step suggestions and guidelines on how to apply this information were shared.

Download the March 29 handout

"Synthesizing Multiple Evaluative Statements into a Summative Evaluative Conclusion"

Date: April 5, 2006
Presenter: P. Cristian Gugiu, student in the interdisciplinary Ph.D. in evaluation program, WMU.
Abstract: The synthesis of multiple evaluative statements into a summative conclusion is a task often overlooked or avoided by evaluators, perhaps owing to its complexity. Despite this complexity, however, synthesis should be a critical component of all evaluations. Synthesis is the process of combining evaluative conclusions derived from measures of performance across several dimensions of the evaluand into an overall rating and conclusion. The presentation discussed the strengths and weaknesses of two methodological approaches (qualitative weight-and-sum versus quantitative weight-and-sum) that can be used to synthesize micro-evaluative statements into a macro-evaluative conclusion. Finally, it introduced the concept of summative confidence to highlight the need to determine the degree of confidence with which a summative statement can be delivered. Detailed illustrative examples from an actual evaluation were provided for each concept.
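
For readers unfamiliar with weight-and-sum synthesis, the following minimal Python sketch illustrates the quantitative variant: ratings on several dimensions of merit are weighted, combined into an overall score, and mapped to an evaluative conclusion. The dimensions, weights, ratings and cutoffs are hypothetical and are not drawn from the evaluation Gugiu described; a qualitative weight-and-sum approach would use ordered importance categories rather than numeric weights.

  # Sketch of quantitative weight-and-sum synthesis with hypothetical inputs.

  ratings = {  # 1 (poor) .. 5 (excellent) on each dimension of merit
      "outcomes": 4,
      "process quality": 3,
      "cost-effectiveness": 5,
      "sustainability": 2,
  }
  weights = {  # relative importance of each dimension; need not sum to 1
      "outcomes": 0.4,
      "process quality": 0.2,
      "cost-effectiveness": 0.25,
      "sustainability": 0.15,
  }

  # Weighted average of the dimension ratings.
  overall = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())

  # Map the synthesized score onto an overall evaluative conclusion.
  labels = [(4.5, "excellent"), (3.5, "good"), (2.5, "adequate"), (0.0, "poor")]
  conclusion = next(label for cutoff, label in labels if overall >= cutoff)

  print(f"overall score: {overall:.2f} -> {conclusion}")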

View the April 5 presentation slides

"Lessons Learned: A Value-Added Product of the Project Life Cycle"

Date: April 9, 2006
Presenter: Rebecca Gilman, senior principal consultant, Keane.
Abstract: In the 1990s, lessons learned may have been gathered during a meeting at the end of a project and often were used for a single purpose such as an explanation for project overage or failed deliverables. Often, organizations ignored lessons learned and we saw failures repeated. Project and business environments are becoming more diverse, complex and continually changing. Today, lessons learned is evolving into a broader concept that may be used to refine and transform entire systems and organizations. What are lessons learned? How does Keane identify and work with them? What value do they provide for the project team, Keane, and our clients? These questions were explored in this presentation.

View the April 9 presentation slides

"Evaluation in Brazil"

Date: April 12, 2006
Presenter: Dr. Ana Carolina Letichevsky, coordinator, Department of Statistics, Cesgranrio Foundation, Brazil.
Abstract: This presentation included a brief review of evaluation in Brazil. It included a summary of evaluative processes in different areas (social, educational and corporate) as well as a description of the structure of a Brazilian nonprofit evaluation organization, the Cesgranrio Foundation. The main challenges for Brazilian evaluators are the following:

  • To adapt evaluative approaches and methodologies to the Brazilian context.
  • To ensure the use of the evaluation results for the improvement of the evaluative focus.
  • To implement the principles and standards that guide formal and professional evaluation, ensuring the quality of the evaluation.
  • To qualify a larger number of professionals to act in the area of evaluation (to develop evaluators).

"Some Rights of Those Researching and Those Researched"

Date: May 9, 2006
Presenter: Dr. Robert Stake, director of the Center for Instructional Research and Curriculum Evaluation, University of Illinois.
Abstract: Stake has been one of the most creative and productive evaluators throughout the development of the discipline since its emergence around 1960. Beginning as a measurement specialist, he went on to introduce the notion of responsive evaluation as a reaction to the standard social science model of hypothesis testing. Stake has written one of the very rare treatises on the evaluation of arts education and two books on the case study method, one of which extends the approach to multiple case studies. This presentation was of particular interest to those interested in evaluation or experimental design, or in emerging problems concerning the use of human subjects in any area of research.