Monday, September 21, 2015

Engage stakeholders, describe the program, and focus the evaluation design

Engage Stakeholders

What distinguishes program evaluation from research is the engagement of stakeholders. In research, the researchers remain objective and authoritative in planning the study, choosing participants, and developing questionnaires and interview questions. In evaluation, by contrast, the evaluators must work collaboratively with various stakeholders to design the evaluation, develop questions based on merit, worth, or significance, and focus on the intended uses by intended users. Engaging stakeholders, especially primary stakeholders, must be done on an ongoing basis from the beginning to the end of the evaluation to ensure the accuracy and utility of the evaluation.

CDC offers a very helpful checklist for stakeholder engagement:

Identifying stakeholders is an important step, as it helps you design your evaluation samples and create the questions to ask. CDC also offers an "Identifying Key Stakeholders" worksheet that you can fill in with your potential stakeholders. Based on your project, work with your group members to identify your stakeholders in these three categories:

1. Those involved in program operations (e.g., sponsors, collaborators, administrators, and program staff)
2. Those served or affected by the program (e.g., students, parents, and community members)
3. Primary users of the evaluation findings (those in a position to make decisions about the program)

Then, use the following worksheet to identify stakeholders who will increase the credibility of the evaluation, implement the interventions, advocate for changes, and fund or authorize the continuation or expansion of the program:

For each of the stakeholders identified, list all of the activities and/or outcomes that matter most to them:
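
If it helps your group keep this worksheet organized, below is a minimal sketch in Python of how the stakeholder table and the activities/outcomes that matter to each stakeholder could be recorded. It is purely illustrative: every name, category, role, and entry is a hypothetical placeholder, not an actual stakeholder from your project.

    # Purely illustrative sketch: hypothetical stakeholders and placeholder entries.
    stakeholders = [
        {
            "name": "Program manager (placeholder)",
            "category": "involved in program operations",
            "roles": ["implement the interventions"],
            "matters_most": ["smooth delivery of activities", "staff capacity"],
        },
        {
            "name": "Parents of participants (placeholder)",
            "category": "served or affected by the program",
            "roles": ["advocate for changes"],
            "matters_most": ["children's learning outcomes"],
        },
        {
            "name": "Funder (placeholder)",
            "category": "primary users of the evaluation",
            "roles": ["fund/authorize continuation or expansion"],
            "matters_most": ["evidence of impact", "cost-effectiveness"],
        },
    ]

    # Print the last column of the worksheet: what matters most to each stakeholder.
    for s in stakeholders:
        print(s["name"], "->", ", ".join(s["matters_most"]))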

Describe the program 

In evaluation, the program description plays a role similar to that of the literature review in research--but an evaluation needs both: a program description as well as a literature review. CDC offers a very useful program description checklist that will be helpful when you write your own.

Here is the checklist of what you should include in your program description:

To help you frame your logic model, here is a worksheet to help guide your design:


Focus the evaluation design 

CDC also offers a helpful checklist for focusing the evaluation. This checklist helps you define your PURPOSE and formulate your evaluation questions. It also reminds you of the need to review evaluation questions with stakeholders, program managers, and program staff.

Tasks for this week: 

1. Reach out to primary stakeholders or service providers, including program managers and program staff, to obtain information about the program being evaluated. Talking with the program staff will help you understand your program better, and it is also a chance to ask for their input and to obtain any available program documents or reports. Last week we talked to stakeholders who are not directly involved in the program; this week you will need to talk to those who work on the program on a day-to-day basis.

2. Meet with me to discuss your evaluation design.

3. Draft your IRB Human Subjects Application.

4. Develop measures/questionnaires/interview questions for your evaluation.

5. Finalize your program theory or results chain after you talk to program staff on the ground. 

Monday, September 14, 2015

Summative or Impact Evaluation

This week we are discussing summative or impact evaluation. Impact evaluation seeks to determine whether a program has an impact on its intended outcomes. To measure the intended outcomes, it is important that the evaluator understands how the program operates and how the program conceptualizes its framework; this is called the "theory of change" or "program theory." As Gertler et al. noted, "A theory of change is a key underpinning of any impact evaluation, given the cause-and-effect focus of the research. As one of the first steps in the evaluation designs, a theory of change can help specify the research questions" (p. 22). It is the evaluator's task to identify the program theory or theory of change.

A well-operating program may not have a specific, explicitly stated theory, even though its staff know what they are trying to achieve. Helping program staff or primary stakeholders articulate the program's operations within an explicit framework benefits the organization itself, which gains documentation with a rigorous program theory useful for fundraising, and the evaluator, who gains clearly defined program goals and objectives on which to base the evaluation criteria used to determine the outcomes of the program being evaluated. Program theory development and identification can be done through discussion with program staff and other primary stakeholders.

There is an excellent article on how to establish a program theory, published in the American Journal of Evaluation by Frans L. Leeuw (2003), entitled "Reconstructing program theories: Methods available and problems to be solved." Leeuw offers three approaches to help uncover a program theory and make it more explicit.

The first approach is the "policy-scientific" method, which involves reviewing program documentation and empirical evidence, paying attention to statements such as 'our goal is to improve...' or 'we argue that...' and to propositional statements of the form "if ..., then ...". The second approach is the strategic assessment method, the process of uncovering all possible assumptions about the program, with group dialogue at its center. The third approach is the "elicitation" method, which involves mental mapping (a method used in cognitive psychology): participants (stakeholders) are asked to describe their mental model of the program (or the logic of the program), which is then compared to evidence from scientific studies of organizations.

Leeuw mentioned that the first approach is "best suited for ex post evaluations (after the fact) of programs and policies backed by documentary evidence (i.e., often public policies), while the strategic assessment and elicitation approach appear to be more relevant for ex ante evaluations under other conditions" (p. 16). Leeuw's article offers step-by-step methods to help you work through program theory efficiently. This is the citation for the article:
Leeuw, F. L. (2003). Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation, 24(1), 5-20.
And this is the link to the article (you may need to be connected to the campus network to access it). Worth mentioning is the figure provided in the paper, which illustrates how you can use the policy-scientific method to establish a program theory:


When a program has no explicitly stated theory, a needs assessment is helpful. This can be done as a backward process in which the evaluator asks program staff and primary stakeholders why the program is needed: why the program exists in the first place, who needed it, for what purpose, and what outcomes it is meant to accomplish. Discussion with primary stakeholders is necessary to gain a better understanding of the needs and of the theory they generate. The needs assessment is also useful when you describe the program, that is, in the program description section where you write up the background, mission, and activities of the program.

In addition to supporting theory development and identification, needs assessment is also relevant for both process and outcome evaluation. For process evaluation, the evaluator wants to know whether what was needed was actually implemented, whether those who needed it received the service, and whether the program staff have the capacity to deliver what is needed. Needs assessment also lets the evaluator understand whether people need the services being offered and whether they consider the services relevant to their lives, given the context in which they are situated; the responses allow the evaluator to make recommendations for program improvement. For example, the Weaker Student Program implemented by CFC in 2013 received little attention and participation from students and parents, even though teachers reminded the parents of the improvement their children might gain by participating in the program. In this case, parents may not have seen a real need for the program for their children. No evidence was gathered from the parents at the time; only students' data were collected. A needs assessment of parents via face-to-face interviews would be helpful for understanding the program theory and the actual need perceived by the beneficiaries. For outcome evaluation, the evaluator wants to know whether addressing the needs affects participants' economic, social, psychological, or academic functioning; in other words, is there a correlation between the stated needs and these functional outcomes?

Needs assessment can be done using different strategies, including:
Personal observations of the resources and needs in the community
Social indicators of need (via national survey data)
Community surveys of need, including attitudinal surveys
Service availability in the area (checking for duplication of services)
Key informant interviews and information (usually from village chiefs or school principals)
Focus groups
Community forums, e.g., via parent-school meeting days or occasional events held on campus
For your own program evaluation, I also recommend doing an organizational assessment through discussions with program staff. An organizational assessment allows the evaluator to understand:

1. Location and facilities of the program: the infrastructure in different locations and how staff deliver services there.
2. Program personnel structure: who is doing what, and who makes certain decisions. A diagram of the organizational structure is useful to include in your evaluation report.
3. Values and interaction quality within the organization: interactions among staff members and between staff and their clients.
4. Qualifications of the personnel
5. Frequency of meetings and communication
6. Staff training opportunities
7. Ongoing program monitoring: how do they know the program is running smoothly?

Gertler et al. also discuss the "results chain" as part of program theory or theory of change. The results chain outlines five elements (inputs, activities, outputs, outcomes, and final outcomes) that serve as a map of the program, helping the evaluator design the evaluation and select measures (or performance indicators) corresponding to the program activities. Below is the results chain taken from chapter 2 of Gertler et al.

I suggest that you model your program based on this results chain flow chart.
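
If a simple way to draft that flow chart helps, below is a minimal sketch in Python. The stage names follow the results chain discussed by Gertler et al.; every example entry and indicator is a hypothetical placeholder to be replaced with your own program's details.

    # Purely illustrative sketch: the five results chain stages with placeholder entries.
    results_chain = [
        ("inputs", ["budget", "staff", "classrooms"]),
        ("activities", ["train teachers", "run weekly tutoring sessions"]),
        ("outputs", ["number of teachers trained", "sessions delivered"]),
        ("outcomes", ["improved attendance", "higher test scores"]),
        ("final outcomes", ["grade progression", "school completion"]),
    ]

    # Print the chain as a one-line flow chart, then list placeholder entries per stage.
    print(" -> ".join(stage for stage, _ in results_chain))
    for stage, examples in results_chain:
        print(stage + ": " + ", ".join(examples))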

Questions for Discussion
As an evaluator, how do you determine whether different stakeholders hold different theories of a given program?

Tasks for this week

1. Begin drafting your IRB application, which includes filling out (1) HUMAN SUBJECTS APPLICATION (New Studies) MS Word03.doc and (2) Consent Form Template (11-21-14).docx.

2. Try to identify the program theory or results chain of the program you are evaluating. Understanding the program theory or results chain helps you develop appropriate measures to evaluate the outcomes; Leeuw's article can help you develop a program theory. In addition, try to represent the program theory or results chain visually by drawing a flow chart of it. That flow chart will also be used in the program review and description section of your evaluation report.

3. You spoke with Natalie and Lydia about your project last week. With their input, please draft a summary of your project and questions for clarification, and send them to Jamie, the CFC founder, for her input. The purpose is to make sure that all the primary stakeholders have been consulted and that everyone agrees on your project before you proceed. This has to be done ASAP.

4. By now, you should have a complete understanding of the program operations and your project design.

5. Make sure that every one of your team members has a shared understanding of the project.

Monday, September 7, 2015

Formative Evaluation

This week we will go through formative evaluation. It can be used for a program that is already running as well as for one that is being created, though it is more often used for the latter. Beyer (1995) defines it as "evaluating or assessing a product while that product is in the process of being created and shaped." Sometimes it is called "improvement-oriented evaluation." Rather than making a judgment or determining the impact of the program (summative), formative evaluation identifies ways in which a program can be improved. There is a well-known metaphor for the formative/summative distinction: "when the cooks taste the soup, that's formative; when the guests taste the soup, that's summative."

Beyer primarily discusses using formative evaluation for a product that is being developed, before it is put into regular use. For example, consider Apple's iPhone antenna issue: if formative evaluation had been done well, Apple would have found and fixed the problem before the phone went on sale (that is, before regular use). Beyer offers four stages at which formative evaluation of a product or a program needs to occur:

  1. Design
  2. Prototype 
  3. Pilot
  4. Field test

What happens when a program is already running or a product has already gone on sale? In what way can formative evaluation be used? Is it too late to evaluate formatively? If a program is already running and a product has already gone on sale, why bother with formative evaluation? Some funders just want to see the final outcomes of the program; how would you convince them that formative evaluation is needed?

Formative evaluation can also be used to evaluate a program while it is already in operation. According to Michael Patton, formative evaluation asks the following questions:

  1. What are the program's strengths and weaknesses? 
  2. To what extent are participants progressing toward the desired outcomes?
  3. Which types of participants make good progress and which types aren't doing so well?
  4. What kind of implementation problems have emerged and how are they being addressed?
  5. What's happening that wasn't expected?
  6. How are staff and clients interacting? 
  7. What are staff and participant perceptions of the program? 
  8. What do they like? dislike? want to change? 
  9. What are perceptions of the program's culture and climate?  
  10. How are funds being used compared to initial expectations? 
  11. How is the program's external environment affecting internal operations? 
  12. Where can efficiencies be realized? 
  13. What new ideas are emerging that can be tried out and tested?

What are your data sources? Who, what, and where are the best sources of this information? Beyer mentioned two things: people and well-established standards. Who are the people?
  1. Experts 
  2. Users (e.g., intended beneficiaries) 
  3. Stakeholders (e.g., providers of services, teachers, parents, community members, etc.)

Class activities

Below are tasks for you to do as a group: 
  1. With your project team members, identify experts related to your project, locally or internationally. They can be anyone who can provide feedback on the program activities and framework. How many are there, and in which sub-areas? 
  2. Once identified, think about what you would like the experts to help with. List all the things you would like them to help with. 
  3. Identify users of your project. Who are your primary and secondary users? 
  4. Identify stakeholders of your project. Who would you like to include and why? 

Tasks for you to do this week:
  1. Schedule a Skype call with your primary stakeholders (Jamie and Natalie) 
  2. Prepare for questions to ask both of them 
  3. Start collecting all the information about the program you are evaluating--everything about the program--because it allows you to have a comprehensive understanding of the whole program. Don't wait until you know all the things you are asked to evaluate. 
  4. Create an online shareable folder to which your team members can upload documents 
  5. Always look into the government policy documents related to your program, as they give you the big picture from the national level. You are required to know what is going on with national policy related to your program. Here is the website of the Cambodian Ministry of Education: http://www.moeys.gov.kh/en/policies-and-strategies.html  
  6. Start writing up the program description. For example, if you are working on the preschool program, write all about the program background and overall activities.