Monday, November 30, 2015

How to encourage evaluation utilization?

Many times, an evaluation is considered done once the report is sent to the users. But without ensuring that the results and recommendations are actually used by the organization, the evaluation may not mean anything. What would you do to ensure that the evaluation you have written gets used?

Monday, October 26, 2015

Analyzing Qualitative Data

The reading by Schutt (2011) on qualitative data analysis offers some useful tools to help you with your data analysis during and after data collection. There are a few things that I would like you to pay particular attention to, as I believe they are helpful tools for your data collection and analysis.

Exhibit 10.3 is a very helpful one to use during your data collection. Use this sheet right after each of your interview sessions to answer four questions (a small sketch for generating the sheet follows the list):
  1. What were the main issues or themes that struck you in this contact?
  2. Summarize the information you got (or failed to get) on each of the target questions you had for this contact.
  3. Anything else that struck you as salient, interesting, illuminating or important in this contact?
  4. What new (or remaining) target questions do you have in considering the next contact with this site?
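
If your team wants to fill in these sheets consistently, here is a minimal Python sketch (not part of the reading) that generates a blank contact summary file to complete right after each interview. The prompts paraphrase Exhibit 10.3; the interviewee label and filename are only examples.

```python
from datetime import date

# The four contact-summary prompts, paraphrased from Exhibit 10.3 (Schutt, 2011)
PROMPTS = [
    "Main issues or themes that struck you in this contact:",
    "Information you got (or failed to get) on each target question:",
    "Anything else salient, interesting, illuminating, or important:",
    "New (or remaining) target questions for the next contact:",
]

def write_contact_summary(interviewee: str, path: str) -> None:
    """Write a blank contact summary sheet to fill in right after an interview."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"Contact summary: {interviewee} ({date.today()})\n\n")
        for i, prompt in enumerate(PROMPTS, start=1):
            f.write(f"{i}. {prompt}\n\n\n")

# Hypothetical usage after the first interview session
write_contact_summary("Interview 01", "contact_summary_01.txt")
```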

Also, the reading by Taylor-Powell and Renner (2003) provides a good step-by-step process for data analysis. Here is a summary of those steps:
  1. Get to know your data.
  2. Focus the analysis.
  3. Categorize the information (code the data).
  4. Identify patterns and connections within and between categories.
  5. Interpret the data--bring it all together.

For details of the article above on how to analyze qualitative data, please click here.
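
Once your interviews are transcribed, even a very simple script can support steps 3 and 4 (categorizing and identifying patterns). Below is a minimal Python sketch, assuming your transcripts sit in a transcripts/ folder as plain-text files; the codebook categories and keywords are invented for illustration, and real qualitative coding should of course go beyond keyword counting.

```python
import glob
from collections import Counter

# Hypothetical starting codebook: category -> keywords that signal it.
# Refine these categories as you read your transcripts (step 3, "categorize
# the information"); the folder name below is just an example.
CODEBOOK = {
    "attendance": ["absent", "attend", "drop out"],
    "parents": ["parent", "mother", "father", "family"],
    "resources": ["textbook", "computer", "materials"],
}

counts = Counter()
for path in glob.glob("transcripts/*.txt"):
    text = open(path, encoding="utf-8").read().lower()
    for category, keywords in CODEBOOK.items():
        counts[category] += sum(text.count(kw) for kw in keywords)

# Step 4, "identify patterns": which categories dominate across interviews?
for category, n in counts.most_common():
    print(f"{category}: {n} mentions")
```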


Due next week, November 2nd: Method of Evaluation that includes Sampling, Planning, and Instruments.  

Monday, October 19, 2015

Class Updates: October 19th

I would like to bring your attention to the assignments that are due in the following weeks:

October 26th: Questionnaire or interview question updates. This should be the final version of what you will be using for your study.

November 2nd: Method section. This section includes (1) sampling, (2) planning (or procedures), and (3) instruments (instruments are the questions you develop--the ones due on the 26th, but here presented in a descriptive format).

November 9th: Analysis strategies (e.g., how do you plan to analyze your data, and what techniques are you using? If you are analyzing focus group data, you will need to follow the steps given in your reading materials and write up your own). This section allows readers to understand how you analyze your data. If no details are given, your paper will raise doubts among readers regarding the rigor of your data analysis. Refer to the reading materials from October 26th to help you with the description and plan for analysis. We do not have an actual class on November 9th, but you will use this time to give feedback on your fellow groups' assignments (to be sent to you on the 10th).

November 16th: Feedback assignment due. We do not have an actual class this day, but you will work on data analysis with your group members who just returned from Cambodia.

November 23rd: Your data should already be analyzed, and you should have preliminary findings to present.

November 30th: Final paper due.


Monday, October 5, 2015

Data collection techniques: Which one is right for your project?

This week and last week, we covered methods of obtaining data, the very important part of your evaluation that leads us to findings. Data can be obtained through various techniques and tools, and there is no limit to the number of techniques you can use. It depends on what you think will answer your evaluation questions, as well as what the program sponsor wants. Discussion and negotiation between you as an evaluator and the program sponsor are an ongoing process and a way to ensure the credibility of your evaluation report. Ryan, Meiers, and Visser (2012) offer 11 techniques (pages 83-156) that we discussed in class in the past week. While these tools are written mainly for needs assessment, some of them can be used in both formative and summative evaluation, such as document and data review, guided expert reviews, focus groups, interviews, and performance observations. I would like to bring your attention to some of the techniques we will primarily use for our projects: interviews, focus group interviews, and surveys.

Ryan, Meiers, and Visser (2012) provide some helpful guides to help you prepare for your interviews (page 110).


There is also a very useful interview protocol template that you can use. I suggest using this template when you conduct your interviews.

For focus groups, there is a corresponding interview protocol as well.

Beyer's (1995) data-gathering instruments for formative evaluation cover 12 techniques, some of which overlap with Ryan et al.'s (2012) needs assessment techniques, such as interviews, focus groups, and observations. The techniques mentioned in Beyer are product-development oriented, which is not common in NGO programming. Think about which of them are applicable to NGO programming.

Gertler et al. (2011) offer a unique step-by-step procedure for getting your data, primarily survey data, collected. The chapter also provides strategies for developing indicators (or measures). Measures should be based on the results chain (or the program theory). In other words, questions should be asked based on the program's inputs, activities, and outputs, making sure that we don't measure oranges when the inputs and activities are apples. By now, you should already be familiar with your program description--inputs, activities, and outputs. Remember, you will need to include your program theory in the program description section of your paper.
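
One practical way to keep your instruments from "measuring oranges" is to map every survey item to a stage of the results chain before fielding it. Here is a minimal Python sketch of that idea; the stage names follow the results chain, but the program content and questions are entirely hypothetical.

```python
# Map each survey item to a results-chain stage so that every question
# measures something the program actually does (content is hypothetical).
results_chain = {
    "inputs":     ["trained teachers", "textbooks delivered"],
    "activities": ["weekly tutoring sessions held"],
    "outputs":    ["students completing the course"],
    "outcomes":   ["change in students' test scores"],
}

survey_items = {
    "How many tutoring sessions did you attend this month?": "activities",
    "Did you receive a textbook this term?": "inputs",
    "How would you rate your math skills now?": "outcomes",
}

# Flag any item whose claimed stage is not part of this program's chain
for question, stage in survey_items.items():
    if stage not in results_chain:
        raise ValueError(f"'{question}' maps to an unknown stage: {stage}")
    print(f"[{stage}] {question}")
```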

Gertler et al. also recommend pilot testing the questionnaire to make sure that the questions are worded appropriately and are understandable to both the surveyor and the respondents. Pilot testing also lets us know whether the questionnaire is too long and whether its format is consistent throughout.

Measures 

I recommend using measures that have already been established in the social science literature when you are trying to assess academic or behavioral outcomes of students or other users. In a previous evaluation of STEM interest among secondary students (grades 8-12), we used measures from the TIMSS questionnaire. You can create your own measures if you cannot find suitable ones in the literature or if they are very specific to your topic.

While creating your own survey measures can be the easiest way to answer your evaluation questions, such measures may be vulnerable to reliability and validity problems. Using well-established measures helps increase the credibility of your evaluation. At the same time, the evaluator must be aware that most well-established measures are based on a certain context, mainly Western. Therefore, the evaluator must examine the items in the measures and consult local experts to help choose appropriate ones. Posavac offers a few suggestions for choosing measures, including:

1. Use multiple measures (a single-item question is usually not enough)
2. Use non-reactive measures (participants often respond with what they think you want to hear)
3. Use only variables relevant to the evaluation (the focus, not merely the interesting)
4. Use valid measures (fact-based vs. attitudinal measures)
5. Use reliable measures (well-established measures help reduce this problem; check Cronbach's alpha--see the sketch after this list)
6. Use measures that can detect change (program effect vs. external factors --> control variables)
7. Use cost-effective measures
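
For point 5, a quick internal-consistency check you can run on pilot data is Cronbach's alpha. Here is a minimal Python sketch using numpy; the Likert-scale responses below are invented purely to show the calculation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 5 respondents answering a 4-item Likert scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # 0.94 for this toy data
```

Values above roughly 0.7 are commonly treated as acceptable, though the appropriate threshold depends on the stakes of the decisions your evaluation will inform.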

Examples of measures used in previous projects 

Let's look at the measures used in a previous evaluation of an IT certificate training program's effects on students' computer skills and attitudes toward computers. The team examined several different outcomes, including:

1. Academic performance
2. Students' attitudes toward computers
3. Students' attitudes toward the internet
4. Students' basic computer skills
5. Students' current use of computers

Their independent variable was IT program participation status: participated in the program, failed the course, currently enrolled after passing the screening tests (but has not started), or failed to enroll. Their control variables included the number of siblings in college, the education levels of the father and mother, and the student's future plans.
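
To illustrate how an independent variable and control variables come together at analysis time, here is a minimal regression sketch in Python using pandas and statsmodels. All column names and values are invented; it simply shows the pattern of regressing an outcome on participation status while controlling for family background.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student (all names and values are invented)
df = pd.DataFrame({
    "computer_skill": [72, 55, 80, 60, 90, 52, 75, 65, 58, 84, 62, 70],
    "participation": ["completed", "failed", "completed", "not_enrolled",
                      "completed", "not_enrolled", "enrolled", "failed",
                      "failed", "completed", "not_enrolled", "enrolled"],
    "father_education": [2, 1, 3, 1, 3, 2, 2, 1, 1, 3, 2, 2],
    "mother_education": [2, 1, 2, 1, 3, 1, 2, 1, 2, 3, 1, 2],
    "siblings_in_college": [1, 0, 2, 0, 1, 0, 1, 0, 0, 2, 0, 1],
})

# Outcome regressed on participation status, controlling for family background
model = smf.ols(
    "computer_skill ~ C(participation) + father_education"
    " + mother_education + siblings_in_college",
    data=df,
).fit()
print(model.summary())
```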

Here is an example from a STEM education project. The team adopted scales from the TIMSS student questionnaire and broke them down into three separate sections by the type of method used: (1) student questionnaire, (2) focus group interviews, and (3) individual interviews.

Tasks for this week 

If your project involves a survey, you may have already found measures (the ones you submitted for IRB review). I suggest that you look at your survey questions again and try to refine them, putting them into a well-formatted survey. Make sure that you know the original sources of the measures you adopted, as you will cite them in your report. Make sure to present the evidence of reliability and validity of the measures.

I will send my feedback on your first assignment (program description and literature review) a day or two after class, for you to make revisions. The deadline to turn the revision in is October 19th.

Monday, September 21, 2015

Engage stakeholders, describe the program, and focus the evaluation design

Engage Stakeholders

What distinguishes program evaluation from research is the engagement of stakeholders. In research, the researchers remain objective and authoritative in planning the research, choosing participants, and developing questionnaires or interview questions. In contrast, in evaluation, the evaluators must work collaboratively with various stakeholders to design the evaluation, develop questions based on merit, worth, or significance, and focus on the intended uses by intended users. Engaging stakeholders, mainly primary stakeholders, must be done on an ongoing basis from the beginning to the end of the evaluation to ensure the accuracy and utility of the evaluation.

CDC offers a very helpful checklist for stakeholder engagement.

Identifying stakeholders is an important step, as it helps you design your evaluation samples and create questions to ask. CDC also offers an "Identifying Key Stakeholders" worksheet that you can fill in with your potential stakeholders. Based on your project, work with your group members to identify your stakeholders in these three categories:
  1. Those involved in program operations
  2. Those served or affected by the program
  3. Primary users of the evaluation

Then, use the CDC worksheet to identify stakeholders who will increase the credibility of the evaluation, implement the interventions, advocate for changes, and fund or authorize the continuation or expansion of the program.

For each of the stakeholders identified, list all of the activities and/or outcomes that matter the most to them.

Describe the program 

In evaluation, the program description plays a role similar to the literature review in research--but an evaluation needs both the program description and the literature review. CDC offers a very nice program description checklist that will be helpful for writing your own.

The checklist covers what you should include in your program description.

To help you frame your logic model, CDC also provides a worksheet to guide your design.


Focus the evaluation design 

CDC also offers a helpful checklist for focusing the evaluation. This checklist helps you define your PURPOSE and formulate your evaluation questions. It also reminds you of the need to review the evaluation questions with stakeholders, program managers, and program staff.

Tasks for this week: 

1. Reach out to primary stakeholders or service providers, including program managers and program staff, to obtain information about the program being evaluated. Talking with the program staff will help you understand your program better; it is also a chance for you to ask for their input and to obtain available documents or reports about the program. What we did last week was talk to stakeholders who are not involved in the program directly. What you will need to do this week is talk to those who are working on the program on a day-to-day basis.

2. Meet with me to discuss your evaluation design.

3. Draft your IRB Human Subject Application.

4. Develop measures/questionnaires/interview questions for your evaluation.

5. Finalize your program theory or results chain after you talk to program staff on the ground. 

Monday, September 14, 2015

Summative or Impact Evaluation

This week we are discussing summative or impact evaluation. Impact evaluation seeks to determine whether a program has an impact on its intended outcomes. To measure the intended outcomes, it is important that the evaluator understand how the program operates or how the program conceptualizes its framework. This is called a "theory of change" or "program theory." As Gertler et al. noted, "A theory of change is a key underpinning of any impact evaluation, given the cause-and-effect focus of the research. As one of the first steps in the evaluation design, a theory of change can help specify the research questions" (p. 22). It is the evaluator's task to identify the program theory or the theory of change.

A well-operating program may not have a specific, explicitly stated theory, even though its staff know what they are trying to achieve. Helping program staff or primary stakeholders articulate how the program operates, with an explicit operating framework, benefits the organization itself, which gains valid documentation and a rigorous program theory useful for fundraising purposes, and it benefits the evaluator, who gains clearly defined program goals and objectives on which to base the evaluation criteria used to determine the outcomes of the program being evaluated. Program theory development or identification can be done through discussion with program staff and other primary stakeholders.

There is an excellent article on how to establish a program theory, published in the American Journal of Evaluation by Frans L. Leeuw (2003) and entitled "Reconstructing program theories: Methods available and problems to be solved." Leeuw offers three approaches to help uncover the mystery of a program theory, making the theory more explicit.

The first approach is the "policy-scientific" method, which involves reviewing empirical evidence as well as program documentation (pay attention to statements such as 'our goal is to improve...' or 'we argue that...' and to propositional statements such as 'if ..., then ...'). The second approach is the strategic assessment method, which refers to the process of uncovering all possible assumptions about the program; group dialogue is central to this approach. The third approach is the "elicitation" method, which involves a mental mapping process (a method used in cognitive psychology) where participants (stakeholders) are asked to provide their thoughts on the program model (or the logic of the program), which are then compared to evidence from scientific organization studies.

Leeuw mentioned that the first approach is "best suited for ex post evaluations (after the fact) of programs and policies backed by documentary evidence (i.e., often public policies), while the strategic assessment and elicitation approach appear to be more relevant for ex ante evaluations under other conditions" (p. 16). Leeuw's article offers step-by-step methods to help you digest program theory efficiently. This is the citation for the article:
Leeuw, F. L. (2003). Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation, 24(1), 5-20.
And this is the link to the article (you may need to be connected to the campus network to access it). Worth mentioning is the figure provided in the paper, which illustrates how you can use the policy-scientific method to establish a program theory.


When a program does not have any explicitly stated theory, a needs assessment is helpful. This can be done in a backward process where the evaluator asks the program staff and primary stakeholders why the program is needed--why the program exists in the first place, who needed it, for what purpose, and what outcomes are to be accomplished. Discussion with primary stakeholders to gain a better understanding of the needs and the theory they generate is necessary. This needs assessment can also be useful when you describe the program--in your program description, where you will write about the background, mission, and activities of the program.

In addition to supporting theory development and identification, needs assessment is also relevant for both process and outcome evaluation. For process evaluation, the evaluator is interested in knowing whether what was needed got implemented, whether those who needed the service received it, and whether the program staff has the capacity to deliver what is needed. In addition, needs assessment enables the evaluator to understand whether people need the offered services and whether they think the services are relevant to their lives, given the context in which they are situated. The responses allow the evaluator to make recommendations for program improvement. For example, the Weaker Student Program, implemented by CFC in 2013, received little attention and participation from students and parents, even though the teachers reminded the parents of the possible improvement their children could gain by participating in the program. In this case, parents may not have seen a real need for this program for their children. No evidence was gathered from the parents at the time--only students' data were collected. A needs assessment of parents via face-to-face interviews would be helpful in understanding the program theory and the actual need perceived by the beneficiaries. For outcome evaluation, the evaluator is interested in knowing whether meeting the needs affects participants' economic, social, psychological, or academic functioning. In other words, is there a correlation between the stated needs and these functional outcomes?
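
As a toy illustration of that last question, here is a minimal Python sketch computing a Pearson correlation between a stated-need score and an outcome measure; all numbers are invented.

```python
import numpy as np

# Hypothetical data: need intensity reported at intake vs. an academic
# functioning score at follow-up, one pair per participant (values invented)
stated_need = np.array([5, 3, 4, 2, 5, 1, 4, 3])
academic_outcome = np.array([68, 55, 62, 50, 71, 45, 60, 58])

r = np.corrcoef(stated_need, academic_outcome)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # values near +/-1 suggest a strong association
```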

Needs assessment can be done using different strategies, including:
Personal observations of the resources and the needs in the community
Social indicators of need (via national survey data)
Community surveys of need--including attitudinal surveys
Service availability in the area (a duplication check)
Key informant interviews and information (usually village chiefs or school principals)
Focus groups
Community forums--via parent-school meeting days or occasional events happening on campus
For your own program evaluation, I also recommend doing an organizational assessment through discussion with program staff. An organizational assessment allows the evaluator to understand:

1. Location and facilities of the program: infrastructure in different locations and staff delivery of the services.
2. Program personnel structure: who is doing what, and who makes certain decisions. A diagram of the organizational structure is useful to report in your evaluation.
3. Values and interaction quality within the organization: interactions among staff members and with their clients.
4. Qualifications of the personnel.
5. Frequency of meetings and communication.
6. Staff training opportunities.
7. Ongoing program monitoring: how do they know the program is running smoothly?

Gertler et al. also discuss "the results chain" as part of program theory or theory of change. The results chain outlines five elements that serve as a map of the program, helping evaluators design their evaluation and select measures (or performance indicators) corresponding to the program activities. Below is the results chain taken from Gertler et al.'s chapter 2.

(Figure: the results chain--inputs, activities, outputs, outcomes, and final outcomes)

I suggest that you model your program based on this results chain flow chart.

Questions for Discussion
As an evaluator, how do you decide if different stakeholders have different theories of a given program?

Tasks for this week

1. Begin drafting your IRB application, which includes filling out (1) HUMAN SUBJECTS APPLICATION (New Studies) MS Word03.doc and (2) Consent Form Template (11-21-14).docx.

2. Try to identify the program theory or results chain of the program you are evaluating. Understanding a program theory or results chain helps you develop appropriate measures to evaluate the outcomes. Leeuw's article can help you develop a program theory. In addition, try to present the program theory or results chain visually by drawing a flow chart of it. That will also be used in the program review and description section of your evaluation report.

3. You spoke with Natalie and Lydia about your project last week. With their input, please draft a summary of your project along with questions for clarification, and send it to Jamie, the CFC founder, for her input. The purpose is to make sure that all the primary stakeholders have been consulted and that everyone agrees on your project before you proceed. This has to be done ASAP.

4. By now, you should have a complete understanding of the program operations and your project design.

5. Make sure that every one of your team members has a shared understanding of the project.

Monday, September 7, 2015

Formative Evaluation

This week we will go through formative evaluation. It can be used both for a program that has already been running and for a program that is being created, though it is more often used to evaluate the latter. Beyer (1995) defines it as "evaluating or assessing a product while that product is in the process of being created and shaped." Sometimes it is called "improvement-oriented evaluation." Rather than making a judgment or determining the impact of the program (summative), formative evaluation identifies ways in which a program can be improved. There is a well-known metaphor for the distinction: "when the cook tastes the soup, that's formative; when the guests taste the soup, that's summative."

Beyer focuses primarily on using formative evaluation for a product that is being developed, before it is put into regular use. For example, take Apple's iPhone with its antenna issue. If the formative evaluation had been done well, Apple would have found and fixed the problem before the phone went on sale (i.e., before regular use). Beyer offers four stages at which formative evaluation of a product or a program needs to occur:

  1. Design
  2. Prototype 
  3. Pilot
  4. Field test

What happens when a program has already been running or a product has already gone on sale? In what way can formative evaluation still be used? Is it too late to evaluate formatively? If a program has already been running or a product has already gone on sale, why bother with formative evaluation? Some funders would just want to see the final outcomes of the program; how would you convince them that formative evaluation is needed?

Formative evaluation can also be used to evaluate a program while it is already in operation. According to Michael Patton, formative evaluation asks the following questions:

  1. What are the program's strengths and weaknesses? 
  2. To what extent are participants progressing toward the desired outcomes?
  3. Which types of participants make good progress and which types aren't doing so well?
  4. What kind of implementation problems have emerged and how are they being addressed?
  5. What's happening that wasn't expected?
  6. How are staff and clients interacting? 
  7. What are staff and participant perceptions of the program? 
  8. What do they like? dislike? want to change? 
  9. What are perceptions of the program's culture and climate?  
  10. How are funds being used compared to initial expectations? 
  11. How is the program's external environment affecting internal operations? 
  12. Where can efficiencies be realized? 
  13. What new ideas are emerging that can be tried out and tested?

What are your data sources? Who, what, and where are the best sources of this information? Beyer mentions two things: people and well-established standards. Who are the people?
  1. Experts 
  2. Users (e.g., intended beneficiaries) 
  3. Stakeholders (e.g., providers of services, teachers, parents, community members etc.) 

Class activities

Below are tasks for you to do as a group: 
  1. With your project team members, identify experts related to your project, locally or internationally. They can be anyone who can provide you feedback on the program activities and framework. How many of them do you need, and in what sub-areas?
  2. Once identified, think about what kinds of things you would like the experts to help with. List all the things you would like them to help with.
  3. Identify users of your project. Who are your primary and secondary users? 
  4. Identify stakeholders of your project. Who would you like to include and why? 

Tasks for you to do this week
  1. Schedule a Skype call with your primary stakeholders (Jamie and Natalie) 
  2. Prepare questions to ask both of them
  3. Start collecting all the information about the program you are evaluating--everything about the program, because it allows you to have a comprehensive understanding of the whole program. Don't wait until you know all the things you are asked to evaluate.
  4. Create an online shareable folder to which your team members can upload documents.
  5. Always look into the government policy documents related to your program, as they give you the big picture from the national level. It is required that you know what's going on with national policy in your program area. Here is the website of the Cambodian Ministry of Education: http://www.moeys.gov.kh/en/policies-and-strategies.html
  6. Start writing up the program description. For example, if you are working on the preschool program, write all about the program's background and overall activities.