This week and last week, we covered methods of obtaining data, the part of your evaluation that ultimately leads to your findings. Data can be gathered with a variety of techniques and tools, and there is no limit to the number of techniques you may use. Your choices depend on what you think will answer your evaluation questions, as well as on what the program sponsor wants. Ongoing discussion and negotiation between you, as the evaluator, and the program sponsor help ensure the credibility of your evaluation report. Ryan, Meiers, and Visser (2012) offer 11 techniques (pp. 83-156) that we discussed in class last week. While these tools are written mainly for needs assessment, some of them can be used in both formative and summative evaluation, such as document and data review, guided expert reviews, focus groups, interviews, and performance observations. I would like to draw your attention to the techniques we will primarily use for our projects: interviews, focus group interviews, and surveys.
Ryan, Meiers, and Visser (2012) provide helpful guidelines for preparing for your interviews (p. 110):
There is also a very useful interview protocol template. I suggest using this template when you conduct your interviews.
For focus groups, here is the interview protocol:
Beyer (1995) provides 12 data-gathering instruments for formative evaluation, some of which overlap with Ryan et al.'s (2012) needs assessment techniques, such as interviews, focus groups, and observations. The techniques in Beyer are product-development oriented, which is not common in NGO programming, so think about which of them are applicable to NGO programming.
Gertler (2011) offers a step-by-step procedure for collecting your data, primarily survey data. The article also provides strategies for developing indicators (or measures). Measures should be based on the results chain (the program theory); in other words, questions should follow from the program's inputs, activities, and outputs, so that we do not end up measuring oranges when the inputs and activities are apples. By now, you should already be familiar with your program description--inputs, activities, and outputs. Remember, you will need to include your program theory in the program description section of your paper.
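To make that alignment concrete, here is a minimal sketch that lays out a results chain as a simple data structure, pairing each link with a possible indicator. The program, indicators, and survey item below are hypothetical; the point is only that every measure should trace back to an input, activity, or output in your program theory.

```python
# Hypothetical results chain for an imaginary after-school computer training program.
# Each indicator is tied to a specific link in the chain, so nothing gets measured
# that the program theory does not actually claim to affect.
results_chain = {
    "inputs": {
        "description": "Trainers, computer lab, curriculum materials",
        "indicator": "Trainer hours delivered per month",
    },
    "activities": {
        "description": "Weekly hands-on computer classes",
        "indicator": "Sessions attended per student",
    },
    "outputs": {
        "description": "Students complete the basic skills module",
        "indicator": "Share of enrolled students passing the skills test",
        "survey_item": "How confident are you using a spreadsheet? (1-5)",
    },
}

for link, detail in results_chain.items():
    print(f"{link}: {detail['indicator']}")
```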
Gertler also recommends pilot testing the questionnaire to make sure the questions are worded in a way that both the surveyor and the respondents understand. Piloting also tells us whether the questionnaire is too long and whether its format is consistent throughout.
Measures
When you are assessing academic or behavioral outcomes of students or other users, I recommend using measures that have already been established in the social science literature. In a previous evaluation of STEM interest among secondary students in grades 8-12, for example, we used measures from the TIMSS questionnaire. You can create your own measures if you cannot find suitable ones in the literature or if your topic is very specific.
While creating your own survey measures may be the easiest way to answer your evaluation questions, such measures can be vulnerable to reliability and validity problems. Using well-established measures helps increase the credibility of your evaluation. At the same time, be aware that most well-established measures were developed in a particular context, mainly a Western one. The evaluator should therefore examine the items carefully and consult local experts to help choose appropriate ones. Posavac offers several suggestions for choosing measures, including:
1. Use multiple measures (problems usually arise when you rely on a single-item question)
2. Use non-reactive measures (with reactive measures, participants tend to respond with what they think you want to hear)
3. Use only variables relevant to the evaluation (what is merely interesting vs. what the evaluation focuses on)
4. Use valid measures (fact-based vs. attitudinal measures)
5. Use reliable measures (well-established measures help reduce this problem; report Cronbach's alpha, as illustrated in the sketch after this list)
6. Use measures that can detect change (separating the program effect from external factors, e.g., with control variables)
7. Use cost-effective measures
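On suggestion 5, Cronbach's alpha is the statistic most commonly reported as evidence of internal consistency for a multi-item scale. Below is a minimal sketch of how it can be computed from item-level responses; the items and responses are hypothetical, invented only to show the calculation.

```python
import pandas as pd

# Hypothetical responses to a 4-item attitude scale (1-5 Likert).
# Each row is a respondent; each column is one item of the same scale.
items = pd.DataFrame({
    "att1": [4, 5, 3, 4, 2, 5],
    "att2": [4, 4, 3, 5, 2, 4],
    "att3": [5, 5, 2, 4, 3, 5],
    "att4": [3, 4, 3, 4, 2, 4],
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)"""
    k = df.shape[1]                          # number of items
    item_var_sum = df.var(ddof=1).sum()      # sum of the item variances
    total_var = df.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```

A common rule of thumb is that an alpha of about .70 or higher suggests acceptable internal consistency, although the threshold you need depends on how the scale will be used.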
Examples of measures used in previous projects
Let's look at the measures used in a previous evaluation of an IT certificate training program's effects on students' computer skills and attitudes toward computers. The team examined several different outcomes, including:
1. Academic performance
2. Students' attitudes toward computers
3. Students' attitudes toward the internet
4. Basic computer skills of the students
5. Students' current use of computers
Their independent variable was IT program participation status: participated in the program, failed the course, currently enrolled after passing the screening tests (but not yet started), or failed to enroll. Their control variables included the number of siblings in college, the education levels of the father and mother, and the student's future plan.
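If a team analyzes data like this statistically, one common approach (not necessarily the one this team took) is to enter the participation variable and the control variables together in a regression model, so the estimated program effect is not mixed up with family background. Here is a minimal sketch using an assumed data file and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file and invented column names, for illustration only:
# 'skills_score' is the outcome, 'participation' codes program status,
# and the remaining columns are the control variables listed above.
df = pd.read_csv("survey_data.csv")

model = smf.ols(
    "skills_score ~ C(participation) + siblings_in_college"
    " + father_education + mother_education + C(future_plan)",
    data=df,
).fit()
print(model.summary())
```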
Here is an example from a STEM education project. The team adopted scales from the TIMSS student questionnaire and organized the questions into three sections by data collection method: (1) student questionnaire, (2) focus group interviews, and (3) individual interviews.
Tasks for this week
If your project involves a survey, you may already have your measures (the ones you submitted for IRB review). I suggest you look at your survey questions again and refine them into a well-formatted survey. Make sure you know the original sources of the measures you adopted, as you will cite them in your report, and present evidence of the measures' reliability and validity.
I will send my feedback on your first assignment (program description and literature review) a day or two after class so you can revise it. The deadline to turn in the revision is October 19th.