Monday, November 30, 2015

How to encourage evaluation utilization?

Often, an evaluation is treated as done once the report is sent to the users. But without ensuring that the results and recommendations are actually used by the organization, the evaluation may not mean anything. What would you do to ensure the utilization of the evaluation you have written?

Monday, October 26, 2015

Analyzing Qualitative Data

Schutt's (2011) reading on qualitative data analysis offers some useful tools to help you with your data analysis during and after data collection. There are a few things that I would like you to pay particular attention to, as I believe they are helpful tools for your data collection and analysis.

Exhibit 10.3 is a very helpful one to use during your data collection. Fill out this sheet right after each of your interview sessions to identify four things (a minimal sketch of how you might store these summaries follows the list):
  1. What were the main issues or themes that struck you in this contact?
  2. Summarize the information you got (or failed to get) on each of the target questions you had for this contact.
  3. Anything else that struck you as salient, interesting, illuminating or important in this contact?
  4. What new (or remaining) target questions do you have in considering the next contact with this site?
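If you want to keep these contact summaries in a structured, searchable form, here is a minimal sketch of my own (not from Schutt; all names and content are invented for illustration) that turns the four questions into a Python record:

```python
from dataclasses import dataclass, field

@dataclass
class ContactSummary:
    """One contact summary sheet, filled out right after an interview session."""
    site: str
    date: str
    main_themes: list[str] = field(default_factory=list)                 # Q1: main issues or themes
    target_question_notes: dict[str, str] = field(default_factory=dict)  # Q2: info gained (or missed) per target question
    other_salient_points: list[str] = field(default_factory=list)        # Q3: anything else salient or illuminating
    next_contact_questions: list[str] = field(default_factory=list)      # Q4: new or remaining target questions

# Hypothetical example entry (content invented for illustration)
summary = ContactSummary(
    site="School A",
    date="2015-11-05",
    main_themes=["teacher workload", "parent engagement"],
    target_question_notes={"How is the program staffed?": "Two teachers; no assistant yet."},
    other_salient_points=["Principal is keen to see preliminary findings."],
    next_contact_questions=["How are materials distributed to classrooms?"],
)
print(summary.main_themes)
```

A stack of these records is easy to scan later when you look for themes across contacts.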

Also, the reading by Taylor-Powell and Renner (2003) provides a good step-by-step process for analyzing qualitative data. Here is a summary of those steps:
  1. Get to know your data: read and reread your transcripts and notes, and write down your impressions.
  2. Focus the analysis: revisit your evaluation questions and decide whether to analyze by question/topic or by case/group.
  3. Categorize information: identify themes or patterns and organize them into coherent categories (coding).
  4. Identify patterns and connections within and between categories.
  5. Interpretation: bring it all together by attaching meaning and significance to the patterns you found.
For details of the article above on how to analyze qualitative data, please click here.
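To make the categorizing step (step 3) concrete, here is a minimal sketch, entirely my own illustration (the codebook and responses are invented, and real coding is more iterative than keyword matching), of tallying themes across open-ended responses in Python:

```python
from collections import Counter

# Hypothetical codebook: theme -> keywords that signal it (illustration only)
codebook = {
    "teacher support": ["teacher", "instructor", "staff"],
    "materials": ["book", "supplies", "materials"],
    "attendance": ["absent", "attend", "miss class"],
}

responses = [
    "The teacher helped me after class.",
    "We do not have enough books or supplies.",
    "Students miss class during harvest season.",
]

# Count how many responses touch each theme
theme_counts = Counter()
for response in responses:
    text = response.lower()
    for theme, keywords in codebook.items():
        if any(keyword in text for keyword in keywords):
            theme_counts[theme] += 1

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```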


Due next week, November 2nd: the Method section of your evaluation, which includes Sampling, Planning, and Instruments.

Monday, October 19, 2015

Class Updates: October 19th

I would like to bring your attention to the assignments that are due in the following weeks:

October 26th: updated questionnaires or interview questions. This should be the final version of what you will use for your study.

November 2nd: Method section. This section includes (1) sampling, (2) planning (or procedures), and (3) instruments (the questions you develop--the ones due on the 26th, but here presented in a descriptive format).

November 9th: Analysis strategies (e.g., how do you plan to analyze your data, and what techniques will you use? If you are analyzing focus group data, follow the steps given in your reading materials and write up your own plan). This section lets readers understand how you will analyze your data; if no details are given, your paper will raise doubts for readers about the rigor of your data analysis. Refer to the reading materials from October 26th to help you with the description and plan for analysis. We do not have an actual class on November 9th, but you will use this time to give feedback on your fellow groups' assignments (to be sent to you on the 10th).

November 16th: Feedback assignment due. We do not have an actual class this day, but you will work on data analysis with your group members, who have just returned from Cambodia.

November 23rd: Your data should already be analyzed, and you should have preliminary findings ready to present.

November 30th: Final paper due.


Monday, October 5, 2015

Data collection techniques: Which one is right for your project?

This week and last week, we covered methods of obtaining data, the very important part of your evaluation that leads us to findings. Data can be obtained through various techniques and tools, and there is no limit to the number of techniques you can use. It depends on what you think will answer your evaluation questions, as well as what the program evaluation sponsor wants. Discussion and negotiation between you as the evaluator and the program sponsor are an ongoing process that helps ensure the credibility of your evaluation report. Ryan, Meiers, and Visser (2012) offer 11 techniques (pages 83-156) that we talked about in class in the past week. While these tools are written mainly for needs assessment, some of them can be used in both formative and summative evaluation, such as document and data review, guided expert reviews, focus groups, interviews, and performance observations. I would like to bring your attention to the techniques we will primarily use for our projects: interviews, focus group interviews, and surveys.

Ryan, Meiers, and Visser (2012) provide some helpful guides to help you prepare for your interviews (page 110): 


There is also a very useful interview protocol template that you can use for your interviews. I suggest using this template when you conduct your interviews.

For focus groups, here is the interview protocol: 

Beyer's (1995) data gathering instruments for formative evaluation include 12 techniques, some of which overlap with Ryan et al.'s (2012) needs assessment techniques, such as interviews, focus groups, and observations. The techniques mentioned in Beyer are product-development oriented, which is not common in NGO programming. Think about which of them are applicable to NGO programming.

Gertler et al. (2011) offer a unique step-by-step procedure for getting your data, primarily survey data, collected. The chapter also provides strategies for developing indicators (or measures). Measures should be based on the results chain (or the program theory). In other words, questions should be asked based on the program's inputs, activities, and outputs, making sure that we don't measure an orange when the inputs and activities are an apple. By now, you should already be familiar with your program description--inputs, activities, and outputs. Remember, you will need to include your program theory in the program description in your paper.

Gertler also recommends pilot testing the questionnaire to make sure that questions are worded appropriately and understood by both the surveyors and the respondents. Piloting also lets us know whether the questionnaire is too long and whether its format is consistent throughout.

Measures 

I recommend using measures that have already been established in the social science literature when you are trying to assess academic or behavioral outcomes of students or other users. In a previous evaluation of STEM interest among secondary students (grades 8-12), we used measures from the TIMSS questionnaire. You can create your own if you cannot find suitable measures in the literature or if your questions are very specific to your topic.

While creating your own survey measures can be the easiest way to answer your evaluation questions, such measures may be vulnerable to reliability and validity problems. Using well-established measures helps increase the credibility of your evaluation. At the same time, the evaluator must be aware that most well-established measures are based on a certain context, mainly Western ones. Therefore, the evaluator must examine the items in the measures and consult with local experts to help choose appropriate items. Posavac offers a few suggestions for choosing measures, including:

1. Use multiple measures (problems usually arise when you rely on a single-item question)
2. Use non-reactive measures (reactive measures lead participants to respond with what they think you want to hear)
3. Use only variables relevant to the evaluation (the focus, not merely the interesting)
4. Use valid measures (fact-based vs. attitudinal measures)
5. Use reliable measures (well-established measures help reduce this problem; check internal consistency with Cronbach's alpha--see the sketch after this list)
6. Use measures that can detect change (program effect vs. external factors --> control variables)
7. Use cost-effective measures
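On point 5, once you have piloted a multi-item scale, you can check its internal consistency with Cronbach's alpha. Here is a minimal sketch in Python (the formula is the standard one; the example scores are invented):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                  # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()    # sum of each item's variance
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering a 4-item Likert scale (1-5); numbers invented for illustration
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # ~0.93 here; values around .70+ are commonly treated as acceptable
```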

Examples of measures used in previous projects 

Let's look at the measures used in a previous evaluation of an IT certificate training program's effects on students' computer skills and attitudes toward computers. The team examined several different outcomes, including:

1. Academic performance
2. Students' attitudes toward computers
3. Students' attitudes toward the internet
4. Basic computer skills of the students
5. Students' current use of computers

Their independent variable was IT program participation status: participated in the program, failed the course, currently enrolled after passing the screening tests (but not yet started), and failed to enroll. Their control variables included number of siblings in college, education levels of the father and mother, and the student's future plans.
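As a hedged sketch of how such a design might be analyzed (this is not the team's actual code; all variable names and numbers are invented), you could regress an outcome on the categorical participation status plus the controls:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented illustration data: outcome, participation status, and family-background controls
df = pd.DataFrame({
    "computer_skill":   [72, 65, 80, 58, 75, 61, 69, 55, 78, 60, 70, 57],
    "participation":    ["completed", "failed_course", "completed", "not_enrolled",
                         "enrolled", "failed_course", "enrolled", "not_enrolled",
                         "completed", "failed_course", "enrolled", "not_enrolled"],
    "siblings_college": [1, 0, 2, 0, 1, 0, 1, 0, 2, 0, 1, 0],
    "father_edu":       [12, 8, 16, 6, 12, 9, 10, 6, 14, 8, 11, 7],
    "mother_edu":       [10, 6, 14, 6, 12, 8, 9, 5, 13, 7, 10, 6],
})

# C(...) expands participation status into dummy variables; the controls enter linearly
model = smf.ols(
    "computer_skill ~ C(participation) + siblings_college + father_edu + mother_edu",
    data=df,
).fit()
print(model.summary())
```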

Here is an example from a STEM education project. The team adapted scales from the TIMSS student questionnaire. They broke the questions into three separate sections by the type of method used in their study: (1) student questionnaire, (2) focus group interviews, and (3) individual interviews.

Tasks for this week 

If your project involves a survey, you may have already found measures (the ones you submitted for IRB review). I suggest that you look into your survey questions again and try to refine them, putting them into a very well-formatted survey. Make sure that you know the original sources of the measures you adopted, as you will cite them in your report, and make sure to present evidence of the reliability and validity of the measures.

I will send my feedback on your first assignment (program description and literature review) a day or two after class, for you to make your revisions. The deadline to turn it in is October 19th.
      

Monday, September 21, 2015

Engage stakeholders, describe the program, and focus the evaluation design

Engage Stakeholders

What distinguishes program evaluation from research is the engagement of stakeholders. In research, the researchers remain objective and authoritative in planning the study, choosing participants, and developing questionnaires or interview questions. In evaluation, by contrast, the evaluators must work collaboratively with various stakeholders to design the evaluation, develop questions based on merit, worth, or significance, and focus on the intended uses by intended users. Engaging stakeholders, mainly primary stakeholders, must be done on an ongoing basis from the beginning to the end of the evaluation to ensure the accuracy and utility of the evaluation.

CDC offers a very helpful checklist for stakeholder engagement:

Identifying stakeholders is an important step, as it helps you design your evaluation samples and create questions to ask. CDC also offers an "Identifying Key Stakeholders" worksheet in which you can fill in your potential stakeholders. Based on your project, work with your group members to identify your stakeholders based on these three categories:
  1. Those involved in program operations (e.g., program staff, managers, partners, funding agencies)
  2. Those served or affected by the program (e.g., students, families, community members)
  3. Primary intended users of the evaluation (those in a position to do or decide something about the program)
Then, use the following worksheet to identify stakeholders that will increase the credibility, implement the interventions, advocate for changes, and fund/authorize the continuation or expansion of the program:

For each of the stakeholders identified, list all of the activities and/or outcomes that matter the most to them:

Describe the program 

In evaluation, the program description is similar to the literature review in research--but an evaluation needs both the program description and a literature review. CDC offers a very nice program description checklist that will be helpful for your own program description.

Here is the checklist of what you should include in your program description:

To help you frame your logic model, here is the worksheet to help guide your design:


Focus the evaluation design 

CDC also offers a helpful checklist for evaluation focus. This checklist helps you define your PURPOSE and formulate your evaluation questions. It also reminds you of the need to review your evaluation questions with stakeholders, program managers, and program staff.

Tasks for this week: 

1. Reach out to primary stakeholders or service providers, including program managers and program staff, to obtain information about the program being evaluated. Talking with the program staff will help you understand your program better, and it's also a chance for you to ask for their input and to obtain available documents or reports about the program. What we did last week was talk to stakeholders who are not involved in the program directly; what you will need to do this week is talk to those who work on the program on a day-to-day basis.

2. Meet with me to discuss your evaluation design.

3. Draft your IRB Human Subject Application.

4. Develop measures/questionnaires/interview questions for your evaluation.

5. Finalize your program theory or results chain after you talk to program staff on the ground. 

Monday, September 14, 2015

Summative or Impact Evaluation

This week we are discussing summative, or impact, evaluation. Impact evaluation seeks to determine if a program has an impact based on the intended outcomes. To measure the intended outcomes, it is important that the evaluator understand how the program operates or how the program conceptualizes its framework. This is called a "theory of change" or "program theory." As Gertler et al. noted, "A theory of change is a key underpinning of any impact evaluation, given the cause-and-effect focus of the research. As one of the first steps in the evaluation designs, a theory of change can help specify the research questions" (p. 22). It is the evaluator's task to identify the program theory or the theory of change.

A well-operating program may not have a specific, explicitly stated theory, even though its staff know what they are trying to achieve. Helping program staff or primary stakeholders articulate a clearly stated program operation with an explicit operating framework is valuable both for the organization, which gains valid documentation and a rigorous program theory useful for fundraising purposes, and for the evaluator, who gains clearly defined program goals and objectives on which to base the evaluation criteria used to determine the outcomes of the program being evaluated. Program theory development or identification can be done through discussion with program staff and other primary stakeholders.

There is an excellent article on how to establish a program theory published in the American Journal of Evaluation by Frans L. Leeuw (2003) entitled, "Reconstructing program theories: Methods available and problems to be solved." Leeuw offered three approaches to help uncover the mystery of a program theory, making the theory more explicit.

The first approach is the "policy-scientific" method, which involves reviewing empirical evidence as well as program documentation (pay attention to statements like "our goal is to improve..." or "we argue that...") and formulating propositional statements such as "if ..., then ...". The second approach is the strategic assessment method, which refers to the process of uncovering all possible assumptions about the program; group dialogue is central to this approach. The third approach is the "elicitation" method, which involves a mental mapping process (a method used in cognitive psychology) where participants (stakeholders) are asked to share their thoughts on the program model (or the logic of the program), which are then compared with evidence from scientific organization studies.

Leeuw mentioned that the first approach is "best suited for ex post evaluations (after the fact) of programs and policies backed by documentary evidence (i.e., often public policies), while the strategic assessment and elicitation approach appear to be more relevant for ex ante evaluations under other conditions" (p. 16). Leeuw's article offers step-by-step methods to help you digest program theory efficiently. This is the citation to the article:
Leeuw, F. L. (2003). Reconstructing program theories: methods available and problems to be solved. American Journal of Evaluation, 24(1), 5-20.
And this is the link to the article (you may need to be connected to the campus network to access it). Worth mentioning is the figure provided in the paper, which illustrates how you can use the policy-scientific method to establish a program theory:


When a program does not have an explicitly stated theory, a needs assessment is helpful. This can be done in a backward process where the evaluator asks the program staff and primary stakeholders why the program is needed--why this program exists in the first place, who needed it, for what purpose, and what outcomes it is meant to accomplish. Discussion with primary stakeholders is necessary to gain a better understanding of the needs and of the theory they generate. This needs assessment can also be useful when you describe the program--in your program description, where you will write about the background, mission, and activities of the program.

In addition to its use in theory development and identification, needs assessment is also relevant for both process and outcome evaluation. For process evaluation, the evaluator is interested in knowing whether what was needed got implemented, whether those who needed the service received it, and whether the program staff have the capacity to deliver what is needed. Needs assessment also enables the evaluator to understand whether people need the services being offered, and whether they think the services are relevant to their lives given the context in which they are situated. The responses allow the evaluator to make recommendations for program improvement. For example, the Weaker Student Program implemented by CFC in 2013 received little attention and participation from students and parents, even though teachers reminded the parents of the possible benefits of their children's participation in the program. In this case, parents may not have seen a real need for this program for their children. No evidence was gathered from the parents at the time--only students' data were collected. A needs assessment of parents via face-to-face interviews would be helpful in understanding both the program theory and the actual need perceived by the beneficiaries. For outcome evaluation, the evaluator is interested in knowing whether meeting the needs affects participants' economic, social, psychological, or academic functioning. In other words, is there a relationship between the stated needs and these functional outcomes?

Needs assessment can be done using different strategies, including:
  • Personal observations of the resources and needs in the community
  • Social indicators of need (via national survey data)
  • Community surveys of need--including attitudinal surveys
  • Service availability in the area (a check for duplication)
  • Key informant interviews and information (usually village chiefs or school principals)
  • Focus groups
  • Community forums--via parent-school meeting days or occasional events on campus
For your own program evaluation, I also recommend doing an organizational assessment through discussions with program staff. An organizational assessment allows the evaluator to understand:

1. Location and facilities of the program: infrastructure in different locations and staff delivery of the services.
2. Program personnel structure: who is doing what, and who makes certain decisions. Here a diagram of the organizational structure is useful to include in your evaluation report.
3. Values and interaction quality within the organization: interactions among staff members and between staff and their clients.
4. Qualifications of the personnel
5. Frequency of meetings and communication
6. Staff training opportunities
7. Ongoing program monitoring: how do they know the program is running smoothly?

Gertler et al. also discuss "the results chain" as part of program theory or theory of change. The results chain outlines five elements that serve as a map of the program, helping the evaluator design the evaluation and select measures (or performance indicators) corresponding to the program's activities. Below is the results chain taken from Gertler et al.'s chapter 2.

I suggest that you model your program based on this results chain flow chart.
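Before you draw the flow chart, it can help to lay the chain out in writing. Here is a minimal sketch (the five element names follow Gertler et al.'s results chain; the preschool-program entries are my own invented illustration):

```python
# Hypothetical results chain for a preschool program (entries invented for illustration)
results_chain = {
    "inputs":         ["teachers", "classrooms", "budget", "teaching materials"],
    "activities":     ["teacher training", "daily preschool classes", "parent meetings"],
    "outputs":        ["teachers trained", "children enrolled", "sessions delivered"],
    "outcomes":       ["improved school readiness", "higher grade 1 attendance"],
    "final_outcomes": ["improved primary school completion"],
}

# Each measure (performance indicator) should map onto one link in this chain,
# so that you don't measure an orange when the inputs and activities are an apple.
for element, entries in results_chain.items():
    print(f"{element}: {', '.join(entries)}")
```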

Questions for Discussion
As an evaluator, how do you decide if different stakeholders have different theories of a given program?

Tasks for this week

1. Begin drafting your IRB application that includes filling out  (1) HUMAN SUBJECTS APPLICATION (New Studies) MS Word03.doc and (2) Consent Form Template (11-21-14).docx.

  2. Try to identify the program theory or results chain of the program you are evaluating. Understanding the program theory or results chain helps you develop appropriate measures to evaluate the outcomes; Leeuw's article can help you develop the program theory. In addition, try to present the program theory or results chain visually by drawing a flow chart of it. That will also be used in the program review and description section of your evaluation report.

  3. You spoke with Natalie and Lydia about your project last week. With their input, please draft a summary of your project and questions for clarification, and send it to Jamie, CFC's founder, for her input. The purpose is to make sure that all the primary stakeholders have been consulted and that everyone agrees on your project before you proceed. This has to be done ASAP.

4. By now, you should have a complete understanding of the program operations and your project design.

5. Make sure that everyone of your team members has a shared understanding of the project.

Monday, September 7, 2015

Formative Evaluation

This week we will go through formative evaluation. It can be used both for a program that has already been running and for a program that is being created, though it is more often used to evaluate the latter. Beyer (1995) defines it as "evaluating or assessing a product while that product is in the process of being created and shaped." Sometimes it is called "improvement-oriented evaluation." Rather than making a judgment or determining the impact of the program (summative), formative evaluation identifies ways in which a program can be improved. There is a well-known metaphor for the formative/summative distinction: "when the cooks taste the soup, that's formative; when the guests taste the soup, that's summative."

Beyer focuses primarily on using formative evaluation for a product that is being developed, before it is put into regular use. For example, consider Apple's iPhone antenna issue: if formative evaluation had been done well, Apple would have found and fixed the problem before the phone went on sale (that is, before regular use). Beyer offers four stages at which formative evaluation of a product or a program needs to occur:

  1. Design
  2. Prototype 
  3. Pilot
  4. Field test

What happens when a program is already running or a product has already gone on sale? In what way can formative evaluation be used? Is it too late to evaluate formatively? If a program is already running and a product has already gone on sale, why bother with formative evaluation? Some funders just want to see the final outcomes of the program; how would you convince them that formative evaluation is needed?

Formative evaluation can also be used to evaluate a program while it is already in operation. According to Michael Patton, formative evaluation asks the following questions:

  1. What are the program's strengths and weaknesses? 
  2. To what extent are participants progressing toward the desired outcomes?
  3. Which types of participants make good progress and which types aren't doing so well?
  4. What kind of implementation problems have emerged and how are they being addressed?
  5. What's happening that wasn't expected?
  6. How are staff and clients interacting? 
  7. What are staff and participant perceptions of the program? 
  8. What do they like? dislike? want to change? 
  9. What are perceptions of the program's culture and climate?  
  10. How are funds being used compared to initial expectations? 
  11. How is the program's external environment affecting internal operations? 
  12. Where can efficiencies be realized? 
  13. What new ideas are emerging that can be tried out and tested?
What are your data sources? Who, what, and where are the best sources of this information? Beyer mentioned two things: people and well-established standards. Who are the people?
  1. Experts 
  2. Users (e.g., intended beneficiaries) 
  3. Stakeholders (e.g., providers of services, teachers, parents, community members etc.) 
Class activities

Below are tasks for you to do as a group: 
  1. With your project team members, identify experts related to your project, locally or internationally. They can be anyone who can provide you feedback to the program activities and framework. How many of them and in what sub-areas? 
  2. Once identified, think about what kinds of things you would like the experts to help with. List all the things you would like them to help with. 
  3. Identify users of your project. Who are your primary and secondary users? 
  4. Identify stakeholders of your project. Who would you like to include and why? 
Tasks for you to do this week
  1. Schedule a Skype call with your primary stakeholders (Jamie and Natalie) 
  2. Prepare for questions to ask both of them 
  3. Start collecting all the information about the program you are evaluating--everything about the program because it allows you to have a comprehensive understanding of the whole program. Don't wait till you know all the things you are asked to evaluate. 
  4. Create an online shareable folder that your team members can upload the documents 
  5. Always look into the government policy documents related to your program, as they give you the big picture from the national level. You are required to know what's going on with national policy related to your program. Here is the website of the Cambodian Ministry of Education: http://www.moeys.gov.kh/en/policies-and-strategies.html  
  6. Start writing up program description. For example, if you are working on your preschool program, then write all about the program background and overall activities. 

Monday, August 31, 2015

Aug 31st: Plan and Essential Information for your Evaluation

After you have selected your evaluation topic, here are steps that could help you plan for your evaluation, especially for you to complete your first part of the assignment:
  1. Schedule a meeting with your team members to plan for your evaluation. Kelly and I can also join your meeting to help focus your topic. With your team members, start brainstorming all kinds of questions you want to ask the primary stakeholders (those who manage the program on a day-to-day basis), and write them down. Questions should be as detailed as possible, covering the background and history of the program, the program's mission or goals, day-to-day activities, budget, staff involved with program operation, the number of beneficiaries it serves, the purpose of this evaluation, the intended uses of the results of this evaluation, etc. This needs to be done this week, by Friday 9/4. 
  2. As part of your initial contact, send a brief note (via email) describing, to the best of your understanding, the program being evaluated to your primary stakeholders (for the CFC project, send to Jamie and Natalie) to obtain their feedback. It needs to be short and right to the point, and it needs to be done by Friday 9/4. 
  3. Once you receive their feedback, the next step is to schedule a meeting (via Skype) with the primary stakeholders (for the CFC project: Jamie and Natalie), again, to explore the program being evaluated in more detail. Make sure that your questions allow you to understand their needs and their intended uses of the evaluation results. It would also be important to ask them who should be included in your sample (from whom data will be collected). Right now, their schedule is open on Tuesday, September 8th, from 9:30am till noon. 
  4. Depending on whom they refer you to, please then schedule a Skype call with those people. They may be staff on the ground, including Savy Ung, CFC's superintendent, and Christin Spoolstra, CFC's deputy country director.  
  5. Set up your evaluation timeline--when to finish what. 
  6. Discuss individual team members' roles--who is doing what? It is also helpful to assign one team member as the contact person for the primary stakeholders and for Kelly and me. 
  7. Conduct literature search related to your evaluation topic including relevant government policy, similar programs run by other NGOs or institutions, as well as empirical findings related to the program.  
  8. Begin drafting your IRB Human Subject Application. You have almost 4 weeks to do this. Make sure it is ready to be submitted by September 28th. 
  9. The next step would be to meet up with your team members to work on writing up the first part of your evaluation.   
Questions for discussion

1. Thinking of your project, in what way can it be a needs assessment?
2. As you plan your talk with primary stakeholders (e.g., those who sponsor the needs assessment), think of some questions to ask that show your understanding of the needs assessment project you are undertaking.
3. Think about who should be included as your stakeholders in your needs assessment project.

Due next week, September 7th 2015 

IRB package will be created on the irbnet.org website. This package is the registration of your project on the website that includes project information, IRB certificate upload, and names of all your team members added to the package.

Task division among your team members should already be defined. Please be prepared to share it with everyone in class.

Your team will also be asked to briefly describe the progress of the project.

Monday, August 24, 2015

Welcome to CIE 402: Development and Evaluation of International Educational Projects

Welcome to CIE402!

There are several things that we will go through in this first class:
  1. Introduction
  2. Syllabus
  3. Introduction to Caring for Cambodia (CFC) project at Lehigh with a video 
  4. Choosing an evaluation topic of your interest
  5. Selecting team members for your evaluation group 
  6. Cambodia trip: November 4th-14th  
  7. Expectation of the report, style and contents 
  8. Example of evaluation reports from past semesters (available in coursesite) 
  9. IRB certificate and submission of your project  
  10. IRB proposal submission 
  11. Resources for evaluation (conferences, journals, etc.) 
  12. Evaluation topics 
  13. Overview of program evaluation: definition, purposes, types, standards etc.  
  14. Basic facts about CFC
  15. Contact information of CFC primary stakeholders that you can reach out to negotiate your project 

7. Expectation of the report, style and contents
Let's talk about item #7: expectations for the report, its style and contents. I created this template for consistency of use in this particular class. Please use this template file for your evaluation report; a Word file created on your own will not be accepted.


8. Example of evaluation reports from past semesters 
I would also like to share with you a sample evaluation paper that was written in a previous class on a student council evaluation at the Caring for Cambodia schools. This is the link to the paper (also available in coursesite).

9. IRB certificate and submission of your project  
Since students are required to conduct data collection and data analysis as part of their evaluation project, obtaining an IRB certificate is required. This assignment accounts for 5 points of the total grade. For those of you who have already completed the training and received the certificate, the grade will be automatically applied to your overall grade. To take the training and obtain the certificate, please go to this website: http://phrp.nihtraining.com/ . Once you receive the certificate, please save it as a PDF file and email it to my TA, Kelly Grace (krg314@lehigh.edu).

10. IRB proposal submission
Students are required to submit their evaluation proposal for IRB approval before their evaluation can take place. Please see the deadline for this proposal submission in your syllabus. Here is a sample IRB proposal submitted by a group of students in a previous class: link to the file

11. Resources for evaluation
The American Evaluation Association (AEA) is perhaps one of the largest and best recognized evaluation organizations in the United States and in the world, with about 7,700 members from within the US and over 60 other countries as of 2014. Its mission is to "improve evaluation practices and methods, increase evaluation use, promote evaluation as a profession, and support the contribution of evaluation to the generation of theory and knowledge about effective human action." An article by Donaldson & Christie (2006) lists all of the major associations worldwide. The AEA publishes a well-known journal in this field, the American Journal of Evaluation. Here is a brief introduction to the journal:

"The American Journal of Evaluation (AJE) explores decisions and challenges related to conceptualizing, designing and conducting evaluations. Four times/year it offers original, peer-reviewed, articles about the methods, theory, ethics, politics, and practice of evaluation. AJE features broad, multidisciplinary perspectives on issues in evaluation relevant to education, public administration, behavioral sciences, human services, health sciences, sociology, criminology and other disciplines and professional practice fields."

This association holds an annual conference where both academics and practitioners come together to share their results and lessons learned. This year's conference, its 29th, will be held November 9-14 in Chicago. This year's conference theme is: Exemplary Evaluations in a Multicultural World.

What can you do with evaluation skills?   

Well, you can be an evaluator or evaluation specialist, consultant, program analyst, or research analyst, depending on your area of specialty. You can find a full job listing on the American Evaluation Association's website. Let's pick one and see what they are looking for.

This is an excellent article that talks about "Emerging Career opportunities in the Transdiscipline of Evaluation Science" by Donaldson & Christie (2006). 

12. Evaluation Topics 

Students will work in groups of three to carry out one of the evaluation projects below.

CFC Project 1: Teachers’ perceptions toward database project: A needs assessment 

Lehigh's CSB (Computer Science and Business) faculty and students have developed an academic and health database system for CFC schools. The database system allows teachers to enter student information, including academic activities and performance as well as health records. The system has yet to be implemented, but training in how to use the database has been provided to the teachers and CFC. Little is known about how receptive the teachers are toward participating in and implementing this database system. The purpose of this evaluation is to assess the needs around this program by interviewing teachers and CFC staff about their perspectives on implementing the database program.

CFC Project 2: Preschool Program Evaluation: Formative and Summative Evaluation

CFC has been implementing its preschool program since 2008. Two evaluation studies have been conducted previously, one in 2010 and the other in 2012. For this evaluation, we aim to examine the progress of the program and the strengths and challenges facing program operation (formative evaluation), as well as the impact of the program on students' school attendance and performance (summative evaluation). We plan to draw data from the school records of students who participated in the program and those who did not, in terms of their school attendance and performance, as well as teacher evaluations of the two groups of students. This evaluation thus utilizes both formative and summative methods.

CFC Project 3: Needs Assessment of Community Contributions to Education

This evaluation seeks to assess the community's willingness or interest in contributing to schools, in terms of materials and finances, in order to sustain the schools and all the existing programs created by CFC. The evaluation will seek to gain perspectives from relevant stakeholders within the school community, namely students, parents, teachers, principals, village and commune chiefs, and relevant government officials. In addition, a review of existing programs related to community contributions in Cambodia or elsewhere will need to be included in order to understand best practices from similar programs that could be adapted for the CFC schools. Previous research by UNESCO in five post-conflict societies found that "the most community involvement involved the provision of material and financial resources, primarily in the form of providing land for building school or classroom venues, contributing materials for school rehabilitation and maintenance. Some communities also contributed human resources through the selection of teachers or involvement in governance structures such as Parent Teacher Association (PTAs)." However, the research also showed that trust is important in determining whether the community will contribute. In other words, trust is a prerequisite to gaining community contributions. With this idea in mind, this evaluation will also seek to understand:
  1. how the community perceives the role of CFC in its educational provision to the children in their community; 
  2. how the community perceives its own role in contributing to education--in other words, what do they think their role is in the education their children receive from CFC schools; 
  3. what existing community resources, tangible and intangible, are readily available for the schools to use; and 
  4. finally, whether the community is willing or interested in contributing to education. 
CFC Project 4: English Language Program 

We have spent the last three years working on the English Language Program in CFC primary schools. Through research studies and evaluations we determined that the ELP needed to be changed, as students were struggling in all areas of language acquisition. We are currently in the second pilot stage of a newly written ELP that incorporates interdisciplinary content linked to traditional classroom learning objectives in order to promote transfer of learning, as well as provide students with supplementary exposure to traditional classroom content. The ELP employs a student-centered, project-based learning approach, and initial findings from a short pilot in the spring proved very promising.

The first full-year pilot of the new program will take place this fall, and we need to establish a clear plan for monitoring and evaluating the program throughout the year. The group working on this project will establish a baseline assessment of students' English language proficiency and work to frame both formative assessments that will monitor progress throughout the year and an end-of-year summative assessment to help teachers and administrators understand the strengths and weaknesses of the program.
Other Project 5
Other Project 6 

13. Overview of program evaluation

There are a few things that you need to do to get you started with an evaluation project:
  1. First of all, know the purpose of your evaluation, or the intended use of the evaluation: to improve or to determine the outcomes? To improve the program, we use "formative evaluation," as it helps an evaluator understand the process of the program and how it runs. In other words, it aims to find out what works, what does not work, and what to do to improve the program. To determine the program's outcomes, we use "summative evaluation," as it helps an evaluator determine if the program provides the intended benefits to the targeted stakeholders. This is an interesting analogy for understanding the two inquiries: "When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative" (Professor Bob Stake, U. of Illinois, Urbana-Champaign). 
  2. Then, you need to identify relevant stakeholders (U1), starting from primary stakeholders, those who closely manage the program (e.g., director, program manager, program officers, etc.). Initial conversations will need to take place with these people in order to understand their needs for program evaluation: do they want to improve the program, examine its impact, or both? Different stakeholders may have different expectations, and so may you. What you planned based on the program description you saw in the ad may change once you start communicating with your stakeholders. This is an important step to ensure that the evaluation does what everyone, or at least the majority of the stakeholders, expects. 
  3. This next step is very important: obtain formal agreements with your stakeholders after your negotiations to undertake the evaluation project (P2: Formal Agreements). The goals have to be clear at this point--although things can change later as the evaluation takes place, the evaluator will need to keep communicating with the stakeholders to modify the agreement terms. This will help with any political conflict from various interest groups (F2: Political Viability or P7: Conflict of Interest): "Any possible attempts by any of these groups to curtail evaluation operations or to bias or misapply the results can be averted or counteracted" (The Joint Committee on Standards for Educational Evaluation, 1994, p. 71). In addition, the agreements should also document whether the procedures to be taken during the evaluation (ongoing communication, data collection, or information gathering) are practical and feasible (F1: Practical Procedures). For example, the evaluator needs to ensure that qualified personnel are available to help out during the whole evaluation process. 
  4. The next step is to start gathering information and documents (A1: Program Documentation), examining the context of the program (A2: Context Analysis), and checking the reliability and validity of the program information (A5: Valid Information & A6: Reliable Information). At this step, the evaluator needs to be in constant contact with the personnel assigned to assist with the program evaluation to ensure that all the information obtained correctly reflects the program's theory and activities.   
  5. Once the information has been obtained, the evaluator is ready to design the evaluation. He or she can start planning the sampling (participant selection) and data collection procedures, designing questionnaires or developing interview questions for the corresponding stakeholders, and finally collecting the data. Here the evaluator needs to ensure that the procedures are practical, as mentioned above (F1: Practical Procedures). The evaluator also needs to discuss with the primary stakeholders which stakeholders they deem important to include in the evaluation; this is done to ensure that your data are collected completely from the various stakeholders identified by you and by the primary stakeholders. During data collection, the evaluator needs to ensure that participants' rights are respected and protected (P3: Rights of Human Subjects). If participants do not want to answer a question because the information is sensitive, the evaluator should respect their wishes and move on to the next question. 
  6. The next step is to analyze the data collected and put together a complete report, including recommendations based on the findings. Results dissemination may be expected via printed materials or an oral presentation by the evaluator. This is when the evaluator needs to prepare a racing car in case the sponsoring organization finds the results disturbing :-). Findings should be disclosed to those impacted by the evaluation (P6: Disclosure of Findings). The Joint Committee noted that "the formal parties to an evaluation should ensure that the full set of evaluation findings along with pertinent limitations are made accessible to the persons affected by the evaluation, and any others with expressed legal rights to receive the results" (p. 109).  
  7. At the same time, the evaluator needs to be knowledgeable in statistical analysis, at least at a basic level. The evaluator also needs to be equipped with knowledge of interview response analysis (qualitative methods). 
  8. As a final step, the evaluator should consider submitting the evaluation report to a national or international conference, pending approval from the sponsoring organization.  

14. Basic Facts of CFC
Below are basic facts of CFC written by Alyssa Buccella, Busra Ozturk and Amanda Pritt of Comparative and International Education, College of Education, Lehigh University.

  • Eleven years ago, Jamie Amelio founded Caring for Cambodia (CFC), a non-profit organization that provides quality education for 6,400 students across 21 schools in Siem Reap.
  • Caring for Cambodia provides teacher training, hygienic classrooms, and essentials for students’ health and welfare such as meals, hygiene kits, uniforms, school supplies, etc. 
  • According to national estimates, only 28% of students graduate from high school across Cambodia. CFC is dedicated to determining how it can assist in making a difference in the lives of these children. 
  • With an estimated 44% of children facing chronic malnutrition (CDHS, 2010), ensuring that students are fed twice a day serves as both an incentive to attend school and a means to focus in class with a full stomach. 
  • Further, many students weren't able to attend school due to a lack of transportation. In 2012 alone, an extra 197 bicycles were distributed to students, making their commutes easier. 
  • Financial restraints pose an immense barrier to education in Cambodia, but with a free education, meals, uniforms, and school supplies, CFC has greatly lightened the burden from families, encouraging them to send their children to CFC schools. 
  • CFC has proven to have the lowest dropout rates in all of Cambodia (less than 2% for its primary level compared to the national average of more than 10%), consistently achieving successful results over the past decade. 
  • Total revenue and support in 2012 amounted to $1.26 million, and expenses totaled $1.19 million. Teacher training alone cost about $56,000 in 2012, but well-trained and incentivized teachers are perhaps one of the most important factors in ensuring a high-quality education for the students in CFC schools. 
  • When taking science and math curriculum into account, it is important to note that CFC operates within standards set by the government. Still, they have yearly goals and by working closely with child-friendly schools, they aim to supplement the national curriculum in order to provide students with a quality education. 

15. Contact information of CFC primary stakeholders that you can reach out to negotiate your project 

Here are some key contacts at the CFC schools for your evaluation:

Mr. Savy Ung, Superintendent. Savy has been with CFC since the start of the organization, for about 13 years. He manages all the schools and oversees all the programs run at the CFC-supported schools on the ground in Siem Reap province. Savy can provide background information on most of the programs, including day-to-day activities and the programs' missions, as well as his intended uses of the results of all of the program evaluations.

Ms. Christin Spoolstra, our new Deputy Country Director. Originally from Indiana and a graduate of Albion College in Michigan, Christin has spent the last three years working for the Peace Corps in Cambodia. Initially, she worked in the English Teaching and Teacher Training Project in Svay Rieng Province, and then, most recently, she was the Volunteer Coordinator in Phnom Penh. Christin's knowledge of the Khmer language, her familiarity with Cambodian culture, and her passion for education will serve her well in her new role with CFC. As Deputy Country Director, Christin will be Country Director Ung Savy's right-hand woman and the central point of communication. Her primary responsibilities will include helping to establish the strategic direction and execution of key CFC programs, working with the Program Managers to strengthen current programs and implement new community programs, developing Human Resource policies and procedures, and training local staff in communication and PR skills.

Ms. Natalie Bastow, Chief Operating Officer. Natalie manages all aspects of CFC, including program initiatives, international network connections, fundraising activities, and decision making. Natalie has been a great resource in linking Lehigh with CFC on the ground and worldwide. She can provide big-picture information about CFC's vision and how evaluations would benefit CFC in the long run, supporting all aspects of CFC's sustainability.

Ms. Lydia Breckon, Development Director. Lydia manages fundraising activities, corporate relationships and opportunities, and grant applications. As one of the primary stakeholders, Lydia plays an essential role in evaluation topic selection, as evaluations provide research-based evidence that she can use to support potential grant proposals.