Research Methods and Program Evaluation Key Concepts: A Study Guide


Research Methods and Program Evaluation Key Concepts: A Study Guide, by Anita Knight and coauthors, offers an overview and consolidation of key concepts in the study of research methods and program evaluation, to help students prepare for exams or other opportunities to demonstrate their knowledge of these subjects.


The guide covers a range of key concepts and now includes access information for online tutorials that supplement the text.



Perhaps the most difficult part of evaluation is determining whether the program itself is causing the changes that are observed in the population it was aimed at. Events or processes outside of the program may be the real cause of the observed outcome, or the real prevention of the anticipated outcome.

Causation is difficult to determine. One main reason for this is self-selection bias. For example, in a job training program, some people decide to participate and others do not. Those who do participate may differ from those who do not in important ways: they may be more determined to find a job, or have better support resources.

These characteristics may actually be causing the observed outcome of increased employment, not the job training program. Evaluations conducted with random assignment are able to make stronger inferences about causation.


Randomly assigning people to participate or not participate in the program reduces or eliminates self-selection bias, so the group of people who participate is likely to be more comparable to the group who did not. However, since most programs cannot use random assignment, causation cannot be determined. Impact analysis can still provide useful information: for example, the outcomes of the program can be described, so the evaluation can report that people who participated in the program were more likely to experience a given outcome than people who did not participate.
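To illustrate why random assignment supports stronger causal inferences, here is a minimal simulation sketch; all numbers (the baseline employment rate, the size of the motivation effect, the 10-point program effect) are hypothetical, not drawn from the text. A latent "motivation" trait drives both self-selection into a job training program and employment, so the naive participant/non-participant comparison overstates the program's effect, while random assignment recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent confounder: motivation raises both participation and employment.
motivation = rng.normal(size=n)
true_effect = 0.10  # hypothetical true lift in employment probability

def employment(participates):
    # Baseline 40% employment, boosted by motivation and by the program.
    p = 0.40 + 0.15 * (motivation > 0) + true_effect * participates
    return rng.random(n) < p

# Self-selection: motivated people opt in more often.
opted_in = rng.random(n) < np.where(motivation > 0, 0.7, 0.3)
emp_self = employment(opted_in)
naive_estimate = emp_self[opted_in].mean() - emp_self[~opted_in].mean()

# Random assignment: participation is independent of motivation.
assigned = rng.random(n) < 0.5
emp_rand = employment(assigned)
rct_estimate = emp_rand[assigned].mean() - emp_rand[~assigned].mean()

print(f"naive (self-selected) estimate: {naive_estimate:.3f}")  # ~0.16, inflated
print(f"randomized estimate:            {rct_estimate:.3f}")    # ~0.10, unbiased
```

The naive comparison inherits the motivation gap between volunteers and non-volunteers; randomization breaks that link.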

If the program is fairly large and there are enough data, statistical analysis can be used to make a reasonable case for the program by showing, for example, that other causes are unlikely. It is also important to ensure that the instruments (for example, tests and questionnaires) used in program evaluation are as reliable, valid and sensitive as possible. According to Rossi et al., 'only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible'.

The reliability of a measurement instrument is the 'extent to which the measure produces the same results when used repeatedly to measure the same thing' (Rossi et al.). If a measuring instrument is unreliable, it may dilute and obscure the real effects of a program, and the program will 'appear to be less effective than it actually is' (Rossi et al.). The validity of a measurement instrument is 'the extent to which it measures what it is intended to measure' (Rossi et al.).
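As a concrete, hypothetical illustration of these definitions, the sketch below estimates test-retest reliability as the correlation between two administrations of the same instrument, and shows how an unreliable (noisy) instrument makes a real program effect appear smaller than it actually is. All scores and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# True scores: the program group is shifted up by a real effect.
true_effect = 5.0
control = rng.normal(50, 10, n)
program = rng.normal(50 + true_effect, 10, n)

def administer(true_scores, noise_sd):
    """Simulate one administration of an instrument with measurement error."""
    return true_scores + rng.normal(0, noise_sd, true_scores.size)

for noise_sd in (2.0, 15.0):  # reliable vs. unreliable instrument
    # Test-retest reliability: correlation between two administrations.
    test1 = administer(control, noise_sd)
    test2 = administer(control, noise_sd)
    reliability = np.corrcoef(test1, test2)[0, 1]

    # The observed standardized effect shrinks as measurement noise grows.
    obs_c = administer(control, noise_sd)
    obs_p = administer(program, noise_sd)
    d = (obs_p.mean() - obs_c.mean()) / np.sqrt((obs_p.var() + obs_c.var()) / 2)
    print(f"noise_sd={noise_sd:>4}: reliability={reliability:.2f}, observed effect d={d:.2f}")
```

With low noise the instrument is reliable (correlation near 1) and the effect is visible; with high noise the same real effect is diluted, making the program look weaker than it is.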

The principal purpose of the evaluation process is to measure whether the program has an effect on the social problem it seeks to redress; hence, the measurement instrument must be sensitive enough to discern these potential changes (Rossi et al.). Only when measures adequately achieve the benchmarks of reliability, validity and sensitivity can an evaluation be said to be credible. It is the duty of evaluators to produce credible evaluations, as their findings may have far-reaching effects.

A non-credible evaluation that is unable to show that a program is achieving its purpose, when it is in fact creating positive change, may cause the program to lose its funding undeservedly. Though the program evaluation processes mentioned here are appropriate for most programs, highly complex non-linear initiatives, such as those using the collective impact (CI) model, require a dynamic approach to evaluation. Collective impact is "the commitment of a group of important actors from different sectors to a common agenda for solving a specific social problem" [20] and typically involves three stages, each with a different recommended evaluation approach:

  • Early stage: developmental evaluation, to help CI partners understand the context of the initiative and its development.
  • Middle stage: formative evaluation, to refine and improve upon progress, together with continued developmental evaluation to explore new elements as they emerge. Formative evaluation involves "careful monitoring of processes in order to respond to emergent properties and any unexpected outcomes."
  • Later stage: summative evaluation, which "uses both quantitative and qualitative methods in order to get a better understanding of what [the] project has achieved, and how or why this has occurred."

Planning a program evaluation can be broken up into four parts, each raising critical questions for consideration. However, it is not always possible to design an evaluation to achieve the highest standards available: many programs do not build an evaluation procedure into their design or budget.

Hence, many evaluation processes do not begin until the program is already underway, which can result in time, budget or data constraints for the evaluators, which in turn can affect the reliability, validity or sensitivity of the evaluation. Programs frequently face budget constraints because most original projects do not include a budget for conducting an evaluation (Bamberger et al.); as a result, evaluations are allocated budgets that are inadequate for rigorous work.

Budget constraints can make it difficult to apply the most appropriate methodological instruments, and they may also reduce the time available in which to do the evaluation (Bamberger et al.). The tightest time constraints arise when an evaluator is summoned to conduct an evaluation while a project is already underway, is given limited time relative to the life of the study, or is not given enough time for adequate planning.

Time constraints are particularly problematic when the evaluator is not familiar with the area or country in which the program is situated (Bamberger et al.). If the evaluation is initiated late in the program, there may be no baseline data on the conditions of the target group before the intervention began (Bamberger et al.). Multiple methods, such as a combination of qualitative and quantitative data, can increase validity through triangulation and save time and money. These constraints may also be dealt with through careful planning and consultation with program stakeholders.


By clearly identifying and understanding client needs ahead of the evaluation, the cost and time of the evaluative process can be streamlined and reduced while still maintaining credibility. All in all, time, monetary and data constraints can have negative implications for the validity, reliability and transferability of the evaluation. The 'shoestring' approach was created to help evaluators work around these limitations by identifying ways to reduce costs and time, reconstruct baseline data, and ensure maximum quality under existing constraints (Bamberger et al.).

The five-tiered approach to evaluation further develops the strategies that the shoestring approach to evaluation is based upon. The earlier tiers generate descriptive and process-oriented information while the later tiers determine both the short-term and the long-term effects of the program.

For each tier, one or more purposes are identified, along with corresponding tasks that enable the purpose of the tier to be achieved. For a needs-assessment tier, for example, the task would be to assess the community's needs and assets by working with all relevant stakeholders. While the tiers are structured for consecutive use, meaning that information gathered in the earlier tiers is required for tasks on higher tiers, the approach acknowledges the fluid nature of evaluation.
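As a small structural sketch of this tier-purpose-task pairing (the text names only the needs-assessment example, so the other tiers are left as placeholders and everything else here is hypothetical):

```python
# Minimal sketch of the five-tiered structure: each tier pairs its
# identified purposes with the tasks that achieve them. Only the
# needs-assessment example from the text is filled in.
tiers = [
    {
        "tier": 1,
        "purpose": "Assess the community's needs and assets",
        "tasks": ["Work with all relevant stakeholders to document needs and assets"],
    },
    # Tiers 2-5 would follow, each building on information
    # gathered in the tiers before it.
]

for t in tiers:
    print(f"Tier {t['tier']}: {t['purpose']}")
    for task in t["tasks"]:
        print(f"  - {task}")
```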


The five-tiered approach is said to be useful for family support programs which emphasise community and participant empowerment. This is because it encourages a participatory approach involving all stakeholders and it is through this process of reflection that empowerment is achieved. The purpose of this section is to draw attention to some of the methodological challenges and dilemmas evaluators are potentially faced with when conducting a program evaluation in a developing country.

Ebbutt defines culture as the constellation of both written and unwritten expectations, values, norms, rules, laws, artifacts, rituals and behaviors that permeate a society and influence how people behave socially. Language also plays an important part in the evaluation process, as language is tied closely to culture. In particular, data collection instruments need to take meaning into account: subject matter that may not be considered sensitive in one context might prove to be sensitive in the context in which the evaluation is taking place. This is a difficult task to accomplish, and techniques such as back-translation may aid the evaluator but may not result in perfect transference of meaning.

It is not common for concepts to transfer unambiguously from one culture to another. Evaluators therefore need to take into account the methodological challenges created by differences in culture and language when attempting to conduct a program evaluation in a developing country. There are three conventional uses of evaluation results: persuasive utilization, direct (instrumental) utilization, and conceptual utilization. Persuasive utilization is the enlistment of evaluation results in an effort to persuade an audience to either support an agenda or to oppose it.

Unless the 'persuader' is the same person who ran the evaluation, this form of utilization is of little interest to evaluators, as they often cannot foresee possible future efforts of persuasion. Direct utilization, by contrast, is of central interest: evaluators often tailor their evaluations to produce results that can directly influence the improvement of the structure or process of a program. For example, the evaluation of a novel educational intervention may produce results that indicate no improvement in students' marks.

This may be because the intervention lacks a sound theoretical background, or because the intervention is not conducted as originally intended. The results of the evaluation would hopefully cause the creators of the intervention to go back to the drawing board to re-create the core structure of the intervention, or even to change the implementation processes. Even where evaluation results do not directly influence the re-shaping of a program, they may still be used conceptually, to make people aware of the issues the program is trying to address.

Returning to the example of an evaluation of a novel educational intervention, the results can also be used to inform educators and students about the different barriers that may influence students' learning difficulties. A number of studies on these barriers may then be initiated by this new information. There are five conditions that seem to affect the utility of evaluation results: relevance, communication between the evaluators and the users of the results, information processing by the users, the plausibility of the results, and the level of involvement or advocacy of the users.

The choice of evaluator may be regarded as equally important as the process of the evaluation itself. Evaluators may be internal or external to the organization running the program; the Division for Oversight Services provides a brief summary of the advantages and disadvantages of each. Potter [36] identifies and describes three broad paradigms within program evaluation.

The first, and probably most common, is the positivist approach, in which evaluation can only occur where there are "objective", observable and measurable aspects of a program, requiring predominantly quantitative evidence. The positivist approach includes evaluation dimensions such as needs assessment, assessment of program theory, assessment of program process, impact assessment and efficiency assessment (Rossi, Lipsey and Freeman). The second paradigm identified by Potter is that of interpretive approaches, where it is argued that it is essential that the evaluator develops an understanding of the perspectives, experiences and expectations of all stakeholders.

This would lead to a better understanding of the various meanings and needs held by stakeholders, which is crucial before one is able to make judgments about the merit or value of a program.


A report commissioned by the World Bank details eight approaches by which qualitative and quantitative methods can be integrated, perhaps yielding insights not achievable through either method alone. Potter also identifies critical-emancipatory approaches to program evaluation, which are largely based on action research for the purposes of social transformation. This type of approach is much more ideological and often includes a greater degree of social activism on the part of the evaluator, and it is appropriate for qualitative and participative evaluations. Because of its critical focus on societal power structures and its emphasis on participation and empowerment, Potter argues this type of evaluation can be particularly useful in developing countries.

Whatever paradigm is used in a program evaluation, whether positivist, interpretive or critical-emancipatory, it is essential to acknowledge that evaluation takes place in a specific socio-political context. Evaluation does not exist in a vacuum, and all evaluations, whether their authors are aware of it or not, are influenced by socio-political factors.

It is important to recognize that the findings which result from an evaluation process can be used in favour of or against particular ideological, social and political agendas (Weiss). One of the main focuses of empowerment evaluation is to incorporate program participants in the conducting of the evaluation process. This is then often followed by some form of critical reflection on the program.


Once a clear understanding of the participants' perspective has been gained, appropriate steps and strategies can be devised, with the valuable input of the participants, and implemented in order to reach desired outcomes. According to Fetterman, [42] empowerment evaluation has three steps: establishing a mission, taking stock, and planning for the future. The first step involves evaluators asking the program participants and staff members to define the mission of the program. Evaluators may opt to carry this step out by bringing such parties together and asking them to generate and discuss the mission of the program.

The logic behind this approach is to show each party that there may be divergent views of what the program mission actually is. Taking stock, the second step, consists of two important tasks.


The first task is concerned with program participants and program staff generating a list of current key activities that are crucial to the functioning of the program. The second task is concerned with rating the identified key activities, also known as prioritization. For example, each party member may be asked to rate each key activity on a scale from 1 to 10, where 10 is the most important and 1 the least important. The role of the evaluator during this task is to facilitate interactive discussion amongst members in an attempt to establish some baseline of shared meaning and understanding pertaining to the key activities.
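A minimal sketch of this rating-and-prioritization step (the activity names and ratings below are hypothetical, purely for illustration) might aggregate each member's 1-to-10 ratings and rank activities by mean score to give the group a shared baseline:

```python
from statistics import mean

# Hypothetical 1-10 ratings of key activities by participants and staff.
ratings = {
    "community outreach": [9, 8, 10, 7],
    "staff training":     [6, 7, 5, 8],
    "client follow-up":   [10, 9, 9, 8],
    "grant reporting":    [4, 6, 5, 3],
}

# Average each activity's ratings and rank from most to least important.
prioritized = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)

for activity, scores in prioritized:
    print(f"{activity:<20} mean rating: {mean(scores):.1f}")
```

The sorted output gives the group a concrete starting point for the facilitated discussion, making disagreements between individual ratings and the group average easy to spot.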

In addition, relevant documentation, such as financial reports and curriculum information, may be brought into the discussion when considering some of the key activities. After prioritizing the key activities, the next step is to plan for the future: the evaluator asks program participants and program staff how they would like to improve the program in relation to the key activities listed.

The objective is to create a thread of coherence whereby the mission generated (step 1) guides the stock take (step 2), which forms the basis for the plans for the future (step 3).


Thus, in planning for the future, specific goals are aligned with the relevant key activities. In addition, it is important for program participants and staff to identify possible forms of evidence (measurable indicators) which can be used to monitor progress towards specific goals. Goals must be related to the program's activities, talents, resources and scope of capability; in short, the goals formulated must be realistic.


These three steps of empowerment evaluation produce the potential for a program to run more effectively and to stay more in touch with the needs of the target population. Empowerment evaluation, as a process facilitated by a skilled evaluator, equips as well as empowers participants by providing them with a 'new' way of critically thinking about and reflecting on programs. Furthermore, it empowers program participants and staff to recognize their own capacity to bring about program change through collective action.


Needs analysis is a crucial step in evaluating programs, because the effectiveness of a program cannot be assessed unless we know what the problem was in the first place.

The transformative paradigm is integral in incorporating social justice in evaluation. Donna Mertens, a primary researcher in this field, states that the transformative paradigm "focuses primarily on viewpoints of marginalized groups and interrogating systemic power structures through mixed methods to further social justice and human rights". The transformative paradigm introduces many different paradigms and lenses to the evaluation process, leading it to continually call the evaluation process into question.

Both the American Evaluation Association and the National Association of Social Workers call attention to the ethical duty to possess cultural competence when conducting evaluations. Cultural competence in evaluation can be broadly defined as a systematic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluative endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder-generated, interpretive means to arrive at the results and further use of the findings.

The root of cultural competency in evaluation is a genuine respect for communities being studied and openness to seek depth in understanding different cultural contexts, practices and paradigms of thinking. This includes being creative and flexible to capture different cultural contexts, and heightened awareness of power differentials that exist in an evaluation context.

The transformative paradigm's axiology, ontology, epistemology, and methodology are reflective of social justice practice in evaluation. These dimensions focus on addressing inequalities and injustices in society by promoting inclusion and equality in human rights. Differences in perspectives on what is real are determined by diverse values and life experiences.


Knowledge is constructed within the context of power and privilege with consequences attached to which version of knowledge is given privilege. Methodological decisions are aimed at determining the approach that will best facilitate use of the process and findings to enhance social justice; identify the systemic forces that support the status quo and those that will allow change to happen; and acknowledge the need for a critical and reflexive relationship between the evaluator and the stakeholders.

While operating through social justice, it is imperative to be able to view the world through the lens of those who experience injustices. These lenses create opportunities to give each theory priority in addressing inequality. Critical Race Theory (CRT) is an extension of critical theory that is focused on inequities based on race and ethnicity. Daniel Solorzano describes the role of CRT as providing a framework to investigate and make visible those systemic aspects of society that allow the discriminatory and oppressive status quo of racism to continue.

The essence of feminist theories is to "expose the individual and institutional practices that have denied access to women and other oppressed groups and have ignored or devalued women" [49].

Given the federal budget deficit, the Obama Administration moved to apply an "evidence-based approach" to government spending, including rigorous methods of program evaluation.

An inter-agency group works toward the goal of increasing transparency and accountability by creating effective evaluation networks and drawing on best practices. CIPP is a decision-focused approach to evaluation that emphasises the systematic provision of information for programme management and operation. The CIPP framework was developed as a means of linking evaluation with programme decision-making.



It aims to provide an analytic and rational basis for programme decision-making, based on a cycle of planning, structuring, implementing, and reviewing and revising decisions, each examined through a different aspect of evaluation: context, input, process and product evaluation.

The CIPP model is an attempt to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a programme. Its four aspects, context, input, process, and product evaluation, assist a decision-maker to answer four basic questions about what the programme should do, how it should be done, whether it is being done as planned, and whether it worked.

Context evaluation involves collecting and analysing needs assessment data to determine goals, priorities and objectives. For example, a context evaluation of a literacy program might involve an analysis of the existing objectives of the literacy programme, literacy achievement test scores, staff concerns (general and particular), literacy policies and plans, and community concerns, perceptions or attitudes and needs.
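To make this concrete, here is a minimal sketch of the kind of analysis a context evaluation might include; the school names, scores and benchmark are hypothetical, purely for illustration.

```python
from statistics import mean

# Hypothetical literacy achievement test scores by school (percent correct).
scores = {
    "Northside Primary": [62, 55, 71, 58, 49],
    "Riverbend Primary": [81, 77, 85, 79, 74],
    "Hillcrest Primary": [54, 48, 60, 51, 57],
}
BENCHMARK = 65  # hypothetical district literacy target

# Flag schools whose mean score falls below the benchmark: these gaps
# help the context evaluation set goals, priorities and objectives.
for school, results in sorted(scores.items(), key=lambda kv: mean(kv[1])):
    avg = mean(results)
    status = "below target" if avg < BENCHMARK else "meets target"
    print(f"{school:<18} mean={avg:.1f}  ({status})")
```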

Input evaluation involves identifying the steps and resources needed to meet the new goals and objectives, which might include identifying successful external programs and materials as well as gathering information. Process evaluation provides decision-makers with information about how well the programme is being implemented. By continuously monitoring the program, decision-makers learn such things as how well it is following the plans and guidelines, conflicts arising, staff support and morale, strengths and weaknesses of materials, and delivery and budgeting problems.