Provide a critical comparison and analysis of two of the five paradigms for monitoring, evaluation and impact assessment, in order to demonstrate an understanding of which paradigms are most appropriate for conducting MEIA in which development contexts
Must critically analyse the perceptions of the nature of evidence, and how these impact upon the MEIA of development projects
Examine critical debates and key issues surrounding monitoring, evaluation and impact assessment in international and community development project cycles, to build appropriate professional skills and understanding of how to design and implement appropriate monitoring, evaluation and impact assessment strategies
Pragmatic Evaluations
Indigenous Evaluations
At the beginning of the 21st century, world leaders gathered to discuss what the development goals of this century would be. These discussions led to the creation of the Millennium Development Goals. These goals targeted areas such as daily income, universal education, gender equality, maternal health and environmental sustainability, to name a few.
Most of these goals had the set deadline of 2015, which passed last year. Though the most recent report notes significant progress towards achieving the goals, it also acknowledges shortfalls and uneven success rates (United Nations, 2015). As we discuss the topic of human development, and the investment of time, funding and personnel it involves, the significance of project evaluation becomes more apparent. What works, what has not worked and what needs to be adjusted in order to deliver on outcomes are all questions a good evaluation aims to answer.
However, approaches to project evaluation differ greatly. Central to all evaluations are the five main attributes put together by the Joint Committee on Standards for Educational Evaluation (Mertens, 2012). They are: Utility, Feasibility, Propriety, Accuracy and Meta-evaluation.
This essay examines the Pragmatic (or Impact) and Indigenous approaches to monitoring, evaluation and impact assessment, and considers the contexts in which each approach is most appropriate.
These two approaches were chosen because each has its own strengths as well as shortcomings, which will be explained further below.
Pragmatic Evaluation
Your role as the evaluator
Consultation with stakeholders on what the problems are and what the goals and objectives should be.
Philosophical and theoretical lens
Whilst there exists a single reality, all individuals have their own unique interpretation of that reality.
Gain knowledge in pursuit of desired ends, as influenced by the evaluator's values and by contextual values and politics
The evaluand and its context
Method (design, research purposes and questions, stakeholders and participants, data collection)
Assumption: match methods to the specific questions and purposes of the research; quantitative and qualitative methodologies can be used, or evaluators can work back and forth between the two to exploit the power of 'triangulation' of methods and their respective findings
Management and budget (reports and utilisation)
Reports should be presented in a form that speaks to usefulness and effectiveness.
Criteria:
How useful is it?
Can it be implemented in this setting?
Is it humane, ethical, moral, proper, legal and professional?
Is it dependable, precise, truthful and trustworthy?
Do you assure and control the quality of the evaluation research?
Though the earliest evaluation methods used a positivist methodology, applying a scientific gaze to community development, several evaluation practitioners have since recognised its shortcomings. The Pragmatic approach identified significant challenges with positivistic evaluation styles. More often than not, positivistic approaches do not involve stakeholders, owing to their pursuit of an objective result. This can result in significant resource costs, as external staff and contractors must be hired to conduct such evaluations (Parry, Platt and Gnich, 2001). Furthermore, Cisneros-Cohermour (2005) identified significant limitations in the apparent 'objectivity' of teaching research done in the US: researchers were often based at the universities where they taught, and their results supported their own positions either directly or indirectly.
Perhaps one of the greatest issues with this first paradigm of evaluation theory was the lack of stakeholder participation in the design and implementation of the evaluation (Gertler et al., 2011). This often resulted in reports going unread by practitioners and leaders in the field.
Impact evaluators sought to rectify these shortcomings by approaching assessments from a different philosophical angle. Where positivistic methods pursued complete objectivity, pragmatic evaluators recognised that there were different perceptions of the same project and thus sought to involve all stakeholders in the evaluation process (Shaker, 1990). Consequently, pragmatic evaluators are more concerned with the ‘usefulness’ of the data in assisting decision-makers in their role.
Furthermore, the inclusion of participatory methods in the evaluation process provided a more empowering approach that was missing from the positivistic paradigm (Chambers, 2009).
Impact evaluations typically use a mix of qualitative and quantitative methods to determine their results. Ultimately, the methodology is guided by the aims and objectives of the study and by what is most likely to show effectiveness (or the lack thereof).
Like all approaches to evaluation, however, impact evaluation is not without its conditions and faults. In 2004, the Center for Global Development convened a working group, the Evaluation Gap Working Group, to identify the major problems with impact evaluations, as well as solutions for how the paradigm could be taken more seriously.
This led to the report When Will We Ever Learn? Improving Lives through Impact Evaluation, which outlined some of the problems with impact evaluation reports to date, such as a lack of adherence to the evaluation standards set by the Joint Committee, as well as a lack of rigour being exercised by evaluators (Center for Global Development, 2006). The work on the report led to the creation of the International Initiative for Impact Evaluation (3ie), an international body aimed at improving the quality of evaluations within the paradigm (Levine, 2016).
3ie adopted the position of providing grants to impact evaluators, collating the resulting evaluations, conducting systematic reviews of the evidence and, finally, producing policy briefs for international bodies (3ieimpact.org, 2016). The body states that by understanding the impact of interventions, programmes and policies, and by developing scenarios of what would have happened in their absence, it is able to provide better evidence.
For instance, a study of the effect of microfinance initiatives in Hyderabad, India, adopted a randomised controlled trial design, comparing households in areas where microfinance was taken up with comparable areas where it could have been offered but was not (Banerjee et al., 2015). This study, together with several other evaluations of microfinance interventions, informs a policy brief for practitioners in the sector (3ieimpact.org, 2016).
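To make the counterfactual logic behind such randomised designs concrete, a minimal sketch follows: the estimated impact is simply the difference in average outcomes between randomly assigned treatment and comparison areas. All figures and variable names here are hypothetical placeholders for illustration, not data from Banerjee et al. (2015).

```python
from statistics import mean

# Hypothetical monthly household business profits, in rupees (illustrative only)
treatment_group = [1250, 980, 1430, 1100, 1320]   # areas where microfinance was offered
comparison_group = [1010, 950, 1180, 990, 1120]   # comparable areas without the intervention

# With random assignment, the comparison group approximates the counterfactual:
# what would have happened in the absence of the intervention.
estimated_impact = mean(treatment_group) - mean(comparison_group)
print(f"Estimated average impact: {estimated_impact:.0f} rupees per household")
```

In practice, evaluators would also test whether such a difference is statistically distinguishable from zero, but the difference-in-means comparison is the core of the approach.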
The Evaluation Gap Working Group found that impact evaluations were most effective for new or expanding projects where effectiveness had not been established. Impact evaluations also had to be built into the design of the project in order to deliver on outcomes.
CIPP
The Context, Input, Process, Product (CIPP) model is one of the more established approaches to impact evaluation. Designed by Daniel Stufflebeam in 1967, the CIPP model aims to involve all stakeholders in the evaluation process, identifying the project background (Context), its resourcing (Input), the processes by which the outcome will be delivered (Process) and, finally, the outcome itself (Product). Stufflebeam and Coryn (2014) believe the approach is most effective when the evaluator regularly interacts with the stakeholders, as this allows the evaluator to stay abreast of new information while keeping the decision-making process informed.
The CIPP model does not aim to prove that a program works, but rather to improve the program's approach.
This form of evaluation was appropriate for determining the effectiveness of the Kaohsiung Suicide Prevention Center’s efforts in reducing suicide as well as identifying where shortfalls needed to be addressed (Ho et al., 2010).
Utilization Focused Evaluation
Utilization-focused evaluations centre on the stakeholders who will 'use' the project's findings and act on them; identifying these users becomes the first task of the evaluator. Where CIPP offers a general guide to evaluation, the UFE approach accommodates whichever method best meets the needs of the stakeholders (Mertens, 2012).
This approach can be useful for identifying the priority users of a project and effectively addressing their needs for the evaluation (Stufflebeam, 2014).
Indigenous Evaluation
Its central tenet is the importance of relationships: with ourselves, with other people, and with everything on earth and in the universe.
Your role as the evaluator
Philosophical and theoretical lens
Knowledge is relational. You are answerable to all your relations when you are doing research.
Foundationally (i.e. ontologically) based on doing and being relationally, rather than on an 'abstract' theory imposed on others or used for indoctrination.
The evaluand and its context
Method (design, research purposes and questions, stakeholders and participants, data collection)
- Consultation of Elders is vital.
- Indigenous knowledge informs the process
- A cyclical approach
Management and budget (reports and utilisation)
- Addresses the role of the researcher in questioning 'givens'
- Consent involves the individual, community, group and collective
- The 'knower' is named when they give permission