- July 19, 2018
- Posted by: Thelma Obiakor
- Category: Economic Development, Monitoring and Evaluation Capacity Building
The economic development space is replete with interventions seeking to solve some of the world's most pressing socio-economic issues. These interventions are often specific to a particular time, place, and/or situation, reflect a time-sensitive weighting/ranking of policy goals, and constitute an intentional course of action in pursuit of a set goal. Given that the weighting of goals varies with time and circumstances, policy interventions usually culminate in scarce resources being allocated to certain prioritized goals. Aiming the policy pendulum at a particular goal results in trade-offs and inevitably raises a key issue in the development discourse: How do you ensure that your chosen interventions are effective, your set goals are being achieved, and scarce resources are being used efficiently?
Cue Monitoring and Evaluation (or M&E, as it is colloquially called), often referred to as the 'holy grail' of economic progress. 'Monitoring' and 'Evaluation' are two terms that have inundated development discourse since the Paris Declaration of 2005, with great emphasis placed on tracking progress toward set goals, benchmarking progress against a specified standard, and highlighting challenges in the policy implementation process. Yet despite the wealth of information that exists on M&E, the two terms remain widely heard of but little understood.
For all that is known about M&E, much misconception still surrounds it. For a newcomer, M&E can be confusing, as most information is steeped in technical jargon. For the more experienced, terms tend to be misused, and intentions and purposes are often misunderstood. To understand what M&E really is, it may be easier to first tease apart what M&E is not, i.e. some of the widely held misconceptions. Below is a non-exhaustive list of these misconceptions:
- The terms ‘Monitoring’ and ‘Evaluation’ are interchangeable.
- M&E is a task that is performed.
- M&E is synonymous with data.
- M&E and research are one and the same.
- M&E is merely a requirement of program strategy.
Having articulated some of the misconceptions surrounding M&E, I will attempt to frame your understanding of the concept by dispelling these misconceptions.
1) The terms ‘Monitoring’ and ‘Evaluation’ are not interchangeable; rather, they are synergetic. Monitoring and Evaluation are two fundamentally different concepts that refer to different functions of the same process/framework. To put it simply, M&E is a process that fosters improved performance and enables decision makers to achieve intended results.
The first function, Monitoring, is a continuous activity that involves the systematic and routine collection and analysis of data/information from programs, for the purpose of tracking progress against intended plans and allocated resources.
Essentially, monitoring is an internal activity that focuses on measuring the implementation process of a program, and is used to track changes in program performance over time. The information garnered through monitoring a program equips stakeholders to make informed decisions about the effectiveness of a program and the efficient use of resources, while the program is still in action. Monitoring focuses on what is being done and how it is being done, and typically includes the views of all the program stakeholders and beneficiaries.
The second function, Evaluation, occurs at a specific point in time (usually the end of a program) and it involves the systematic assessment of the collected data from a project/program in order to analyse how well expected goals have been met, and to determine the effect and the overall impact of a program.
Essentially, evaluation is an activity that assesses the entire cycle of a program and delves deeper into the relationship between the components/interventions of a program and the effects produced by it. Evaluation allows stakeholders to draw conclusions about the relevance, effect, impact, efficiency, transferability and sustainability of a program. It also provides a platform for learning specific lessons that may be applicable to other programs, or broad lessons that may be used to improve the efficiency and effectiveness of programs in general. Evaluation focuses on what has been done, measures both intended and unintended impact, and is used to inform future decision making.
Differences between Monitoring and Evaluation: For proper application of the Monitoring and Evaluation framework, it is necessary not only to understand what the terms mean, but also to disentangle the basic differences between them. There is a fine line between monitoring and evaluation, and that line lies in two factors: timing, and the focus of assessment.
- Timing: Monitoring is a continuous activity that occurs throughout the implementation of the program. Evaluation on the other hand usually occurs at the end or at specific points during the program/project.
- Focus of Assessment: The M&E framework raises key questions about how stakeholders can learn from experience and from mistakes made.
The monitoring function specifically asks the key measurement questions of:
- How well has the program been implemented?
- Did the program benefit the intended people or yield the intended result?
- Were resources used efficiently?
Evaluation, on the other hand, goes beyond measurement and asks the higher-order question of:
- Is there a causal relationship between the components of a program, including the inputs invested into a program, and the overall impact on the output/outcome of a program?
In summary, monitoring tells a story: it gives insight into and summarises what has been done, elucidates the success or lack thereof of a program, and indicates areas where changes need to be implemented. The answers to a monitoring question will reveal descriptive accounts of what happened during a program for the purpose of tracking program performance. The evaluation questions will yield answers that allow for informed judgment/assessment of the program for the purpose of learning and improvement.
Which is more important: M or E? Despite the differences between Monitoring and Evaluation, it is important to remember that they are intricately and integrally linked: Monitoring (in most cases) provides the data used for evaluation, and the monitoring process usually involves elements of evaluation in the form of assessment of the implementation process. They are not in competition, and need not be ranked in order of importance. Both are integral parts of the framework.
2) M&E is not a task that is performed; rather, it is a tool that is used to achieve a goal.
It offers a formal means of learning from decision outcomes, and provides the evidence needed to purposefully inform decisions. Specifically, Monitoring is a management tool used to gain valuable feedback on the progress of a program, while Evaluation is a tool used to inform and improve knowledge on the basis of previous experience/evidence.
Viewing M&E as a task limits and undermines the scope and potential of a good M&E framework. Tasks do exist within the larger M&E framework; a task is essentially an activity that needs to be accomplished with a view toward achieving a bigger/core goal. For instance, an example of a monitoring task would be data collection. If data is collected, but the monitoring function is not applied to the data, or stakeholders are not educated on the reasons for collecting it, then it is merely data collection for its own sake.
3) M&E is not synonymous with data. It is more than data; data is just a tool used as part of the M&E framework. Data refers to the facts/information, both quantitative and qualitative, that are collected and analysed in order to perform both the monitoring and evaluation functions.
Developing an actual M&E framework with clearly stated goals, expected outcomes and strategic questions for both monitoring and evaluation should precede and guide data collection and analysis. The type of data collected will depend on the questions you are seeking answers to.
Data for its own sake is essentially irrelevant unless you know what information you need to glean from it. For this reason, it is important to disentangle the difference between data and M&E. Before you delve into any conversation about data, you should already have established an M&E framework. This way, the data you collect will be guided by the information you seek.
4) M&E is not always synonymous with research. Whereas the difference between Monitoring and Evaluation is a fine line, the difference between Research and M&E is not so fine. Depending on how both terms are defined/conceptualised, they could be viewed as completely dichotomous, or one as a subset of the other.
The key to understanding each concept lies in understanding its purpose. Both M&E and research use similar methods for data collection; however, the intended uses of the data collected are different. As mentioned, M&E is concerned with generating specific assessments/judgments and actionable learning about programs, while research generates generalizable descriptive knowledge about how things are and seeks to explain why things are the way they are.
The purpose of research is to observe and learn, with the ultimate goal of generating new knowledge in an area of study. The purpose of M&E, on the other hand, is to assess and judge, with the ultimate aim of improving the efficiency and effectiveness of programs.
Confusion about the two occurs at a very high level of abstraction, where one begins to extract the different tools and methods used in both research and M&E. Such extraction will reveal considerable overlap between the two. However, as a general rule of thumb, if you are unsure what category your work falls under, just ask yourself: what is the purpose?
5) M&E is not merely a requirement of program strategy; it is a core component of a program. A well-established M&E framework is more than just a tool required for tracking progress and measuring performance/outcomes; it is a central component of results management, informed learning and accountability. A comprehensive M&E framework should be closely tied to the program/project plan and must reflect the program's objectives and goals. Reducing M&E to just a requirement undervalues this critical component and underlies the failures inherent in many M&E systems.
In conclusion, monitoring and evaluation are individual functions that play distinct roles in development programs, and together they are crucial to the success of those programs. To clarify, as a tool on its own, M&E is not the holy grail it is often made out to be. The actual holy grail in economic development is to discover effective interventions that can be easily replicated across varied settings. Used appropriately, M&E can indeed become the tool that helps achieve this holy grail. Understanding the definitions of the terms 'Monitoring' and 'Evaluation' and their purposes is only the tip of the iceberg. To meet the vital need of ensuring the effectiveness and efficiency of programs, it is of utmost importance to build the capacity and core competencies to fully utilize these tools to achieve your goals and objectives.