Making Use of Contextually Relevant Monitoring and Evaluation Tools
Have you ever had that gut feeling that something doesn’t feel right? Your indicators might feel the same way about the tools you use to monitor project or programme activities. The same can happen when you are conducting evaluations: everything can be out of place for that one reason – like using kerosene to fuel a diesel-powered truck.
Starting out, because of my early exposure to, and bias towards, digital tools, I assumed that online survey tools could do anything I needed. Little did I know that this assumption would cost me and others a great deal of time and resources. It had not occurred to me that a large demographic of people have significantly limited internet penetration and mobile access. Yet the intervention was meant for them, and they could not complete surveys considered basic, rendering a series of activities, as well as data collected over a three-month period, both costly and redundant.
It is apparently common for those starting out in the field, due to exposure to and bias towards digital tools, to assume that online survey tools can meet every requirement. However, such an assumption can cost both the practitioner and the project valuable resources. When key stakeholders are unable to complete surveys or provide data and feedback, project activities are prolonged, and major concerns such as waste and redundant data arise. In the worst case, it leads to ill-informed decisions, steering programme activities far in the wrong direction.
To put it simply, monitoring and evaluation tools, in the context of this article, refer to the objects, devices, and methodologies that an evaluator uses to collect data, conduct analysis, draw inferences, and report to the necessary stakeholders. While there are many components of M&E that are extremely important, it is safe to say that these tools form the crux of conducting evaluation activities.
While a tool is only as good as the one who uses it, and the context in which it is used, knowing that certain tools exist makes it easier to put them to use. Many tools exist at different levels of Monitoring and Evaluation. Some of the essential ones include the Logframe (or Results Chain, as some call it) and the Theory of Change (ToC). This article focuses on data collection tools.
A key fundamental of any tool selection is defining its intended use and the likely trade-offs. The table below attempts to highlight some commonly communicated use cases and trade-offs of frequently used tools, and is therefore relevant as a guide.
It is imperative to achieve an optimal balance. There are instances where offline tools such as paper-based assessments, checklists, and registers are more effective: they do not require extensive knowledge to deploy or use and can be used without a mobile device, internet access, or significant supervision and cost, depending on the context. The data they capture can also be entered digitally using Excel workbooks and spreadsheets (and, in some cases, survey forms and toolkits like Kobo Collect and ODK).
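As a sketch of what that digitization step can involve, the snippet below checks hand-transcribed paper records for missing required fields before analysis, a common issue when paper forms are typed into a spreadsheet. The column names and sample rows are hypothetical, purely for illustration:

```python
import csv
import io

# Hypothetical records transcribed from paper-based assessments
# into a simple CSV sheet (rows 002 and 003 have gaps).
RAW = """respondent_id,age,attended_session,score
001,34,yes,78
002,,yes,65
003,29,,
004,41,no,52
"""

REQUIRED = ["respondent_id", "age", "attended_session", "score"]

def find_incomplete(csv_text, required):
    """Split digitized rows into complete ones and rows that are
    missing one or more required fields and need field follow-up."""
    clean, flagged = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        missing = [f for f in required if not (row.get(f) or "").strip()]
        if missing:
            flagged.append((row["respondent_id"], missing))
        else:
            clean.append(row)
    return clean, flagged

clean, flagged = find_incomplete(RAW, REQUIRED)
print(f"{len(clean)} complete rows, {len(flagged)} need follow-up")
```

Running a simple completeness check like this before analysis catches transcription gaps while enumerators can still revisit the original paper forms.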
As evident from the table above, some of the most important considerations in the use of tools include:
- Questions to be answered: The evaluation questions to be answered usually set the pace for the processes involved, as they guide the end results. For example, when seeking answers about the sustainability of an inclusive education programme or intervention, there is a tendency to tilt towards qualitative data collection, which in turn influences the type of data collection tool to be used.
- Demography of respondents and stakeholders involved: The stakeholders involved also influence the nature of the data collection tools and methods to be applied. For a literate demographic in typical urban regions, where mobile and internet access are usually available to young people, tools like Google Forms and SurveyMonkey might work. However, for less-connected counterparts, ODK and Kobo Collect, alongside other traditional and paper-based approaches, might be more effective.
- Cost to be incurred vs budget: Typical low-budget programmes may not have the option of paid tools or subscriptions; they are likely to rely on available free or open-source tools and platforms. For well-funded interventions, by contrast, the data collection process is likely to be well coordinated and effectively supported with all requisite tools, as budget constraints are relatively limited. There is also the opportunity to bring in experts to use, or to teach the use of, certain tools to enumerators, data collectors, or other field support personnel.
- Nature of data to be collected: Collecting large datasets with traditional methods is usually tricky and cumbersome, as issues such as missing or incomplete data commonly arise. Concerns around data integrity are also common, since operational gaps often exist in widespread data collection. In such cases, it is advisable to factor in ways to collect data electronically through enumerators or other field support personnel. In addition, some studies have shown that respondents prefer to provide sensitive data anonymously. All of these are important considerations during data collection planning.
- Internet availability/connectivity: In developing countries, it is not unheard of for more than 30% of a country’s population to have limited or no internet access or connectivity. Most individuals in this category reside in hinterland regions, suburbs, or rural areas. It is especially important to take such realities into account so that the methodologies and tools used fit their context. This ensures that whatever data is collected is effective for making inferences and informed decisions.
Other important considerations include whether there is access to devices and technical know-how, the preferred response rate, and the organizational capacity (human and material resources) to conduct monitoring and evaluation with certain tools. Notably, if one is considering SurveyCTO or the ODK toolbox, the differences are laid out in this blog post.
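One lightweight way to weigh these criteria side by side is a simple scoring matrix. The sketch below ranks a few candidate tools for a hypothetical rural, low-connectivity context; the criteria echo those discussed above, but the weights and 1–5 ratings are illustrative assumptions, not benchmarks:

```python
# Weights reflect how much each criterion matters in this (assumed)
# low-connectivity context; they sum to 1.0.
WEIGHTS = {
    "works_offline": 0.30,
    "cost_fit": 0.25,
    "ease_for_respondents": 0.25,
    "handles_large_datasets": 0.20,
}

# Hypothetical ratings (1 = poor fit, 5 = strong fit) for each tool.
CANDIDATES = {
    "Google Forms": {"works_offline": 1, "cost_fit": 5,
                     "ease_for_respondents": 4, "handles_large_datasets": 3},
    "Kobo Collect": {"works_offline": 5, "cost_fit": 5,
                     "ease_for_respondents": 3, "handles_large_datasets": 4},
    "Paper forms":  {"works_offline": 5, "cost_fit": 4,
                     "ease_for_respondents": 5, "handles_large_datasets": 2},
}

def score(ratings, weights):
    """Weighted sum of a tool's ratings across all criteria."""
    return round(sum(weights[c] * ratings[c] for c in weights), 2)

ranked = sorted(CANDIDATES, key=lambda t: score(CANDIDATES[t], WEIGHTS),
                reverse=True)
for tool in ranked:
    print(tool, score(CANDIDATES[tool], WEIGHTS))
```

The numbers themselves matter less than the exercise: making weights and ratings explicit forces a team to debate context fit openly rather than defaulting to the most familiar tool.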
Lastly, after the planning, selection, and design process, it is important to ensure effective alignment in the deployment and use of any tool. This keeps all stakeholders on the same page and prepares them to be effective when the tool is used to collect or provide data. For example, where key stakeholders include focal persons such as community leaders, sub-groups, and key informants, a makeshift town hall can be used to communicate the processes involved and the role each stakeholder plays. This is likely to make the process more seamless, faster, and inclusive, and to help stakeholders spot possible gaps and nip them in the bud. With all this taken into account, a commissioner of an evaluation, and the stakeholders involved, should have a gut free of anxiety – confident that they have used the right fuel for the “evaluation truck”.
About The Author
Abdullahi Ibrahim is a Monitoring, Evaluation, Research, and Learning (MERL) professional with a background in Education and Geographic Information Systems. He has an immense passion for impact evaluation for social good, with six years of in-depth post-tertiary experience across human capital development, strategy and programme design, stakeholder engagement, education leadership, and inclusive technology. Abdullahi is keen on ensuring that interventions work for social good in an informed and locally relevant manner.