Monitoring is an important part of project management. Proper monitoring of development interventions is critical for collecting data for management and learning, keeping track of progress, steering the implementation process towards the intended results, and providing sufficient reporting to account to donors, beneficiaries and partners. Monitoring provides information during the life of the project, so that adjustments and/or modifications can be made if necessary. As demands for greater accountability and real results have increased, there is an attendant need for enhanced results-based monitoring of projects and programmes.
While all projects require good monitoring, the size of the project and the resources available must be considered when establishing monitoring components and tailoring them to the specific needs of a project. A healthy balance has to be found to meet the needs of communities, partners, project staff and donors for timely field-level information, to keep track of progress, and collect data for management and learning.
With my work I tried to show that monitoring does not necessarily have to be complicated and cumbersome, and that even children are able to monitor changes in their communities.
Monitoring a project's progress towards its objective is crucial to ensuring the continuity of project benefits beyond the lifetime of the project itself. In many projects local people are only involved in the actual data collection process, and little input is sought from the community regarding their data needs and expectations. As a result, villagers see M&E largely as a donor-driven process. Moreover, villagers are seldom involved in analysing monitoring information or taught how to use monitoring data to assess the quality of their development plans and projects and to learn how to improve the development process.
To promote wider ownership of monitoring it is important that communities are involved in defining indicators, methodologies and processes to monitor and evaluate development based on their own worldview and local experience.
Possible steps for introducing a participatory monitoring system include:
Step 1: Discuss reasons for monitoring. Show communities the benefits and purpose of monitoring and let them discuss how monitoring information could help them to improve development activities.
Step 2: Review activities and objectives in the village action plan. Trigger community indicators by asking guiding questions.
PLA tools developed during the assessment and design process are another source for generating community indicators. ‘Wealth’ or ‘well-being’ rankings can be used to establish local terms and definitions of ‘well-being’. Child Life Timelines are suitable for learning about local definitions and indicators of ‘child well-being’.
Selecting the best indicators is, however, not always easy. It is in fact a balancing act between choosing locally-relevant indicators, and those that can be applied more widely. It is therefore important that indicators identified by communities are compared and matched with national standard indicators. By doing so, local communities are better able to communicate development results to government entities.
Step 3: Decide which information gathering tools are needed. For each indicator the most appropriate information gathering tool must be chosen.
Step 4: Decide who will do the monitoring. Monitoring may require people with specific skills such as bookkeeping or mathematics. It will also require a certain amount of labour (time) from people. Those with the right skills and the time have to be identified.
Step 5: Analyze and present results. It is important that the information gathered is analyzed at specific points throughout the activities. The analysis can be discussed at community meetings or posted on community notice boards. The community will then know whether activities are progressing as planned or whether changes or modifications are required.
Many project monitoring systems focus too heavily on generating information about project activities based on simple quantitative indicators: e.g. # of people trained, # of water systems built, […]. Output-level monitoring information is used mainly to report to donors whether agreed activities have been undertaken and completed. There is a strong focus on quantitative targets and ‘bean counting’ rather than on the quality of outputs. However, this approach does not give project managers and stakeholders an understanding of whether they have produced the actual, intended results.
Results-oriented monitoring differs from traditional activity-focused monitoring in that it moves beyond an emphasis on activities and helps to answer important management questions about whether the intended results are actually being achieved.
One way to achieve this is to move away from simple quantitative indicators towards compound indicators. These indicators have a standard embedded in them that needs defining and assessing.
For the proper measurement of compound indicators it is recommended to develop simple monitoring forms (e.g. checklists, questionnaires, observation forms) that can be used to check standards during monitoring visits.
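The idea of checking a compound indicator with a simple form can be sketched in a few lines: an output only counts towards the indicator if every standard on the checklist is met. The standards named below (safe water, committee maintenance, accessibility) are hypothetical examples, not prescribed ones.

```python
# A minimal sketch of scoring a compound indicator with a checklist.
# The standards listed here are illustrative assumptions only.

def meets_compound_indicator(checklist):
    """An output counts towards the indicator only if all standards hold."""
    return all(checklist.values())

# Hypothetical checklist filled in during a monitoring visit:
water_system = {
    "delivers_safe_water": True,       # water quality tested and acceptable
    "maintained_by_committee": True,   # a village committee does the upkeep
    "accessible_to_all": False,        # some households cannot reach it
}

# This water system would not be counted, because one standard fails.
print(meets_compound_indicator(water_system))
```

Counting only the outputs that pass all standards turns a ‘bean count’ (# of systems built) into a quality-sensitive figure (# of systems built *to standard*).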
A wide range of methods and tools can be used for monitoring. While the focus is often placed on quantitative measurement (e.g. questionnaires), there are also other ways to monitor whether a project is achieving its outputs:
Most project teams conduct regular field visits during which they observe what is actually going on at project sites. Often these visits are done informally, without systematically collecting and recording data. ‘Structured observation’ is a method whereby conditions or key behaviours are observed through a systematic, structured process, using well-designed observation record forms. Structured observations are easy to perform and can generally be implemented rapidly and unobtrusively.
Because of typical time and resource constraints, structured observation has to be selective, looking at a few key characteristics or phenomena that are central to the quality of the implementation process.
The observation record form should list the items to be observed and provide spaces to record observations. These forms are similar to survey questionnaires, but investigators record their own observations, not respondents' answers. Observation record forms help standardize the observation process and ensure that all important items are covered. They also facilitate better aggregation of data gathered from various sites (e.g. households, villages) or by various investigators.
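Because every investigator records the same items on the same form, results can be tallied across sites with very little effort. A minimal sketch of such aggregation, assuming each completed form is a record with yes/no observation items (the field names here, such as `water_point_functional`, are illustrative, not a prescribed form layout):

```python
# Sketch: aggregating structured observation records across sites.
# Each dictionary stands for one completed observation record form.

def share_observed(records, item):
    """Return the share of records in which `item` was observed (True)."""
    observed = [r[item] for r in records if item in r]
    if not observed:
        return None  # item not covered by any form
    return sum(observed) / len(observed)

records = [
    {"site": "Village A", "water_point_functional": True},
    {"site": "Village B", "water_point_functional": False},
    {"site": "Village C", "water_point_functional": True},
]

share = share_observed(records, "water_point_functional")
print(f"Functional water points observed at {share:.0%} of sites")
```

The same tally works whether the records come from different villages or from different investigators, which is exactly what the standardized form makes possible.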
The two pile sorting exercise is a good tool to explore people’s understanding of a development issue.
How to do it:
1. Give out the sets of two pile sorting drawings/photos and two heading cards – one with the word ‘good’, the other with the word ‘bad’. Symbols representing these qualities should be printed on each heading card (e.g. smiley face, sad face);
2. Ask the participants to sort the drawings/photos into two piles (e.g. good – those which they think show activities that are good for their health; bad – those which they think show activities that are bad for their health);
3. Once all the drawings/photos have been sorted, ask the participants to explain why they made these choices;
4. Calculate the percentage of drawings/photos that have been sorted into the correct category.
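The scoring in step 4 can be sketched as a simple tally against an answer key. The card names and piles below are illustrative assumptions, not part of any standard card set.

```python
# Sketch: scoring a two pile sorting exercise against an answer key.
# Card names and classifications are hypothetical examples.

answer_key = {
    "wash_hands": "good",
    "open_defecation": "bad",
    "boil_water": "good",
    "uncovered_food": "bad",
}

# How one participant (or group) actually sorted the cards:
participant_piles = {
    "wash_hands": "good",
    "open_defecation": "good",   # sorted into the wrong pile
    "boil_water": "good",
    "uncovered_food": "bad",
}

correct = sum(1 for card, pile in participant_piles.items()
              if answer_key.get(card) == pile)
percentage = 100 * correct / len(answer_key)
print(f"{correct}/{len(answer_key)} cards sorted correctly ({percentage:.0f}%)")
```

Repeating the same exercise later with the same card set gives a rough before/after measure of how understanding has changed.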
The smiley scale uses a five-point scale to capture participant perceptions and satisfaction relating to a given indicator (e.g. quality of education services, health services). The smiley scale tool can be used across age lines and is suitable for children as well as adults. Whether illiterate or highly educated, everybody can understand the message in the ‘smiley’ rating scale.
How to do it
1. Decide on key characteristics/ items of the work to be monitored;
2. Formulate research questions. Research questions can be generated in a participatory fashion or you can use predetermined, standard research questions. The research questions must be formulated as positive statements of opinion that can be evaluated by stakeholders according to whether they ‘strongly agree’, ‘agree’, are ‘neutral’, ‘disagree’, ‘strongly disagree’, or ‘don’t know’;
3. Introduce the smiley scale scoring system;
4. Use a practice indicator to ensure that all the participants are fully aware of the process and the use of the smiley scale score;
5. Prepare a blank matrix (see illustration below), with the statements to be evaluated, and the various levels of agreement or disagreement. Write the meaning of the smiley faces underneath them (‘strongly agree’, ‘agree’, etc.);
6. Then turn the matrix away from the group so that participants can vote privately;
7. Give each participant one voting dot (stone, seed, leaves, etc.) per statement to be evaluated;
8. Instruct participants to place one, and only one, dot per statement, in the column that matches their opinion;
9. Ask participants to vote one by one;
10. Calculate the results (or ask a participant to calculate the results) for each statement (strongly agree = 5; agree = 4; neutral = 3; disagree = 2; strongly disagree = 1; don’t know = 0 and the vote is not counted). Calculate the mean for each research question, and interpret the results together with the group.
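The tallying in step 10 can be sketched as follows, applying the scoring rule above: each agreement level maps to a score from 5 down to 1, and ‘don’t know’ votes are excluded from the mean. The vote counts are illustrative.

```python
# Sketch: tallying smiley scale votes for one statement, following the
# scoring rule in step 10. The votes themselves are made-up examples.

SCORES = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

votes = ["strongly agree", "agree", "agree", "neutral",
         "disagree", "don't know"]

# 'don't know' is not in SCORES, so it is dropped from the count.
counted = [SCORES[v] for v in votes if v in SCORES]
mean = sum(counted) / len(counted)
print(f"{len(counted)} counted votes, mean score {mean:.1f}")
```

The mean per statement can then be compared across statements, or across monitoring rounds, to see where satisfaction is rising or falling.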