On this page:
Monitoring, evaluation and learning (MEL) refers to different parts of the monitoring and evaluation cycle(iii).
It involves:
- monitoring a project to see how it is travelling in real time compared with your expectations, or to identify unintended issues or outcomes
- evaluating a project to understand more about its implementation, impacts, and value
- learning from the information gathered so you and others can improve what you are doing.
Evaluation and monitoring should be planned and scoped at the same time as the design of a project – don’t leave them until the end. You should also consult your evaluation plan regularly throughout your project management work.
Good MEL practice can help you work out the difference your project is making. This will strengthen its delivery. In addition, you and your stakeholders can learn from your experience to improve the project’s performance in the future.
Victorian organisations are delivering critically important work to end family violence and violence against women. We know that evidence-based prevention activities work – however, the evidence base is still emerging.
Understanding the impact of our efforts in prevention is critical to our learning so we can change the norms, practices, and structures that drive family violence and violence against women.
About monitoring
Monitoring is a systematic process for collecting and reviewing data and information across the life of a project (rather than just at the end)(iv). Regular monitoring allows you to assess real-time progress against objectives and to identify possible risks at an early stage. Monitoring also creates the potential for higher-quality evaluation reporting. By monitoring from the beginning of a project, you can expand the type and quality of data you collect throughout.
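Monitoring tools do not need to be elaborate; even a simple structured record of indicators against targets can do the job. The following is a minimal sketch in Python (the indicators, figures and tolerance threshold are all hypothetical, not a prescribed approach) of how routine monitoring data might be compared against expectations so that off-track results surface early:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str      # what is being tracked, e.g. workshops delivered
    target: float  # expected value at this point in the project
    actual: float  # value recorded through routine monitoring

def flag_off_track(indicators, tolerance=0.8):
    """Return indicators whose actual values fall below the given
    proportion of their target, so risks can be identified early."""
    return [i for i in indicators if i.actual < i.target * tolerance]

# Hypothetical monitoring snapshot at a project's mid-point
snapshot = [
    Indicator("Workshops delivered", target=10, actual=9),
    Indicator("Participants reached", target=200, actual=120),
]

for indicator in flag_off_track(snapshot):
    print(f"Off track: {indicator.name} "
          f"({indicator.actual:.0f} of {indicator.target:.0f} expected)")
```

Recording monitoring data in a consistent structure like this also makes it easier to reuse that data for evaluation later.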
There are several important reasons for monitoring. Some of these are listed in the following table.
Why we monitor
Monitoring type | Benefits |
---|---|
Activity/process monitoring | Tracks the use of inputs and resources towards project activities; ensures project activities are being delivered efficiently and on time; helps identify early project risks; enables early action to improve the project; supports reporting requirements; enables management of work planning and performance |
Compliance monitoring | Ensures the project is meeting funder requirements (such as budget); ensures ethical and contractual requirements to relevant stakeholders are being met |
Results monitoring | Tracks and assesses the effects of project delivery (both outputs and outcomes); gathers data for formative and summative evaluation; identifies unintended impacts; links to evaluation, learning and reporting so the activity can inform decision-making |
Context monitoring | Tracks and helps understand the setting in which the project operates, especially factors that affect assumptions and risks; identifies environmental changes affecting the project |
Organisational monitoring | Tracks sustainability and capacity-building variables (for example, communication and collaboration amongst partners) |
Excerpts adapted from International Federation of Red Cross and Red Crescent Societies (2011) Project/programme monitoring and evaluation (M & E) guide [PDF](v). IFRC, Geneva.
Suggested resources:
- The Better Evaluation website contains great information and tips on monitoring practice.
- Also see the International Federation of Red Cross and Red Crescent Societies (2011) Project/programme monitoring and evaluation (M & E) guide [PDF].
About evaluation
Michael Scriven (1991), an international leader in evaluation theory, sums up evaluation as 'the process of determining the merit, worth, or value of something, or the product of that process'(vi).
In relation to primary prevention projects, evaluation means applying a value judgment to data and information you collect to identify if your project is meeting its primary prevention goals, including why or why not. Evaluation work can tell you about areas of strength and areas for improvement. It also helps you estimate the impacts and cost-effectiveness of your projects.
Evaluation is important because it can provide a picture of your success and an opportunity for learning and evidence-informed decision-making for future projects.
Types of evaluation
There are many different reasons for conducting an evaluation and, therefore, many different approaches to evaluation(vii).
Three common types of evaluation are formative, summative, and developmental(viii). Formative and summative evaluation are ways of assessing projects already in progress. Developmental evaluation is a real-time process that helps you reflect on and improve your project as you are designing it.
Formative evaluation occurs during the early parts of a project, before longer term outcomes or impact are possible to assess. As the word suggests, formative evaluation occurs as you are ‘forming’ an understanding of what works or what doesn’t.
Formative evaluation tends to focus on process and implementation – that is, how the administration, management, and process elements of a project are ‘rolling out’, including what’s working and what might need to change. It can also focus on early progress towards outcomes.
Summative evaluation focuses on outcomes and impact. It assesses whether the project is achieving what it set out to do, including whether it’s sustainable. This often informs decisions about whether it should cease, continue, or be expanded(ix).
Developmental evaluation is an increasingly popular form of real-time evaluation(x). It relies on a close relationship between the evaluator and project owner to refine and iterate a project as it is being developed. Developmental evaluation is particularly helpful in complex or uncertain environments when project developers are learning as they design. The learning that comes from developmental evaluation allows you to make adjustments more quickly, compared with waiting for a summative evaluation that occurs at a later time.
The following table offers a useful summary of the differences between formative and summative evaluation.
Formative and summative evaluation
Category | Formative evaluation – Improve | Summative evaluation – Prove |
---|---|---|
Information purpose | Provides information that helps you improve your project. Generates periodic reports. Information can be shared quickly. | Generates information that can be used to demonstrate the results of your project to funders and your community. |
Information type | Focuses most on project activities, outputs, and short-term outcomes for the purpose of monitoring progress and making mid-course corrections when needed. | Focuses most on the intermediate outcomes and impact of a project. Although data may be collected throughout the project, the purpose is to determine the value and worth of a project based on results. |
Use of information | Helpful in bringing suggestions for improvement to the attention of staff or managers. | Helpful in describing the quality and effectiveness of your project by documenting its impact on participants and the community. |
Adapted from the W.K. Kellogg Foundation(xi), based on Bond, Boyd and Montgomery (1997)(xii).
Suggested resources:
- Michael Quinn Patton (a leader in utilisation-based evaluation and developmental evaluation) has created a clever short video listing 100 different types of evaluation.
- The Free from Violence Monitoring and Evaluation Strategic Framework [PDF] (pgs. 54–56) provides information about evaluation types, approaches and levels that are relevant to the Free from Violence Strategy.
Evaluation criteria
Those working in project evaluation are often looking to see ‘what works’. But what does ‘working’ mean? The following is a summary of criteria commonly used in government, international development, community development, and social justice settings for the evaluation of projects. Key evaluation questions are often framed around these criteria (please see Step 4).
Appropriateness/relevance
The extent to which the project addresses an identified need. The extent to which the project was relevant or suited to (or the best way of) delivering the outcomes.
Fidelity(xiii)
The extent to which the project was delivered as intended by its developers and in line with the project model.
Efficiency(xiv)
The extent to which the relationship between inputs and outputs is timely, cost-effective and to expected standards.
Effectiveness(xv)
The extent to which the intervention achieved, or is expected to achieve, its objectives, and its results, including any differential results across groups.
Outcomes
The extent to which we reached our short-, medium- and long-term outcomes, drawing on the measures we used for assessment.
Impact(xvi)
The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, longer-term effects.
Sustainability
The extent to which the outcomes or benefits of the project can be sustained, and what is required to enable this. The degree to which there are indications of ongoing benefits that can be attributed to the project.
In relation to each of these, we might also ask, what is our evidence to support our answers to these questions?
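As a concrete illustration, key evaluation questions might be framed around these criteria along the following lines. This is a hypothetical sketch in Python; the question wording is illustrative only and should be tailored to your project (see Step 4):

```python
# Hypothetical key evaluation questions mapped to common criteria.
# The wording is illustrative, not a prescribed set of questions.
key_evaluation_questions = {
    "appropriateness": "Does the project address the identified need?",
    "fidelity": "Was the project delivered as intended by its developers?",
    "efficiency": "Were inputs converted to outputs on time, to standard and cost-effectively?",
    "effectiveness": "Did the project achieve its objectives, including across different groups?",
    "outcomes": "Were short-, medium- and long-term outcomes reached?",
    "impact": "What longer-term effects, intended or unintended, were generated?",
    "sustainability": "Can the project's benefits be sustained, and what would this require?",
}

for criterion, question in key_evaluation_questions.items():
    print(f"{criterion.title()}: {question} What is our evidence?")
```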
Suggested resources:
- Take a look at the Organisation for Economic Co-operation and Development's (OECD's) DAC criteria for evaluation.
- See also the criteria for evaluation in the World Health Organization's (2013) Evaluation Practice Handbook [PDF].
Difference between monitoring and evaluation
Monitoring includes the ongoing collection and use of data to ensure tracking of outputs and outcomes as they unfold. Monitoring enables early glimpses of the direction of your work, which gives you the power to make changes as you go along.
Evaluation uses that data (and collects other data) to make more comprehensive value judgements about key evaluation questions. Monitoring data provides insights into activities as they happen, which assists in the review and improvement of projects at regular intervals.
Stage 3 Step 6 and Step 7 provide more detail on interpreting data and making value judgements.
While monitoring and evaluation are different, they have a complementary relationship. The following table compares aspects of monitoring and evaluation.
Monitoring and evaluation – Comparative table
Category | Monitoring | Evaluation |
---|---|---|
Scope | What the project is doing | How the project is performing |
Purpose | Keeps a real-time eye on developments in the project; enables shifts or changes towards improvement during the activity | Makes value judgements about whether the project meets outcomes, based on a balanced assessment of the relevant criteria |
Measure | Inputs, activities, outputs | Criteria and measures against outcomes, set both internally (in self-evaluation) and externally (by independent evaluators) |
Main responsibility | Usually internal to an organisation | Internal staff or evaluators |
Resourcing | Embedded as part of management processes | May require additional budget or specific resources |
Reporting | Regular reporting; focus on outputs | Reporting at agreed intervals; detailed reporting |
Adapted from the Centre for Evaluation and Research Evidence(xvii), drawing on Markiewicz and Patrick (2015)(xviii).
About learning in MEL
A common focus of monitoring and evaluation is to report on how you are tracking against outputs and short- or long-term outcomes. While reporting on outputs and outcomes is an important purpose, learning from these activities is equally important(xix). Good monitoring and evaluation turns data into information that can be used to inform continuous improvement in project delivery.
In an ideal world, we learn from the monitoring and evaluation data collected and apply these learnings during any of the four stages of project delivery. Monitoring data and information can be used to reflect on your management approach or community engagement strategies, or to see whether you are on track to reach particular outputs or outcomes.
MEL frameworks
A MEL framework is the suite of documents you will prepare as you plan for monitoring and evaluation. It consists of documents you will prepare across Stages 1 and 2 of this toolkit. Some organisations bring this framework information together into a single high-level document.
You can find all the templates you need to create a MEL framework on the Resources and templates page.
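As a rough illustration of how that information fits together, the sketch below (in Python, with hypothetical field names; it is not one of the toolkit's templates) shows the kind of detail a MEL framework typically records for each key evaluation question:

```python
# A hypothetical MEL framework structure: field names are illustrative,
# not a prescribed template from this toolkit.
mel_framework = [
    {
        "evaluation_question": "Is the project being delivered as intended?",
        "criterion": "fidelity",
        "indicators": ["sessions delivered vs planned", "facilitators trained"],
        "data_sources": ["activity logs", "training records"],
        "collection_frequency": "monthly",
        "responsibility": "project coordinator",
    },
    {
        "evaluation_question": "What difference is the project making?",
        "criterion": "effectiveness",
        "indicators": ["change in participant attitudes (pre/post survey)"],
        "data_sources": ["participant surveys"],
        "collection_frequency": "project start and end",
        "responsibility": "evaluator",
    },
]

for row in mel_framework:
    print(f"{row['criterion']}: collect {', '.join(row['data_sources'])} "
          f"{row['collection_frequency']} ({row['responsibility']})")
```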
Suggested resources:
- For those keen to map out a highly detailed MEL Framework, Markiewicz and Patrick (2015) provide an online template.
- The Department of Foreign Affairs and Trade website also has some great real-world example MEL frameworks [PDF].
Endnotes
(iii) Network of International Development Organisations in Scotland (n.d.) Monitoring, Evaluation, and Learning Guide. Using MEL to strengthen your organisational effectiveness. Accessed on 6/7/22. Available at: [https://www.intdevalliance.scot/application/files/5715/0211/8537/MEL_Support_Package_4th_June.pdf]
(iv) Better Evaluation (2022) Monitoring. Accessed on 7/7/22. Available at: [https://www.betterevaluation.org/en/themes/monitoring#Top]
(v) International Federation of Red Cross and Red Crescent Societies (2011) Project/programme monitoring and evaluation (M & E) guide. IFRC, Geneva.
(vi) p.39, Scriven, M. (1991) Evaluation Thesaurus. Sage, Thousand Oaks, CA.
(vii) For those who are interested, Michael Quinn Patton (a leader in utilization-based evaluation and developmental evaluation) has created this clever short video listing 100 different types of evaluation https://www.youtube.com/watch?v=GEGtBnkDyBk.
(viii) Centre for Evaluation and Research Evidence (2021) Monitoring and Evaluation Guide. State Government of Victoria. Victoria.
(ix) p.91, Centre for Evaluation and Research Evidence (2021) Monitoring and Evaluation Guide. State Government of Victoria, Victoria.
(x) For more detail, see Patton, M.Q. (2011) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. The Guilford Press, New York.
(xi) W.K. Kellogg Foundation (2004) Logic Model Development Guide: Using logic models to bring together planning, evaluation, and action. Accessed 5/7/22. Available at: [https://wkkf.issuelab.org/resource/logic-model-development-guide.html#back=https://www.wkkf.org/resource-directory]
(xii) Bond, S.L., Boyd, S. E., & Montgomery, D.L. (1997) Taking Stock: A Practical Guide to Evaluating Your Own Programs. Chapel Hill. NC, Horizon Research Inc.
(xiii) Breitenstein, S.M., Fogg, L., Garvey, C., Hill, C., Resnick, B., and Gross, D. (2010) Measuring implementation fidelity in a community-based parenting intervention. Nursing Research, 59(3), p. 158-65.
(xiv) OECD (2021), Applying Evaluation Criteria Thoughtfully, OECD Publishing, Paris. Accessed on 7/7/22. Available at: [https://doi.org/10.1787/543e84ed-en]
(xv) OECD (2021), Applying Evaluation Criteria Thoughtfully, OECD Publishing, Paris. Accessed on 7/7/22. Available at: [https://doi.org/10.1787/543e84ed-en]
(xvi) OECD (2021), Applying Evaluation Criteria Thoughtfully, OECD Publishing, Paris. Accessed on 7/7/22. Available at: [https://doi.org/10.1787/543e84ed-en]
(xvii) p. 18, Centre for Evaluation and Research Evidence (2021) Monitoring and Evaluation Guide. State Government of Victoria, Victoria.
(xviii) Markiewicz, A. and Patrick, I. (2015) Developing Monitoring and Evaluation Frameworks. Sage Publications. Thousand Oaks, CA.
(xix) USAID (2022) M&E for learning: What is it? Accessed on 6/7/22. Available at: [https://usaidlearninglab.org/cla/cla-toolkit/me-learning]