This section provides advice on how to create a data collection plan, including assessing the kinds of data you might need and from whom/where you might collect them.
You can find a data collection plan template on the resources and templates page.
What is a data collection plan?
A data collection plan helps you map out the type of data you need to collect for your evaluation and the ways in which you will collect it. It is a practical map for your evaluation – translating your evaluation plan into real-world actions.
Sources of data
Researchers and evaluators often refer to primary, secondary, and tertiary sources of data. In simple terms, primary data are new data that an evaluator (internal or external) collects during monitoring and evaluation activities (think surveys, interviews, observations etc). Secondary data are data that you obtain from other sources (think population-wide statistics from a local government authority or an annual report). Tertiary data are a collection of primary and secondary sources (e.g., encyclopaedias or textbooks).
Your choice of data sources will depend on what you are trying to measure to answer your key evaluation question/s. In an evaluation, it can often be useful to begin by identifying secondary data sources (because these data already exist). Secondary data are useful because:
- someone has already collected it
- it can provide useful background and contextual material for your project
- it can help you tick off what you do not need to include in your own primary data collection.
Primary data collection (new data collection) can be resource intensive and time consuming so make sure to confirm the types of primary data you really need.
Forms of data
Researchers often categorise data according to whether it is quantitative or qualitative data:
- Quantitative data are things that can be counted – how much and how often – usually gathered through surveys, administrative data, or document review.
- Qualitative data are words – usually gathered through focus groups or interviews and provide answers to ‘how’ and ‘why’ questions.
It is common for evaluation work to include both quantitative and qualitative data, which is referred to as taking a mixed-methods approach. There are many benefits to a mixed-methods approach. Different types of data provide insights into different issues or variables in an evaluation. For example, quantitative data from a survey might tell you the number of people who felt satisfied with a particular activity, while qualitative data taken from the same respondents may help you understand why they felt a particular way about the activity.
Drawing on and comparing data from multiple reliable sources can strengthen the rigour of your findings (xliii) (when each supports the same conclusion) or can show the need for further data collection or analysis (when they show differing results). Taking a mixed-methods approach also reduces the risk of a misleading result that relies on only one type of data. Using different forms of data to test a particular concept or build a stronger finding is called triangulation.
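To make the mixed-methods idea above concrete, here is a minimal sketch (all respondents and comments are invented for illustration) of how quantitative and qualitative data from the same people answer different questions:

```python
# Hypothetical mixed-methods sketch: the same respondents provide a
# countable answer (quantitative) and a free-text comment (qualitative).
responses = [
    {"id": 1, "satisfied": True,  "comment": "The facilitator made it safe to ask questions."},
    {"id": 2, "satisfied": True,  "comment": "Good mix of discussion and examples."},
    {"id": 3, "satisfied": False, "comment": "Too rushed; no time to practise the skills."},
]

# Quantitative: how many felt satisfied?
satisfied = sum(r["satisfied"] for r in responses)
rate = satisfied / len(responses)
print(f"Satisfied: {satisfied}/{len(responses)} ({rate:.0%})")

# Qualitative: the comments from the same respondents suggest *why*,
# which a count alone cannot tell you.
for r in responses:
    print(r["id"], "-", r["comment"])
```

The count tells you how many; reading the comments alongside it is the beginning of triangulation.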
Suggested resources:
- This slide deck from the Evaluation Lab at the University of New Mexico [PDF] provides a simple introduction to the differences between quantitative and qualitative methods in evaluation.
- This journal paper by Giel Ton provides practical advice on mixing methods to improve rigour in impact evaluations.
How do I select data collection methods?
The choice of primary data collection methods can depend on a range of contextual factors. The following prompts might be useful for considering data collection in your own project:
- Think about what you want to know, and whether you are categorising or exploring. We refer to questions (often in surveys) that ask you to select pre-determined categories as close-ended and questions that ask for a self-directed, descriptive or narrative response as open-ended. While all methods help you discover new information, some collect information by limiting the responses participants can provide (e.g., ‘on a scale of one to five, how satisfied were you with the module?’). Others help you explore or uncover new information (e.g., ‘what helps you to take action as a bystander?’ or ‘what did you find helpful about the training module?’).
- Are you measuring process or outcomes? Each of these might benefit from different data types. Qualitative data (e.g., interviews with project staff and participants) might provide some useful insights into a project’s processes. Quantitative data (e.g., close-ended questions in a survey) might provide some useful insights into changes in community attitudes and behaviours.
- Consider the communities or participants with whom you are working. Are there cultural considerations or lived experience that might influence your approach? In the case of a focus group, will you include women only? Might a yarning circle work best when evaluating with an Aboriginal community? Do your surveys need translation into community languages?
- Think about how you might maximise participation or response rates. A response rate is the percentage of people who respond to your request for participation (compared with everyone who received it). If you are working with young people, is a survey positioned on multiple digital platforms more likely to have a wider reach? If you are working with elderly people in aged care, will a personal interview best elicit the information you are looking for, and increase the likelihood of participation?
- Are you seeking particular experience or general opinion/feedback? If you want to understand the strengths and limitations of delivery in a primary prevention module, you might target and interview project managers and participants with experience of that course. If you want to know whether people in a local government area (LGA) think that reducing violence should be a council priority, you might use an online survey.
- Choose methods that are fit for purpose and that help deliver the information you need. For example, if one purpose of your evaluation is project improvement, then a survey that asks participants to rank whether a module helped their learning may be useful, but only to a limited degree. You might want to ask some open-ended questions about what participants learned, or why they found learning easy/difficult.
- Finally, for those who are super keen to learn more about evaluation methodologies and theory, the Better Evaluation website and Australian Evaluation Society provide lots of useful reading. Techniques of interest might be the Most Significant Change (MSC) approach [PDF](xliv), Contribution Theory [PDF](xlv), or Realist Evaluation [PDF](xlvi). (Note: special methods are not necessary for your project evaluation.)
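The response-rate calculation mentioned above is simple arithmetic; here is a minimal sketch with invented figures:

```python
# Response rate = respondents / everyone invited, expressed as a percentage.
# Figures are invented for illustration.
invited = 250    # surveys sent out
responded = 85   # completed surveys returned

response_rate = responded / invited * 100
print(f"Response rate: {response_rate:.1f}%")  # prints "Response rate: 34.0%"
```

Tracking this figure as responses come in can tell you early whether you need follow-up reminders or an additional collection channel.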
Always consider ethics and participant safety as you identify the most appropriate data collection approach. The ethical practice section on the key concepts for practice page has more on this topic.
What data collection methods are available?
The following table, reproduced from the Northwest Center for Public Health Practice [PDF], provides a summary of some common data collection methods used in health-based evaluation. Consider the advantages and disadvantages of each method when deciding if the method will help you measure progress against your indicators.
Data Collection Methods – Advantages and Disadvantages
Method | Use when | Advantages | Disadvantages |
---|---|---|---|
Document review | Project documents or literature are available and can provide insight into the project or the evaluation | | |
Observation | You want to learn how the project actually operates – its processes and activities | | |
Survey | You want information directly from a defined group of people to get a general idea of a situation, to generalise about a population, or to get a total count of a particular characteristic | | |
Interview | You want to understand impressions and experiences in more detail and be able to expand or clarify responses | | |
Focus groups | You want to collect in-depth information from a group of people about their experiences and perceptions related to a specific issue | | |
Northwest Center for Public Health Practice (n.d.) Data Collection for Program Evaluation (xlvii).
Whatever methods you select, we strongly recommend you undertake some further reading from the range of materials at trusted institutions, freely available on the internet or through libraries. A stronger understanding of how to implement your selected methods will increase the quality of your evaluation.
Suggested resources:
- Better Evaluation provides useful advice on interviews, surveys, focus groups, and other methods.
- See VicHealth’s Evaluating Victorian Projects for the Primary Prevention of Violence against Women: A concise guide [PDF], Tool 4: Data collection methods.
- For interviewing tips in PVAW projects, look at the kNOwVAWdata tip sheet on interviewing victim-survivors [PDF].
- INCEPT has a comprehensive website for evaluation planning for primary prevention projects, including further information on data collection methods.
- Tools4Dev – Should I use interviews or focus groups provides information to evaluators about the differences between interviews and focus groups and how to select the most appropriate approach for their project.
Who will participate in my data collection?
Refer to your evaluation plan and think about the individuals and groups with/from whom you will be collecting data:
- Who are the broad categories of people relevant to your project that you will include (e.g., stakeholders, service clients)?
- Are you including only people from your project, or are you drawing a sample from a wider group of people?
- Are you concerned with lived experience? Do you want to collect data from one or more specific demographic groups (e.g., people living in regional and urban areas, people of different genders)?
- If you are using interviews and focus groups, how will you decide who participates in each?
- If you are conducting a community survey, how many people will you send it to? Do you have a list of those you want to survey, or will you send it out randomly and/or widely?
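If you do decide to survey a random subset of a community rather than everyone, the draw itself can be scripted; here is a minimal sketch (the contact list and sample size are invented for illustration):

```python
import random

# Hypothetical sketch: drawing a simple random sample from a contact list
# for a community survey. Addresses are invented for illustration.
contact_list = [f"resident_{n}@example.org" for n in range(1, 501)]

random.seed(42)  # fixing the seed makes the draw repeatable and documentable
sample = random.sample(contact_list, k=100)  # survey 100 of 500 residents

print(f"Sampled {len(sample)} of {len(contact_list)} contacts")
```

Recording how the sample was drawn (and the seed, if scripted) makes your method transparent when you report response rates later.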
Develop your data collection tools
It’s now time to get into the detail of creating your data collection tools – the instruments you will use to collect your data. Data collection tools can include questionnaires, checklists, spreadsheets, survey schedules, interview and focus group schedules or case-study guides.
The important thing about your tools is that the questions you ask or variables you observe should be designed to help you measure your indicators. Keep the following in mind:
- Do you want open- or close-ended questions in your surveys and interviews? (Remember, close-ended questions generally help you record frequency or count while open-ended questions can help you understand the how and why of someone’s perceptions and experiences.)
- How many questions might you need in an interview schedule or a survey, to meet the relevant indicators from your evaluation plan? (Remember, interviews or focus groups should have a limited number of questions so participants have the time and opportunity to speak. Surveys can have more questions.)
- Many core activities in primary prevention are based around building knowledge and understanding, building confidence, changing attitudes, and changing behaviours (all related to behaviour change theory). Therefore, many of the data collection instruments you create will have questions about these topics.
- Consider the format your tools will take. For example, will you enter survey questions into specialised software (such as SurveyMonkey or Qualtrics) or put them onto a printed handout? If you’re in an online workshop, will you use one of the polls popular on digital platforms such as Zoom or Mentimeter? Consider whether you might format or phrase questions differently based on these choices.
- If you’re asking participants to capture data for monitoring, what will make it easiest for them? An app? A live web document? A journal?
- Is the focus of individual questions more around process or outcomes? Consider what this might mean for shaping questions. In the table below, we provide some example questions based on indicators.
Outcomes measurement – Example indicators and questions
Indicator (What indicates success?) | Example questions in the data collection instrument |
---|---|
Reaction: Proportion of participants who felt positively about the workshop | Close-ended question (survey): Was your participation in the workshop a positive experience? (Yes/No/Unsure) |
Knowledge and skills: Participants can describe and understand key concepts in family violence | Close-ended question (feedback form): I learned something new about the different forms of family violence (Yes/No/Unsure) |
Confidence: Number of people who feel more confident to intervene as a bystander | Open-ended question (interview): What new information did you learn from this course? Open-ended question (focus group): In what ways (if at all) did the workshop help with your sense of confidence to be an active bystander? |
Attitudes: Increase in community network members who recognise violence is unacceptable (and can identify excuses for violence against women) | Open discussion topic (network event): Facilitated discussion about attitudes that condone violence that might be visible in the media (with observational notes). Note: It can sometimes be useful to explore participant attitudes by first asking about broader social attitudes. |
Anticipated behaviour change: Number of participants who express the intention to implement a new gender equality policy in their workplace | Interview questions: How likely are you to advocate for or implement a new gender equality policy in your workplace? Has the module assisted with this choice, and if so, in what way? |
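Close-ended questions like the reaction example above produce data that is straightforward to tally against the indicator; here is a minimal sketch (responses are invented for illustration):

```python
# Hypothetical sketch: scoring a 'Reaction' indicator (proportion of
# participants who felt positively about the workshop) from close-ended
# Yes/No/Unsure survey answers. Responses are invented.
answers = ["Yes", "Yes", "No", "Unsure", "Yes", "Yes", "No", "Yes"]

positive = answers.count("Yes")
proportion = positive / len(answers)
print(f"{positive}/{len(answers)} participants felt positively about the workshop")
```

Open-ended responses, by contrast, need to be read and grouped into themes before they can be summarised in this way.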
There are many expert resources online for creating good survey and interview schedules, and other kinds of data collection tools.
Suggested resources:
- Tools4Dev provides extensive information about drafting accessible, clear and relevant surveys in How to write awesome survey questions Part 1 and How to write awesome survey questions Part 2.
- Harvard University’s Strategies for qualitative interviews [PDF] provides concise advice and tips on framing useful interview questions.
- SurveyMonkey’s Smart survey design [PDF] is a tool for online survey development and includes examples of useful survey questions and approaches.
- Tools4Dev’s How to do great semi-structured interviews provides great guidance and practical tips on the process of designing questions and conducting an interview.
- Our Re-shaping Attitudes: A toolkit for using the NCAS in the primary prevention of violence against women can be a useful tool to understand how you could use the results and approach of the National Community Attitudes towards Violence against Women Survey to design your project, including developing your data collection instruments.
Don’t forget ...
- Test your data collection tools with people who have knowledge of your project and activities. You may also have an opportunity to test them with individuals from the group you intend to use each tool with.
- Always consider ethics and participant safety in your survey, interview and focus group design.
- In addition to identifying who is responsible for data collection, it can be useful to note who is responsible for analysis and reporting.
Endnotes
(xliii) Ton, G. (2012) The mixing of methods: A three-step process for improving rigour in impact evaluations. Evaluation, 18(1). Available at: [https://journals.sagepub.com/doi/10.1177/1356389011431506]
(xliv) Davies, R. and Dart, J. (2005) The 'Most Significant Change' Technique: A Guide to Its Use. Accessed on 11/7/22. Available at: [https://europa.eu/capacity4dev/file/28239/download?token=lWZXyl9R]
(xlv) Mayne, J. (2011) Contribution analysis: Addressing cause and effect. In K. Forss, M. Marra and R. Schwartz (Eds.) Evaluating the Complex. Transaction Publishers.
(xlvi) Pawson, R. and Tilley, N. (1997) Realistic Evaluation. Sage, London.
(xlvii) Northwest Center for Public Health Practice (n.d.) Data Collection for Program Evaluation. Accessed on 6/7/22. Available at: [https://www.nwcphp.org/docs/data_collection/data_collection_toolkit.pdf]