When it comes to survey testing, some people believe it takes too long, so they end up running their survey without any testing at all. This is a serious mistake. Even testing with a couple of people can surface issues that, if not spotted and fixed, could cause real problems with your data later on. So, any testing is better than none at all.
Re-examine the wording of your questions to see if they contain anything potentially problematic for the reader. This could include double-barrelled questions (essentially two questions in one), negative questions (those containing negative words) and overly long questions.
Try interviewing yourself, where you put yourself in the position of the respondent and try to answer each question. You might also consider doing a mock interview of the questions with a close colleague. Both methods can enable you to identify potential difficulties in your question-answer process.
While you may be reasonably knowledgeable on the subject area of your written questions, there could be others in your business with more detailed expertise. They could also have expertise in specific areas such as question design and testing, field issues and the cultural context of the survey.
Whatever beneficial knowledge they have, try getting a small panel of them together to review your questionnaire. This can provide some really useful initial feedback for you regarding any potential problems with your survey questionnaire.
Your pilot test, which should be carried out under survey conditions, should use a sample that is representative of the target population for your research. Besides completing your survey, respondents should give feedback about their experience of filling it out.
Ideally, you want to get all your respondents together so you can observe their behaviour while they take your survey. Make notes throughout, looking for places where they hesitate or make mistakes. You can also record the overall time each participant takes to complete your survey. Much of what you find could indicate that your survey questions and layout are not clear enough and need improving.
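As a minimal sketch of the timing check described above (the respondent IDs and timings are invented for illustration), a few lines of Python can flag respondents whose completion time is unusually long, which may point to unclear questions:

```python
import statistics

# Hypothetical pilot timings: seconds each respondent took to finish the survey
times = {"R1": 420, "R2": 515, "R3": 980, "R4": 460, "R5": 505}

mean_t = statistics.mean(times.values())
stdev_t = statistics.stdev(times.values())

# Flag respondents more than 1.5 standard deviations above the mean:
# long completion times can indicate confusing questions or layout
slow = [r for r, t in times.items() if t > mean_t + 1.5 * stdev_t]
print(f"mean={mean_t:.0f}s stdev={stdev_t:.0f}s slow={slow}")
# → mean=576s stdev=229s slow=['R3']
```

The 1.5-standard-deviation threshold is an arbitrary starting point; with a small pilot you may simply want to review the slowest one or two respondents by hand.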
For anyone struggling to find enough respondents to carry out a test from their existing contacts, a consumer panels service is a good alternative. With their ability to provide instant access to millions of respondents worldwide, consumer panels are great for reaching the exact demographic or niche audience you require.
Insightful data is the main reason businesses run surveys in the first place, so piloting your survey can be extremely valuable in highlighting any questions that are not returning the information you need.
From discovering whether your questionnaire takes too long to complete, to firewall issues that prevent respondents from accessing your survey, to an email client disabling the links in your survey invitation emails, there are many practical problems you can identify and overcome with a pilot survey.
It really depends on the size and audience involved. For smaller, less complex surveys, pretesting may be sufficient to reveal any necessary improvements and corrections. Similarly, pretesting may be enough before running an HR survey to assess staff contentment, particularly in a smaller company.
In an internal pilot study, the pilot respondents are also counted as the first participants in the main survey. In contrast, in an external pilot the questionnaire is administered to a small group of target participants who are not included in the main survey.
In a participatory pilot, by contrast, participants are fully aware of their participation and are clearly informed about what they need to do. This pilot type is useful when you require feedback from the target audience about your survey content, processes and messages.
The Abdul Latif Jameel Poverty Action Lab (J-PAL) is a global research center working to reduce poverty by ensuring that policy is informed by scientific evidence. Anchored by a network of more than 900 researchers at universities around the world, J-PAL conducts randomized impact evaluations to answer critical questions in the fight against poverty.
Piloting is the testing, refining, and re-testing of survey instruments in the field to make them ready for your full survey. It is a vital step to ensure that you understand how your survey works in the field and that you are collecting accurate, appropriate data. It also helps in designing staff training for the final launch. This section focuses on piloting the questionnaire, but field protocols must also be piloted to ensure data collection runs as planned.
The piloting process begins once you have a first, rough draft of your questionnaire (see also resources on measurement and survey design) and ends when you have a final, translated questionnaire. Piloting is iterative but involves three main phases (J-PAL staff and affiliates: see J-PAL South Asia's presentation on piloting best practices for more information):
Brand new survey instruments will need to go through all three stages. Adaptations of well-designed questionnaires from a reliable source in the survey country may only need stages 2 and 3. Follow-up surveys with no major changes from the baseline may only need stage 3. Typically, the three-stage piloting process should start 4-6 months before survey launch.
This stage requires heavy involvement of the researchers and RA and is covered in greater detail in the survey design resource. You should end up with a draft paper-based questionnaire in both English and the local language(s).
This stage is predominantly carried out by senior field staff, with perhaps 1-2 additional enumerators, and should take place before enumerator training. At this point, you should have a full draft questionnaire and a good sense of the questions that need to be asked. The questionnaire should have been translated so that the version that will be used in the field is ready for pre-testing. The field team should be accompanied by research staff to ensure that any gaps in the instrument are systematically captured and addressed. Pen and paper pilots are recommended at this stage, even if the final survey will be digital, as it is easier to take notes and write suggestions on paper. See more at DIME's survey piloting guide.
See also the World Bank's survey guidelines and guidelines on field protocols to test, such as when respondents are available, infrastructure (e.g., electricity and internet access), sampling protocols and replacement strategy, and collecting geo-data.
Having developed an initial set of candidate questions for inclusion in the questionnaire, we then went on to test the acceptability of the questions through three focus groups with diverse patient groups. This informed the development of version 1 of the questionnaire, which was about GP practices. This was subject to two rounds of piloting, with revisions to the questionnaire made based on the results of each pilot. The first involved 450 patients in three GP practices and the second involved 300 patients in two GP practices. In both rounds of piloting, completion rates and distribution of responses to questions were assessed to help refine the questions. Each pilot also included interviews with a sample of up to 20 responders; the first set of interviews aimed to test the face validity of the questionnaire, and the second set were cognitive interviews to help refine the wording and layout of the questionnaire. Informed by the results of pilot 1, we developed version 2 of the GP surgery questionnaire and version 1 of the pharmacy questionnaire. The pharmacy version was piloted with 150 patients in two pharmacies and was subject to a small number of cognitive interviews.
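The per-question checks used in these pilots (completion rates and the distribution of responses) can be sketched in a few lines of Python; the question names and response data below are invented for illustration, not taken from the study:

```python
# Hypothetical pilot responses: None marks an item the respondent skipped
responses = [
    {"q1": 4, "q2": 5, "q3": None},
    {"q1": 3, "q2": None, "q3": None},
    {"q1": 5, "q2": 4, "q3": 2},
    {"q1": 4, "q2": 4, "q3": None},
]

n = len(responses)
for q in ("q1", "q2", "q3"):
    answered = [r[q] for r in responses if r[q] is not None]
    completion = len(answered) / n
    # A low completion rate, or answers clustering on very few values,
    # suggests the question may need rewording in the next draft
    print(f"{q}: completion={completion:.0%}, values={sorted(set(answered))}")
```

Here q3's 25% completion rate would flag it for review, mirroring how low completion rates were used to refine questions between pilot rounds.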
During this phase, we consulted our advisory group members about their priorities for providing the questionnaire in a number of different formats. Following piloting, version 3 of the GP questionnaire was prepared (along with version 2 of the pharmacy version and version 1 of the walk-in centre version) to be tested for reliability and acceptability in a large-scale pilot (see Chapter 7).
Appendix 6 shows the resulting summary framework of the issues arising from the initial draft questions in the focus groups. This was used to inform revisions to the initial draft questions, to produce a set of questions for version 1 of the GP questionnaire. Key issues are summarised in Table 8.
Questions on demographics were developed, drawing on other validated measures where possible, including the national GP patient survey (questions on sex, age, occupation, parent/carer status, sign language, sexuality), the General Practice Assessment Questionnaire (question on ethnicity), and the Scottish Patient Experience Survey (question on disability).
Version 1 related to GP surgeries only, consistent with our approach, which was to develop a GP version of the questionnaire which could then be adapted for other primary care providers. Version 1 of the questionnaire is included in Appendix 7. Version 1 was produced in paper and online (using SurveyMonkey: www.surveymonkey.com) formats.
This analysis from the interviews was used in conjunction with analysis of descriptive statistics from the pilot survey to develop the next iteration of the questionnaire. The draft was circulated to members of the team for feedback, and the content and format of version 2 was agreed.