Hi Folks,
Of course the folks who come to every ONA meeting are different from, and in addition to, those who come to only some meetings. That is intuitively obvious to the casual observer and does not require a statistician. But what difference does it make? The really critical part of the survey, with respect to the ONA, is the comments from the respondents; those are what should be carefully considered. If only 19 of the approximately 300 respondents routinely come to ONA meetings, it is difficult to conclude that the ONA is statistically representative of the community, especially in light of some of the comments. That is the issue with which we should concern ourselves.
As for the 'n' number: many questions were designed to allow multiple responses, so in those cases (such as the first question) 'n' can be greater than the number of respondents. Again, so what? It is the responses that are critical. The number of responses to a particular question can be dissected in many ways to develop percentages, but a percentage, even if correctly calculated, may obfuscate the relevant information contained in the responses.
For instance: suppose the survey had 296 total respondents, fictitious question #80 received 261 responses, and the respondent was to select only one answer to this particular question. Let's suppose it is a yes/no question with an additional "Don't know" choice. I will illustrate below:
Question #80:
Answer        Total   % of 296 respondents   % of 80 Yes/No answers   % of 261 responses
Yes              44   14.9%                  55.0%                    16.9%
No               36   12.2%                  45.0%                    13.8%
Don't know      181   61.1%                  --                       69.3%
Which percentage is correct? They all are. Which one you think is correct depends on context. I won't belabor this simple example, but there is more than one way to present information as a percentage, and a percentage offered without an explanation of the context in which it was derived has no relevance.
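For anyone who wants to check the arithmetic, the following few lines of Python are a sketch only, using the fictitious counts above, showing how the same counts yield three different, equally "correct" percentages depending on which denominator is chosen:

yes_count, no_count, dont_know = 44, 36, 181
respondents = 296                              # everyone who took the survey
responses = yes_count + no_count + dont_know   # 261 responses to question #80
decided = yes_count + no_count                 # 80 respondents who chose Yes or No

for label, count in (("Yes", yes_count), ("No", no_count)):
    print(f"{label}: "
          f"{100 * count / respondents:.1f}% of all respondents, "
          f"{100 * count / decided:.1f}% of the Yes/No answers, "
          f"{100 * count / responses:.1f}% of the responses to this question")

All three figures describe the same 44 "Yes" answers; only the denominator changes.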
If the number of responses is manipulated to create some statistical picture, we risk distorting the information actually provided. (BTW, I am by education a mathematician.)
Question 1 is not suspect; the percentages are. Ignore the percentages. Many folks, myself included, correctly responded to question 1 with more than one answer; multiple responses to that question are valid.
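To make the multiple-response point concrete, here is a small sketch (with entirely made-up topics and only five imaginary respondents) showing why the number of selections, the 'n' the tool reports, can exceed the number of people who answered:

responses = [
    {"parks", "traffic"},            # respondent 1 picked two options
    {"parks"},                       # respondent 2 picked one
    {"traffic", "zoning", "parks"},  # respondent 3 picked three
    {"zoning"},                      # respondent 4 picked one
    {"parks", "zoning"},             # respondent 5 picked two
]

people = len(responses)                      # 5 respondents
selections = sum(len(r) for r in responses)  # 9 selections -- the reported "n"
print(f"respondents = {people}, selections = {selections}")

A percentage computed against 9 selections and one computed against 5 respondents are both "correct," which is exactly why the tool's percentages should be ignored for multiple-response questions.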
I'm not sure what you mean by reviewing the survey after "that change is made," but I would suggest that changing the survey invalidates the results. Ignore the percentages; they are not relevant to many questions. This 'lite' version of the survey tool (there is a more industrial-strength version available) applies a standard method of presenting percentages that is not appropriate for every question. Again, just ignore the percentages. Look at the numbers and the graphs, and read the comments verbatim. If a particular question wasn't clear, then "Don't know" is a valid answer; but that is no reason to discredit the responses to the question, since those who answered appear to have understood it.
The tone of the email to which I am responding seems inclined toward discrediting the results of the survey. Perhaps, I wonder, because not all of the responses were what some might have predicted or desired? But I hope that discrediting the survey results, or obfuscating the information the responses contain, was not its intent.
I recommend that we carefully consider the valuable information that was collected and move ahead accordingly. The moment to debate the form and wording of the questions has passed. We can revisit them in five or ten years for the next survey.
Thanks,
Steve