ONA attendance last five years


Judson Randall

May 13, 2007, 1:16:06 PM
to oceanside...@googlegroups.com
Zone and Plan Review Committee (and freeloaders),

        At the May 12 committee meeting, in discussion of the survey, there was confusion between the frequency with which respondents said they attend Oceanside Neighborhood Association meetings and the actual attendance at any given meeting.  The two are not the same.

        For clarity then, here is a spreadsheet with the attendance at every association meeting from February 2003 through May 2007.

        Regards,

                 Jud

Adviser, Student Publications
Vanguard, Rearguard, Spectator
Portland Review, Graphic Design Center
Voice: 503-725-5687; Fax: 503-725-4534

ONA Attendance03-07.xls

Honeysuckle Landscape & Design

May 13, 2007, 2:09:06 PM
to oceanside...@googlegroups.com
Jud,
    Thank you for providing this; it shows that cross-referencing is important when other information is available.  The same point could be made for Oceanside Community Club attendance, if one were to research it.
    This is information that could help profile the community in the community plan.     Kris

Mary Auvil

May 13, 2007, 8:35:06 PM
to oceanside...@googlegroups.com
Jud,
 
Another statistic of interest might be the total number of people on the ONA e-mail list. This would show how many people receive the minutes and other news from you.
 
I think the way the survey question about ONA attendance is reported can be misleading. There must be a way to show the responses that makes it clear that the people who come every time are in addition to those who come less often.
 
Deb said she was going to consult with a statistician at Oregon State. I hope she does.
 
Kris and I had a talk today. In my opinion, there should be an "n" for each item (the number of people who replied to that question), and all percentages should be based on that number. The "n" should never be higher than the number of written surveys returned plus online surveys completed.
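
For example, here is a small sketch in Python of that rule (the answer counts and the "surveys_returned" figure below are made up purely for illustration):

    # Made-up numbers: each item reports its own "n" and bases its
    # percentages on it; n can never exceed the surveys returned.
    surveys_returned = 296                   # written + online (hypothetical)
    answers = {"Yes": 120, "No": 75, "Don't know": 40}

    n = sum(answers.values())                # the "n" for this one item
    assert n <= surveys_returned, "n cannot exceed surveys returned"
    for choice, count in answers.items():
        print(f"{choice}: {count} of n={n} ({count / n:.1%})")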
 
I think that for the survey to be credible, we should carefully review the results after that change is made and identify items that are "suspect" (item 1 surely is). We should also identify items that can be rephrased to get a more definite answer. For example, I responded to general questions about changes and "improvements" with "Don't know" when I thought the answer depended on the location. In some cases the question might be: Do you think there should be a study of locations where there need to be... (wider roads, sidewalks, posted speed limit signs, etc.)?  Items where there was no clear yes/no result need to be reconsidered: is it because there was no clear preference, or because the statement needs to be reworded?
 
ANYWAY, the community survey as done can be a great learning tool. I hope we use it that way.
 
Mary

Stephen Macartney

May 14, 2007, 1:04:28 AM
to oceanside...@googlegroups.com
Hi Folks,

Of course the folks that come to every ONA meeting are different from, and in addition to, those that come to only some meetings.  This is intuitively obvious to the casual observer and does not take a statistician to conclude.  But what difference does it make?  The really critical part of the survey, with respect to the ONA, is the comments from the respondents.  It is those that should be carefully considered.  If only 19 of the approximately 300 respondents routinely come to ONA meetings, it is difficult to conclude that the ONA is statistically representative of the community, especially in light of some of the comments.  It is that issue with which we should concern ourselves.

As far as the 'n' number is concerned: many questions were designed to allow multiple responses.  Therefore, in those cases (such as the first question), 'n' can be greater than the number of respondents.  Again, so what?  It is the responses that are critical.  The number of responses to a particular question can be dissected in many ways to develop percentages.  But a percentage, even if correctly calculated, may obfuscate the relevant information contained in the responses.

For instance: suppose the survey had 296 total respondents, fictitious question #80 received 261 responses, and the responder was to select only one answer to this particular question.  Let's suppose that it is a yes/no question with an additional "Don't know" choice.  I will illustrate below:

Question #80:

Answer        Total   % of 296 respondents   % of 80 yes/no   % of 261 responses
Yes              44           14.9%                55%               16.9%
No               36           12.2%                45%               13.8%
Don't know      181

Which percentage is correct?  They all are.  Which one you think is correct depends on context.  I won't belabor this simple example, but there is more than one way to present information as a percentage.  And if a percentage is offered without an explanation of the context under which it was derived, it has no relevance.
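
To make the arithmetic concrete, here is a sketch in Python (the counts are the fictitious ones from question #80 above):

    # The same "Yes"/"No" counts under three legitimate denominators.
    yes, no, dont_know = 44, 36, 181
    respondents = 296                   # everyone who returned a survey
    answered = yes + no + dont_know     # 261 responses to question #80
    decided = yes + no                  # 80 who chose yes or no

    for label, count in (("Yes", yes), ("No", no)):
        print(f"{label}: {count / respondents:.1%} of all respondents, "
              f"{count / decided:.0%} of yes/no answers, "
              f"{count / answered:.1%} of responses to #80")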

If the number of responses is manipulated to create some statistical picture, we risk distorting the information actually provided.  (BTW, I am by education a mathematician.)

Question 1 is not suspect.  The percentages are.  Ignore the percentages.  Many folks, including me, correctly gave more than one response to question 1; multiple responses to that question are valid.

I'm not sure what you mean by reviewing the survey after "that change is made," but I would suggest that changing the survey invalidates the results.  Ignore the percentages; they are not relevant to many questions.  This 'lite' version of the survey tool (a more industrial-strength version is available) applies a standard method of presenting percentages that is not appropriate for every question.  Again, just ignore the percentages.  Look at the numbers and the graphs, and read the comments verbatim.  If a particular question wasn't clear, then "Don't know" is a valid answer.  But that is no reason to discredit the responses to the question; those who answered appear to have understood it.

The tone of the email to which I am responding seems inclined toward discrediting the results of the survey.  Is that, I wonder, because not all the responses were as some might have predicted or desired?  I hope that the intent of that email was not to discredit the survey results or obfuscate the information that the responses contain.

I recommend that we carefully consider the valuable information that was collected and move ahead accordingly.  The moment to debate the form and wording of the questions has passed.  We can revisit them in five or ten years for the next survey.

Thanks,

Steve