
A Scale of Magnitudes for Effect Statistics

Suppose you get a correlation of 0.47 between two variables. Is that big or small, in the scheme of things? If you haven't a clue, you're not alone. Most people don't know how to interpret the magnitude of a correlation, or the magnitude of any other effect statistic. But people can understand trivial, small, moderate, and large, so qualitative terms like these need to be used when you discuss results. One day, stats programs will include these terms in their output. In the meantime, we have to do the job manually using a scale of magnitudes. I'll now explain a scale of magnitudes for linear trends (using the correlation coefficient), differences in means (using the standardized difference), and relative frequencies (using relative risks, odds ratios, and differences in frequencies).

Correlations

Jacob Cohen has written the most on this topic. In his well-known book he suggested, a little ambiguously, that a correlation of 0.5 is large, 0.3 is moderate, and 0.1 is small (Cohen, 1988). The usual interpretation of this statement is that anything greater than 0.5 is large, 0.5-0.3 is moderate, 0.3-0.1 is small, and anything smaller than 0.1 is insubstantial, trivial, or otherwise not worth worrying about. His corresponding thresholds for standardized differences in means are 0.8, 0.5 and 0.2. He did not provide thresholds for the relative risk and odds ratio.

For me, the main justification for this scale of correlations comes from the interpretation of the correlation coefficient as the slope of the line between two variables when their standard deviations are the same. For example, if the correlation between height (X variable) and weight (Y variable) is 0.7, then individuals who differ in height by one standard deviation will on average differ in weight by only 0.7 of a standard deviation. So, for a correlation of 0.1, the change in Y is only one-tenth of the change in X. That seems a reasonable justification for calling 0.1 the smallest worthwhile correlation. I guess it's also reasonable to accept that a change in Y of one half that in X (corresponding to r = 0.5) is also the threshold for a large effect, and r = 0.3 seems a logical way to draw the line between small and moderate correlations.
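The slope interpretation is easy to check numerically. Here is a minimal sketch of my own (not one of the attached SAS programs): simulate "height" and "weight" with a true correlation of 0.7, standardize both variables, and confirm that the least-squares slope equals the correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
r_true = 0.7
n = 100_000

# Simulate two standard-normal variables with correlation r_true.
x = rng.standard_normal(n)                                        # "height"
y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)  # "weight"

# Standardize both variables (mean 0, SD 1).
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# Least-squares slope of zy on zx equals the sample correlation.
slope = (zx * zy).sum() / (zx * zx).sum()
r = np.corrcoef(x, y)[0, 1]
print(round(slope, 3), round(r, 3))  # both close to 0.7
```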

Differences in Means

Threshold values for standardized differences or changes in means and for relative frequency can be derived by converting these statistics to correlations. The procedure is a little artificial, so the resulting values need to be scrutinized to ensure they make sense. Here's how it's done.

Problem! Cohen's thresholds for small, moderate and large are 0.20, 0.50 and 0.80. The lowest values of the two sets agree (0.20), but the others don't. Cohen derived his thresholds from a consideration of non-overlap of the distributions of values in the two groups, choosing certain arbitrary amounts of non-overlap to define his thresholds.

Something like Cohen's thresholds for standardized differences can be obtained by making the independent variable normally distributed, then "dichotomizing" it by splitting its values down the middle to make the two fitness groups. Correlations of 0.1, 0.3, and 0.5 then turn into standardized differences of 0.17, 0.50, and 0.87: yet another set of thresholds! Which set is correct? I think that this dichotomizing operation throws away information, and that therefore the values of 0.17, 0.50 and 0.87 underestimate the thresholds.
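The dichotomizing operation can be sketched in a short simulation (my own, assuming a median split on the independent variable and a pooled within-group SD; the exact values depend on those choices, so they come out near, rather than exactly at, the quoted 0.17, 0.50 and 0.87).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000

def dichotomized_d(r):
    """Standardized difference in Y after a median split on a normal X
    that correlates r with Y."""
    x = rng.standard_normal(n)
    y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)
    hi, lo = y[x > np.median(x)], y[x <= np.median(x)]
    pooled_sd = np.sqrt((hi.var(ddof=1) + lo.var(ddof=1)) / 2)
    return (hi.mean() - lo.mean()) / pooled_sd

for r in (0.1, 0.3, 0.5):
    print(r, round(dichotomized_d(r), 2))
```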

One reality check on these thresholds comes from considering the average separation between individuals in a normally distributed population. It turns out to be 1.13 standard deviations, which is a standardized difference of 1.13. So we have to ask: is it reasonable that the average difference between individuals in a population should be on the threshold between moderate and large? I think so, and I therefore think that Cohen's 0.5 and 0.8 are too low to define the thresholds for moderate and large effects.

Relative Frequencies

To work out a scale for comparing frequencies, we have to code not only the grouping variable, but also the dependent variable, with values of 0 and 1 representing the levels of the independent and dependent variables. Once again the particular values of 0 and 1 don't matter, but if we represent the frequencies as percents in each group, we get something really nice. Suppose heart disease occurred in 75% of the smoking group and 30% of the non-smoking group. The difference in frequencies (75 - 30 = 45%) divided by 100 is 0.45, which turns out to be the correlation between our two newly coded variables. This result, that the correlation times 100 equals the difference in percent frequencies, holds exactly when the two frequencies are symmetric about 50%, and approximately otherwise. The threshold correlations of 0.1, 0.3, and 0.5 therefore convert to thresholds of 10, 30 and 50 for differences in percent frequencies between the occurrence of something in two groups.
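The coding argument can be checked directly. In this sketch (my own; I assume equal group sizes, which the text does not state) group membership and the outcome are both coded 0/1 over individuals, and the correlation is compared with the frequency difference.

```python
import numpy as np

def coded_correlation(pct_a, pct_b, n=10_000):
    """Correlation between 0/1 group membership and a 0/1 outcome,
    with outcome frequencies pct_a% and pct_b% in two equal-sized groups."""
    k_a = int(n * pct_a / 100)
    k_b = int(n * pct_b / 100)
    group = np.repeat([1, 0], n)
    outcome = np.concatenate([
        np.repeat([1, 0], [k_a, n - k_a]),   # group A outcomes
        np.repeat([1, 0], [k_b, n - k_b]),   # group B outcomes
    ])
    return np.corrcoef(group, outcome)[0, 1]

print(round(coded_correlation(75, 30), 3))  # close to 0.45
print(round(coded_correlation(55, 45), 3))  # exactly 0.10 (symmetric about 50%)
print(round(coded_correlation(11, 1), 3))   # about 0.21: only approximate here
```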

Now, are you happy with the notion that a difference of 10% in the frequency of something between two groups is indeed small? For example, if you made sedentary people active and thereby reduced the incidence of heart disease from 55% to 45% in some age group, would that be a small gain? At first glance you'd think this gain might be better described as moderate. Perhaps the way to view it is that the 10% in question is only one tenth of the entire group. On an absolute population basis, we may be talking about a lot of people, but it's still only one in 10. The threshold between moderate and large represents something that affects half the group, which seems OK. The boundary between small and moderate (three people in 10) is also acceptable.

Frequency differences do not convert simply into relative risks, because the values of this statistic depend on the frequencies in each group. For example, the threshold frequency difference of 10% for the smallest worthwhile effect represents a relative risk of 55/45 or 1.22 if the frequencies are 55% and 45%, but the relative risk is 11 if the frequencies are 11% and 1%. The odds ratio is even more sensitive to the absolute frequencies in each group. The smallest values for the relative risk and odds ratio occur when the frequencies in the two groups are symmetrically disposed about 50% (55-45, 60-40, 65-35 and so on).
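This dependence on the absolute frequencies is easy to tabulate. A minimal sketch (my own, using the percentages from the text):

```python
def relative_risk(pct_a, pct_b):
    """Relative risk for outcome frequencies pct_a% and pct_b%."""
    return pct_a / pct_b

def odds_ratio(pct_a, pct_b):
    """Odds ratio for outcome frequencies pct_a% and pct_b%."""
    odds = lambda p: p / (100 - p)
    return odds(pct_a) / odds(pct_b)

# The same 10% frequency difference gives very different relative risks.
print(round(relative_risk(55, 45), 2))  # 1.22
print(round(relative_risk(11, 1), 2))   # 11.0
print(round(odds_ratio(55, 45), 2))     # 1.49
print(round(odds_ratio(11, 1), 2))      # 12.24
```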

The Complete Scale

It seems to me that the vista of large effects is left unexplored by Cohen's scale. Surely more than just large can be applied to the correlations that lie between 0.5 and 1? What's missing from the picture is a rationale for breaking up this big half of the scale with a couple more levels. Here's the way I do it:

I've adopted a Likert-scale approach by using very large for the level above large, and I've assigned it to a correlation of 0.7 to keep the scale linear for correlations and frequency differences. A level of magnitude above very large is warranted for correlations, because a value of 0.9 is a kind of threshold for validity when the associated straight line is used to rank individuals, and reliability needs to be greater than 0.9 to be most useful for reducing sample sizes in longitudinal studies. I've opted for nearly perfect to describe these correlations. Values for the other effect statistics were calculated as before, and the values for the relative risk and odds ratio are the minimum values for these statistics.
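For correlations, the complete scale can then be written as a simple lookup (a sketch of my own; the labels and thresholds of 0.1, 0.3, 0.5, 0.7 and 0.9 are taken from the text above).

```python
def magnitude(r):
    """Qualitative magnitude of a correlation on the scale described above."""
    r = abs(r)
    for threshold, label in [(0.9, "nearly perfect"), (0.7, "very large"),
                             (0.5, "large"), (0.3, "moderate"), (0.1, "small")]:
        if r >= threshold:
            return label
    return "trivial"

print(magnitude(0.47))  # the opening example turns out to be moderate
print(magnitude(0.95))
```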

SAS programs that generated the results on this page are attached.

Other Effect Statistics

Cohen devised several other effect statistics and discussed their magnitudes, but I have not seen these statistics in publications. He also considered whether, for example, variance explained (the correlation squared) might be a more suitable scale to represent magnitude of linearity, especially when you take into account the useful additive property of variance explained in such things as stepwise regression. He rejected it, though, because a correlation of 0.1 corresponds to a variance explained of only 1%, which he thought did not convey adequately the magnitude of such a correlation. I agree.

The so-called common-language effect statistic (McGraw & Wong, 1992) or probability of superiority represents a more recent attempt on the summit of a universal scale of magnitudes. This statistic is easiest to understand when you compare two groups whose means differ. The probability of superiority is the probability that someone drawn at random from one group will have a higher value than someone drawn from the other group. The problem here is that no difference between the means implies a value of 50% or 0.5 (equal chance that the person will have a higher or lower value). A value of 50% for no difference doesn't feel right.
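For normally distributed groups with equal SDs, the probability of superiority follows from the standardized difference d as Phi(d/sqrt(2)), where Phi is the standard normal cumulative distribution (this formula is my addition; it is the normal-theory case considered by McGraw & Wong). At d = 0 it gives exactly the 0.5 that feels wrong for "no difference".

```python
from math import erf

def prob_superiority(d):
    """P(random member of group 1 exceeds random member of group 2),
    for normal groups with equal SDs and standardized difference d."""
    # Phi(d / sqrt(2)) written via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))),
    # so Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2)).
    return 0.5 * (1 + erf(d / 2))

print(round(prob_superiority(0.0), 2))  # 0.5: no difference between means
print(round(prob_superiority(0.8), 2))  # Cohen's threshold for a large difference
```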
