Duncan Multiple Range Test Software Free Download


Kirby Apodaca

Jul 11, 2024, 9:22:08 AM
to boucomlohsco







The tests are performed in the following order: the largest minus the smallest, the largest minus the second smallest, up to the largest minus the second largest; then the second largest minus the smallest, the second largest minus the second smallest, and so on, finishing with the second smallest minus the smallest.
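The comparison order described above can be sketched as index pairs over the sorted means. This is an illustrative helper of my own, not from any particular package:

```python
# Generate Duncan's testing order over means sorted so that
# m[0] <= m[1] <= ... <= m[k-1].  Each pair (hi, lo) means
# "test m[hi] minus m[lo]".
def duncan_comparison_order(k):
    """Yield index pairs (hi, lo) in Duncan's testing order."""
    pairs = []
    for hi in range(k - 1, 0, -1):    # largest first, then second largest, ...
        for lo in range(0, hi):       # ... each against the smallest upward
            pairs.append((hi, lo))
    return pairs

print(duncan_comparison_order(4))
# For k = 4: [(3, 0), (3, 1), (3, 2), (2, 0), (2, 1), (1, 0)]
```

The last pair is the second smallest minus the smallest, matching the ordering in the text.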

With only one exception, given below, each difference is significant if it exceeds the corresponding shortest significant range; otherwise it is not significant. The shortest significant range is the significant studentized range multiplied by the standard error; it is designated R(p, α), where p is the number of means in the subset. The sole exception to this rule is that no difference between two means can be declared significant if the two means concerned are both contained in a subset of the means which has a non-significant range.
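As a sketch of how the shortest significant range R(p, α) above could be computed, assuming a balanced one-way design: the function name, the example numbers, and the use of SciPy's studentized range distribution are my own choices, not from the thread.

```python
# Duncan's shortest significant ranges R(p, alpha) for a balanced
# one-way ANOVA with k groups of n observations each.
import math
from scipy.stats import studentized_range

def duncan_ranges(k, n, mse, alpha=0.05):
    """Return {p: R(p, alpha)} for subset sizes p = 2..k.

    Duncan's protection level for a subset of p means is
    alpha_p = 1 - (1 - alpha)**(p - 1), so the critical studentized
    range is taken at that adjusted level rather than at alpha itself.
    """
    df = k * (n - 1)              # within-group degrees of freedom
    se = math.sqrt(mse / n)       # standard error of a group mean
    ranges = {}
    for p in range(2, k + 1):
        alpha_p = 1 - (1 - alpha) ** (p - 1)
        q = studentized_range.ppf(1 - alpha_p, p, df)
        ranges[p] = q * se
    return ranges

print(duncan_ranges(k=4, n=5, mse=2.0, alpha=0.05))
```

As the text notes, the significance level varies with the subset size p, which is why R grows (slowly) as p increases.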

Duncan's multiple range test uses the studentized range distribution to determine critical values for comparisons between means. Note that different comparisons between means may differ in their significance levels, since the significance level depends on the size of the subset of means in question.

Note that although this procedure makes use of the studentized range, its error rate is neither on an experiment-wise basis (as with Tukey's) nor on a per-comparison basis. Duncan's multiple range test does not control the family-wise error rate; see the Criticism section for further details.

Duncan (1965) also gave the first Bayesian multiple comparison procedure, for the pairwise comparisons among the means in a one-way layout. This procedure is different from the one discussed above.

Duncan's Bayesian MCP addresses the differences between ordered group means, where the statistics in question are pairwise comparisons (no equivalent is defined for the property of a subset being 'significantly different').

Duncan modeled the consequences of two or more means being equal using additive loss functions within and across the pairwise comparisons. If one assumes the same loss function across the pairwise comparisons, one needs to specify only one constant, K, which indicates the relative seriousness of type I to type II errors in each pairwise comparison.

A study by Juliet Popper Shaffer (1998) showed that the method proposed by Duncan, modified to provide weak control of the FWE and using an empirical estimate of the variance of the population means, has good properties both from the Bayesian point of view, as a minimum-risk method, and from the frequentist point of view, with good average power.

In addition, the results indicate considerable similarity in both risk and average power between Duncan's modified procedure and the false-discovery-rate-controlling procedure of Benjamini and Hochberg (1995), with the same weak family-wise error control.

Let us assume one is truly interested, as Duncan suggested, only in the correct ranking of subsets of size 4 or below. Let us also assume that one performs the simple pairwise comparison with a protection level γ2 = 0.95. Given an overall set of 100 means, consider the null hypotheses of the test.

Other possible solutions, which do not involve hypothesis testing but still result in a partition into subsets, include clustering and hierarchical clustering. These solutions differ from the approach presented in this method.

The test is named after David B. Duncan, who developed it in the 1950s. It is a widely used post-hoc test in experimental research, especially in agriculture, where it is used to compare the yields of different crops.

The test works by comparing the differences between the means of all possible pairs of groups. It calculates a critical value based on the number of groups and the number of observations in each group. If the difference between two means is greater than the critical value, then the means are considered significantly different from each other.

In the formula for the critical value, t is the t-distribution critical value, alpha is the significance level, dfw is the degrees of freedom within groups, MSE is the mean square error from the ANOVA, and n is the total number of observations.
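A minimal sketch using the variables listed above: the function name and example numbers are invented, and the t-based form shown here is the simplified, LSD-style critical value this paragraph describes. Duncan's test proper uses the studentized range instead of the plain t distribution, and the usual formula takes n as the per-group count.

```python
# LSD-style critical value built from t, alpha, dfw, MSE, and n.
# A difference between two group means exceeding this value is
# declared significant under this simplified scheme.
import math
from scipy.stats import t

def lsd_critical_value(alpha, dfw, mse, n_per_group):
    """t(1 - alpha/2, dfw) * sqrt(2 * MSE / n) for a balanced design."""
    return t.ppf(1 - alpha / 2, dfw) * math.sqrt(2 * mse / n_per_group)

cv = lsd_critical_value(alpha=0.05, dfw=12, mse=1.5, n_per_group=5)
print(round(cv, 3))
```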

The test is easy to perform and interpret, but it has some limitations. It assumes that the variances of the groups are equal and that the data are normally distributed. It also does not adjust for multiple comparisons, so the probability of a type I error increases with the number of comparisons made.

Does anybody know how to calculate a Duncan Multiple Comparison or have a macro to do so? I've been asked to look into it at work, but have had difficulty in finding anything about it (I've read it's not a well-used test, but it's the standard in my lab, and I don't have the say). I will go to the literature, but I wanted to drop a note to see if anybody has any macro that would help me. Thanks.

I just read up on the procedure. It looks like it would take some time to program. First you would need to use Excel to do the ANOVA and have your macro retrieve the appropriate sample means, SE, etc. Then you would need to be able to calculate Tukey's statistic for various parameters as you cycle through certain of your two-mean differences, keeping track of which were significant and which were not. I'm not aware of a free macro to do this, but if you want to develop one, I would be happy to comment as you proceed. Good luck.

Thanks for the offer Derk. I'll start ASAIC, and post what I develop. I'll probably start by simply asking for the stats that would be gotten from an ANOVA, as this is the basest application it will be needed for here in the office, then I think I may work in the ANOVA calculations themselves. The problem with my workplace is that we have a half-dozen programs each of which perform a handful of analyses. We enter the data in Excel and then export to these programs. Why not do it all in Excel you ask? Because I don't want to pay for it, and most of what we do doesn't have a macro anyway. So here I am, trying to update my lab to the Microsoft Age in computing. Anyway, thanks in advance for the help Derk.

Hi Kabong,
For an elementary intro to multiple comparisons that discusses the various approaches including Duncan, you might want to read through the HyperStat section starting with
_ANOVA.html

Kabong,
You may want to look at a commercial product. I have not used Costat and know nothing about it other than what's on its web site, but it reads well, appears to do what you need, works with Excel (imports from it, etc.), and has a free trial. The complete package is a little over $300. You can see a list of its features at

Hello again Derk. Yes, I'm still slowly plugging away at this code. It's been hard finding documentation on this topic because it's hardly ever used, which is frustrating. I've been using an equation that figures the mean square error for the ANOVA and uses that in an equation to calculate the Duncan range. I then compare this range to the difference in the means of the two groups being analyzed. This seems to be the correct way to do things, from what I've read. What's more, I don't think this will be too dastardly a code to write... at least not as bad as that damned Fisher Exact! Anyway, thanks for your continued attention, and I'll take a look at your suggestions. We may just end up going with a professional program.
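The workflow described above (MSE from the ANOVA, then a Duncan range compared against the difference of the two group means) can be sketched like this. The data are invented, the design is assumed balanced, and the "both means inside a non-significant subset" exception is deliberately left out to keep the sketch short:

```python
# Duncan's multiple range test, sketched end to end on made-up data.
import math
from itertools import combinations
from scipy.stats import studentized_range

groups = {"A": [4.1, 3.9, 4.4], "B": [5.0, 5.2, 4.8], "C": [6.1, 5.9, 6.3]}

k = len(groups)
n = len(next(iter(groups.values())))      # observations per group (balanced)
means = {g: sum(xs) / n for g, xs in groups.items()}

# Within-group (error) sum of squares and mean square from the one-way ANOVA.
sse = sum((x - means[g]) ** 2 for g, xs in groups.items() for x in xs)
df_w = k * (n - 1)
mse = sse / df_w
se = math.sqrt(mse / n)                   # standard error of a group mean

results = {}
ordered = sorted(means, key=means.get)    # group names, means increasing
for i, j in combinations(range(k), 2):
    lo, hi = ordered[i], ordered[j]
    p = j - i + 1                         # span of the ordered subset
    alpha_p = 1 - 0.95 ** (p - 1)         # Duncan's protection level
    r = studentized_range.ppf(1 - alpha_p, p, df_w) * se
    results[(hi, lo)] = means[hi] - means[lo] > r

for (hi, lo), sig in results.items():
    print(f"{hi} - {lo}: {'significant' if sig else 'not significant'}")
```

The same structure should carry over to a VBA macro: the only nontrivial piece is the studentized range quantile, which Excel lacks natively.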

Hi Kabong,
Here's a bunch of code that does everything but allow for versatile inputs and outputs (they are hard coded here). The inputs are names, counts, and averages for the different samples, the MSE for the data, and a value for the nominal alpha. The code runs 4 of the standard multiple comparisons, so toss the part that doesn't do Duncan if you want. The output area is hard coded as well. It's dimensioned for only 4 samples at the moment, but you can change that. It uses the function I posted above.

c = multcompare(stats) returns a matrix c of the pairwise comparison results from a multiple comparison test using the information contained in the stats structure. multcompare also displays an interactive graph of the estimates and comparison intervals. Each group mean is represented by a symbol, and the interval is represented by a line extending out from the symbol. Two group means are significantly different if their intervals are disjoint; they are not significantly different if their intervals overlap. If you use your mouse to select any group, then the graph will highlight all other groups that are significantly different, if any.

c = multcompare(stats,Name,Value) specifies options using one or more name-value arguments. For example, you can specify the confidence interval, or the type of critical value to use in the multiple comparison test.

[c,m] = multcompare(___) also returns a matrix, m, which contains estimated values of the means (or whatever statistics are being compared) for each group and the corresponding standard errors. You can use any of the previous syntaxes.

The small p-value (value in the column Prob>F) indicates that group mean differences are significant. However, the ANOVA results do not indicate which groups have different means. You can perform pairwise comparisons using a multiple comparison test to identify the groups that have significantly different means.

Specify CriticalValueType as "dunnett" to perform Dunnett's test. multcompare selects the first group (USA) as the control group by default. You can select a different control group by using the ControlGroup name-value argument.

In the figure, the blue circle indicates the mean of the control group. The red circles and bars represent the means and confidence intervals for the groups with significantly different means from the mean of the control group. Note that the red bars do not cross the dotted vertical line representing the mean of the control group. Groups that do not have significantly different means appear in grey.
