The problem could also be that they have LOST visible companions.
A pair of stars can be disrupted by gravitational interactions when
the pair is born, both of them becoming lone stars and leaving
no trace of their common origin...
See:
http://www.sciencedaily.com/releases/2011/09/110915083715.htm
or the scientific article:
http://arxiv.org/abs/1109.2896
I think that proving or disproving Robert's hypothesis could be VERY
difficult. The only way to know if he is right would be to weigh star
systems when they are just born...
> In article <mt2.0-13710...@hydra.herts.ac.uk>,
> eric gisse <jowr.pi...@gmail.com> wrote:
>>I have decided to take two hours away from dead island and do
>>something equally productive with my time: test the numerology of
>>Robert Oldershaw.
>
> To help people doing this sort of work: there's a standard way of
> saying whether a given number of standard deviations away from a model
> is significant, the chi^2 test. Eric's code almost does this already,
> but below[1] is a slightly modified version which computes the chi^2
> statistic, which is simply the sum of the squares of the deviations
> over the errors.
[...stuff...]
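(In symbols: for each star with measured mass m_i and error sigma_i,
take the nearest multiple mu_i of 0.145 M_sun and form
chi^2 = sum_i ((m_i - mu_i)/sigma_i)^2; that is what the loop in the
script below accumulates.)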
Irritating perl bug: The value of $residual is going to be negative a
significant portion of the time, but squaring the negative quantity
triggers some sort of overflow/insanity. Might need to do some
reporting/squashing after this.
After calculating the chi squared of the sample, I noticed I was getting
values north of 10^22. I found that to be odd.
It turns out that perl does not square a negative number correctly. So I
have to add yet another hackish/goofy computational protection in the
form of abs($residual) so the squaring doesn't go nutbar.
Now for the chi squared test, let's do that right.
Given most of my experience is with ye olde standard deviation and
associated distribution widgetry, my knowledge of the test has been
limited to "what I learned in school and forgot, and the residual notion
that a large chi squared means the result is crap".
Re-reading my statistical analysis textbook (Taylor, which I haven't
opened in about 2 years) and its discussion of this, it turns out that
this is a _far_ better test of a distribution hypothesis than what I
was doing. I was merely generalizing my previous behavior of 'take the
latest example and test whether it works', which is fine on the small
scale but does not generalize.
Now, you don't want just the regular chi-squared value. That isn't
useful. What is wanted is the reduced chi-squared value, which is chi-
squared divided by the number of degrees of freedom.
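(In symbols: chi^2_red = chi^2 / nu, with nu the number of degrees of
freedom; e.g. the first run below gives 35097 / 48 = 731.1875.)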
For this, the hypothesis is a binning of 0.145 solar masses. This
requires figuring out the number of bins, and can't be done beforehand
because the data set covers a wide range of masses with a wider range of
confidence.
Now this may be a point of contention, but I argue the degrees of
freedom is not the number of stars themselves but rather the number of
bins of 0.145 M_sun required to cover the mass range. This is actually
more generous to the testing, given that more degrees of freedom makes
it more likely that the hypothesis is true. Turns out to not matter
much, since the answer is "zero" for meaningful chunks of the data.
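(Worked through for the full sample below: rounding the 0.88 to 7.77
M_sun range out to multiples of 0.145 gives 0.87 and 7.83, and
(7.83 - 0.87)/0.145 = 48 bins, which is the 48 degrees of freedom
quoted in the output.)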
Knowing the probability that a chi squared this large would arise by
chance for a given number of degrees of freedom requires a computation
using the incomplete gamma function. Screw that, that is what CPAN is
for. The suggested module does that!
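In isolation, the call looks like this (a minimal sketch; the numbers
are just the ones from the first run below):

#!/usr/bin/perl
use strict;
use warnings;
use Statistics::Distributions;

# chisqrprob($n, $x) returns the upper-tail probability P(chi^2 > $x)
# for a chi-square distribution with $n degrees of freedom.
my ($dof, $chisq) = (48, 35097);
my $p = Statistics::Distributions::chisqrprob($dof, $chisq);
print "P(chi^2 > $chisq with $dof d.o.f.) = $p\n";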
Putting all this together, I have DOESSTUFF_mod2.pl
It turns out that the hypothesis is so wrong that it does not matter
what data quality cutoff I use.
BEEP BOOP...analyzing 17187 stars with masses determined to 100% or
better
Average standard deviation per star: 0.48
Average mass of star: 1.28 solar masses
Mass range of sample: 0.88 to 7.77 solar masses
Chi-squared of the expected binning hypothesis: 35097
Reduced chi-squared: 731.1875
The probability of a chi-squared of 35097 or larger arising
by chance with 48 degrees of freedom is 0.
BEEP BOOP...analyzing 185 stars with masses determined to 1% or better
Average standard deviation per star: 2.97
Average mass of star: 1.35 solar masses
Mass range of sample: 1.00 to 4.63 solar masses
Chi-squared of the expected binning hypothesis: 537
Reduced chi-squared: 21.48
The probability of a chi-squared of 537 or larger arising
by chance with 25 degrees of freedom is 0.
BEEP BOOP...analyzing 10 stars with masses determined to 0.6% or better
Average standard deviation per star: 2.99
Average mass of star: 2.07 solar masses
Mass range of sample: 1.71 to 4.63 solar masses
Chi-squared of the expected binning hypothesis: 26
Reduced chi-squared: 1.3
The probability of a chi-squared of 26 or larger arising
by chance with 20 degrees of freedom is 0.16581.
The only remarkable thing here is that there is a 4.63 M_sun star with
its mass determined to 0.6%!
Sorry Robert. The probability that your theory is right is competing
with the limits of floating point math.
--------------
#!/usr/bin/perl
use warnings;
use strict;
use Math::Round ':all';
use Statistics::Distributions;

# Columns of stars2.txt: [2] mass (M_sun), [3] mass error.
open(my $fh, '<', 'stars2.txt') or die "Can't open stars2.txt: $!";

my $cutoff = 1; my $percent = 100 * $cutoff;
my $totaldeviations = 0; my $count = 0; my $totalmass = 0;
my $chisq = 0; my @stars;

while (my $line = <$fh>) {
    my @data = split(' ', $line);
    my $mass = $data[2]; my $mass_error = $data[3];
    # Residual from the nearest multiple of 0.145 M_sun.
    my $multiple = round( nearest(0.145, $mass) / 0.145 );
    my $residual = $mass - ($multiple * 0.145);
    my $deviations = abs($residual / $mass_error);
    if ($mass_error / $mass <= $cutoff) {   # numeric <=, not string 'le'
        $totaldeviations += $deviations;
        $count++;
        $totalmass += $mass;
        # NOTE: the version originally posted had abs(...)^2 here. In perl
        # ^ is bitwise xor, not exponentiation (see downthread), which is
        # why the chi-squared figures quoted above differ from the
        # corrected run discussed later.
        $chisq += ($residual / $mass_error) ** 2;
        push (@stars, $mass);
    }
}
close $fh;

my @sorted = sort { $a <=> $b } @stars;   # numeric sort, not string sort
my $largest = $sorted[-1]; my $smallest = $sorted[0];
my $binning_largest = nearest_ceil(0.145, $largest);
my $binning_smallest = nearest_floor(0.145, $smallest);
my $degrees = round( ($binning_largest - $binning_smallest) / 0.145 );
my $reduced_chisq = $chisq / $degrees;
# Upper-tail probability P(chi^2 > $chisq) for $degrees degrees of freedom.
my $probability = Statistics::Distributions::chisqrprob($degrees, $chisq);
my $average_deviations = sprintf("%.2f", $totaldeviations / $count);
my $average_mass = sprintf("%.2f", $totalmass / $count);

print "BEEP BOOP...analyzing $count stars with masses determined to $percent% or better
Average standard deviation per star: $average_deviations
Average mass of star: $average_mass solar masses
Mass range of sample: $smallest to $largest solar masses
Chi-squared of the expected binning hypothesis: $chisq
Reduced chi-squared: $reduced_chisq
The probability of a chi-squared of $chisq or larger arising
by chance with $degrees degrees of freedom is $probability.
";
> On Sep 18, 5:43 am, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> print("Chi^2 is $chi2 for $count stars.\n");
>> printf("Probability under null hypothesis:
>> %g\n",Statistics::Distributions::chisqrprob($count,$chi2));
>> printf("(chi^2 required to rule out null hypothesis at (1-%g)
>> confidence level is
>> %g)\n",$confidence,Statistics::Distributions::chisqrdistr($count,$confidence));
>>
>> [2] I haven't used this module before and can only say that the
>> numbers it produces look entirely reasonable.
> --------------------------------------------------------------
>
> I am currently looking at the paper "Accurate masses and radii of
> normal stars: Modern results and applications" by Torres, Andersen and
> Gimenez.
>
> This catalog is available at VizieR, and was published in Astronomy &
> Astrophysics Review, vol. 18, 2010.
>
> It is also available at arxiv.org (search Torres, 2009, astro-ph)
>
> This sample is heavily weighted toward "massive" stars, but it might
> yield some interesting results in the 1 to 4 solar mass range for
> total system masses.
Why would you think it'd tell you anything we don't already know?
~17,000 stars, the majority within that mass range, explicitly
disprove your theory.
Another hundred won't change anything.
>
> An independent analysis of the data would be most welcome.
How fortunate for you that someone did your research for you. There's a
program right there for you to analyze the data with.
>
> RLO
> Fractal Cosmology
Not in the perl I use! (I did test the script I posted...) That would
be a very serious and basic bug if it were really present!
I think you are mistaking the operator ^ for the operator ** (see my
code). ^ in perl is not 'raise to power' but 'bitwise xor'. ** is the
exponentiation operator.
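A quick demonstration, plus the one real gotcha with ** worth knowing
about:

#!/usr/bin/perl
use strict;
use warnings;

print 3 ** 2, "\n";   # 9  : exponentiation
print 3 ^ 2, "\n";    # 1  : bitwise xor (0b11 xor 0b10 = 0b01)
my $r = -3;
print $r ** 2, "\n";  # 9  : squaring a negative variable is fine
print -3 ** 2, "\n";  # -9 : but ** binds tighter than unary minus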
>Now, you don't want just the regular chi-squared value. That isn't
>useful. What is wanted is the reduced chi-squared value, which is chi-
>squared divided by the number of degrees of freedom
Not so. The reduced chi^2 is a useful way of telling at first glance
whether there is something wrong with a model: a reduced chi^2 much
greater than 1 indicates a problem. However, the reduced chi^2 doesn't
tell you everything that you want to know, quantitatively -- a reduced
chi^2 of 2 for 2 degrees of freedom is very much less interesting than
a reduced chi^2 of 2 for several thousand degrees of freedom. So in
fact the standard thing to do is use the chi^2 value itself and
calculate (or look up) the critical value for a given number of
degrees of freedom. You can do the equivalent thing for a reduced
chi^2, so it's not a big deal, but chi^2/d.o.f. is what people quote
in the astrostatistics literature, so it's what I used.
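In code, with the same module (the degrees of freedom and confidence
level here are purely illustrative):

use Statistics::Distributions;

# Critical value: chi^2 must exceed this to reject the null hypothesis
# at the (1 - 0.01) = 99% confidence level with 185 degrees of freedom.
my $critical = Statistics::Distributions::chisqrdistr(185, 0.01);
print "Reject if chi^2 > $critical\n";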
>For this, the hypothesis is a binning of 0.145 solar masses. This
>requires figuring out the number of bins, and can't be done beforehand
>because the data set covers a wide range of masses with a wider range of
>confidence.
>
>Now this may be a point of contention, but I argue the degrees of
>freedom is not the number of stars themselves but rather the number of
>bins of 0.145 M_sun required to cover the mass range. This is actually
>more generous to the testing, given that more degrees of freedom makes
>it more likely that the hypothesis is true. Turns out to not matter
>much, since the answer is "zero" for meaningful chunks of the data.
I don't think this is right. The degrees of freedom is the number of
data points, minus the number of free parameters of the model (none in
this case): see e.g.
http://en.wikipedia.org/wiki/Chi-square_statistic. So I don't think
your numbers are correct (in particular, it would seem very bizarre
if, as you suggest, including stars with larger errors caused the
model to be ruled out more stringently... in fact, if you include all
the stars with huge errors *and* calculate the chi^2 correctly, you
should find an acceptable fit, but that's only because you'd be
diluting the stars that can actually constrain the model with the many
more that can't).
However, the key point is that this test can be done, and, when it's
done with stars with accurately measured masses, it is inconsistent
with the proposed model at a very high confidence level, as I said
earlier.
Martin
As these are non-contact binaries and the authors say they should have
evolved independently, I did the chi^2 test as described in the
earlier posting for the individual stars with errors less than 0.145
solar masses: chi^2 of 16085 for 172 degrees of freedom, null
hypothesis ruled out at such a large confidence level that I can't
calculate it offhand, but basically such that the model can't possibly
be right.
If I add up the two components and take only the systems where the
combined error on mass is less than 0.145 solar masses, I get a chi^2
of 1154 for 82 d.o.f., again wildly inconsistent with the model.
So, again, the data are not telling you what you would like them to
tell you. It took me about ten minutes to find the data you referred
to, get them into the right format, modify and run my code, and do the
modifications needed to run it again on the sums of the masses.
Testing models, when they make quantitative predictions, is easy, and
it's a skill that any would-be-modeller ought to learn. The half-hour
or so I've spent on this today is enough for me, though.
> In article <mt2.0-20923...@hydra.herts.ac.uk>,
> eric gisse <jowr.pi...@gmail.com> wrote:
>>Irritating perl bug: The value of $residual is going to be negative a
>>significant portion of the time, but squaring the negative quantity
>>triggers some sort of overflow/insanity. Might need to do some
>>reporting/squashing after this.
>
> Not in the perl I use! (I did test the script I posted...) That would
> be a very serious and basic bug if it were really present!
Yeah. That's what was throwing me.
>
> I think you are mistaking the operator ^ for the operator ** (see my
> code). ^ in perl is not 'raise to power' but 'bitwise xor'. ** is the
> exponentiation operator.
That'll do it. Major bug, but easily fixed.
This is what happens when you use too many devices and languages that
use different symbols for the same things.
>
>>Now, you don't want just the regular chi-squared value. That isn't
>>useful. What is wanted is the reduced chi-squared value, which is chi-
>>squared divided by the number of degrees of freedom
>
> Not so. The reduced chi^2 is a useful way of telling at first glance
> whether there is something wrong with a model: a reduced chi^2 much
> greater than 1 indicates a problem. However, the reduced chi^2 doesn't
> tell you everything that you want to know, quantitatively -- a reduced
> chi^2 of 2 for 2 degrees of freedom is very much less interesting than
> a reduced chi^2 of 2 for several thousand degrees of freedom. So in
> fact the standard thing to do is use the chi^2 value itself and
> calculate (or look up) the critical value for a given number of
> degrees of freedom. You can do the equivalent thing for a reduced
> chi^2, so it's not a big deal, but chi^2/d.o.f. is what people quote
> in the astrostatistics literature, so it's what I used.
I'm somewhat confused, as what you said reinforces my point. The
reduced chi squared, in my understanding, takes the differences in
degrees of freedom into account.
As for the calculation, I see two different things going on:
In Taylor, the gamma integral is taken from the reduced chi squared to
infinity, while in my 31st edition CRC the same integral is done from 0
to chi squared. (Presumably the two are just complements: the upper
tail P(X > x) is 1 minus the cumulative P(X <= x).) I wonder if I'm
just confused or there's no standard in the literature.
But as you say this is all (or should be) the same thing, so I am not
abundantly concerned.
>
>>For this, the hypothesis is a binning of 0.145 solar masses. This
>>requires figuring out the number of bins, and can't be done beforehand
>>because the data set covers a wide range of masses with a wider range
>>of confidence.
>>
>>Now this may be a point of contention, but I argue the degrees of
>>freedom is not the number of stars themselves but rather the number
>>of bins of 0.145 M_sun required to cover the mass range. This is
>>actually more generous to the testing, given that more degrees of
>>freedom makes it more likely that the hypothesis is true. Turns out
>>to not matter much, since the answer is "zero" for meaningful chunks
>>of the data.
>
> I don't think this is right. The degrees of freedom is the number of
> data points, minus the number of free parameters of the model (none in
> this case): see e.g.
> http://en.wikipedia.org/wiki/Chi-square_statistic.
I saw that; it was in fact the first thing I looked at before I
remembered "I have books on this stuff! Books that cost me money!"
>So I don't think
> your numbers are correct (in particular, it would seem very bizarre
> if, as you suggest, including stars with larger errors caused the
> model to be ruled out more stringently...
Well, I was running through the calculations with the exponentiation
done incorrectly the whole time. Either way, I do not get an answer
distinguishable from computational zero until I reduce the sample size
to something like ten stars, which is bad regardless.
The calculated probability for 185 stars (the 1%-or-better sample) is
indistinguishable from zero after fixing that and using both the number
of bins as well as the number of stars themselves as degrees of freedom.
>in fact, if you include all
> the stars with huge errors *and* calculate the chi^2 correctly, you
> should find an acceptable fit, but that's only because you'd be
> diluting the stars that can actually constrain the model with the many
> more that can't).
True, but the hypothesis is so wrong that the calculated probability is
computationally equal to zero until I work with a sample size on the
order of 10 stars.
The reason I argue that more degrees of freedom is more generous is
that it becomes more likely that the chi squared vs. reduced chi
squared comparison does not reject the hypothesis being tested.
Take a chi squared of 10, for example: ten degrees of freedom versus
one degree of freedom; which one gives the more probable result?
You can see this in a chi squared integral table: as you increase the
degrees of freedom, the probability goes up.
Or, going by actual data: the chi squared of the 1% set is 2460.
If I use the bins of 0.145 M_sun as the degrees of freedom, I have 25
degrees of freedom; the reduced chi squared is 98.4.
If I use the actual number of stars, I have 185 degrees of freedom;
the reduced chi squared is 13.3.
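Running both choices through the CPAN module (a quick sketch, using
that corrected chi squared of 2460) shows it makes no practical
difference; both probabilities come out as computational zero:

use Statistics::Distributions;

my $chisq = 2460;        # 1%-or-better sample, corrected run
for my $dof (25, 185) {  # bins vs. number of stars
    my $p = Statistics::Distributions::chisqrprob($dof, $chisq);
    printf "%3d d.o.f.: reduced chi^2 = %5.1f, P = %g\n",
           $dof, $chisq / $dof, $p;
}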
>
> However, the key point is that this test can be done, and, when it's
> done with stars with accurately measured masses, it is inconsistent
> with the proposed model at a very high confidence level, as I said
> earlier.
>
> Martin
Yep.
There is something that should be borne in mind: the Sandage - de
Vaucouleurs dust-up.
If you will recall, the two protagonists battled long and hard over
the value of the Hubble constant. Sandage insisted upon 50 km/sec/Mpc,
while de Vaucouleurs insisted upon 100 km/sec/Mpc. The battle raged on
for many years.
Both camps had the same observational data to work with.
Both camps had the same statistical methods to work with.
Both camps included the best astrophysicists of the time.
Both camps insisted that they were obviously right.
Both camps insisted that the other side was wrong.
If things can be unambiguously decided with some data and some
statistical analysis,
HOW CAN THIS BE POSSIBLE?!?
There is a very important lesson here about a very common problem in
physics: often wrong - never in doubt.
Who was right? NEITHER, apparently. We now think that <H> ~ 70
km/sec/Mpc.
Bottom lines: Be careful about what you say you can rule out.
And be very careful about what you say you are SURE is right.
Sorry for the EMPHASIS (perhaps I miss PH).
> On Sep 18, 6:02 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> However, the key point is that this test can be done, and, when it's
>> done with stars with accurately measured masses, it is inconsistent
>> with the proposed model at a very high confidence level, as I said
>> earlier.
> --------------------------------------------------------------
>
> There is something that should be borne in mind: the Sandage - de
> Vaucouleurs dust-up.
>
> If you will recall, the two protagonists battled long and hard over
> the value of the Hubble constant. Sandage insisted upon 50 km/sec/Mpc,
> while de Vaucouleurs insisted upon 100 km/sec/Mpc. The battle raged
> on for many years.
>
> Both camps had the same observational data to work with.
> Both camps had the same statistical methods to work with.
> Both camps included the best astrophysicists of the time.
> Both camps insisted that they were obviously right.
> Both camps insisted that the other side was wrong.
>
> If things can be unambiguously decided with some data and some
> statistical analysis,
> HOW CAN THIS BE POSSIBLE?!?
Models differ.
Fundamental disagreement over fundamental physics and assumptions.
Insufficient data.
Systematic errors in measurements.
I have no idea how many of the above were true for the kerfuffle over
the Hubble constant, nor do I care because it is completely irrelevant
to the topic at hand.
>
> There is a very important lesson here about a very common problem in
> physics: often wrong - never in doubt.
>
> Who was right? NEITHER, apparently. We now think that <H> ~ 70
> km/sec/Mpc.
An answer that is now verified through several independent methods, all
with high quality data.
>
> Bottom lines: Be careful about what you say you can rule out.
> And be very careful about what you say you are SURE is right.
Your theory has been ruled out at a confidence level so high that the
chance you are right is computationally equal to zero.
You have been going on and on about this on USENET since 1995:
http://groups.google.com/group/sci.astro.research/msg/f489046a4e1ba66c?dmode=source
And you apparently caught an ApJ editor in a moment of weakness, so
there's a publication from 1987 about it too.
In a few hours of work using data published more than a decade ago, your
theory was excluded.
Are you going to move on?
What of the thousands and thousands of stars that prove your theory wrong?
Are you going to keep posting example after example that agrees with you,
while ignoring the thousands that don't?
Are you even going to directly respond to my analysis?
> formation. They can find a good fit to the field white dwarf mass
> function and reproduce the broad peak near 0.6 Msun, and the long tail
> toward higher white dwarf masses.
----------------------------------------------------------------------------
But I think that even after all the above discussion, you have to
grant that there are distinct peaks indistinguishable from the
predicted peaks at 0.435 solar masses and 0.580 solar masses in the
referenced Tremblay et al. graph.
Nobody ever predicted that peak at 0.435 solar masses, or explained
it, until Discrete Scale Relativity came along.
Now go to http://www3.amherst.edu/~rloldershaw , click on "Stellar
Scale Discreteness?"
There you will find SEVEN SAMPLES of white dwarf stars, planetary
nebula nuclei and main sequence stars that all show indications of the
quantization predicted by Discrete Scale Relativity.
I am not saying that the data on my website, or the huge white dwarf
data sample in Tremblay et al., or all the systems I have brought to
people's attention in the last 2 weeks, prove the predicted DSR
quantization.
What I am saying is that there is good empirical evidence that
supports the prediction and argues compellingly for astrophysicists to
keep an open mind on this issue. Nature will eventually yield the
necessary evidence for a definitive answer.
Those who say the matter was settled long ago, or recently, seem much
too sure of themselves and much too dismissive of the relevant
uncertainties that undercut their beliefs.
RLO
Discrete Fractal Cosmology