
A definitive test of discrete scale (relativity, numerology)


eric gisse

Sep 18, 2011, 3:23:27 AM
I have decided to take two hours away from Dead Island and do something
equally productive with my time: test the numerology of Robert
Oldershaw.

Exhibit "A" is the VizieR catalog "J/A+A/352/555/table1", sourced from
the Hipparcos catalog [1]. I am sure Robert is working on this very
thing, as he has had more than a decade to find this publication, and
about two weeks to find this catalog after I first gave it to him. I
have only given myself two hours because Dead Island is fun and I only
need a break rather than a sabbatical.

The quality of the mass data is rather impressive. Statistics:

12,090 stars known to 10% or better.
3028 stars known to 5% or better.
185 stars known to 1% or better.
30 with zero error :P
641 stars known to 0.02 M_sun or better.

Procedure:

* Yoink the data off VizieR.
* Export to ASCII, keeping only the meaningful/relevant parameters:
mass and the uncertainty of the measurement.
* Grep out invalid entries.

I personally find zero error unlikely. Maybe those entries are good,
maybe not; I don't feel like making a big effort for 0.17% of the
catalog.

* Take the mass of each star and subtract off the nearest integer
multiple of 0.145 M_sun. A computational step, not a physics step.

What survives is the absolute difference from the predicted
quantization, and the error in the measurement.
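For concreteness, the subtraction step can be sketched without the
catalog (the mass value below is made up for illustration; the real
script uses Math::Round, this sketch sticks to POSIX):

```perl
#!/usr/bin/perl
use warnings;
use strict;
use POSIX 'floor';

my $q    = 0.145;                         # predicted mass quantum, M_sun
my $mass = 1.02;                          # hypothetical stellar mass, M_sun
my $multiple = floor($mass / $q + 0.5);   # nearest integer multiple of q
my $residual = $mass - $multiple * $q;    # what survives: distance to the bin
printf "multiple=%d residual=%.3f\n", $multiple, $residual;
# prints: multiple=7 residual=0.005
```

The real run then divides that residual by the catalog's mass error to
get a deviation in sigma.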

I tried plotting out the 1% data with gnuplot, and even that small block
of quality data is visually _meaningless_. Overlapping error bars make
it impossible to get anything of value out of the data that way.

So I might as well go all the way and use all the stars whose masses
are determined to 5% or better. Five percent is about as loose a cut as
you can make: the average mass of a star in this catalog is 1.3 M_sun,
and I'd like the results to be meaningful to at least one standard
deviation.

Results of DOES_STUFF.PL [2]:

3028 stars with masses determined to 5% or better:
37 exactly as predicted
1938 within 1 standard deviation
630 off by 1 to 2 standard deviations
215 off by 2 to 3 standard deviations
115 off by 3 to 4 standard deviations
40 off by 4 to 5 standard deviations
53 off by more than 5 standard deviations
Average standard deviation per star: 1.09
Average mass of star: 1.43 solar masses

Seeing that about half the data is wrong by a standard deviation or more
isn't all that telling given that the binning being tested for is only
twice the average error of the sample. Let's go deeper.
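The sigma-binning above amounts to the following (the residual and
error values are made up for illustration; exact matches are counted
separately in the real script):

```perl
#!/usr/bin/perl
use warnings;
use strict;

my ($residual, $mass_error) = (0.031, 0.012);    # hypothetical star, M_sun
my $deviations = abs($residual) / $mass_error;   # ~2.58 sigma off the bin
my $bin = $deviations > 5 ? "more than 5"
        : int($deviations) . " to " . (int($deviations) + 1);
print "off by $bin standard deviations\n";
# prints: off by 2 to 3 standard deviations
```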

On a related note, Robert has frequently complained that rand("he can't
find", "there doesn't exist", "he hasn't bothered to look for") a
dataset that has masses determined to 1% or better. Negative twelve
years after making that complaint, science has finally come through for
him!

Re-run with $cutoff = 0.01:

185 stars with masses determined to 1% or better:
3 exactly as predicted
42 within 1 standard deviation
36 off by 1 to 2 standard deviations
28 off by 2 to 3 standard deviations
23 off by 3 to 4 standard deviations
16 off by 4 to 5 standard deviations
37 off by more than 5 standard deviations
Average standard deviation per star: 2.97
Average mass of star: 1.35 solar masses

All the subtleties of averaging a sample aside, I think it is pretty
safe to say Robert's numerology is wrong when 75% of a precisely
determined sample of relevant data sits more than one standard
deviation from the predicted binning, and when the _average_ deviation
per star is 3 sigma. Sure, there could be a corner case or twenty in
there jacking up the average by a large amount, but I argue that
tearing apart the data is an exercise for the reader at this point.

None of this includes other data sets, such as the smaller samples
built from eclipsing binaries, whose masses are determined to fractions
of a percent, or the Sun, which has already been shown to disagree with
Robert's numerology by 100 standard deviations and change.

The catalog seems to cut out at about 0.88 solar masses on the low
side, so low-mass stars aren't really present here. There are other
catalogs of low-mass stars that Robert can ignore, so it isn't really
an issue.

Note that only 37 out of 3,028 stars are binned exactly as predicted.
With a number that small, I doubt I will hear a sarcastic rendition of
my mathematical point about the guaranteed existence of some chance
matches.

I figure taking 2 hours, wasting half of it re-learning gnuplot and
realizing that there's no meaningful way to visualize this data, and
spending the other half doing something useful with it, is a better
investment of my time than repeatedly saying "HEY! HERE'S THE DATA! STOP
CRYING AND BE THE SCIENTIST YOU CLAIM TO BE."

I am going back to Dead Island so I can light undead minorities and
Australians on fire. [3]

----

[1] : "Fundamental parameters of nearby stars from the comparison with
evolutionary calculations: masses, radii and effective temperatures.",
Allende Prieto C., Lambert D.L., Astron. Astrophys. 352, 555 (1999)

[2] :

#!/usr/bin/perl

use warnings;
use strict;
use Math::Round ':all';

open(my $data, '<', 'stars2.txt') or die "can't open stars2.txt: $!";
my $cutoff = 0.01;
my $percent = 100 * $cutoff;
my ($zero, $one, $two, $three, $four, $five, $morethanfive) = (0) x 7;
my ($totaldeviations, $count, $totalmass) = (0, 0, 0);

while (my $line = <$data>) {
    my @data = split(' ', $line);
    my ($mass, $mass_error) = @data[2, 3];
    next unless $mass_error / $mass <= $cutoff;   # numeric, not string, comparison

    # residual from the nearest integer multiple of 0.145 M_sun
    my $multiple = round( nearest(0.145, $mass) / 0.145 );
    my $residual = $mass - ($multiple * 0.145);
    my $deviations = abs($residual / $mass_error);

    $totaldeviations += $deviations;
    $count++;
    $totalmass += $mass;

    if    ($deviations == 0) { $zero++; }
    elsif ($deviations <= 1) { $one++; }
    elsif ($deviations <= 2) { $two++; }
    elsif ($deviations <= 3) { $three++; }
    elsif ($deviations <= 4) { $four++; }
    elsif ($deviations <= 5) { $five++; }
    else                     { $morethanfive++; }
}

my $average_deviations = sprintf("%.2f", $totaldeviations / $count);
my $average_mass = sprintf("%.2f", $totalmass / $count);
print "$count stars with masses determined to $percent% or better:
$zero exactly as predicted
$one within 1 standard deviation
$two off by 1 to 2 standard deviations
$three off by 2 to 3 standard deviations
$four off by 3 to 4 standard deviations
$five off by 4 to 5 standard deviations
$morethanfive off by more than 5 standard deviations
Average standard deviation per star: $average_deviations
Average mass of star: $average_mass solar masses
";

[3] : Fire is the cleanser.

Martin Hardcastle

Sep 18, 2011, 5:43:27 AM
In article <mt2.0-13710...@hydra.herts.ac.uk>,
eric gisse <jowr.pi...@gmail.com> wrote:
>I have decided to take two hours away from dead island and do something
>equally productive with my time: test the numerology of Robert
>Oldershaw.

To help people doing this sort of work: there's a standard way of
saying whether a given number of standard deviations away from a model
is significant, the chi^2 test. Eric's code almost does this already,
but below[1] is a slightly modified version which computes the chi^2
statistic, which is simply the sum of the squares of the deviations
over the errors. The higher chi^2 is, the worse the model agrees with
the data. The advantage of doing that is that the probability of
obtaining a value of chi^2 as extreme, or more extreme than the one
that you actually see under the 'null hypothesis' that the data are
actually consistent with the model can be computed: if this
probability p is low, then we say that the model is ruled out at the
(1-p) confidence level. Various online calculators or the Perl module
Statistics::Distributions can be used to do this[2]. Thus it's possible
to show, for example, that taking the stars with 5% or better errors
on masses, the model is ruled out at the 99.9999999999% confidence
level. Similar tests could be done with other databases.

[1]
#!/usr/bin/perl

use warnings;
use strict;
use Math::Round ':all';
use Statistics::Distributions;

open(my $data, '<', 'stars2.txt') or die "can't open stars2.txt: $!";
my $chi2 = 0;
my $count = 0;
my $cutoff = 0.05;
my $confidence = 1e-12;

while (my $line = <$data>) {
    # $line just contains mass and error
    my ($mass, $mass_error) = split(' ', $line);
    if ($mass_error / $mass <= $cutoff) {
        my $multiple = round( nearest(0.145, $mass) / 0.145 );
        my $residual = $mass - ($multiple * 0.145);
        my $deviation = $residual / $mass_error;
        $chi2 += $deviation ** 2;
        $count++;
    }
}

print("Chi^2 is $chi2 for $count stars.\n");
printf("Probability under null hypothesis: %g\n",
       Statistics::Distributions::chisqrprob($count, $chi2));
printf("(chi^2 required to rule out null hypothesis at (1-%g)\n confidence level is %g)\n",
       $confidence, Statistics::Distributions::chisqrdistr($count, $confidence));

[2] I haven't used this module before and can only say that the
numbers it produces look entirely reasonable.


--
Martin Hardcastle
School of Physics, Astronomy and Mathematics, University of Hertfordshire, UK
Please replace the xxx.xxx.xxx in the header with herts.ac.uk to mail me

jacob navia

Sep 18, 2011, 12:53:11 PM
The problem is that all those stars could have invisible companions.

That is why I looked at the Alpha Centauri system: I thought that for
such a close star that problem would be solved...

As I see it, this prediction is impossible to really verify. As far as
the data Eric shows go, the answer is negative: there is no quantum of
mass when star systems are measured.

Is that the final truth?

We will know when we know for sure the exact characteristics of
thousands of systems. (And I mean "exact", i.e. when we can rule out
any unseen companions, we know the masses of all the planets, etc.)

Robert L. Oldershaw

Sep 18, 2011, 12:55:18 PM
On Sep 18, 5:43 am, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
wrote:
>
> print("Chi^2 is $chi2 for $count stars.\n");
> printf("Probability under null hypothesis: %g\n",Statistics::Distributions::chisqrprob($count,$chi2));
> printf("(chi^2 required to rule out null hypothesis at (1-%g)\n     confidence level is %g)\n",$confidence,Statistics::Distributions::chisqrdistr($count,$confidence));
>
> [2] I haven't used this module before and can only say that the
> numbers it produces look entirely reasonable.
--------------------------------------------------------------------------------------------------------------

I am currently looking at the paper "Accurate masses and radii of
normal stars: Modern results and applications" by Torres, Andersen and
Gimenez.

This catalog is available at VizieR, and was published in Astronomy &
Astrophysics Review, vol. 18, 2010.

It is also available at arxiv.org (search Torres, 2009, astro-ph)

This sample is heavily weighted toward "massive" stars, but it might
yield some interesting results in the 1 to 4 solar mass range for
total system masses.

An independent analysis of the data would be most welcome.

RLO
Fractal Cosmology

jacob navia

Sep 18, 2011, 5:25:20 PM
Le 18/09/11 18:53, jacob navia a écrit :

> The problem is that all those stars could have invisible companions.
>

And the problem could also be that they have LOST visible companions.
A pair of stars can be disrupted by gravitational interactions when
the pair is born, both of them becoming lone stars and leaving
no trace of their common origin...

See:
http://www.sciencedaily.com/releases/2011/09/110915083715.htm
or the scientific article:
http://arxiv.org/abs/1109.2896

I think that proving or disproving Robert's hypothesis could be VERY
difficult. The only way to know if he is right would be to weigh star
systems when they are just born...

eric gisse

Sep 18, 2011, 5:28:29 PM
Martin Hardcastle <m.j.har...@xxx.xxx.xxx> wrote in
news:mt2.0-31008...@hydra.herts.ac.uk:

> In article <mt2.0-13710...@hydra.herts.ac.uk>,
> eric gisse <jowr.pi...@gmail.com> wrote:
>>I have decided to take two hours away from dead island and do
>>something equally productive with my time: test the numerology of
>>Robert Oldershaw.
>
> To help people doing this sort of work: there's a standard way of
> saying whether a given number of standard deviations away from a model
> is significant, the chi^2 test. Eric's code almost does this already,
> but below[1] is a slightly modified version which computes the chi^2
> statistic, which is simply the sum of the squares of the deviations
> over the errors.

[...stuff...]

Irritating perl surprise: the value of $residual is negative a
significant portion of the time, and squaring the negative quantity
triggers some sort of overflow/insanity.

After calculating the chi squared of the sample, I noticed I was
getting values north of 10^22. I found that to be odd. It looked like
perl does not square a negative number correctly, so I had to add yet
another hackish/goofy computational protection in the form of
abs($residual) so the squaring doesn't go nutbar.

Now for the chi squared test, let's do that right.

Given most of my experience is with ye olde standard deviation and
associated distribution widgetry, my knowledge of the test has been
limited to "what I learned in school and forgot, and the residual notion
that a large chi squared means the result is crap".

Re-reading my statistical analysis textbook (Taylor; I haven't opened
it in about 2 years) and its discussion of this, it turns out that this
is a _far_ better test of a distribution hypothesis than what I was
doing. I was merely generalizing my previous habit of taking the latest
example and testing whether it works, which is fine on the small scale
but does not generalize.

Now, you don't want just the regular chi-squared value. That isn't
useful. What is wanted is the reduced chi-squared value, which is chi-
squared divided by the number of degrees of freedom.

For this, the hypothesis is a binning of 0.145 solar masses. This
requires figuring out the number of bins, which can't be done
beforehand because the data set covers a wide range of masses with a
wider range of confidence.

Now this may be a point of contention, but I argue the degrees of
freedom is not the number of stars themselves but rather the number of
bins of 0.145 M_sun required to cover the mass range. This is actually
more generous to the hypothesis, given that more degrees of freedom
make it more likely that the hypothesis survives. It turns out not to
matter much, since the answer is "zero" for meaningful chunks of the
data.
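That bin count is just the span of the sample measured in units of
0.145 M_sun; a sketch with assumed bin edges (the real script derives
them from the data with nearest_ceil/nearest_floor):

```perl
#!/usr/bin/perl
use warnings;
use strict;

my $q = 0.145;                   # bin width, M_sun
my ($lo, $hi) = (0.87, 4.64);    # assumed bin edges bracketing the sample
my $degrees = ($hi - $lo) / $q;  # number of bins spanned
printf "%.0f degrees of freedom\n", $degrees;
# prints: 26 degrees of freedom
```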

Knowing the probability of obtaining a chi squared this large for a
given number of degrees of freedom requires a computation involving the
incomplete gamma function. Screw that, that is what CPAN is for. The
suggested module does it!

Putting all this together, I have DOESSTUFF_mod2.pl

It turns out that the hypothesis is so wrong that it does not matter
what data quality cutoff I use.

BEEP BOOP...analyzing 17187 stars with masses determined to 100% or
better

Average standard deviation per star: 0.48
Average mass of star: 1.28 solar masses
Mass range of sample: 0.88 to 7.77 solar masses
Chi-squared of the expected binning hypothesis: 35097
Reduced chi-squared: 731.1875
The probability of a chi-squared value
larger than 35097
for 48 degrees of freedom is 0.

BEEP BOOP...analyzing 185 stars with masses determined to 1% or better

Average standard deviation per star: 2.97
Average mass of star: 1.35 solar masses

Mass range of sample: 1.00 to 4.63 solar masses
Chi-squared of the expected binning hypothesis: 537
Reduced chi-squared: 21.48
The probability of a chi-squared value
larger than 537
for 25 degrees of freedom is 0.

BEEP BOOP...analyzing 10 stars with masses determined to 0.6% or better

Average standard deviation per star: 2.99
Average mass of star: 2.07 solar masses

Mass range of sample: 1.71 to 4.63 solar masses
Chi-squared of the expected binning hypothesis: 26
Reduced chi-squared: 1.3
The probability of a chi-squared value
larger than 26
for 20 degrees of freedom is 0.16581.

The only remarkable thing here is that there is a 4.63 M_sun star
determined to 0.6%!

Sorry Robert. The probability that your theory is right is competing
with the limits of floating point math.

--------------

#!/usr/bin/perl

use warnings;
use strict;
use Math::Round ':all';
use Statistics::Distributions;

open(my $data, '<', 'stars2.txt') or die "can't open stars2.txt: $!";
my $cutoff = 1;
my $percent = 100 * $cutoff;
my ($totaldeviations, $count, $totalmass, $chisq) = (0, 0, 0, 0);
my @stars;

while (my $line = <$data>) {
    my @data = split(' ', $line);
    my ($mass, $mass_error) = @data[2, 3];
    next unless $mass_error / $mass <= $cutoff;

    my $multiple = round( nearest(0.145, $mass) / 0.145 );
    my $residual = $mass - ($multiple * 0.145);
    my $deviations = abs($residual / $mass_error);

    $totaldeviations += $deviations;
    $count++;
    $totalmass += $mass;
    $chisq += abs($residual / $mass_error)^2;   # NB: ^ here is bitwise XOR, not a square
    push(@stars, $mass);
}

my @sorted = sort { $a <=> $b } @stars;   # numeric sort, not string sort
my ($smallest, $largest) = @sorted[0, -1];
my $binning_largest = nearest_ceil(0.145, $largest);
my $binning_smallest = nearest_floor(0.145, $smallest);
my $degrees = ($binning_largest - $binning_smallest) / 0.145;
my $reduced_chisq = $chisq / $degrees;
my $probability = Statistics::Distributions::chisqrprob($degrees, $chisq);

my $average_deviations = sprintf("%.2f", $totaldeviations / $count);
my $average_mass = sprintf("%.2f", $totalmass / $count);

print "BEEP BOOP...analyzing $count stars with masses determined to
$percent% or better

Average standard deviation per star: $average_deviations
Average mass of star: $average_mass solar masses
Mass range of sample: $smallest to $largest solar masses
Chi-squared of the expected binning hypothesis: $chisq
Reduced chi-squared: $reduced_chisq
The probability of a chi-squared value
larger than $chisq
for $degrees degrees of freedom is $probability.
";

eric gisse

Sep 18, 2011, 6:00:28 PM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-19368...@hydra.herts.ac.uk:

> On Sep 18, 5:43 am, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> print("Chi^2 is $chi2 for $count stars.\n");
>> printf("Probability under null hypothesis:
>> %g\n",Statistics::Distributions::chisqrprob($count,$chi2));
>> printf("(chi^2 required to rule out null hypothesis at (1-%g)\n
>> confidence level is
>> %g)\n",$confidence,Statistics::Distributions::chisqrdistr($count,$confidence));
>>
>> [2] I haven't used this module before and can only say that the
>> numbers it produces look entirely reasonable.
> ----------------------------------------------------------------------
>
> I am currently looking at the paper "Accurate masses and radii of
> normal stars: Modern results and applications" by Torres, Andersen and
> Gimenez.
>
> This catalog is available at VizieR, and published Astronomy &
> Astrophysics Review, vol. 18, 2010.
>
> It is also available at arxiv.org (search Torres, 2009, astro-ph)
>
> This sample is heavily weighted toward "massive" stars, but it might
> yield some interesting results in the 1 to 4 solar mass range for
> total system masses.

Why would you think it'd tell you anything we don't already know?
~17,000 stars, the majority within that mass range, explicitly
disprove your theory.

Another hundred won't change anything.

>
> An independent analysis of the data would be most welcome.

How fortunate for you that someone did your research for you. There's a
program right there for you to analyze the data with.

>
> RLO
> Fractal Cosmology

eric gisse

Sep 18, 2011, 6:01:30 PM
jacob navia <ja...@spamsink.net> wrote in
news:mt2.0-20923-1316381120@hydra.herts.ac.uk:

> Le 18/09/11 18:53, jacob navia a écrit :
The 12k-star sample has its masses determined spectroscopically.
Besides, Robert has argued that individual stars are quantized in mass
in addition to the systems themselves.

Yes, really.

Not my theory, I don't have to justify it.

Martin Hardcastle

Sep 18, 2011, 6:02:33 PM
In article <mt2.0-20923...@hydra.herts.ac.uk>,

eric gisse <jowr.pi...@gmail.com> wrote:
>Irritating perl bug: The value of $residual is going to be negative a
>significant portion of the time, but squaring the negative quantity
>triggers some sort of overflow/insanity. Might need to do some
>reporting/squashing after this.

Not in the perl I use! (I did test the script I posted...) That would
be a very serious and basic bug if it were really present!

I think you are mistaking the operator ^ for the operator ** (see my
code). ^ in perl is not 'raise to power' but 'bitwise xor'. ** is the
exponentiation operator.
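A quick demonstration of the two operators:

```perl
#!/usr/bin/perl
use warnings;
use strict;

print 3 ** 2, "\n";           # 9: exponentiation
print 3 ^ 2, "\n";            # 1: bitwise XOR of 0b11 and 0b10, not a square

my $residual = -0.3;
print $residual ** 2, "\n";   # 0.09: ** squares a negative value just fine
```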

>Now, you don't want just the regular chi-squared value. That isn't
>useful. What is wanted is the reduced chi-squared value, which is chi-
>squared divided by the number of degrees of freedom

Not so. The reduced chi^2 is a useful way of telling at first glance
whether there is something wrong with a model: a reduced chi^2 much
greater than 1 indicates a problem. However, the reduced chi^2 doesn't
tell you everything that you want to know, quantitatively -- a reduced
chi^2 of 2 for 2 degrees of freedom is very much less interesting than
a reduced chi^2 of 2 for several thousand degrees of freedom. So in
fact the standard thing to do is use the chi^2 value itself and
calculate (or look up) the critical value for a given number of
degrees of freedom. You can do the equivalent thing for a reduced
chi^2, so it's not a big deal, but chi^2/d.o.f. is what people quote
in the astrostatistics literature, so it's what I used.

>For this, the hypothesis is a binning of 0.145 solar masses. This
>requires figuring out the amount of bins, and can't be done beforehand
>because the data set covers a wide range of masses with a wider range of
>confidence.
>
>Now this may be a point of contention, but I argue the degrees of
>freedom is not the amount of stars themselves but rather the amount of
>bins of 0.145 M_sun required to cover the mass range. This is actually
>more generous to the testing, given that more degrees of freedom makes
>it more likely that the hypothesis is true. Turns out to not matter
>much, since the answer is "zero" for meaningful chunks of the data.

I don't think this is right. The degrees of freedom is the number of
data points, minus the number of free parameters of the model (none in
this case): see e.g.
http://en.wikipedia.org/wiki/Chi-square_statistic. So I don't think
your numbers are correct (in particular, it would seem very bizarre
if, as you suggest, including stars with larger errors caused the
model to be ruled out more stringently... in fact, if you include all
the stars with huge errors *and* calculate the chi^2 correctly, you
should find an acceptable fit, but that's only because you'd be
diluting the stars that can actually constrain the model with the many
more that can't).
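The dilution point can be made concrete: the residual can never exceed
half a bin, so a star with a large mass error contributes almost
nothing to chi^2 (the error value below is a made-up illustration):

```perl
#!/usr/bin/perl
use warnings;
use strict;

my $q = 0.145;                              # bin width, M_sun
my $mass_error = 0.50;                      # hypothetical ~40% error at ~1.3 M_sun
my $max_residual = $q / 2;                  # worst possible distance to a bin
my $max_chi2 = ($max_residual / $mass_error) ** 2;
printf "worst-case chi^2 contribution: %.3f\n", $max_chi2;
# prints: worst-case chi^2 contribution: 0.021
```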

However, the key point is that this test can be done, and, when it's
done with stars with accurately measured masses, it is inconsistent
with the proposed model at a very high confidence level, as I said
earlier.

Martin

Martin Hardcastle

Sep 18, 2011, 6:34:25 PM
In article <mt2.0-19368...@hydra.herts.ac.uk>,

Robert L. Oldershaw <rlold...@amherst.edu> wrote:
>This sample is heavily weighted toward "massive" stars, but it might
>yield some interesting results in the 1 to 4 solar mass range for
>total system masses.
>
>An independent analysis of the data would be most welcome.

As these are non-contact binaries and the authors say they should have
evolved independently, I did the chi^2 test as described in the
earlier posting for the individual stars with errors less than 0.145
solar masses: chi^2 of 16085 for 172 degrees of freedom, null
hypothesis ruled out at such a large confidence level that I can't
calculate it offhand, but basically such that the model can't possibly
be right.

If I add up the two components and take only the systems where the
combined error on mass is less than 0.145 solar masses, I get a chi^2
of 1154 for 82 d.o.f., again wildly inconsistent with the model.

So, again, the data are not telling you what you would like them to
tell you. It took me about ten minutes to find the data you referred
to, get them into the right format, modify and run my code, and do the
modifications needed to run it again on the sums of the masses.
Testing models, when they make quantitative predictions, is easy, and
it's a skill that any would-be-modeller ought to learn. The half-hour
or so I've spent on this today is enough for me, though.

eric gisse

Sep 19, 2011, 2:47:29 AM
Martin Hardcastle <m.j.har...@xxx.xxx.xxx> wrote in
news:mt2.0-25382-13...@hydra.herts.ac.uk:

> In article <mt2.0-20923...@hydra.herts.ac.uk>,
> eric gisse <jowr.pi...@gmail.com> wrote:
>>Irritating perl bug: The value of $residual is going to be negative a
>>significant portion of the time, but squaring the negative quantity
>>triggers some sort of overflow/insanity. Might need to do some
>>reporting/squashing after this.
>
> Not in the perl I use! (I did test the script I posted...) That would
> be a very serious and basic bug if it were really present!

Yeah. That's what was throwing me.

>
> I think you are mistaking the operator ^ for the operator ** (see my
> code). ^ in perl is not 'raise to power' but 'bitwise xor'. ** is the
> exponentiation operator.

That'll do it. Major bug, but easily fixed.

This is what happens when you use too many devices and languages that
use different symbols for the same things.

>
>>Now, you don't want just the regular chi-squared value. That isn't
>>useful. What is wanted is the reduced chi-squared value, which is chi-
>>squared divided by the number of degrees of freedom
>
> Not so. The reduced chi^2 is a useful way of telling at first glance
> whether there is something wrong with a model: a reduced chi^2 much
> greater than 1 indicates a problem. However, the reduced chi^2 doesn't
> tell you everything that you want to know, quantitatively -- a reduced
> chi^2 of 2 for 2 degrees of freedom is very much less interesting than
> a reduced chi^2 of 2 for several thousand degrees of freedom. So in
> fact the standard thing to do is use the chi^2 value itself and
> calculate (or look up) the critical value for a given number of
> degrees of freedom. You can do the equivalent thing for a reduced
> chi^2, so it's not a big deal, but chi^2/d.o.f. is what people quote
> in the astrostatistics literature, so it's what I used.

I'm somewhat confused, as what you said reinforces my point. The
reduced chi squared, in my understanding, takes into account the
differences in degrees of freedom.

As for the calculation, I see two different things going on: in
Taylor, the gamma integral is done from the reduced chi squared to
infinity, while in my 31st edition CRC the same integral is done from
0 to chi squared. I wonder if I'm just confused or there's no standard
in the literature.

But as you say this is all (or should be) the same thing, so I am not
abundantly concerned.

>
>>For this, the hypothesis is a binning of 0.145 solar masses. This
>>requires figuring out the amount of bins, and can't be done beforehand
>>because the data set covers a wide range of masses with a wider range
>>of confidence.
>>
>>Now this may be a point of contention, but I argue the degrees of
>>freedom is not the amount of stars themselves but rather the amount
>>of bins of 0.145 M_sun required to cover the mass range. This is
>>actually more generous to the testing, given that more degrees of
>>freedom makes it more likely that the hypothesis is true. Turns out
>>to not matter much, since the answer is "zero" for meaningful chunks
>>of the data.
>
> I don't think this is right. The degrees of freedom is the number of
> data points, minus the number of free parameters of the model (none in
> this case): see e.g.
> http://en.wikipedia.org/wiki/Chi-square_statistic.

I saw that, and it was in fact the first thing I looked at before I
remembered "I have books on this stuff! Books that cost me money!"


>So I don't think
> your numbers are correct (in particular, it would seem very bizarre
> if, as you suggest, including stars with larger errors caused the
> model to be ruled out more stringently...

Well, I was running through the calculations with the exponentiation
done incorrectly the whole time. Either way, I do not get an answer
distinguishable from computational zero until I reduce the sample size
to like ten stars which is bad regardless.

The calculated probability for 185 stars (the 1%-or-better sample) is
indistinguishable from zero after fixing that and using both the number
of bins as well as the number of stars themselves as degrees of freedom.

>in fact, if you include all
> the stars with huge errors *and* calculate the chi^2 correctly, you
> should find an acceptable fit, but that's only because you'd be
> diluting the stars that can actually constrain the model with the many
> more that can't).

True, but the hypothesis is so wrong that the calculated probability is
computationally equal to zero until I work with a sample size on the
order of 10 stars.

The reason I argue that more degrees of freedom is more generous is that
it becomes more mathematically possible that the chi squared vs reduced
chi squared comparison does not reject the hypothesis being tested.

Take a chi squared of 10, for example.

Ten degrees of freedom vs one degree of freedom. Which one gives a
result that is more likely?

You can see this in a chi squared integral table, because as you
increase the degrees of freedom the probability goes up.

Or going by actual data:

The chi squared of the 1% set is 2460.

If I use the bins of 0.145 M_sun as the degrees of freedom, I have 25
degrees of freedom. Reduced chi squared is 98.4.

If I use the actual number of stars, I have 185 degrees of freedom.
Reduced chi squared is 13.3.
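The arithmetic behind those two numbers (chi squared value taken from
the 1% sample above):

```perl
#!/usr/bin/perl
use warnings;
use strict;

my $chisq = 2460;              # 1% sample, from upthread
printf "%.1f\n", $chisq / 25;  # 98.4: 0.145 M_sun bins as degrees of freedom
printf "%.1f\n", $chisq / 185; # 13.3: star count as degrees of freedom
```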


>
> However, the key point is that this test can be done, and, when it's
> done with stars with accurately measured masses, it is inconsistent
> with the proposed model at a very high confidence level, as I said
> earlier.
>
> Martin

Yep.

Robert L. Oldershaw

Sep 19, 2011, 2:48:25 AM
On Sep 18, 6:34 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
wrote:
>
> So, again, the data are not telling you what you would like them to
> tell you. It took me about ten minutes to find the data you referred
> to, get them into the right format, modify and run my code, and do the
> modifications needed to run it again on the sums of the masses.
> Testing models, when they make quantitative predictions, is easy, and
> it's a skill that any would-be-modeller ought to learn. The half-hour
> or so I've spent on this today is enough for me, though.
---------------------------------------------------------------------------

When you get refreshed, maybe you could put in a half hour or so on
white dwarf masses.

No one seems to want to talk about the Tremblay et al SDSS white dwarf
mass function.

This is odd since it is a large, recent sample, and is carefully
analysed.

It also has clear and statistically significant peaks at DSR's
predicted values.

Why is everyone ignoring this piece of information? (He asks
rhetorically).

RLO
http://www3.amherst.edu/~rloldershaw

Robert L. Oldershaw

Sep 19, 2011, 2:52:30 AM
On Sep 18, 6:02 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
wrote:
>

> However, the key point is that this test can be done, and, when it's
> done with stars with accurately measured masses, it is inconsistent
> with the proposed model at a very high confidence level, as I said
> earlier.
--------------------------------------------------------------

There is something that should be borne in mind: the Sandage - de
Vaucouleurs dust-up.

If you will recall, the two protagonists battled long and hard over
the value of the Hubble constant. Sandage insisted upon 50 km/sec/
Mpc, while de Vaucouleurs insisted upon 100 km/sec/Mpc. The battle
raged on for many years.

Both camps had the same observational data to work with.
Both camps had the same statistical methods to work with.
Both camps included the best astrophysicists of the time.
Both camps insisted that they were obviously right.
Both camps insisted that the other side was wrong.

If things can be unambiguously decided with some data and some
statistical analysis,
HOW CAN THIS BE POSSIBLE?!?

There is a very important lesson here about a very common problem in
physics: often wrong - never in doubt.

Who was right? NEITHER, apparently. We now think that <H> ~ 70 km/sec/
Mpc.

Bottom lines: Be careful about what you say you can rule out.
And be very careful about what you say you are SURE is right.

Sorry for the EMPHASIS (perhaps I miss PH).

RLO
http://www3.amherst.edu/~rloldershaw

eric gisse

Sep 19, 2011, 3:59:08 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in news:mt2.0-
26561-13...@hydra.herts.ac.uk:

> On Sep 18, 6:02 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> However, the key point is that this test can be done, and, when it's
>> done with stars with accurately measured masses, it is inconsistent
>> with the proposed model at a very high confidence level, as I said
>> earlier.
> --------------------------------------------------------------
>
> There is something that should be borne in mind: the Sandage - de
> Vaucouleurs dust-up.
>
> If you will recall, the two protagonists battled long and hard over
> the value of the Hubble constant. Sandage insisted upon 50 km/sec/
> Mpc, while de Vaucouleurs insisted upon 100 km/sec/Mpc. The battle
> raged on for many years.
>
> Both camps had the same observational data to work with.
> Both camps had the same statistical methods to work with.
> Both camps included the best astrophysicists of the time.
> Both camps insisted that they were obviously right.
> Both camps insisted that the other side was wrong.
>
> If things can be unambiguously decided with some data and some
> statistical analysis,
> HOW CAN THIS BE POSSIBLE?!?

Models differ.
Fundamental disagreement over fundamental physics and assumptions.
Insufficient data.
Systematic errors in measurements.

I have no idea how many of the above were true for the kerfuffle over
the Hubble constant, nor do I care because it is completely irrelevant
to the topic at hand.

>
> There is a very important lesson here about a very common problem in
> physics: often wrong - never in doubt.
>
> Who was right? NEITHER, apparently. We now think that <H> ~ 70
> km/sec/Mpc.

An answer that is now verified through several independent methods, all
with high quality data.

>
> Bottom lines: Be careful about what you say you can rule out.
> And be very careful about what you say you are SURE is right.

Your theory has been ruled out at a confidence level so high that the
chance you are right is computationally equal to zero.

You have been going on and on about this on USENET since 1995:

http://groups.google.com/group/sci.astro.research/msg/f489046a4e1ba66c?dmode=source

And apparently actually got an ApJ editor in a moment of weakness, so
there's a publication in 1987 about it too.

In a few hours of work using data published more than a decade ago, your
theory was excluded.

Are you going to move on?

eric gisse

Sep 19, 2011, 4:01:29 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-26561...@hydra.herts.ac.uk:

> On Sep 18, 6:34 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> So, again, the data are not telling you what you would like them to
>> tell you. It took me about ten minutes to find the data you referred
>> to, get them into the right format, modify and run my code, and do
>> the modifications needed to run it again on the sums of the masses.
>> Testing models, when they make quantitative predictions, is easy, and
>> it's a skill that any would-be-modeller ought to learn. The half-hour
>> or so I've spent on this today is enough for me, though.
> ---------------------------------------------------------------------------
>
> When you get refreshed, maybe you could put in a half hour or so on
> white dwarf masses.

What a ballsy request.

The programming for handling large data samples of star masses has been
written out and explained by me, and further refined by Martin.

I spent a few minutes debating with myself whether to bother because I
knew the result would be some combination of you blatantly ignoring the
result that discredits your theory and a request for more analysis to be
done, previous results be damned.

Boy I wasn't even close, was I?

>
> No one seems to want to talk about the Tremblay et al SDSS white dwarf
> mass function.

It might have something to do with you having performed literally no
effort on your own.

I have news for you: there is no serious interest in your theory.

>
> This is odd since it is a large, recent sample, and is carefully
> analysed.

True, yet completely irrelevant.

>
> It also has clear and statistically significant peaks at DSR's
> predicted values.

Really, Robert? How do you define 'statistically significant'?

Have you done any analysis of the stars themselves to see if they match
your predictions?

You could, because the programming is RIGHT THERE FOR YOU.

Will you? Probably not. Feel free to use your reflexive disagreement
with me as an engine for doing something useful for yourself, though.

Are you being consistent with your atomic scale numerology?

Nope. You want to claim that white dwarfs obey a mass distribution
similar to atoms, but completely neglect the fact that the stellar mass
distribution completely disagrees with you.

Plus, there are plenty of stars in the neighborhood of 0.73 M_sun which
further discredits your theory.

Given the existence of 100+ solar mass stars, your numerology predicts
nuclei with atomic masses above 600. Or inverting the argument, your
numerology predicts a lack of stars above 15 solar masses or so, because
there are no stable nuclei past Z ~ 100.

>
> Why is everyone ignoring this piece of information? (He asks
> rhetorically).

Probably because nobody cares. Or maybe because people are willing to
put in about as much effort as you, which is to say 'none at all'.

I have arguably worked harder on this subject than you have in the last
decade.

* Your notions of dark matter composition and distribution? Completely
wrong. I was the one who gave you the literature on microlensing
searches, which was ignored.

No, saying "Hawkins" three times fast doesn't make 15 years of
microlensing surveys go away.

* Stellar mass distribution? Completely wrong. You've already moved on
to completely ignoring how wrong you are about this, and it didn't even
take 24 hours.

You've put literally zero effort into doing this yourself, and you've
had decades. The data set I used was published in 1999. You have no
excuses.

* Eclipsing binary system mass distribution? Completely wrong. Martin
did the analysis for you, which you are free to repeat yourself given
the available framework. But you won't, and we both know that.

The most recent data is from 2010, but since you missed stuff from 1999
I am not surprised you missed that.

* Planet mass distribution? You've made literally zero effort in testing
that one despite being given the data, and I'm not doing it for you.

Guess we'll never know on that one...

Oh wait, yes we will.

http://groups.google.com/group/sci.astro.research/msg/c69568058f29d8ea?dmode=source

I generally interpret "complete ignoring of technical points" on
research newsgroups as evidence for the person I'm responding to having
no argument.

So much for numerology on planetary mass distributions.

Given all the failures and didn't-even-try's above, do you really have
to ask why another block of data you think is interesting is being
ignored?

I will promise you this however: If you ever get another journal of note
to publish your claims after this, I'll be sure to get a refutation
published.

The science is in, and your theory is wrong. Will you be the scientist
you claim to be and move on to something new, or adopt crank behaviors
like ignoring data that disagrees with you?

Choose carefully, as this is an archived medium.

>
> RLO
> http://www3.amherst.edu/~rloldershaw
>

Robert L. Oldershaw

Sep 19, 2011, 5:04:11 PM
On Sep 19, 3:59 am, eric gisse <jowr.pi.ons...@gmail.com> wrote:
>
> > Who was right? NEITHER, apparently. We now think that <H> ~ 70
> > km/sec/Mpc.
>
> An answer that is now verified through several independent methods, all
> with high quality data.
------------------------------------------------------------------------------------

Perhaps someone will offer a primer on how the recently postulated
acceleration of the Hubble Bubble might affect the value of H and the
concept of uniform expansion.

Both Sandage and de Vaucouleurs argued that their "answer [was] now
verified through several independent methods, all with high quality
data."

Don't people understand?

There are Platonic over-simplified mathematical models of reality,
i.e., pseudo-reality.

And then there is the physical reality of the real world, i.e.,
reality.

If you assert that the former model is an absolute and unchangeable
version of the latter, you commit a cardinal sin, from the scientific
point of view.

RLO
Discrete Scale Relativity

Robert L. Oldershaw

Sep 19, 2011, 5:04:50 PM
On Sep 18, 6:34 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
wrote:

> earlier posting for the individual stars with errors less than 0.145
> solar masses: chi^2 of 16085 for 172 degrees of freedom, null
>
> If I add up the two components and take only the systems where the
> combined error on mass is less than 0.145 solar masses, I get a chi^2

-----------------------------------------------------------------------------------------------

How can you possibly test a "model" that predicts quantization at
0.145 solar mass when you accept data with an error of up to just
under 0.145 solar mass?

Would you not need errors of 0.01 or less?

Are systematic errors accounted for?

How much error can sin(i) and sin^3 (i) introduce into mass
calculations?

Thanks,
RLO

Martin Hardcastle

Sep 19, 2011, 5:43:26 PM
In article <mt2.0-10213...@hydra.herts.ac.uk>,
Robert L. Oldershaw <rlold...@amherst.edu> wrote:
>How can you possibly test a "model" that predicts quantization at
>0.145 solar mass when you accept data with an error of up to just
>under 0.145 solar mass?

Easy: the chi^2 statistic does this for you. Stars with large errors
will just not contribute very much to the final sum. Any basic
statistics book will explain this. Bevington & Robinson is one I've
recommended to students in the past.

(If we restrict ourselves to systems where the magnitudes of the
errors are very small, the result is actually much, much worse for
your model, because all the systems where we don't actually know
enough to say much are excluded, leaving all the systems which simply don't
fit. Run the code, make a different cut, see for yourself.)

>Are systematic errors accounted for?

You tell me what systematic errors are present, I'll tell you whether
they're accounted for. As you'll see if you look at the paper, the
authors have gone to some trouble to determine and correct for
systematic errors in the fitting.

>How much error can sin(i) and sin^3 (i) introduce into mass
>calculations?

They're *eclipsing* binaries, as again the paper makes clear. So,
virtually none; and what there is is accounted for in the errors in
mass used in the chi^2 calculation.
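The "virtually none" claim is easy to check with standard first-order error propagation (a sketch with illustrative inclination values, not numbers from the paper):

```python
import math

def mass_frac_err_from_inclination(i_deg, di_deg):
    # The binary observable is M * sin^3(i), so M = K / sin^3(i) and,
    # to first order, dM/M = 3 * |cos(i)/sin(i)| * di.
    i, di = math.radians(i_deg), math.radians(di_deg)
    return 3.0 * abs(math.cos(i) / math.sin(i)) * di

# Eclipsing systems are near edge-on (i ~ 90 deg): a 1-degree error is negligible.
edge_on = mass_frac_err_from_inclination(89.0, 1.0)  # about 0.1% in mass
# A tilted, non-eclipsing system is far more sensitive to the same 1 degree.
tilted = mass_frac_err_from_inclination(60.0, 1.0)   # about 3% in mass
```

Near i = 90 degrees the cot(i) factor goes to zero, which is why the inclination contributes almost nothing for eclipsing binaries.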

This is the database *you* suggested I run the test on: the paper is a
good piece of work, standard in its field, and clearly provides the
'definitive test' you wanted: I have done a test that any competent
undergraduate could do and the result is completely inconsistent with
your expectations: several of us have also provided you with the tools
you need to do the same test yourself, so you don't really have any
excuse to call bias. When the best available data conclusively rule
out a model, a good scientist thinks again. I think that's all I need
to say on the subject.

Martin
--

Robert L. Oldershaw

Sep 20, 2011, 2:42:19 AM
On Sep 19, 5:43 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
wrote:
>
> This is the database *you* suggested I run the test on: the paper is a
> good piece of work, standard in its field, and clearly provides the
> 'definitive test' you wanted: I have done a test that any competent
> undergraduate could do and the result is completely inconsistent with
> your expectations: several of us have also provided you with the tools
> you need to do the same test yourself, so you don't really have any
> excuse to call bias. When the best available data conclusively rule
> out a model, a good scientist thinks again. I think that's all I need
> to say on the subject.
-----------------------------------------------------------------------------------

Sincere thanks for your efforts on this sample, which I do not
dispute. This sample does not manifest the predicted quantization.

However, we know that the number of stars with masses below 1.00 solar
mass and with errors at the 0.01 solar mass level is still quite small
in this sample. So I am nowhere near ready to give up yet.

I have much less faith in the arguments you use to summarily dismiss a
whole paradigm on the basis of one dubious sample, having seen this
kind of reasoning falsified over and over again throughout the history
of science. You know: disproving evolution because it could be
mathematically "proven" that the Sun was less than a million years
old; or proving mathematically that H had to be 100 +/- 10 km/sec/Mpc
while simultaneously proving it had to be 50 +/- 5 km/sec/Mpc.

If white dwarf samples are consistent with discrete masses, or at
least show evidence for preferred masses, what do you say then?

RLO
http://www3.amherst.edu/~rloldershaw

Robert L. Oldershaw

Sep 20, 2011, 2:44:19 AM
On Sep 19, 5:04 pm, "Robert L. Oldershaw" <rlolders...@amherst.edu>
wrote:
>
> And then there is the physical reality of the real world, i.e.,
> reality.
-------------------------------------------------------------------------

Here is an interesting system - a pulsar/star binary, published in
NATURE IN 2010.

http://arxiv.org/abs/1010.5788

Pulsar mass is 1.97 +/- 0.04 solar mass
Star mass is 0.500 +/- 0.006 solar mass

Total mass = 2.470 solar mass

Predicted DSR peak at 17 times 0.145 solar mass = 2.465 solar mass.

[(2.470 - 2.465)/2.470] times 100 = 0.2% error = 99.8% agreement.
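For comparison, the same numbers can be expressed in standard deviations rather than percentages (a sketch; assumes the two quoted errors are independent and Gaussian):

```python
import math

m_pulsar, s_pulsar = 1.97, 0.04    # pulsar mass and error quoted above
m_star, s_star = 0.500, 0.006      # companion mass and error
total = m_pulsar + m_star                   # 2.470 M_sun
sigma = math.sqrt(s_pulsar**2 + s_star**2)  # error on a sum of independent terms
predicted = 17 * 0.145                      # 2.465 M_sun
n_sigma = abs(total - predicted) / sigma    # about 0.12 standard deviations
```

A single system this close to a predicted value is unsurprising either way; the error on the sum (~0.04 M_sun) is much larger than the 0.005 M_sun offset.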

RLO
http://www3.amherst.edu/~rloldershaw

eric gisse

Sep 20, 2011, 2:45:47 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-10213...@hydra.herts.ac.uk:

> On Sep 18, 6:34 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>
>> earlier posting for the individual stars with errors less than 0.145
>> solar masses: chi^2 of 16085 for 172 degrees of freedom, null
>>
>> If I add up the two components and take only the systems where the
>> combined error on mass is less than 0.145 solar masses, I get a chi^2
>
> ---------------------------------------------------------------------------
>
> How can you possibly test a "model" that predicts quantization at
> 0.145 solar mass when you accept data with an error of up to just
> under 0.145 solar mass?

What's the contribution to the chi squared when this is true?

Calculate it, please.
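To put numbers on it (a worst-case sketch, not drawn from the catalog): a star with an error just under one quantum can never push chi squared up by much, while a precisely measured star far from a bin can:

```python
quantum = 0.145
max_residual = quantum / 2  # a mass can sit at most half a quantum from a bin
# Error just under 0.145 M_sun: worst possible contribution is ~0.27.
loose = (max_residual / 0.14) ** 2
# Well-measured star (0.03 M_sun error) sitting 0.1 M_sun off a bin: ~11.
precise = (0.10 / 0.03) ** 2
```

This is why large-error stars are effectively self-weighting in the chi squared sum.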

>
> Would you not need errors of 0.01 or less?

Do you know what a standard deviation is?

Given a residual mass difference that disagrees with your binning by 0.1
M_sun, with an error in the measurement of 0.03 M_sun, it can be said
that there is a 3 standard deviation disagreement with the predicted
binning.

You continue to labor under the notion that percentage based
representations of error are more accurate. You need to knock that off.
It is wrong.

>
> Are systematic errors accounted for?

What systematic errors? You seem to frequently invoke "systematic
errors" without ever bothering, even upon direct request, to explain
what you imagine they might be.

>
> How much error can sin(i) and sin^3 (i) introduce into mass
> calculatuons?
>
> Thanks,
> RLO

This is something you should be able to answer yourself. Did you ever
learn how to propagate error?

I have a better idea. Instead of complaining about unknown systematics,
discuss the results rather than pretending they don't exist.

eric gisse

Sep 20, 2011, 2:47:43 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-10213...@hydra.herts.ac.uk:

> On Sep 19, 3:59 am, eric gisse <jowr.pi.ons...@gmail.com> wrote:
>>
>> > Who was right? NEITHER, apparently. We now think that <H> ~ 70
>> > km/sec/Mpc.
>>
>> An answer that is now verified through several independent methods,
>> all with high quality data.
> ---------------------------------------------------------------------------
>
> Perhaps someone will offer a primer on how the recently postulated
> acceleration of the Huble Bubble might affect the value of H and the
> concept of uniform expansion.

[snip]

Could you stop changing the subject every time you reply and address the
fact that actual statistical analysis of the data you said needed to be
analyzed ended up disproving your theory?

There is less-than-zero interest in your latest diversionary tactic.

* You have the data - no complaints. You had a decade with my cited data
set, and you didn't do anything with it.

Feel free to argue you never looked for it or somehow missed exactly
that which you were looking for.

* You have the code - no complaints. You have not even attempted to
comment on the work.

What's up with that?

Do you not understand? Is this too hard for you?

* You have the results - complete radio silence. In fact, not only do
you deftly refuse to discuss this, you actually had the balls to beg for
others to do MORE of the same work for you while completely ignoring the
previous results.

Nobody cares that, if you look at white dwarf data in the right light,
it can kinda-sorta agree with your theory. Or anything else for that
matter, because your theory is dead.

Feel free to change the subject again if you think it'd fool anyone
other than yourself.

Robert L. Oldershaw

Sep 20, 2011, 11:44:17 AM
On Sep 20, 2:45 am, eric gisse <jowr.pi.ons...@gmail.com> wrote:
>
> This is something you should be able to answer yourself. Did you ever
> learn how to propagate error?
-------------------------------------------------------------------------------

If you are a master at propagating error, and I have seen much
evidence for this, then why not give us a tutorial.

Question: In your view are there often several ways to statistically
evaluate agreement between theoretical predictions and empirical
results? Or only one?

Question: Have you ever seen a case where a "very precise
determination" of something turns out to be wrong? Say, like the
radius of the proton? It was guaranteed to be 0.88 fermi, until
better experiments came out with an incompatible new value of 0.84
fermi. See how the real world works?

RLO
Discrete Scale Relativity

Steve Willner

Sep 20, 2011, 2:37:10 PM
In article <mt2.0-26561...@hydra.herts.ac.uk>,
eric gisse <jowr.pi...@gmail.com> writes:
> I'm somewhat confused as what you said reinforces my point. The reduced
> chi squared, in my understanding, takes into account the differences in
> degrees of freedom.

Yes. Think of it this way: if the model is a good description and
has no free parameters, each data point will contribute about 1 to
the total chi-square. Of course this is only an average. Some data
points will agree perfectly with the model (and thus contribute
zero), but others will be off by one or two sigma (or more if you
have a lot of data points). But _on average_, each data point
contributes about one to the total chi-square, so the _reduced_
chi-square will be about one if the model is a good description.

If your model has free parameters, each of them reduces the total
chi-square by about 1. That's why "degrees of freedom" is the number
of data points minus the number of free model parameters.

Of course this is approximate. If you want to calculate probability,
there are tables or formulas for any given number of degrees of
freedom and chi-square. I've seen tables that use chi-square and
others that use reduced chi-square, so just be sure you know which
you have.
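This can also be checked numerically with a small Monte Carlo sketch (illustrative only, not from the thread; assumes exactly quantized true masses and Gaussian measurement errors):

```python
import random

random.seed(42)
quantum = 0.145  # hypothesized mass quantum, M_sun
sigma = 0.02     # measurement error, well below the quantum
n = 5000
chi2 = 0.0
for _ in range(n):
    true_mass = quantum * random.randint(3, 20)         # a truly quantized star
    measured = random.gauss(true_mass, sigma)
    r = measured - quantum * round(measured / quantum)  # residual to nearest bin
    chi2 += (r / sigma) ** 2
reduced = chi2 / n  # close to 1 when the model really describes the data
```

When the quantization hypothesis is true, each star contributes about one on average, so the reduced chi squared comes out near unity; values of 13 or 98 are wildly inconsistent with that.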

> If I use the bins of 0.145 M_sun as the degrees of freedom, I have 25
> degrees of freedom. Reduced chi squared is 98.4
>
> If I use the actual amount of stars, I have 185 degrees of freedom.
> Reduced chi squared is 13.3.

Are you calculating the number of stars per bin (expected minus
actual)? Or the distance of each star from the nearest "predicted"
mass value? In the latter case, I don't see why you need bins at
all. Either way, the explanation I've given should tell you how to
figure degrees of freedom.

--
Help keep our newsgroup healthy; please don't feed the trolls.
Steve Willner Phone 617-495-7123 swil...@cfa.harvard.edu
Cambridge, MA 02138 USA

eric gisse

Sep 21, 2011, 2:27:43 AM
Steve Willner <wil...@cfa.harvard.edu> wrote in news:mt2.0-31193-
13165...@hydra.herts.ac.uk:

[...good stuff...]

I had convinced myself of the correct answer and why, and didn't think
a followup on it was needed, but I guess it was. Your response is
excellent, though, and would have saved me time and will likely save
others time.

Regardless, the calculation was on each star. I didn't care about the
binning beyond testing whether each star's mass was an integer multiple
of the quantum (binned).

eric gisse

Sep 21, 2011, 2:31:13 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-9068...@hydra.herts.ac.uk:

> On Sep 20, 2:45 am, eric gisse <jowr.pi.ons...@gmail.com> wrote:
>>
>> This is something you should be able to answer yourself. Did you ever
>> learn how to propagate error?
> ---------------------------------------------------------------------------
>
> If you are a master at propagating error, and I have seen much
> evidence for this, then why not give us a tutorial.

http://tinyurl.com/6bbs9us

This material was taught to me in freshman year lab courses.

If you are going to beg for an education in statistical analysis, don't
waste people's time by floating nonsense claims about mysterious and
unquantifiable systematic errors in all the observations that falsify
your theory.

>
> Question: In your view are there often several ways to statistically
> evaluate agreement between theoretical predictions and empirical
> results? Or only one?

Sure, several.

Don't embarrass yourself by using the existence of multiple statistical
analysis methods as an attempt to delude yourself into thinking your
theory has a shot.

It is dead. Never coming back, not that it was ever a serious theory to
begin with.

>
> Question: Have you ever seen a case where a "very precise
> determination" of something turns out to be wrong? Say, like the
> radius of the proton?

Sure, that's one example.

The gravitational constant is another.

Thank you for reminding me about the proton radius thing. I had totally
forgotten about this.

Let's get in the w-w-wayback machine and trundle back to July of 2010
where you were spewing nonsense about how your theory was doing better
than the current QED approximation.

http://groups.google.com/group/sci.fractals/msg/a5a5e74f37a17874?dmode=source

http://groups.google.com/group/sci.fractals/msg/61e75c7ed16c84c2?dmode=source

In a year, the following is apparent:

1) You still do not understand the concept of the standard deviation,
systematic vs random errors, or statistical analysis in general.

2) You still do not understand significant digits.

3) You still do not understand why percentage based error estimates are
horrible to use.

4) You still simply ignore me when I prove your theory wrong.

5) You still think it it a good idea to bring up an observation that
disagrees with you by 41 standard deviations.


> It was guaranteed to be 0.88 fermi,

No it wasn't. Both the word "guaranteed" and the number "0.88" are wrong.

Measurement settled upon 0.8768(69) fm, consistent with current QED
predictions. Absent systematic error in the measurement, that was the
answer. Then a new method of measuring is tried, and a systematic error
was discovered.

I am unsurprised to see you think "0.88" and "0.8768(69)" are the same
number.

> until
> better experiments came out with an incompatible new value of 0.84
> fermi. See how the real world works?

Yeah, observation disproves theories. The approximation to QED that
predicts the ~0.87 fm proton radius is wrong, just as your theory is
wrong.

I'm glad you took the time to remind everyone that your theory made
another prediction that was wrong by an amount of standard deviations
that takes two digits to express.

>
> RLO
> Discrete Scale Relativity
>

[Mod. note: we seem to be wandering away from our, admittedly tenuous,
grasp on astrophysics here -- please focus on that and not on the
proton radius, which belongs on some other group -- mjh]

eric gisse

Sep 21, 2011, 2:33:29 AM
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-17850...@hydra.herts.ac.uk:

> On Sep 19, 5:43 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> This is the database *you* suggested I run the test on: the paper is
>> a good piece of work, standard in its field, and clearly provides the
>> 'definitive test' you wanted: I have done a test that any competent
>> undergraduate could do and the result is completely inconsistent with
>> your expectations: several of us have also provided you with the
>> tools you need to do the same test yourself, so you don't really have
>> any excuse to call bias. When the best available data conclusively
>> rule out a model, a good scientist thinks again. I think that's all I
>> need to say on the subject.
> ---------------------------------------------------------------------------
>
> Sincere thanks for your efforts on this sample, which I do not
> dispute. This sample does not manifest the predicted quantization.

I note that you forgot to thank me for doing all your research for you.

I've done more work on your theory in the last year than you have in the
past 15. Every time I give you a piece of literature that disproves your
theory is another piece of literature that mysteriously evaded your
sights.

>
> However, we know that the number of stars with masses below 1.00 solar
> mass and with errors at the 0.01 solar mass level is still quite small
> in this sample. So I am nowhere near ready to give up yet.

This sample? I just did the analysis on twelve thousand stars. Did you
not see it?

Your nonsensical requirement that a significant amount of stars be
measured to 0.01 M_sun level has been satisfied. There are 277 stars
within the sample given to you, with a range of 0.88 to 4.63 M_sun. That
sample only has a chi squared of about 3800.

Your theory is, unsurprisingly, still excluded to an 'indistinguishable
from 100%' probability. You, of course, do not seem too concerned about
this. Another day, still zero comment on my analysis.

If you won't be embarrassed for yourself, I'll be embarrassed for you.

>
> I have much less faith in the arguments you use to summarily dismiss a
> whole paradigm on the basis of one dubious sample,

YOU PICKED THE SAMPLE.

You *begged* him to do the analysis you are incapable of doing.

Only when the data shreds your theory do you break out the 'one dubious
sample' BS.

Do you believe you have any credibility here?

> having seen this
> kind of reasoning falsifed over and over again throughout the history
> of science. You know: disproving evolution because it could be
> mathematically "proven" that the Sun was less than a million years
> old; or proving mathematicaly that H had to be 100 +/- 10 km/sec/Mpc
> while simultaneously proving it had to be 50 +/- 5 km/sec/Mpc.

So much for the 'definitive test' you were crowing about until it was
actually done.

This is intellectually dishonest behavior, and it needs to stop
happening or at least stop appearing here. There are other newsgroups
which will let me use the words I want to use.

>
> If white dwarf samples are consistent with discrete masses, or at
> least show evidence for preferred masses, what do you say then?

Nice hedging, Robert.

The 'definitive prediction' of binning has now been excluded, so play
the coward and say 'at least show evidence for preferred masses'.

Regardless, what you ask is a hypothetical as you have not and will not
do the analysis for discrete masses and you have not and will not do the
analysis for preferred masses.

Can you please stop sending this stuff to sci.astro.research now? You
might as well go back to crossposting to half of the sci.* tree.

>
> RLO
> http://www3.amherst.edu/~rloldershaw

eric gisse

unread,
Sep 21, 2011, 2:34:38 AM9/21/11
to
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in news:mt2.0-17850-
13165...@hydra.herts.ac.uk:

What of the thousands and thousand of stars that prove your theory wrong?

Are you going to keep posting example after example that agrees with you,
while ignoring the thousands that don't?

Are you even going to directly respond to my analysis?

wlandsman

Sep 21, 2011, 2:53:39 PM
On Monday, September 19, 2011 2:48:25 AM UTC-4, Robert L. Oldershaw wrote:

> No one seems to want to talk about the Tremblay et al SDSS white dwarf
> mass function.

OK, let's talk about the Tremblay et al results. First, I was
surprised to see you accept these spectroscopic results which make use
of a theoretical mass-radius relation. Comparison to the handful of
dynamical white dwarf masses, and gravitational redshift measurements
suggest that the masses are probably good to 0.01-0.02 Msun. But then
I don't understand why you wouldn't accept the theoretical
mass-luminosity relation for main-sequence stars, and the overwhelming
evidence for a continuous mass spectrum from the observed continuous
luminosity function of star clusters.

Second, a peak in the white dwarf mass spectrum is a strong prediction
of standard stellar evolution. The hot (>12,000 K) white dwarfs
observed by Tremblay et al. are degenerate cores of stars that have
recently "died" (i.e. passed through their planetary nebula phase and
ejected their envelopes). If you look at the old (~13 Gyr) globular
clusters, then the hot white dwarfs are all descendants of ~0.8 solar
mass stars. (Higher mass stars evolved more quickly and are now much
cooler white dwarfs, lower mass stars have not yet evolved to become
white dwarfs.) In globular clusters one finds a strong white dwarf
mass peak at 0.53 Msun (Kalirai et al. 2009
http://arxiv.org/abs/0909.2253 ) indicating that 0.27 Msun is lost
through red giant winds and planetary nebula formation. When one looks
at younger star clusters such as NGC 3532 (Dobbie et al. 2009
http://adsabs.harvard.edu/abs/2009MNRAS.395.2248D ) with a turnoff
mass of 3-4 Msun, one finds higher mass (~0.8 Msun) white dwarfs.

Modeling the field star white dwarf population such as reported by
Tremblay et al. is more complicated. The population is dominated by
the 10 Gyr old disk but there has also been more recent star
formation. Catalan et al. (2008 http://arxiv.org/abs/0804.3034 ) show
examples of model fits (their Figure 10) assuming a continuous
power-law initial mass function and exponentially decreasing star
formation. They can find a good fit to the field white dwarf mass
function and reproduce the broad peak near 0.6 Msun, and the long tail
toward higher white dwarf masses.
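The continuous power-law initial mass function mentioned above can be
illustrated with a toy inverse-transform sampler. The exponent and mass
limits below are standard Salpeter-like values chosen purely for
illustration; they are not taken from Catalan et al.:

```python
import random

def sample_imf(n, m_lo=0.1, m_hi=100.0, alpha=2.35, seed=1):
    """Draw n stellar masses (in Msun) from a truncated power law
    p(m) ~ m**-alpha on [m_lo, m_hi], via inverse-transform sampling."""
    rng = random.Random(seed)
    k = 1.0 - alpha                 # exponent of the integrated power law
    a, b = m_lo ** k, m_hi ** k
    # Map a uniform deviate through the inverse CDF of the power law.
    return [(a + rng.random() * (b - a)) ** (1.0 / k) for _ in range(n)]
```

Because alpha > 1, the draws are strongly bottom-heavy: most sampled
masses land near m_lo, which is why low-mass stars dominate the observed
luminosity function of clusters.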

[Mod. note: reformatted to <80 characters per line -- mjh]

eric gisse

unread,
Sep 22, 2011, 3:37:14 AM9/22/11
to
"Robert L. Oldershaw" <rlold...@amherst.edu> wrote in
news:mt2.0-26561...@hydra.herts.ac.uk:

> On Sep 18, 6:34 pm, Martin Hardcastle <m.j.hardcas...@xxx.xxx.xxx>
> wrote:
>>
>> So, again, the data are not telling you what you would like them to
>> tell you. It took me about ten minutes to find the data you referred
>> to, get them into the right format, modify and run my code, and do
>> the modifications needed to run it again on the sums of the masses.
>> Testing models, when they make quantitative predictions, is easy, and
>> it's a skill that any would-be-modeller ought to learn. The half-hour
>> or so I've spent on this today is enough for me, though.
> -----------------------------------------------------------------------------
>
> When you get refreshed, maybe you could put in a half hour or so on
> white dwarf masses.
>
> No one seems to want to talk about the Tremblay et al SDSS white dwarf
> mass function.
>
> This is odd since it is a large, recent sample, and is carefully
> analysed.

Not by you.

BEEP BOOP...analyzing 57 stars with masses determined to 100% or better

Average standard deviation per star: 1.40
Average mass of star: 0.92 solar masses
Mass range of sample: 0.80 to 1.36 solar masses
Chi-squared of the expected binning hypothesis: 130.75
Reduced chi-squared: 2.2938596491228
The probability of obtaining a chi-squared of 130.75 or larger
(reduced chi-squared 2.2938596491228) for 57 degrees of freedom
is 0.00000010061.

So much for the nonsense about 'quantized masses'.
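For anyone who wants to reproduce this kind of test, the core computation
(subtract the nearest integer multiple of 0.145 M_sun from each mass and
form a chi-squared against the measurement errors) is only a few lines.
This is a minimal sketch of the procedure described in this thread, not
the code actually used:

```python
def quantization_chi2(masses, errors, q=0.145):
    """Chi-squared for the hypothesis that masses sit on integer
    multiples of q: residual from the nearest multiple, weighted
    by each star's measurement error."""
    chi2 = 0.0
    for m, e in zip(masses, errors):
        r = m - q * round(m / q)  # signed distance to nearest multiple of q
        chi2 += (r / e) ** 2
    return chi2
```

A reduced chi-squared (the sum divided by the number of stars) near 1
would be consistent with the quantization hypothesis; values well above
1, as reported above, argue against it.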

>
> It also has clear and statistically significant peaks at DSR's
> predicted values.

How do you know they are 'statistically significant', and why would it
matter if they were, given that you have literally zero explanation for
why your theory is wrong every time?

Robert L. Oldershaw

unread,
Sep 22, 2011, 3:38:04 AM9/22/11
to
On Sep 21, 2:53 pm, wlandsman <wlands...@gmail.com> wrote:

> formation. They can find a good fit to the field white dwarf mass
> function and reproduce the broad peak near 0.6 Msun, and the long tail
> toward higher white dwarf masses.

----------------------------------------------------------------------------

But I think that even after all the above discussion, you have to
grant that there are distinct peaks indistinguishable from the
predicted peaks at 0.435 solar masses and 0.580 solar masses in the
referenced Tremblay et al graph.

Nobody ever predicted that peak at 0.435 solar mass, or explained it,
until Discrete Scale Relativity came along.

Now go to http://www3.amherst.edu/~rloldershaw , click on "Stellar
Scale Discreteness?"

There you will find SEVEN SAMPLES of white dwarf stars, planetary
nebula nuclei and main sequence stars that all show indications of the
quantization predicted by Discrete Scale Relativity.

I am not saying that the data on my website, or the huge white dwarf
data sample in Tremblay et al, or all the systems I have brought to
people's attention in the last 2 weeks, prove the predicted DSR
quantization.

What I am saying is that there is good empirical evidence that
supports the prediction and argues compellingly for astrophysicists to
keep an open mind on this issue. Nature will eventually yield the
necessary evidence for a definitive answer.

Those who say the matter was settled long ago, or recently, seem much
too sure of themselves and much too dismissive of the relevant
uncertainties that undercut their beliefs.

RLO
Discrete Fractal Cosmology

Phillip Helbig---undress to reply

unread,
Sep 24, 2011, 4:02:58 AM9/24/11
to
In article <mt2.0-26561...@hydra.herts.ac.uk>, "Robert L.
Oldershaw" <rlold...@amherst.edu> writes:

> There is something that should be borne in mind: the Sandage - de
> Vaucouleurs dust-up.
>
> If you will recall, the two protagonists battled long and hard over
> the value of the Hubble constant. Sandage insisted upon 50 km/sec/
> Mpc, while de Vaucouleurs insisted upon 100 km/sec/Mpc. The battle
> raged on for many years.
>
> Both camps had the same observational data to work with.
> Both camps had the same statistical methods to work with.
> Both camps included the best astrophysicists of the time.
> Both camps insisted that they were obviously right.
> Both camps insisted that the other side was wrong.
>
> If things can be unambiguously decided with some data and some
> statistical analysis,

I think you underestimate the amount of detail needed to estimate the
errors in this sort of work (i.e. the traditional distance-ladder
determination of the Hubble constant); one reason other methods are so
interesting is that the error budget is better understood. Both
probably underestimated their errors; with realistic errors, they
marginally agreed.

One can also determine the Hubble constant from the time delay in a
multiply imaged quasar. A while back, there were two camps with two
rather different values, one led by Bill Press and another with no clear
leader but several groups touting the same value (so it's probably not a
big surprise when the latter camp proved to be right). Anyway, after
some heated discussion about this at a conference, Paul Schechter called
out "What's the problem? They agree at 3 sigma!"

Robert L. Oldershaw

unread,
Sep 24, 2011, 12:09:23 PM9/24/11
to
On Sep 24, 4:02 am, Phillip Helbig---undress to reply
<hel...@astro.multiCLOTHESvax.de> wrote:
>
> I think you underestimate the amount of detail needed to estimate the
> errors in this sort of work (i.e. the traditional distance-ladder
> determination of the Hubble constant); one reason other methods are so
> interesting is that the error budget is better understood.  Both
> probably underestimated their errors; with realistic errors, they
> marginally agreed.
--------------------------------------------------------------------------------------------

Nicely put. But couldn't the same argument be applied to older
estimates of stellar masses that relied mainly on mass-luminosity-
effective temperature-surface gravity relations, and also to dynamical
mass estimates if they involve unknown unknowns, such as unrecognized
systematic errors or unaccounted-for low-luminosity companions in wide
orbits?

In science, it takes a while for experimental efforts to truly sort
things out. Take the Hubble constant for one, and the faster-than-
light neutrinos for another. The prudent scientist does not rush to
judgement and decide what is right/wrong before the matter is
scientifically settled.

RLO
http://www3.amherst.edu/~rloldershaw