On the subject of optical testing and rating:
(How Astro-Physics Inc. rates their optics)
by Roland Christen (chri...@aol.com)
This should be read by everyone on this group. Clear, concise, highly
informative, and (in my opinion) the 'word' not just on how Roland and his
company test and rate their scopes, but something that should remove a lot of
misinformation that is now floating around on this subject. Thank you
Roland.
And thank you, once again, Allister St. Claire. Allister created the
CloudyNights review site, and runs it at his own expense and without
sponsorship or assistance from anyone (well, with occasional free legal
advice from me, which is worth every cent he has paid me for it). Allister
has pursued -- and, I believe, achieved -- his goal of creating a site
where all of us can share our experiences with a wide variety of the
products of this hobby. How he managed to get Roland to write this I do not
know, but I applaud him for adding this invaluable asset to an already
wonderful site.
How about a show of appreciation from us for Roland and for Allister?
Dave
I liked it (not least because the 3rd sentence of the last paragraph is
something I have been saying for ages <g>), and agree with your
assessment that it is concise, clear and informative.
>
>How about a show of appreciation from us for Roland and for Allister?
[clap, clap, clap...]
Noctis Gaudia Carpe,
Stephen
--
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ Stephen Tonkin | ATM Resources; Astro-Tutorials; Astronomy Books +
+ (N50.9108 W1.830) | <http://www.aegis1.demon.co.uk> +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
To send email, substitute "aegis1" for "nospam"
--
Ron Wodaski
http://www.newastro.com
"Dave Novoselsky" <DN...@ix.netcom.com> wrote in message
news:919gsb$tq$1...@slb6.atl.mindspring.net...
CloudyNights is a very useful site. As noted in Dave's posting (below),
it is at www.cloudynights.com
Nice job Allister!
Ron Wodaski said...:
: I've never accessed cloudy nights before, and I haven't a clue where this
--
Mark Wagner
Astronomy-Mall: http://Astronomy-Mall.com TAC: http://observers.org
La Caja de Los Gatos Observatory: 37:13:36N 121:58:25W
http://www.cloudynights.com/howto/AP%20testing.htm
Clear Skies
Dwight L Bogan
> This should be read by everyone on this group. Clear, concise, highly
> informative, and (in my opinion) the 'word' not just on how Roland and
> his company test and rate their scopes, but something that should
> remove a lot of misinformation that is now floating around on this
> subject. Thank you Roland.
I consider myself a newbie in this field, but Roland's write-up on
CloudyNights is indeed very informative and a must-read. I hope it
might help clear up the number-bashing over telescope P-V/RMS
ratings that happens too often. The only point I cannot agree with is
Roland's view that customers can and will misinterpret data.
As a point against this, I would invite the producers of
telescopes to help these customers by providing more information on their
products and production. The phrase "diffraction-limited" is IMHO too
often used to simplify things. Roland's company at least makes a clear
statement of how their quality control works, and that is only good
for potential customers.
kind regards,
Cor Berrevoets
Ritthem, The Netherlands
Aberrator free star-testing software
http://aberrator.astronomy.net
P-V is the absolute value of the error across the aperture. If we are
going to use this term to compare scope quality and have it mean
anything, then we can't exclude the edges, a pesky zone, or use
averaging to get rid of those tall peaks and deep valleys in data sets.
I agree that a properly calculated RMS error value probably represents
true performance better than the P-V rating does. It looks to me like the
Astro-Physics P-V rating is inferred from the calculated RMS error (5x,
6x, or whatever), and is not really a measured value at all. So why
claim _better than_ 1/10 wave P-V when there is no basis for it?
Wouldn't it be more useful to just claim that the optics in AP scopes are
figured to be smooth and zone-free, with an RMS error of 1/50 wave or better?
Jim McSheehy
I think before you cast stones, you should talk to someone like Peter Ceravolo
about the practical aspects of testing with an interferometer. No one is throwing
data out, excluding edges, or getting rid of tall peaks or valleys. In fact, it
would be impossible to hide these defects by averaging data; they stand out
like sore thumbs. It is easier to hide a localized defect when only one fringe
set is read.
To give you a parallel, if you take one CCD exposure of a planet, you will
record the information of the surface features plus local atmospheric
distortions plus CCD noise. However, take many exposures, average them, and you
will have a true picture of the actual planetary features because the other
effects are random and tend to cancel out.
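To make the averaging point concrete, here is a toy numerical sketch in
Python (made-up numbers, and nothing to do with our actual test software):

    import numpy as np
    rng = np.random.default_rng(0)
    feature = 0.10                                   # hypothetical "true" feature
    readings = feature + rng.normal(0.0, 0.05, 25)   # 25 readings, each with random noise
    print(readings.std())                  # scatter of single readings: ~0.05
    print(abs(readings.mean() - feature))  # error of the average: ~0.05/sqrt(25)

The random part shrinks as more readings are averaged; a fixed defect would not.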
I know it is easy to be cynical in this world today, but I ask you to cut some
slack. No one is trying to pull the wool over your eyes.
Roland Christen
- Mike -
todd
Thanks, Roland.
I read the article with interest. I just have one question. I am
no optical expert, of course, but in regards to the P-V being overly
pessimistic, especially when a large number of sample points are
chosen, might it not make sense perhaps to have a 2-sigma P-V? That
is, one could find the 95th percentile and 5th percentile deviations
and use that as your P and V to determine the P-V value. That would
tend to reduce the effect of extreme outliers, and would still be a
metric somewhat independent of RMS error.
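A minimal sketch of what I mean, in Python with made-up sample values:

    import numpy as np

    def pv_95_5(samples):
        # peak = 95th percentile, valley = 5th percentile
        return np.percentile(samples, 95) - np.percentile(samples, 5)

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.02, 500)      # hypothetical wavefront samples, in waves
    w[0] = 0.25                         # one extreme outlier (dust speck, noise spike)
    print(w.max() - w.min())            # classic P-V is dominated by the outlier
    print(pv_95_5(w))                   # the 95/5 P-V barely notices it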
Interested to hear your (and indeed, anyone's) thoughts on this.
Brian Tung <br...@isi.edu>
Astronomy Corner at http://astro.isi.edu/
C5+ Home Page at http://astro.isi.edu/c5plus/
What I could see looks good, and cuts through some gobbledygook. But
I'm really frustrated not being able to see it all.
Thank you for the article, Roland; most of us appreciate it.
- Mike -
While we all (or most of us) really appreciate Allister's effort to keep the
Cloudy Nights site going and updated on his own time and at his own expense
(clapping of my hands), I don't see why praise the writer (Roland Christen)
so much without mentioning the efforts of so many others (with maybe not
so many claims to fame as Roland has) to keep the AA community informed of
what's happening in the marketplace and how this stuff performs.
Having read Roland's article, I don't find it (just voicing my opinion, of
course) so much above the others as to command a standing ovation, and your
comments posted here sound a bit bordering on flattery to me (no offence
intended!). If anything, his article should reassure his customers (whether
this was needed or not I don't know) of the quality they are getting after
waiting two years (or thereabouts) for one of his coveted
telescopes/mounts/accessories. Not being one of those, I don't see it being
much of a thing or much above other articles one can find on Cloudy Nights.
Plus, I thought the controversy over whether the P-V or the RMS wavefront error
is better at describing the optical quality of a scope was an old story these
days. Simply put, both must be quoted by any serious manufacturer, or any
aspiring to be one. Let the informed reader/buyer evaluate those figures on
his/her own.
Best regards
Andrea Tasselli
Clear skies, Alan
"mjc5" <mj...@psu.edu> wrote in message news:3A390C2B...@psu.edu...
I'm a technical writer by trade, and my campaign has always been to put the
information out there. You will get all kinds of reactions, many useful,
many useless, and some stupid or dangerous. But I am a firm believer that
knowledge is power, and that it's better to pour out the knowledge and give
people the power to draw their own conclusions, and take what comes, than to
select what information to hold back. Never mind that it might be too
technical; anyone who feels the need always has the option to educate
themselves in theory if not in practice, though having both is of course
best.
When information is withheld, speculation runs rampant, claims get made and
disputed, all without adequate information. It has always been my own
preference to get the information out and let it live its own life. I'd
rather have the problems that come with disclosure than the problems that
come with withholding information. And there are problems both ways -- this
_is_ very technical information, and not everyone is going to "get" it.
At the very simplest level, it is more fun to watch people arguing about
publicly available information, where I can make up my own mind with
sufficient effort, than to observe endless debates that lack for factual,
specific, and available information. Distortion in the first case can be
proved; in the second case, there are never any answers and it becomes
really difficult to pick apart the facts from the distortions.
It's no accident that scientific papers are peppered with citations. Just as
it is a good thing to average the readings on a telescope objective to get
at the "objective" truth, it would be very useful to average the numbers from
the various manufacturers to get at the truth of who's capable, who's not,
and the various grades in between. So what if the numbers aren't perfectly
comparable? What is? Is having not quite comparable numbers worse than
having no numbers at all? The real answer is to form a standards committee
so that tests mean something when laid side by side, but I don't exactly see
manufacturers rushing to do that.
Of course, numbers are just part of the information process; what I see with
my own eyes at the eyepiece is also an extremely important part. But that's
another argument for another time. <g>
--
Ron Wodaski
http://www.newastro.com
"Chris1011" <chri...@aol.com> wrote in message
news:20001214113756...@ng-md1.aol.com...
Clear Skies
Andrea
"mjc5" <mj...@psu.edu> schrieb im Newsbeitrag
news:3A3929C2...@psu.edu...
> > what's happening in the marketplace and how this stuff performs.
(snip)
>I read the article with interest. I just have one question. I am
>no optical expert, of course, but in regards to the P-V being overly
>pessimistic, especially when a large number of sample points are
>chosen, might it not make sense perhaps to have a 2-sigma P-V? That
>is, one could find the 95th percentile and 5th percentile deviations
>and use that as your P and V to determine the P-V value. That would
>tend to reduce the effect of extreme outliers, and would still be a
>metric somewhat independent of RMS error.
I have some trouble understanding how P-V ratings mean _anything_ unless
information about _how_ the surface varies between the peak(s) and
valley(s) is included. (I _think_ this was really at the crux of the argument
about whether the Mak mirrors really met AP's specification.)
Consider an extreme case: a small, say 1/2 mm, ball bearing sitting
on a 10-inch mirror surface. The P-V error would be about 2000 waves (as I
see it), but I doubt that it would have a visible effect on the image.
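The rough arithmetic behind that estimate, assuming ~550 nm light:

    wavelength = 550e-9    # metres; assuming green light
    bump = 0.5e-3          # the 1/2 mm ball bearing, metres
    print(2 * bump / wavelength)   # error doubles on reflection: ~1800 waves

Yet the bump covers a vanishing fraction of the aperture, so the RMS over
the whole surface stays tiny.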
If one starts to invoke randomness in the shape of the surface as related
to P-V, so that one could do what you're suggesting, it would seem that
differentiating between P-V statistics and "normal" RMS errors would start
to lose its meaning.
It seems to me that P-V only has use when applied to a specific type (or
types) of aberrations, so that the underlying curve can be deduced and
applied to determining image quality from that.
Zane
I don't think you're serious. No one produces a mirror with that kind
of aberration. In any case, it wouldn't register on my 95/5 metric.
> If one starts to invoke randomness in the shape of the surface as related
> to P-V, so that one could do what you're suggesting, it would seem that
> differentiating between P-V statistics and "normal" RMS errors would start
> to lose its meaning.
Not necessarily. Suppose the only aberration were spherical aberration.
Wouldn't the relationship between the P-V and RMS errors depend on the
relative strengths of different orders of SA?
> It seems to me that P-V only has use when applied to a specific type (or
> types) of aberrations, so that the underlying curve can be deduced and
> applied to determining image quality from that.
Intuitively it seems to me that a high P-V/RMS error ratio would indicate
increased degradation in the MTF on the high-frequency end, but I haven't
done the math yet.
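One can at least probe the shape-dependence numerically; a quick sketch
with two hypothetical wavefronts (not real test data):

    import numpy as np

    n = 512
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    inside = x*x + y*y <= 1.0                        # unit circular pupil
    xs, ys = x[inside], y[inside]

    def pv_over_rms(w):
        w = w - w.mean()
        return (w.max() - w.min()) / np.sqrt((w * w).mean())

    smooth = xs*xs + ys*ys                           # defocus-like, low frequency
    zone = np.exp(-((xs - 0.5)**2 + ys*ys) / 0.003)  # narrow bump, high frequency
    print(pv_over_rms(smooth))   # about 3.5
    print(pv_over_rms(zone))     # roughly 25 here; localized defects inflate the ratio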
atasselli wrote:
> > > what's happening in the marketplace and how this stuff performs.
ITYM "Non illegitimi te carborunderunt".
>Zane Kurz wrote:
>> Consider a case in the extreme: A small, say 1/2 mm, ball bearing sitting
>> on a 10 inch mirror surface. The P-V error would be about 2000 waves (as I
>> see it), but I doubt that it would have a visible effect on the image.
>
>I don't think you're serious. No one produces a mirror with that kind
>of aberration. In any case, it wouldn't register on my 95/5 metric.
I know, but it illustrates the disconnect between P-V and reality unless the
shape of the variation is specified.
>> If one starts to invoke randomness in the shape of the surface as related
>> to P-V, so that one could do what you're suggesting, it would seem that
>> differentiating between P-V statistics and "normal" RMS errors would start
>> to lose its meaning.
>
>Not necessarily. Suppose the only aberration were spherical aberration.
>Wouldn't the relationship between the P-V and RMS errors depend on the
>relative strengths of different orders of SA?
Indeed. That's what I was saying in the paragraph just below. I don't
think that that's a statistical situation, though.
>> It seems to me that P-V only has use when applied to a specific type (or
>> types) of aberrations, so that the underlying curve can be deduced and
>> applied to determining image quality from that.
>
>Intuitively it seems to me that a high P-V/RMS error ratio would indicate
>increased degradation in the MTF on the high-frequency end, but I haven't
>done the math yet.
I don't see how you can use the P-V without more information than just the
two numbers. That's why I used the ball bearing analogy -- the ratio would
be absurdly high, but the MTF of the complete mirror would be pretty good.
Zane
Neither am I -- I just give that impression, but most of it is just
made-up dog-Latin. <g>
> Oh well, some of us
>who studied for the Bar learned Latin. Others did their Bar work in a bar.
>Two guesses where I did mine :-)
I couldn't possibly comment!
You are exactly right.
Roland Christen
That doesn't convince me. You might be right, but the ball-bearing
analogy doesn't sway me, because I never claimed that such
a ratio *always* works -- just most of the time.
But like I said, I haven't done the math. It would be an interesting
exercise, but right now I don't have the time.
I see your point, Dave. Further discussion is useless.
Clear Skies
Andrea
Zeiss never mentioned the P-V, but only the RMS and the Strehl - at least
in the data sheet of the 100/640 APQ lens manufactured in August '95 that I
happened to get hold of :-)
Clear skies, Michael
Most wavefront errors in a refractor predominantly affect the high
frequency end of the MTF. Are you saying that you can calculate the MTF of
an optic knowing nothing more than those two numbers? An accurate
definition of the MTF requires that the point/line spread function be
completely defined.
(I don't think I'm trying to gore anybody's ox, BTW.)
Zane
>chri...@aol.com (Chris1011) wrote:
>
>>Intuitively it seems to me that a high P-V/RMS error ratio would indicate
>>increased degradation in the MTF on the high-frequency end, but I haven't
>>done the math yet.
>
>You are exactly right.
Never mind.
I think it's sunk in that Brian and I are not talking about exactly the same
thing. I'm thinking in terms of a purchase specification for an
optic to assure its performance, for example, not what the most common
trend is.
Zane
Thanks.
Roland
Dave, I'm glad you enjoyed and heartily recommend reading Roland's
posting to CloudyNights. Which part of his explanation of optical testing
really impressed you? Do you agree
with what he said, and if so, why?
-Rich
Please someone erase those awful "World Ads" from the
airwaves. Smug and robotic looking teenagers from around
the World lecture us on the "digital age."
"Look Mbuta earns $50/month and has a laptop!!!" BS!!!!!
> I know it is easy to be cynical in this world today, but I ask
>you to cut some slack. No one is trying to pull the wool over
>your eyes.
>
> Roland Christen
>
Of course it is easy to say you are being cynical (about JMc).
But who is really cynical? Let's see:
The most recent re-test of your 5" F/6 shows:
Peak 0.075
Valley -0.119
P-V 0.194
RMS 0.036
Strehl 0.948
A good scope, but not as good as you cynically claim.
And where is the 1/10 wave P-V wavefront? Where is the 0.02 RMS?
Note that this test was done on a new(!) scope, and
by a very experienced optics manufacturer in the USA.
Valery Deryuzhin.
Todd
Some reasonable ideas were suggested earlier (the 95/5 exclusion etc.),
but doesn't this point out the difficulty in using statistical terms and
measures when there's no guarantee that the underlying data represent a
valid sample? We can generate an "RMS" value from five data points, but
it has much less validity than the value we'd get from a 100 point
sample. Roland's discussion of how AP gets their P-V numbers shows that
some of the test data are collected randomly, and some of the data
points are subject to the operator's discretion.
Here on saa, we've seen hundreds of posts comparing P-V ratings from
various companies as though these numbers represent the same measure of
performance. It's assumed that a scope with 1/6 wave P-V will be
inferior to another that's rated as 1/10 wave P-V. It's also generally
assumed here that using an interferometer will invariably give test
results that are less biased.
The reality is that each company has its own biases and techniques when
analyzing test data. There is nothing wrong with that, after all, the
goal is usually to ship a product, not test it to death. But it also
means that we really can't compare P-V ratings. The RMS values (assuming
a reasonable sample size) are much better indicators of performance when
comparing optics from different companies.
Jim McSheehy
Valery Deryuzhin wrote:
[snip]
Robert
--
Robert Provin
http://voltaire.csun.edu
The Strehl ratio is an even better figure than the P-V for qualifying an
optic. Actually, RMS wavefront error and Strehl are closely related if the
aberrations (that is, wavefront distortions) are low enough to compare
roughly with the Rayleigh limit.
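For small errors the usual Marechal approximation ties the two together;
a quick check in Python (sigma in waves):

    import math

    def marechal_strehl(rms_waves):
        # Marechal approximation, valid for small wavefront errors
        return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

    print(marechal_strehl(1/50))   # ~0.984, the 98.4% figure AP quotes for 1/50 wave RMS
    print(marechal_strehl(0.036))  # ~0.95, consistent with the re-test numbers posted above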
Clear Skies
Andrea
I don't think this difficulty has been pointed out exactly, but thanks
for bringing it up. :) And I think it's a very good point. In order
to apply any kind of statistical method, the sample must be valid. The
points must be both numerous and well-chosen.
> We can generate an "RMS" value from five data points, but
> it has much less validity than the value we'd get from a 100 point
> sample. Roland's discussion of how AP gets their P-V numbers shows that
> some of the test data are collected randomly, and some of the data
> points are subject to the operator's discretion.
I must have missed that. Is that in his original article on Cloudy
Nights, or in one of the posts to this thread?
> Here on saa, we've seen hundreds of posts comparing P-V ratings from
> various companies as though these numbers represent the same measure of
> performance. It's assumed that a scope with 1/6 wave P-V will be
> inferior to another that's rated as 1/10 wave P-V. It's also generally
> assumed here that using an interferometer will invariably give test
> results that are less biased.
>
> The reality is that each company has its own biases and techniques when
> analyzing test data. There is nothing wrong with that, after all, the
> goal is usually to ship a product, not test it to death. But it also
> means that we really can't compare P-V ratings. The RMS values (assuming
> a reasonable sample size) are much better indicators of performance when
> comparing optics from different companies.
If the sample size is sufficiently large and well-chosen, I think a 95/5
P-V rating is just as valid a bit of information as the RMS value. It's
just a different bit, that's all.
You can minimize random errors (like noise in an electronic device), but you
can't cancel seeing out. What you do when you take lots of snapshots of
planets (optics permitting) is freeze it out (or hope so) and increase
the S/N ratio by stacking many of them (within the timeframe set by the
changing features of a rotating body).
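A toy illustration of the stacking gain, with simulated frames:

    import numpy as np
    rng = np.random.default_rng(0)
    frames = 1.0 + rng.normal(0.0, 0.2, (30, 64, 64))  # 30 noisy snapshots of a flat field
    stacked = frames.mean(axis=0)
    print(frames[0].std())   # per-frame noise, ~0.2
    print(stacked.std())     # ~0.2/sqrt(30): the S/N gain from stacking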
Clear Skies
Andrea
I wasn't aware that we were restricting our discussion to refractors.
> Are you saying that you can calculate the MTF of
> an optic knowing nothing more than those two numbers?
Again, I don't think you're serious. I never claimed to be able to do
so, in principle or in practice. I only claimed that a 95/5 P-V rating
could add information not already present in the RMS value. That does
not require being able to determine an MTF from the two values.
> An accurate definition of the MTF requires that the point/line spread
> function be completely defined.
Agreed.
> (I don't think I'm trying to gore anybody's ox, BTW.)
I see no ox here. :)
If that is what you are talking about, then yes, I would say that we
are talking about somewhat different (but related) matters.
Thanks very much for the kind words, but at the risk of pricking my
rather carefully cultivated hubris, I must point out that I am far
from infallible. For example, Zane Kurz was kind enough to set me
aright on the nature of the danger of solar viewing through a scope.
I only try to avoid making the same mistake twice.
(If I am so unfortunate as to make it twice, however, it seems less
unlikely, sadly, for me to make it three, four, five times. <g>)
I think you are misinterpreting something. Again, being unfamiliar with actual
interferometer testing, you are coming to wrong conclusions. The 5 or 6 fringes
that I referred to are not synonymous with 5 or 6 data points. Each fringe is
analyzed with perhaps 20 data points. Furthermore, the fringes are then shifted
either laterally or rotationally and then again analyzed with 20 data points
each. This is repeated as many as 10 times. In the end, the surface is
completely described with many hundreds of data points. The result is a set of
Zernicky polynomials, which the software then analyzes and assigns P-V, RMS
and Strehl numbers to. Why do you insist that this is not valid, or somehow not
accurate? On what authority do you base your assessments?
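In outline, the fitting step works like the following simplified sketch
(a toy basis and synthetic data, not our actual analysis software):

    import numpy as np

    # Simplified basis: piston, tilts, defocus, astigmatism. Real software
    # uses a full set of Zernike polynomials.
    def basis(x, y):
        r2 = x * x + y * y
        return np.column_stack([np.ones_like(x), x, y, 2*r2 - 1, x*x - y*y])

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, (2, 600))
    keep = x*x + y*y <= 1.0                  # several hundred points on the pupil
    x, y = x[keep], y[keep]
    true = np.array([0.0, 0.01, -0.02, 0.05, 0.03])               # waves
    measured = basis(x, y) @ true + rng.normal(0, 0.005, x.size)  # phase + noise

    coeffs, *_ = np.linalg.lstsq(basis(x, y), measured, rcond=None)
    fitted = basis(x, y) @ coeffs
    print(fitted.max() - fitted.min(), fitted.std())   # P-V and RMS of the fit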
Roland Christen
Having just withdrawn from a fruitless discussion of cooling
telescopes, I offer my sympathies. For someone to directly challenge
your highly successful practice is ludicrous.
Del Johnson
A challenge to anyone's business should come in the form of new products,
better made, not worthless discussion. That, however, is not the function
of SAA; rather, it seems that this newsgroup is here to provide
opportunities to argue endlessly about nothing.
Roland Christen
- Mike -
John Steinberg wrote:
>
> Film @ 11
>
> Sheesh...
>
- Mike -
atasselli wrote:
>
> Were Roland not that Roland (AP), I strongly doubt that the same article
> would have raised so much fuss.
>
> Clear Skies
>
> Andrea
>
> "mjc5" <mj...@psu.edu> schrieb im Newsbeitrag
> news:3A3929C2...@psu.edu...
> > Well, I thank all the contributors then! I think it's an omission of
> > timing. We were talking about this article, ergo any praise is directed
> > at that particular author.
John Steinberg wrote:
> Film @ 11
>
> Sheesh...
>
> Starry skies,
>
Nevermind...I was going to make a comment, but then
I realized I had read the title too quickly. It says
"SHIFTS in his chair."
> -John Steinberg
>
> email: manbytsdog at aol dot com
>
> NexStar 5: The Unofficial Resource Site (now fortified with Vitamin A)
> http://members.nbci.com/_XMCM/nexstar/index.html
Rockett
----------------------------------------------------------------------------
Capella's Observatory (CCD Imaging)
http://web2.airmail.net/capella
>I think another way to say that would be a lot of heat, but not something that
>does, nor is it intended to, shed light on anything. Or, lots of noise, little
>or no content. Dave
With all due respect, content or the lack of it can sometimes be in the eye
of the beholder.
The subject of how to measure and rate optics, and the related accuracies
and uncertainties is, it would seem to me, a very important one for the
serious amateur. Simply buying one copy of everything and then making
selections based on using it isn't an option that's open to everyone.
These kinds of discussions have raised the level of understanding of many
here. I don't know whether you were around during the long and infamous
discussions of 1/27 wavefront (P-V) claims for a particular source of
Newtonian mirrors -- I'm sure Del remembers, and I _think_ his understanding
has been expanded some, as have mine and many others'.
To the specific discussion here, getting and interpreting interferometric
data is not a simple thing. If companies are going to use interferometric
ratings in marketing their equipment, it behooves us all to know something
about what they really mean. Roland's posts going into more detail about
how he does things are appreciated and very interesting to me, as are other
people's opinions. I don't know if we would understand it as well if some
people hadn't jumped him about it -- not to condone the spirit in which it
was done.
>Chris1011 wrote:
>
>>>For someone to directly challenge your highly successful practice is
>>>ludicrous.
>>
>>A challenge to anyone's business should come in the form of new products,
>>better made, not worthless discussion. That, however, is not the function
>>of SAA; rather, it seems that this newsgroup is here to provide
>>opportunities to argue endlessly about nothing.
I can understand your pique at being attacked by your competitors, but I
certainly don't agree that these are "worthless discussions" about
"nothing" from the amateur members of SAA. If that were so there wouldn't
be any disagreement among professionals about characterization of optics.
Zane
"David A. Novoselsky" wrote:
> Hmmmm, the weather must be bad where you are at too, Rockett. Dave
>
It is. ;^)
Rockett
I wonder if that is how a person can distinguish between an attorney and a
scum-bag lawyer.
Heh-heh :-)
rat
~( );>
It's one of my faves too. I hang around there whenever the weather is too
cloudy to observe.
rat
~( );>
Chas P.
Heh heh. How'd you know where I live, Chas? I've got an outside door to my
observing deck. You should come over some time. Kind of cold though. At least
we have this great hobby in common.
Clear skies to you,
rat
~( );>
Chas P.
If you want to make such a claim, it would be nice to provide a few more
details. Roland has provided a very thorough description of how he tests
his lenses on the AP Users Group and at Cloudy Nights. Exactly how did the
"very experienced optics manufacturer" test this lens (and why don't you say
who did it)? How many such lenses has the company tested in the past? What
type of interferometer was used? What wavelength was it tested at? How was
the lens supported? Exactly how were the measurements done and analyzed?
You seem skeptical of Roland's claims, but is there any reason I should
believe your numbers simply because you have posted them here?
Clear skies, Alan
The ">"s are from your posting on Cloudynights.com:
>This is repeated a number of times with the fringes tilted to various
>angles, and the reference optics, mirrors and beamsplitters may be rotated to
>eliminate any possibility of local errors being added to the test optic. The
>results are averaged in order to get a more accurate and realistic picture of
>the aberrations. These averaged results usually have the same RMS rating, but
>may result in better P-V ratings due to the cancellation of systemic errors.
The reference optics may be tilted/rotated, or maybe not at all, right?
That's operator discretion. A left-handed tester might rotate them
differently than a right-handed tester, etc., etc. The results of any of
these actions/errors by the operator aren't random or repeatable. They
represent a source of error somewhere between zero and awful. Your guess is
as good as mine.
>In day to day testing, the optician can pretty quickly tell whether a
>set of fringes will meet the performance goals or not. The fringe patterns
>that are recorded on the computer screen will not be absolutely clean,
>even if the optic under test is perfect. There always exist dust particles
>on the reference elements and autocollimating mirrors, as well as on the
>beam splitter cubes and laser collimating optics. Since interferometers
>are analog devices, this is akin to the clicks and pops that appear on
>vinyl phonograph records.
Yes, noise is always a problem, especially for low-level
measurements. Averaging and integration can get you to a better RMS
value, but where does this help you in determining a P-V rating? A systemic
error will bias all the values, and averaging will not reduce it at all.
>This "noise" can cause the software to add spurious data points to the fringes
>where none should be, and this will normally lower the P-V rating, but
>again, the RMS is unaffected. In order to get a fair rating for the optics,
>I average multiple passes, something that Peter Ceravolo has recommended to do.
>Just as we would not downgrade the performance of the Chicago Symphony for
>every little recording noise, so I do not downgrade the performance of our
>optics because of interferometer noise.
Your analogy is fine for spurious events - we can ignore clicks and pops
in a 40-minute symphony, but what about 60-cycle hum from a loose cable
that you hear all through the recording? It's hard to ignore that kind
of noise.
Unless a system has enough signal-to-noise ratio, you can't rely on the data
it spits out to determine a P-V value. We use a graph (based on equations in
"Experimental Measurements: Precision, Error and Truth" by N.C. Barford,
ISBN 0471907014) that shows error bounds versus S/N ratio. In my work, a
100:1 (20 dB) S/N ratio yields measurements with about +/-12% (1 dB) of
uncertainty. I can average many measurements to get closer to a mean
value, but the accuracy of any individual measurement will always be
within a range of +/-12%. There is no way I could claim an absolute (i.e., P-V)
uncertainty of +/-5% with these data. The best I could claim is +/-12%.
For meaningful 1/10 wave P-V claims, the total error for the individual
measurements (random + systemic + operator) has to be less than +/- 1/20 wave,
and probably more like +/- 1/40 wave. That seems like a tall order for anyone
outside of some very specialized measurement facilities. I'm not saying your
optics aren't corrected to 0.02 wave RMS; I just doubt you can claim 1/10 wave
P-V max, based on what you've described.
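A back-of-the-envelope version of that kind of bound (my own rough
worst-case formula, not Barford's exact equations):

    import math

    def worst_case_error_db(snr_power_db):
        amp = 10 ** (snr_power_db / 20)     # amplitude signal-to-noise ratio
        return 20 * math.log10(1 + 1/amp)   # error if the noise adds in phase

    print(worst_case_error_db(20))  # ~0.83 dB, roughly the +/-1 dB (~12%) quoted above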
Jim McSheehy
Can't understand why you keep posting replies, then.
Besides, Roland, Zernicky is no one, while Zernike is the one who
"invented" the polynomials bearing his name.
Clear Skies
Andrea Tasselli
Bill.
Only if you wear the jacket with the arms
that tie in the back.
-Rich
Please someone erase those awful "World Ads" from the
airwaves. Smug and robotic looking teenagers from around
the World lecture us on the "digital age."
"Look Mbuta earns $50/month and has a laptop!!!" BS!!!!!
atasselli wrote:
> You can minimize random errors (like noise in an electronic device), but
> you can't cancel seeing out. What you do when you take lots of snapshots of
> planets (optics permitting) is freeze it out (or hope so) and increase
> the S/N ratio by stacking many of them (within the timeframe set by the
> changing features of a rotating body).
Alan,
I am not the only one with sceptical doubts about the 1/10 wave P-V claims.
What are my reasons for them? They are:
1. The absence of a final certificate of quality. A quality of 1/10
wave P-V leaves nothing to be desired if it is combined with a
good RMS like 0.02. So, if such quality is reachable for ALL his
objectives, there is no serious reason not to provide the
scopes with such certificates.
2. Childish explanations of why AP refuses to supply final
certificates. I believe ANY other firm would gladly supply
such a certificate if such quality could be achieved in each of their
objectives. In fact, certificates even for lesser quality than
1/10 wave P-V are supplied by other firms without
any problems.
3. The nature of an open system with a liquid inside. Humanity
has still not found a method to stop leakage from an open system.
Even pitch leaks, to say nothing of gels and oils.
4. Our own experience with different gels and different oils.
To come back to the scope I mentioned: I can say that
it was tested in the vertical position (pointing at the zenith) with all
the precautions necessary in such cases. The test was done
by a company which regularly uses an interferometer with
a green laser and provides most of their custom optical systems
with final certificates.
This scope never crossed the USA border and is still in the
USA. It was unused and tested new.
So far, ALL oiled AP objectives re-tested in Europe (Germany,
France) and the USA show at least 2x less quality (in P-V)
than was claimed.
And, Alan, the question is not about the quantity of objectives that
were tested. Let's forget about ALL the re-tested scopes and
remember only THIS one. Roland claims 1/10 wave
quality for EACH of his oiled objectives. Is it not strange
that THE VERY FIRST, NEW(!) one shows 2x lesser quality
in P-V and 1.7x in RMS? Let's suppose this is not typical.
Then the question is why THE VERY FIRST one shows such a
deviation. Ask any person a bit familiar with the field of
mathematics called statistics what kind of conclusion he can draw
from such a case. I think you can draw your own conclusion
easily.
Valery Deryuzhin.
So far I have not been allowed to give the name of this company.
But in proper time, I believe I will receive permission
to give the names and all figures along with the interferogram.
We have already asked one very reputable optical company
and one scientific institute to test a scope, with permission
to use the results publicly. However, we were not able
to find an unused scope. We would like to make a clean
experiment: the scope must be unused too. In that case
no explanation about the difference between the claimed
and the real quality can be taken into consideration.
We received several offers for our Astromart request, but
not one was for an unused scope.
And, John and others, don't take me wrong about the quality
of AP oiled scopes. A quality like 0.035 RMS is 2x
better than the so-called diffraction limit. But still 1.7x worse
than claimed.
1) Doesn't simple signal averaging reduce the effect of nonperiodic noise?
2) What do you believe could be a source of correlated noise (in reference
to your 60 Hz analogy below) in an interferometric measurement?
3) What is the signal measured in an interferometric test?
Clear Skies,
John
Come'on Chas, you can do it! You a man or a mouse?
rat
~( );>
> But it also means that we really can't compare P-V ratings.
>The RMS values (assuming a reasonable sample size) are
>much better indicators of performance when
> comparing optics from different companies.
>
> Jim McSheehy
Jim,
And these RMS values must be given with a quality certificate,
not just claimed. Don't you think?
Best, Howie
Jim,
All of the above are quite elementary things, and if he doesn't understand
them, then how can we believe everything he claims?
If he does understand them, then why does he claim things
contrary to all of the above?
About 0.02 RMS: such a level in an _oiled_ objective is
a real challenge, and I quite doubt that such performance
can be achieved for each of 300+ objectives per year
figured by one pair of hands. I can barely, but still, believe
that such quality could be achieved for 300+ objectives per
year if they had an all-spherical design and were made by
a small team, not by one pair of hands. But as Roland
states, he aspherizes two surfaces on each of his oiled
objectives. This needs considerable time for
numerous tests, lens cooling, assembling, centering,
collimation in a double-pass scheme, collimation in an
interferometer, receiving and processing interferograms,
oiling the objective, giving it time to relax, removing the rest of the oil,
cleaning it, finally testing it, and, in case an objective doesn't
pass this 0.02 RMS, reworking it. Don't forget about the
additional 1/10 wave P-V barrier.
So, I personally don't believe that this is possible, even
having the Opticam center - to make all 300+ objectives per
year with such precision with only two persons, and only
one pair of hands on the figuring of 600+ aspherized
surfaces per year.
The only way to convince people even a bit familiar with
optics manufacturing that all these 300+ objectives
are really 1/10 wave P-V and 0.02 RMS is to supply a test report
with each of them, and for these test reports to be confirmed
by re-testing each time that is done.
All other reasoning is simple tales for the uninitiated.
Even the most clever explanations can't prove an
optic's performance, unless blind believers accept such
a way of proving optical quality.
Valery Deryuzhin.
>I'm not saying your optics aren't corrected to 0.02 wave RMS; I just
>doubt you can claim 1/10 wave P-V max, based on what you've described.
One more portion of doubts about the claim that each AP objective
performs at 1/10 wave P-V and 0.02 RMS.
It has been reported several times, on the s.a.a. pages as well,
that Traveler scopes are inferior to other 4" high-end
scopes when compared on planets. We have heard the
explanations that this is normal and that the Traveler was designed
for other tasks - for wide-field photography and CCD work.
Fine, no question about this. But let me state that if a given scope
really has 0.02 RMS wavefront performance, it will be virtually
impossible to tell apart scopes with 0.01 RMS and 0.02 RMS
performance. Most of the 4" high-end scopes the Travelers were compared
with have around 0.025 - 0.02 RMS. Why, in this case,
were the Travelers inferior?
> "John J. Kasianowicz" wrote:
>
> Hi JMc,
>
> 1) Doesn't simple signal averaging reduce the effect of nonperiodic noise?
Yes, but it can't reduce systemic or operator errors.
>
> 2) What do you believe could be a source of correlated noise (in reference
> to your 60 Hz analogy below) in an interferometric measurement?
(This isn't that Monty Python skit at the Bridge of Death, is it? Don't
ask me my favorite color!) I'd say vibration, temperature fluctuations,
instability in the laser cavity, etc.
>
> 3) What is the signal measured in an interferometric test?
IIRC, it's phase shift relative to a flat or spherical reference surface.
We shouldn't try and take the audio analogy too far. Optical tests are
more complicated because of factors like coherence, polarization, etc.
My point was there could be errors/noise in a measurement that exceed
the P-V error we're trying to measure. If they can't be averaged out or
nulled, the best we can say is that the P-V error of the thing we're
testing is not greater than the noise.
After seeing the typical interferograms sold with amateur optics, and
learning how companies run their production tests, I'm more convinced
now that P-V numbers are less important than the RMS error when it comes
to rating optical quality.
Disclaimer: I'm not an optician - my work involves broad band antennas
and radar reflectors, and the test wavelength is usually around 3 cm.
Getting to 1/10 wave P-V surface accuracy is no big deal at 10 GHz ;-)
BTW, What is the air-speed velocity of an unladen swallow?
Jim McSheehy
I don't know the answer - doing business with the public is not an easy
thing. Takahashi and TMB don't claim any numbers, and by all reports,
their objectives are very good. It is always better to under-promise and
over-deliver. Maybe Roland is right about not giving the test data to
customers. They can't throw stones if he doesn't hand them any ;-)
Jim McSheehy
Who are you referring to?
As I'm sure you are aware, one of those claims was the result of an
uncontrolled test. I therefore discount its validity.
There are three hypothetical possibilities regarding the relative planetary
performance of an A-P Traveler vs. a high-end scope w/lesser specs.
1) The Traveler's performance is superior. If this is true, your argument is
moot.
2) The Traveler's performance is the same. Your argument may be moot because
either the test was not performed properly (the reviewer lacks sufficient
knowledge to test the optics properly), the seeing conditions never permit
critical in-focus testing (of planet images), or it simply may be
difficult to distinguish between two scopes w/close but excellent
specifications.
3) The Traveler's performance is the same or worse because its colour focus
variation exceeds that of the longer f-ratio scope. In this case, your
argument may be irrelevant if the interferometry is performed at one
wavelength, which I believe it is.
Clear Skies,
John
> Somewhere on the distant horizon looms a point.
>
Ninny! A point is a mathematical abstraction of zero dimensions! How could
that possibly "loom" on the horizon! Oh, the dolts I put up with around
here...
--
E-Mail: j...@joebergeron.com
Web site: www.joebergeron.com
But the core issue may be resolved* by the third dictionary definition of
"loom" as a noun - that is, a "loon", which is a British dialect derivation.
So, perhaps he meant to say that there was a pointless loon sticking its
head up over the horizon.
*use of the word "resolved" is meant to imply a loose referral to the topic
of astronomy.
I'm sorry, what was the point of all this?
No actual topic was harmed in the creation of this post and, without motive
nor knowledge of the preceding thread, nothing personal was impugned or
implied.
-Paul S. Walsh
(My GOD, it's full of Clouds!)
"Joe Bergeron" <jose...@aol.com> wrote in message
news:josephb41-151...@10.0.1.2...
I strongly doubt that anyone can see the difference in the field between a
1/8 wave P-V unobstructed telescope and the same one with a 1/10 wave P-V
correction, even in outstanding seeing conditions. Plus, AP doesn't publish
claims on P-V rating, just RMS and Strehl, as shown here below, quoted from
the AP site:
>>
The finished lens is then coated and assembled in a precision cell which is
fully temperature-compensated. The cell is attached to the tube assembly and
the optical alignment is checked at high power on an artificial star. The
lens is serialized and all test data including the final interferogram are
stored in a computer file along with the customer's name. Our extensive
hand-figuring techniques and the use of H3 quality blanks allow us to
guarantee that all production lenses will meet the 1/50 RMS (98.4% Strehl
ratio) minimum limit.
<<
I just wonder why not release the info about each scope to the buyer if it
is already there. If the above is true, this is going to kill all the endless
discussions about AP quality for good.
"John J. Kasianowicz" <sur...@erols.com> schrieb im Newsbeitrag
news:91ekmn$h60$1...@bob.news.rcn.net...
Bill.
atasselli wrote:
> My point is that the analogy made by Roland doesn't hold.
>
Using the point of Joe's post, your thoughts are uncorrelated. ~8^)