Bug in Variance of Longslit Reduction

J. Sebastian Pineda

Aug 18, 2014, 6:46:42 PM
to mosfi...@googlegroups.com
Hey All,

I've been looking at the longslit reduction routines, and there seems to
be an error in the way they calculate the variance. I looked into it
because the longslit data I was reducing didn't seem to have the
appropriate signal-to-noise.

The regular image data is fine, but the variance is off. Instead of
computing

[ (A+B)*Gain + RN**2 ] / itime**2 ,

as the variance (as is done in the regular multi-slit mask reductions),
it computes

[ (A+B) + (RN/Gain)**2 ] * (Gain / itime) ,

where RN takes into account the MCDS mode for the number of reads.
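
To make the comparison concrete, here is a minimal sketch of the two
expressions side by side. This is not the DRP's actual code; the function
names and arguments are mine, with A and B the per-pixel counts of the two
frames and Gain, RN, and itime meaning the same as above:

def mask_variance(A, B, gain, read_noise, itime):
    # Multi-slit mask reductions: [ (A+B)*Gain + RN**2 ] / itime**2
    return ((A + B) * gain + read_noise**2) / itime**2

def longslit_variance_as_coded(A, B, gain, read_noise, itime):
    # Longslit routine as it stands: [ (A+B) + (RN/Gain)**2 ] * (Gain / itime)
    return ((A + B) + (read_noise / gain)**2) * (gain / itime)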

On another note, as currently set up, the longslit reduction produces an
image of the variance, while the multi-slit mask reductions give images
of the standard deviation instead. Should the longslit routines be
modified to give the same kind of output as the regular mask reductions?
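
For what it's worth, converting one to the other is just a square root of
the variance image; a trivial sketch (assuming the usual numpy import; the
function name is my own):

import numpy as np

def variance_to_sigma(variance_image):
    # Standard-deviation image from a variance image; any negative
    # pixels are clipped to zero before taking the square root.
    return np.sqrt(np.clip(variance_image, 0.0, None))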

thanks,

-Sebastian

--
J. Sebastian Pineda
Cahill Center for Astronomy and Astrophysics
California Institute of Technology
1200 California Blvd. MC 249-17
Pasadena CA, 91125

Office: (626) 395-6857
