Bug in DRP


Nick Konidaris

Dec 11, 2012, 12:23:29 PM
to mosfi...@googlegroups.com
All- Chuck Steidel found a bad bug in the DRP. I excerpted from Chuck's email below:

Hi Nick,
    I think there may be serious problems with the *ivar.fits files produced by the DRP, at least for the last few versions.
I was just looking at a file reduced on Oct 11, and the ivar image looks OK in general, except that "bad" regions
seem to have *larger* ivar values than good ones. For example, the areas of the slit not receiving full exposure time
have uniformly higher ivar values than the central region, and bad pixels all over the image are uniformly higher
than good regions, which is the opposite of what they should be.

I will be looking into this in a few days.

Chuck Steidel

Dec 12, 2012, 12:17:01 PM
to mosfi...@googlegroups.com, npk, Allison Strom
I've been looking through the code, and there are a few issues that might be the source of the problem. One is that bad pixels have not been flagged in the *itimes* images, so all bad pixels (i.e., ones on the bad-pixel mask) apparently received the maximum integration time in each A or B stack. It seems like these should be set to zero.

Then, if you look at the A and B variance images, bad pixels have small values (all set to 27.5 in the image I am looking at) compared to good ones. I am not sure where this value comes from, but if I had to guess, it comes from setting each bad pixel to "1" (the logical bad-pixel mask has 1's at the bad pixels and zeros elsewhere), so the summed variance is just the number of frames that went into the stack plus the constant read-noise term.

When the rectified inverse variance image is produced, all of the flagged bad pixel regions (which seem to have been given the full exposure time as above) become huge numbers, e.g., (1800)**2/27.5 ~120,000.
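The failure mode can be reproduced per pixel. This is a sketch with illustrative names (not the DRP's actual functions), using the numbers from Chuck's example and an assumed "good" variance of 40,000 e^2 for comparison:

```python
# Per-pixel sketch of the bug: a bad pixel keeps the full itime but its
# variance has collapsed to ~27.5, so its rectified ivar comes out *larger*
# than a good pixel's. Function name and the good-pixel variance are
# illustrative assumptions, not values from the DRP source.

def rectified_ivar(itime_s, var_e2):
    """Inverse variance in s^2/e^2 units: itime^2 / var."""
    return itime_s ** 2 / var_e2

# A good pixel: full exposure, realistic variance from photons + read noise.
good = rectified_ivar(1800.0, 40000.0)   # 1800^2 / 40000 = 81.0

# A bad pixel: full 1800 s itime but variance stuck at ~27.5, giving the
# huge ~120,000 value Chuck quotes.
bad = rectified_ivar(1800.0, 27.5)       # ~117,818

assert bad > good   # bad regions end up with *higher* weight than good ones
```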

It seems like the fix would be to apply the bad-pixel mask to the integration time arrays (so that flagged pixels have zero integration time); then, when the inverse variance array is computed in sec^2/e^2 units, flagged pixels too will have invar=0 (and thus will be ignored in stacks of A-B and B-A). For stacks of A and B, the pixel would then have contributions only from the values contributed by "good" pixels. I think the most important thing to make things work is to replace bad pixels with zeros in the *itimes* arrays -- currently it appears that instead bad pixels have full exposure time.
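Chuck's proposed fix can be sketched per pixel as follows. This is my reading of the suggestion, not the DRP's actual implementation; the function name and the guard against division by zero are assumptions:

```python
# Sketch of the proposed fix: zero the itimes under the bad-pixel mask
# first, so flagged pixels naturally come out with invar = 0 and carry no
# weight in the A-B / B-A stacks.

def masked_ivar(itime_s, var_e2, is_bad):
    """Inverse variance in s^2/e^2; bad pixels get itime = 0 -> ivar = 0."""
    if is_bad:
        itime_s = 0.0        # apply the bpm to the integration-time array
    if itime_s == 0.0 or var_e2 <= 0.0:
        return 0.0           # zero weight: ignored when frames are stacked
    return itime_s ** 2 / var_e2

print(masked_ivar(1800.0, 27.5, is_bad=True))      # 0.0, not ~120,000
print(masked_ivar(1800.0, 40000.0, is_bad=False))  # 81.0
```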

Hope this helps...

Chuck Steidel

Dec 12, 2012, 1:13:00 PM
to mosfi...@googlegroups.com, npk, Allison Strom
Further investigation shows that the latest version of the code produces "NaN" for the final invar where pixels had been masked on the bpm and set to zero using the "filled" array option, because invar is simply set to 1/var. I think we may need to keep track of the itimes to get the right final inverse variance images. It might be better to read in the variance images (total e^2), rectify and shift them, and add them, and then adjust for the fact that the integration time is 2x higher in the center of the image. In other words, we need to convert back to true variance to do the arithmetic, and only convert to units of (e-/s)^2 at the end, using the actual itimes arrays manipulated in the same way as the var arrays...
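The combination arithmetic Chuck describes can be sketched per pixel. This is illustrative only (the real DRP operates on full rectified 2-D arrays, and the names and sample variances here are assumptions): true variances in e^2 add under co-addition, itimes are tracked the same way, and the conversion to (e-/s)^2 happens only at the end.

```python
# Per-pixel sketch: add true variances (e^2), sum the matching itimes, and
# only then form the inverse variance in s^2/e^2 with the summed itime.
# Masked pixels return 0 weight instead of NaN from a bare 1/var.

def combine_ivar(vars_e2, itimes_s):
    """Combine co-added contributions: (sum itime)^2 / (sum var)."""
    total_var = sum(vars_e2)      # true variance adds under co-addition
    total_itime = sum(itimes_s)   # 2x higher where A and B both contribute
    if total_var <= 0.0 or total_itime == 0.0:
        return 0.0                # masked pixel: zero weight, no NaN
    return total_itime ** 2 / total_var

# Center of the image: both the A and B stacks contribute (itime doubles).
center = combine_ivar([40000.0, 40000.0], [1800.0, 1800.0])  # 3600^2/80000 = 162.0
# Edge of the slit: only one stack contributes, so half the weight.
edge = combine_ivar([40000.0], [1800.0])                     # 81.0
# A fully masked pixel contributes nothing and stays finite.
masked = combine_ivar([0.0], [0.0])                          # 0.0
```

Doubling the exposure halves the variance of the rate, so the central region correctly ends up with twice the inverse variance of the single-stack edge.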

Nick Konidaris

Feb 4, 2013, 5:43:51 PM
to mosfi...@googlegroups.com, npk, Allison Strom


Hi Chuck+,

Andreas Faisst and I sat down for a few hours and I think that we fixed this bug.

Best
Nick