I've been looking through the code, and there are a few issues that might be the source of the problem. One is that bad pixels are not flagged in the *itimes* images, so all pixels on the bad-pixel mask apparently receive the maximum integration time in each A or B stack. It seems like these should be set to zero.
Then, if you look at the A and B variance images, bad pixels have small values compared to good ones (all set to 27.5 in the image I am looking at). I'm not sure where this value comes from, but my guess is that each bad pixel is being set to "1" (the logical mask image has 1's for the bad pixels and zeros elsewhere), so the summed variance is just the number of frames that went into the stack plus the constant read-noise term.
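If that guess is right, a toy numpy snippet makes the arithmetic explicit; the frame count and read-noise value below are placeholders I picked, not numbers from the pipeline:

    import numpy as np

    # Placeholder values, purely illustrative -- not taken from the pipeline.
    n_frames = 5            # frames in the A (or B) stack
    read_noise_term = 22.5  # constant read-noise variance contribution

    # If each bad pixel contributes "1" per frame (from the logical mask)
    # instead of a real per-frame variance, the summed "variance" is just
    # n_frames + read_noise_term, i.e. a small constant of the right order.
    var_bad = np.sum(np.ones(n_frames)) + read_noise_term
    print(var_bad)          # 27.5 with these placeholder numbers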
When the rectified inverse variance image is produced, all of the flagged bad-pixel regions (which, as above, seem to have been given the full exposure time) end up with huge values, e.g. (1800)**2 / 27.5 ~ 120,000.
It seems like the fix would be to apply the bad-pixel mask to the integration time arrays (so that flagged pixels have zero integration time); then, when the inverse variance array is formed in sec^2/e^2 units, those pixels will have invar = 0 (and thus will be ignored in stacks of A-B and B-A). For stacks of A and B, a pixel would then only have contributions from "good" pixels. I think the most important change to make things work is to replace bad pixels with zeros in the *itimes* arrays; currently it appears that bad pixels instead carry the full exposure time.
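A minimal sketch of that fix, assuming numpy arrays and names of my own choosing (itime, var, and a boolean bad-pixel mask badpix with True = bad); this is just to illustrate the idea, not the pipeline's actual API:

    import numpy as np

    def masked_inverse_variance(itime, var, badpix):
        """Zero the integration time of flagged pixels, then form the
        inverse variance in sec^2/e^2 so those pixels get invar = 0."""
        itime = np.where(badpix, 0.0, itime)   # bad pixels -> zero itime
        with np.errstate(divide="ignore", invalid="ignore"):
            invar = np.where((itime > 0) & (var > 0), itime**2 / var, 0.0)
        return itime, invar

With invar = 0 at flagged pixels, they drop out of the A-B and B-A stacks automatically, and the stacked A and B frames only accumulate contributions from good pixels.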
Hope this helps...