The problem with stacking raw images is that you haven't dealt with vignetting, noise, etc. If you preprocess them you can get rid of vignetting, much of the noise, etc. But do you do any other image modification in your raw development: color correction, blackpoint, saturation, etc?
I would think the stacking process is what increases the signal-to-noise ratio. I am not sure it is helpful to reduce noise before stacking. Won't you risk losing detail if you do that? How about increasing contrast, microcontrast, vibrancy, saturation, and so on?
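A quick numpy toy (synthetic frames, made-up numbers, not real camera data) of what stacking buys: averaging N frames with independent random noise cuts the noise by roughly sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 16

# Synthetic frames: a flat signal of 100 plus Gaussian read noise (sigma = 10)
frames = 100.0 + rng.normal(0.0, 10.0, size=(n_frames, 64, 64))

single_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()

# Averaging 16 frames should cut random noise by about sqrt(16) = 4
print(round(single_noise / stacked_noise, 1))
```

So stacking itself raises the signal-to-noise ratio without touching the data of any single frame.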
You shouldn't do any preprocessing before stacking. Correct vignetting with flat frames, correct sensor issues with bias and dark frames, then stack. After stacking you can do all your background corrections, which might include gradients (imperfectly corrected vignetting, or light gradients from light pollution), then apply your histogram stretches and curves.
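The calibration arithmetic behind that can be sketched in a few lines (all frames below are synthetic stand-ins; real bias, dark, and flat frames come from the camera). The standard formula is (light − dark) divided by a normalized (flat − bias):

```python
import numpy as np

h, w = 8, 8

# Synthetic calibration frames (illustrative values only)
bias = np.full((h, w), 100.0)                        # fixed sensor offset
dark = bias + 5.0                                    # offset + dark current
y, x = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.3 * ((x - w/2)**2 + (y - h/2)**2) / (w/2)**2
flat = bias + 5000.0 * vignette                      # vignetted flat exposure
light = dark + 200.0 * vignette                      # uniform sky, vignetted

# Standard calibration: remove offsets, then divide out the flat
flat_c = flat - bias
calibrated = (light - dark) / (flat_c / flat_c.mean())

# The vignetted "light" frame comes out effectively uniform again
print(round(calibrated.std(), 6))
```

Only after this (and stacking) would you touch curves and stretches.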
I think part of the OP's concern is lens corrections. Very few astro tools, if any, account for this. Then Roger's new workflow could be of benefit, if in fact linearity can be maintained before integration.
It also depends on what you want to achieve and how simply you want to achieve it. As limits are pushed further, limitations of processing can show. If you want natural color, different methods have advantages.
In old-school theory, the best results come from the traditional linear workflow, working in 32-bit floating point. This is the model of the traditional image processing programs like PixInsight, ImagesPlus, and partially Deep Sky Stacker. The idea in these programs is that you feed the camera raw files to the program and the program decodes the raw data. You separately apply bias, flat field, and dark frame corrections. What the typical workflow of these programs does not do is the color matrix correction required for Bayer color sensors.
So if you are using astro software like PixInsight, you must apply the matrix correction by hand; Deep Sky Stacker does not do it either, so there you would apply it by hand after stacking. Without the matrix correction, colors are bland because the filters in a Bayer color filter array have a broader spectral response than the human eye. PixInsight's "color calibration" does not fix this problem. As a result, colors from traditional processing are bland and people boost saturation to try to compensate, but because the color matrix correction was never done, the resulting colors are not accurate. The amateur astrophotography community therefore has an inconsistent and bland color perception of the deep sky, because the majority of those using the "traditional workflow" do not apply the matrix correction. But that isn't the only problem.
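What the matrix correction does can be sketched in a few lines. The 3×3 matrix below is a made-up illustration (real coefficients come from the camera maker or raw file metadata); the point is that the negative off-diagonal terms undo the spectral overlap of the Bayer filters, which is what restores saturation without a blanket saturation boost:

```python
import numpy as np

# Hypothetical color correction matrix. Rows sum to 1 so neutral gray
# stays neutral; negative off-diagonals counteract Bayer filter overlap.
ccm = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.6, -0.3],
    [-0.1, -0.5,  1.6],
])

# A desaturated "camera RGB" pixel (linear, white-balanced)
cam_rgb = np.array([0.60, 0.50, 0.45])

out_rgb = ccm @ cam_rgb
print(np.round(out_rgb, 3))  # channels spread further apart: more saturated
```

The channel spread in the output is wider than in the camera RGB input, i.e. the color becomes more saturated in a controlled, per-channel way rather than by a global saturation slider.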
The article has several more traditional misconceptions. The ideas expressed in it may have worked well in the CCD era, pre early 2000s, but sensor manufacturers have been moving ahead. CMOS sensors now include hardware in the pixel design to block dark current DURING the light exposure, obviating the need for dark frames. Some cameras are better at this than others, but all modern DSLRs and mirrorless cameras with CMOS sensors have this technology. The better cameras do not need dark or bias frames, and in fact applying them will increase noise in the final image. A bias offset still exists, but it is a single value for all pixels and gets subtracted along with the skyglow correction.
Researchers once held the traditional-workflow view too, but those ideas have been challenged for a number of years now, and more of the processing is being put into the raw converter. See, for example:
Goossens et al., 2015, "An Overview of State-of-the-Art Denoising and Demosaicking Techniques: Toward a Unified Framework for Handling Artifacts During Image Reconstruction," International Image Sensor Workshop.
The advantage of using a raw converter before stacking and sky subtraction is that the advanced Bayer color demosaicking algorithms, including noise reduction, can be applied. That produces a lower-noise result, perhaps equivalent to four or more times longer total exposure.
The technical problem with raw converters is that a tone curve is typically applied. Stacking and then subtracting skyglow from the raw-converted (tone-mapped) images leads to a mathematical color shift, which Mark has illustrated.
But the color shift can be avoided in real-world applications. It is caused by subtracting skyglow from non-linear data (the tone-mapped output of the raw converter). By choosing a black point that maintains color consistency, the shifts are effectively avoided, as I have demonstrated.
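A minimal numeric illustration of that color shift (a plain gamma curve stands in for the raw converter's tone curve, and the pixel values are arbitrary): subtracting the same skyglow before versus after tone mapping gives different color ratios.

```python
import numpy as np

gamma = 2.2       # stand-in for a raw converter tone curve
skyglow = 0.05    # linear skyglow added equally to every channel

obj = np.array([0.20, 0.10, 0.10])   # a red-ish object, linear light
seen = obj + skyglow                 # what the sensor records

# Correct order: subtract skyglow while the data are still linear
lin = seen - skyglow
ratio_ok = lin[0] / lin[1]           # 2.0, same as the object itself

# Wrong order: tone-map first, subtract, then undo the tone curve
tm = seen ** (1 / gamma)
restored = (tm - skyglow ** (1 / gamma)) ** gamma
ratio_shifted = restored[0] / restored[1]   # ~3.1: the color has shifted

print(round(ratio_ok, 2), round(ratio_shifted, 2))
```

The same arithmetic also shows why a black point error acts like an extra (wrong) skyglow term and shifts colors even in an otherwise linear workflow.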
In practical application, I see more color shift problems from people doing the traditional workflow than from those using the modern raw-converter workflow. This comes down to two factors: 1) failure to apply the color matrix correction, and 2) black point errors made by background subtraction algorithms.
Telltale signs of poor processing resulting in color shifts are commonly seen in astrophotos, even from experienced astrophotographers. For example, as a nebula fades into the background, we commonly see interstellar dust go from a bland tan to blue. Blue would require very sub-micron dust particles and a particular geometry for scattering light from a star. In common processing, especially by PixInsight users applying Dynamic Background Extraction, the color shift with scene intensity can't be explained by known physics. Examining the histograms makes the source of the problem obvious: black point errors.
The largest processing problems I see come from regular photographers teaching nightscape photography who don't understand the basic image processing needed to produce consistent color. They use color balance to try to "reduce" light pollution. Color balance is a multiply, but light pollution is added light, so it needs to be subtracted. The result is extreme blue shifts, with bluing and purpling as scene intensity decreases: totally fake and unnatural colors. Yet this is popular these days, because so many "pro" photographers teach the method that many people now think those are the "real" colors.
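A quick numeric check of the multiply-versus-subtract point (the linear values and pollution color below are made up for illustration): gains chosen to make orange-ish light pollution look neutral turn a gray subject blue, while subtraction restores it exactly.

```python
import numpy as np

pollution = np.array([0.30, 0.20, 0.10])   # orange-ish light pollution (linear)
obj = np.array([0.50, 0.50, 0.50])         # a neutral gray subject
seen = obj + pollution                     # what the camera records

# "Color balance" fix: per-channel gains that make the polluted sky neutral.
# Multiplication rescales everything, so the gray subject comes out blue.
gains = pollution.mean() / pollution
balanced = gains * seen
print(np.round(balanced, 2))     # blue channel now the largest

# Correct fix: pollution is ADDED light, so subtract it.
subtracted = seen - pollution
print(np.round(subtracted, 2))   # neutral gray again
```

The blue excess also varies with scene brightness, which is exactly the bluing and purpling with intensity described above.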
I see the most consistent results from people using raw converters, then stacking and subtracting skyglow. And I'm seeing the best results from RawTherapee. RawTherapee lets one use the latest Bayer demosaicking algorithms, the color matrix conversion, and sophisticated noise reduction. One can also apply a flat field, though not an averaged flat field (that needs to be improved). But there are many lens profiles that include a flat field, and you can create your own. It also seems you can output linear data (no tone curve), but I have not tested that. If so, that is probably the best solution.
A note about the Sky and Telescope article claiming CMOS is not linear: BS. The sensors are linear to within a couple of percent or better, except within 10% or so of sensor saturation (which no consumer digital camera reaches at ISO 100 and higher). Some online assessments get linearity wrong because of very basic errors in black point. Some do not understand the in-pixel dark current suppression hardware and claim non-linearity. It's BS.
Whatever workflow you use, you should be able to apply it to images made like those in the above link and get good color. If not, there is a problem with your workflow. That page includes raw files, so you can test your methods on them and see whether you can reproduce the accurate colors of the color chart.
I typically don't use flats or darks, as stacking gives me low noise. I did find that the Sigma Art 35 f/1.4 may need flats, so next time I use it I'll be doing darks, flats, and biases. But then I am going for a long-exposure image of the Large Magellanic Cloud to highlight some quite faint oddities about it.
1. I have not found a way to correct the vignetting of my 300PF that is better than using flats in DSS. Regular vignetting correction can't deliver the accuracy needed to later stretch the image, because the light falloff for this lens is an unusual one: linear from the center of the frame to the edge. I might revise this if/when RawTherapee allows averaging multiple flat frames.
2. Not an absolute by far, but a TIFF-based workflow creates a huge amount of data, requiring more disk space. Question: do you keep the intermediate TIFFs of each frame, or do you throw them away after stacking?