I use PixInsight for all my processing. Apart from occasionally removing defocus fringes around stars with Adobe Lightroom in pictures I take with regular lenses, it is in fact the ONLY tool I use for processing. Once I started the trial and got my first result, I never looked back at any other tool.
PixInsight does have a steep learning curve, however, especially if you have just started with astrophotography. So I've decided to start writing PixInsight tutorials on this blog. To begin, I want to share my basic DSLR workflow, because I've been asked about it a lot lately.
ImageIntegration with Pixel Rejection

The ImageIntegration process combines all your aligned light frames into one stacked image. There are many settings you can tweak, but for now the default settings will do. Just make sure you enable Pixel Rejection and pick a rejection algorithm to get rid of your hot pixels and some of the noise. Without going into too much theory and detail about the different rejection algorithms, just use this rule of thumb:
Skip this step if you are a beginner. Deconvolution attempts to compensate for the blurring caused by the instability of the atmosphere and by imperfections in your optics. You first need to build a mathematical model of this blurring using the DynamicPSF process. After this you can use that PSF model in the Deconvolution process. A very good and detailed tutorial on this is available on the PixInsight website:
Deconvolution and Noise Reduction Example with M81 and M82
The MultiscaleLinearTransform process is very powerful for noise reduction at different scales. The previously mentioned tutorial also details the use of this process to reduce noise. Save the settings you use in MultiscaleLinearTransform for later, as in my experience they will work well in almost any situation.
Note that ALL steps here are optional! In most cases you will do some or all of these steps, but for instance I did none of them with my wide field image of the Pipe nebula. That is mainly because it is a wide field of part of the Milky Way with a lot of signal (hardly any plain background), and the data is of such good quality.
I deliberately treat the stretching of the image as a separate stage, as this step, where you go from linear to non-linear, is crucial for a good end result that preserves small and colorful stars.
Basically I take a three-step approach when stretching:
This script will mask the stars while stretching your image over many iterations. Doing so will prevent (most) stars from ending up as blown-out white spots (or even blobs ;)) in your stretched image.
You might want to try different settings for various images, but most of the time I use 75 iterations and a target median of 0.12.
You can read a detailed article on the use of MaskedStretch here.
After the stretches of the previous step, the image is no longer in the linear stage. In the non-linear stage we take the last steps to finalize the image and achieve our end result. One thing to note is that in almost every step of this stage you will have to work with (different) masks. In my opinion, one of the things that will greatly improve your results is the ability to create excellent masks for every step, which is certainly not easy.
The most common steps for me in this stage are:
Would you clarify step c) under Preparation and Combination of your data? Doesn't the BatchPreProcessing script register the images and generate a directory with aligned/registered images that can be used directly by the ImageIntegration process? You specify aligning images (without saying which image set: registered, debayered, etc.?) using ImageRegistration, but there are three of those to choose from: CometAlignment, DynamicAlignment, and StarAlignment.
Thank you for your time. This lesson has been very helpful.
This is very helpful. I recently purchased and have been trying out PixInsight. My go-to software to this point has been Photoshop, but frankly PI blows my socks off. Is there any guidance or steps that you can give on processing narrowband images through PI? Thank you.
There are many ways to process narrowband images in PI. Fortunately there are some great scripts for that, so definitely check them out (NBRGBCombination and the AIP scripts). You can also do some quick and easy stuff with PixelMath, which is especially useful for creating bi-color images (from Ha and OIII). You could do Red = Ha, Green = 0.8*Ha + 0.2*OIII and Blue = OIII (or 0.2*Ha + 0.8*OIII), then try different values and see how it works.
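To make the arithmetic concrete, here is a small sketch of that bi-color blend using plain Python lists as stand-in channels. In PixInsight itself you would type the equivalent expressions straight into PixelMath; the 0.8/0.2 weights are just the starting values suggested above, not a fixed rule.

```python
def bicolor(ha, oiii, green_ha_weight=0.8):
    """Map an Ha and an OIII frame to R, G, B channels.

    Mirrors the PixelMath recipe: R = Ha, G = 0.8*Ha + 0.2*OIII, B = OIII.
    """
    red = list(ha)                                    # R = Ha
    green = [green_ha_weight * h + (1 - green_ha_weight) * o
             for h, o in zip(ha, oiii)]               # G = weighted blend
    blue = list(oiii)                                 # B = OIII
    return red, green, blue

# Toy pixel values in the [0, 1] range PixInsight works in.
ha = [0.9, 0.5, 0.1]
oiii = [0.2, 0.4, 0.6]
r, g, b = bicolor(ha, oiii)
```

Lowering `green_ha_weight` toward 0.2 gives the alternative OIII-dominated green channel mentioned above.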
Also try the channel combination in both the linear and non-linear stages and see which gives better results; in my experience, which works best depends on the data. If you combine non-linear data, make sure the channels are stretched to (more or less) the same intensity, and make sure you already take good care of stars (shrink them in OIII!) and bright features (apply HDRMultiscaleTransform and/or LocalHistogramEqualization etc. on the individual channels).
This short tutorial is good because I find most YouTube introductions too fast to follow. Secondly, your tips allow me to explore other processing tweaks. I have some decent light frames acquired recently, so I can now play around!
Hello Chris,
A very comprehensive PixInsight workflow.
Is there any way for those of us who, like me, don't speak English to print out the complete workflow, with all its topics, in German?
Clear skies, Mario Richter
As you embark on your astrophotography journey, you will quickly realize that there is no single approach to image processing. Every imaging rig is different, every imaging night is different, and every object requires a slightly different approach. The processes you use for a globular cluster will differ from those for a galaxy or an emission nebula. But here are some PixInsight workflows for many types of objects.
PixInsight is a great solution, allowing you to highlight even the faintest objects within your image; but when starting out, it can be overwhelming. These workflows will help you get started with baseline image processing techniques as well as advanced topics. These workflows were used to create the following:
Broadband Workflow for Galaxies and Nebula: Focused on one-shot color cameras or monochrome cameras using red, green, and blue filters. This workflow is used for galaxies and reflection nebula, and is sometimes referred to as RGB or LRGB imaging.
Narrowband Workflow: Uses hydrogen-alpha, oxygen-III, and sulfur-II (optional) filters to create a color image. This workflow is appropriate for emission nebula, and is sometimes referred to as SHO imaging (using the Hubble Palette).
Lunar Workflow: Lunar imaging is a completely different process than imaging galaxies, clusters, and nebula. Lunar imaging uses a method called lucky imaging, which requires the use of additional tools.
Adding Ha to RGB Images: Many broadband objects can be further enhanced by integrating one or two narrowband channels, typically the hydrogen-alpha channel. This is especially true for galaxies, where the Ha channel can better highlight star-forming regions.
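One simple way to fold an Ha frame into the red channel is a weighted blend, sketched here with plain Python lists as stand-in images. The 0.3 weight is an arbitrary example, tuned per image in practice; in PixInsight this would be a PixelMath expression applied to the red channel, and there are more sophisticated combination methods as well.

```python
def blend_ha_into_red(red, ha, w=0.3):
    """Weighted blend of narrowband Ha into the broadband red channel.

    new_red = (1 - w) * red + w * ha, per pixel; w is an example weight.
    """
    return [(1 - w) * r + w * h for r, h in zip(red, ha)]

# Toy pixel values: Ha is strong where star-forming regions emit.
red = [0.40, 0.10]
ha = [0.80, 0.70]
new_red = blend_ha_into_red(red, ha)
```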
Star Masks: Within astrophotography, creating a star mask serves a crucial role in enhancing the final image. By creating a mask that isolates the stars, it is possible to manipulate (sharpen, saturate, brighten) the background or non-stellar objects without affecting the stars. With a star mask, it is also possible to reduce the stars' impact on the overall astrophoto. Learn how to use the PixInsight StarMask process (manual option) as well as StarNet+ (automated) to create star masks.
Luminance Masks: Luminance masks are primarily used to target adjustments based on the brightness (luminance) values of the pixels in your image. A greater strength mask is applied to high signal areas while a lower strength mask is applied to low signal areas.
Range Masks: Similar to Luminance Masks, range masks are used to target adjustments based on pixel brightness values while offering more flexibility by defining a specific range of brightness values.
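The two mask types above can be sketched with plain Python lists standing in for images. A masked adjustment is a per-pixel blend, m*new + (1-m)*old: a luminance mask uses the pixel's own brightness as the weight m, while a range mask only passes pixels inside a chosen brightness window. The `brighten` adjustment and the 0.4-1.0 range are made-up examples, not PixInsight defaults.

```python
def masked_apply(pixels, adjust, mask_fn):
    """Blend adjusted and original values per pixel: m*new + (1-m)*old."""
    return [mask_fn(p) * adjust(p) + (1 - mask_fn(p)) * p for p in pixels]

def luminance_mask(p):
    return p  # weight equals the pixel's own brightness

def range_mask(lo, hi):
    return lambda p: 1.0 if lo <= p <= hi else 0.0  # hard brightness window

def brighten(p):
    return min(1.0, p * 1.5)  # example adjustment, clipped to [0, 1]

pixels = [0.1, 0.5, 0.9]
lum = masked_apply(pixels, brighten, luminance_mask)      # faint pixels barely change
rng = masked_apply(pixels, brighten, range_mask(0.4, 1.0))  # only mid/high pixels change
```

In PixInsight the same effect comes from activating a mask on the target image before running a process; the blend happens implicitly.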
These videos take you through the workflows of different image sets. This experience ties together much of the material presented in PixInsight Fundamentals, as you can see it all in action. Please do review the other sections before watching this workflow series. However, if you cannot wait, these workflows can be followed as "recipes" for creating images in PixInsight. CMOS and DSLR examples are also available below!
This image processing tutorial doubles as a workflow guide, as it follows standard processing in general. There are, however, some key observations that will build an understanding of how to handle less-than-perfect data (just as this set is...).
Shown here is an up-to-date workflow for faint nebulosity using the latest tools and techniques. This demonstration also highlights important concepts concerning the normalization of data. The example outlined here solves a specific problem, and you can use the information right away to get the most out of your own images (rather than throwing them away unnecessarily).
I'd like some constructive criticism on my workflow and thoughts on my final image of M81 and M82. I've technically been an astrophotographer for a few years, but as a parent in the cloudy UK, this is the first target where I feel I have enough data, albeit from a less than ideal focal length and image scale.
I put together a new travel rig (as in, a take-on-a-plane travel rig), which has compromises on pixel scale that hopefully drizzle can resolve. Giving it a test run, and seeing as it was galaxy season, a picture of M81 and M82 seemed sensible, though on this rig they'll be quite small.
My own critique of the final image is that it's a little cold, and the detail is lost compared to photos I've seen online; I think that is because this rig isn't really the best fit for such small objects. Any steps I've missed?