Using Deconvolution in PixInsight - Part 3: Workflow Considerations

March 26, 2022



    Where in the Workflow Should Deconvolution Be Done?

    All of the work associated with Deconvolution is best done in the Linear Processing stage - before the fundamental relationships related to the PSF have been distorted by stretching. In a linear state, the estimated PSF model is meaningful for all pixel code values except for those at the extreme ends of the scale where clipping has occurred. After stretching, this is no longer true.
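To see why, here is a small numpy sketch (purely illustrative - not PixInsight code): with linear data, convolution by the PSF commutes with scaling, so a single PSF model describes the whole image; after a nonlinear stretch it no longer does.

```python
import numpy as np
from scipy.signal import fftconvolve

# A toy "star field": one point source on a black background.
scene = np.zeros((64, 64))
scene[32, 32] = 1.0

# A Gaussian PSF, normalized to unit volume.
x = np.arange(-7, 8)
g = np.exp(-(x ** 2) / (2 * 2.0 ** 2))
psf = np.outer(g, g)
psf /= psf.sum()

blurred = fftconvolve(scene, psf, mode="same").clip(min=0)

# Linear data: scaling the scene scales the blurred image identically,
# so one PSF model is valid for every (unclipped) pixel value.
linear_ok = np.allclose(fftconvolve(3 * scene, psf, mode="same"),
                        3 * blurred)

# After a nonlinear stretch (a simple gamma here), convolving and
# stretching no longer commute - the PSF model loses its meaning.
stretch = lambda im: im ** 0.25
stretched_ok = np.allclose(fftconvolve(stretch(scene), psf, mode="same"),
                           stretch(blurred))
```

Running this shows `linear_ok` is true while `stretched_ok` is false - exactly the distortion that stretching introduces.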

    Deconvolution should be done very early in your processing chain. Typically the only steps that might precede it are:

    • Dynamic Cropping - get rid of those ragged edges

    • Dynamic Background Extraction (DBE) - removing gradients

    • And maybe an application of very light linear Noise Reduction - to "knock the fizz off the image"

    Most people apply deconvolution before any linear noise reduction has been done. But I have noticed that some folks are beginning to do the linear noise reduction step first. Since Deconvolution is very sensitive to noise, this may make some sense. You can try it both ways and make up your own mind. I will be showing some examples of this later on.
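To illustrate what deconvolution is doing - and why noise sensitivity comes up at all - here is a bare-bones numpy sketch of Richardson-Lucy iteration, the classic algorithm behind the regularized option in PixInsight's Deconvolution process. This is a simplification with no regularization; the real process adds regularization precisely because the raw iteration amplifies noise as it sharpens.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Bare-bones Richardson-Lucy deconvolution (no regularization).
    Each iteration multiplies the estimate by a blurred correction
    ratio, re-concentrating flux that the PSF spread out."""
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        conv = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# A blurred point source: deconvolution re-concentrates its flux,
# so the recovered peak is sharper (higher) than the blurred one.
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
x = np.arange(-7, 8)
g = np.exp(-(x ** 2) / (2 * 2.0 ** 2))
psf = np.outer(g, g) / np.outer(g, g).sum()
blurred = fftconvolve(scene, psf, mode="same").clip(min=1e-6)
restored = richardson_lucy(blurred, psf)
```

Because the correction ratio divides the data by the current model, any noise in the data is fed back into the estimate on every pass - which is why cleaning up the worst of the noise first can make sense.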

    What Images Should Decon Be Applied to?

    The next big question is what images should you apply deconvolution to?

    Deconvolution can be applied to all of the stacked images you have in your set. I have done narrowband images where I deconvolved each filter set separately. I have also done this with LRGB files.

    After a lot of research and experimentation, I would avoid this for a couple of reasons:

    • It makes for a lot of work.

    • Deconvolution will reduce star sizes. If this reduction is different for the various filter layers, this can lead to stars that have color rings that become more evident in later processing steps once the image is stretched.

    It is often best to apply Deconvolution to a Luminance image that will later be rolled into a color image using LRGBCombination or another process.

    This is a very common form of processing where the Luminance image is processed to maximize sharpness and detail, while the color image is processed to maximize color and manage noise. When combined, the final image inherits the best of both worlds.

    L image processed for detail and sharpness

    RGB image processed for deeper color and lower noise prior to L image injection

    Final image after the L image is folded in.

    The one exception I might make for this is the case where you are dealing with Chromatic Aberration. In that case, it may make perfect sense to separate out the RGB images, create a custom PSF model for each color, and apply Deconvolution to each layer with the hope of improving things.

    Having said that, the basic Luminance vs. Color workflow is probably best and would look something like this:

    • Do all pre-processing steps

      • Apply Dynamic Crop to get rid of rough edges

      • Apply DBE to remove gradients.

      • Do linear Noise Reduction to “knock the fizz off”. For this you can use Mure Denoise, MLT, or EZ-Denoise

    • Process the linear L image

      • Prepare for Deconvolution

      • Experiment with Deconvolution settings

      • Apply Deconvolution

      • Go nonlinear (I typically use MaskedStretch followed by CurveTransform tweaks)

    • Process the Color Image

      • Use ChannelCombination to create a color image if you don’t already have one

      • Do Color Correction if this is an RGB image. The traditional approach is to do BackgroundNeutralization followed by ColorCalibration - but my favorite method is PhotometricColorCalibration (PCC), which does both

      • Go nonlinear with the color image (I typically use a combination of MaskedStretch followed by ArcsinhStretch for color images that will have an L image folded in later)

    • Use LRGBCombination tool to fold the L image back into the color image

    • Finish your non-linear processing.

    So, How Does This Play Out in Different Imaging Scenarios?

    One-Shot Color (OSC) Images

    CFA arrangement used by my OSC camera.

    I have the least experience working with OSC images but I believe the best approach here is to extract a Luminance image, work with that, and then fold the results back into the color image.

    Or you can change the target of Deconvolution to use RGB/K - this will internally extract the L image, apply the deconvolution to it, and reintegrate the result. (Note: remember - you should still use RGBWorkingSpace set to 1.0, 1.0, 1.0 for this).

    I don’t have experience doing this, so I would recommend the Luminance Extraction method that would look something like this:

    • Prep the RGB Linear Master image

      • Dynamic Crop to get rid of the rough edges

      • Run DBE to get rid of gradients

    • Set RGBWorkingSpace to 1.0, 1.0, 1.0. This will balance the colors properly when extracting the Luminance image.

    • Extract a Luminance image from the Master RGB image

    • Process the Linear L image

      • Use this image for preparing for Deconvolution.

      • Use this image to determine the best Deconvolution parameters

      • Apply Deconvolution to this image.

      • Finish your Linear Processing on both L and RGB images

    • Process the Linear RGB Master image

      • Run PCC to Neutralize the background and color correct

      • Do any other linear processing steps you want

    • Take the Luminance and the Master RGB image Non-linear

    • Use LRGBCombination tool to fold in Luminance image back into the color image

    • Finish your non-linear processing.
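A quick numpy illustration of what the RGBWorkingSpace step above changes (this is my reading of the behavior, not PixInsight code): luminance extraction is a weighted sum of the channels, and default working spaces use a green-heavy weighting along the lines of the Rec.709 coefficients shown here. Setting the coefficients to 1.0, 1.0, 1.0 makes the extracted L a plain average of the three channels instead.

```python
import numpy as np

rng = np.random.default_rng(42)
rgb = rng.random((8, 8, 3))  # toy linear RGB image

# Default working spaces weight luminance toward green - the Rec.709
# coefficients below are illustrative of that kind of weighting.
weighted_L = (0.2126 * rgb[..., 0]
              + 0.7152 * rgb[..., 1]
              + 0.0722 * rgb[..., 2])

# With the RGBWorkingSpace luminance coefficients set to 1.0, 1.0, 1.0
# the extracted L becomes an equal-weight average of the channels.
equal_L = rgb.mean(axis=2)
```

The two results differ wherever the channels differ, which is why the working space must be set before the extraction, not after.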

    LRGB Images

    A mono camera and a filter wheel are the primary source of LRGB images.

    Since you already have LRGB Master Images, you can pretty much follow the workflow already laid out.

    One variation on this is LHaRGB images, where you are also folding in an Ha layer. In this case, I tend to combine the Ha and R images with a blend, and then carry on from there. One handy tool for doing that is the SHO_AIP script, which provides a panel that allows you to blend images together and preview what that blend would look like.

    Once this is done you can treat the HaRGB image blend as the color image and run through the Color processing steps. You can then process the L image, including Deconvolution. And then finally you can take both images nonlinear and use LRGBCombination to fold them together.
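The Ha-into-R blend itself is just a weighted pixel combination; SHO_AIP lets you preview it interactively, but the underlying math is something like this sketch (the 0.7 weight is purely illustrative, not a recommendation):

```python
import numpy as np

def blend_ha_into_red(r, ha, w=0.7):
    """Weighted blend of an Ha master into the red channel.
    w is the Ha contribution; tune it while previewing the result."""
    return w * ha + (1.0 - w) * r

# Example: a flat red frame and a brighter Ha frame.
r = np.full((4, 4), 0.2)
ha = np.full((4, 4), 0.6)
ha_r = blend_ha_into_red(r, ha)  # 0.7*0.6 + 0.3*0.2 = 0.48 everywhere
```

The blended result then stands in for the R channel when you build the HaRGB color image.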

     

    The SHO_AIP Tool is very handy for doing image blends - a very easy way to fold Ha data into an LRGB image

     

    Narrow Band

    Narrowband Filter Set.

    In many of my images, I have applied Deconvolution to all of the Master images. In general, this worked well for me but, as I have mentioned, I have had some issues with color rings around the stars that needed to be addressed later in the processing.

    I am now applying it to a Synthetic Luminance image created by some combination of the Ha, O3, and S2 images. The best method for this is ImageIntegration. Alternatively, if you have a very dominant Ha image, you can treat it directly as the Luminance image:

    • Do all pre-processing steps

    • For all images

      • Apply Dynamic Crop to get rid of rough edges

      • Apply DBE to remove gradients.

      • Do linear Noise Reduction to “knock the fizz off”. For this you can use Mure Denoise, MLT, or EZ-Denoise

    • Process the linear L image

      • Create a synthetic L image, either by using ImageIntegration on the Ha, O3, and S2 images, or by just copying the Ha image if that image is dominant.

      • Prepare for Deconvolution

      • Experiment with Deconvolution settings

      • Apply Deconvolution

      • Go nonlinear

    • Process the Color Image

      • Take the master images (Ha, O3, S2) nonlinear

      • Balance out the Ha, O3, S2 images using LinearFit, or do it by eye using the HT tool

      • Use ChannelCombination to create a color image.

      • Color process your image (typically I do an SCNR for green, then use ColorMask to select the Magenta layer and tweak that with the CT tool as a start)

    • Use LRGBCombination to fold the Synthetic or Ha “L” image into the color Image

    • Finish your nonlinear processing
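Two of the steps above can be sketched in numpy (illustrative stand-ins, not the PixInsight implementations): the synthetic L is shown here as a plain average, where ImageIntegration would add pixel rejection and noise weighting, and LinearFit is shown as a least-squares match of one master onto a reference's scale.

```python
import numpy as np

def synthetic_luminance(*masters):
    """Equal-weight average of the narrowband masters - a crude
    stand-in for ImageIntegration (no rejection or weighting)."""
    return np.mean(masters, axis=0)

def linear_fit(reference, target):
    """Match target to reference with a*target + b (least squares),
    mimicking what PixInsight's LinearFit process does per image."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

rng = np.random.default_rng(7)
ha = rng.random((16, 16))
o3 = 0.5 * ha + 0.1              # a dimmer master, linearly related here
synth_L = synthetic_luminance(ha, o3)
o3_matched = linear_fit(ha, o3)  # o3 brought onto Ha's scale
```

With the masters balanced onto a common scale, ChannelCombination produces a color image whose channels start out on an even footing.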

    Now, let’s look at what you need to prepare before being ready to apply Deconvolution.



    Patrick A. Cosgrove

    A retired technology geek leveraging his background and skills in Imaging Systems and Computers to pursue the challenging realm of Astrophotography. This has been a fascinating journey where Art and Technology confront the beauty and scale of a universe that boggles the mind…. It’s all about capturing ancient light - those whispering photons that have traveled long and far….

    https://cosgrovescosmos.com/
    Previous

    Using Deconvolution in PixInsight - Part 2: An Overview of PSF and Deconvolution

    Next

    Using Deconvolution in PixInsight - Part 4: Preparing for Deconvolution