Post Processing
How are these images edited after capture & calibration?
I want to first clarify that astrophotography is a “Garbage In, Garbage Out” process. All of the strategies listed in Part 1 can only do so much if your data ended up poor in quality or quantity. If you took static (untracked) shots of the sky, your Long Exposure images won’t really be that long of an exposure, limiting the initial SNR of your images. If clouds came in and you could only take 9 images, Image Stacking & Outlier Rejection won’t be a silver bullet for your Read Noise. If your focus was a little off, your image may have great SNR yet still yield a subpar result. The point is that poor data can only be improved so far, and the worse the data is, the more time post-processing tends to require - because you are trying to make the image into something it is not. A stacked photo made from numerous well-calibrated, well-focused images taken under dark skies is a pleasure to process because the SNR is already close to optimal from the start.
To put it in terms of another form of art, the Renaissance artist Michelangelo once said, “Every block of stone has a statue inside it and it is the task of the sculptor to discover it.” Astrophotography is no different, except that we are also tasked with building our block of marble before we carve it.
Anyway, this section will cover how an astrophoto is edited after calibration and stacking. The steps here are not necessarily performed in this order, and various types of data (False Color Narrowband, Luminance data, One Shot Color, etc.) may require only some of them, but this is meant as an overview rather than an in-depth tutorial.
Non-Linear Stretch
First off, your Master Light image will be dark. Not to worry - the detail is all there, it’s just in a linear state. This means the data in the image is proportional to the charge collected by each pixel. Practically, this means that the nebula - which is much dimmer than a given bright collection of stars in the photo - will barely be visible, if at all. Our eyes don’t work in a linear fashion; they respond more logarithmically, so an object we perceive as twice as bright as another may really be closer to 10x the luminosity (not an exact value of course, just another of many overly simplified examples on this page).
So, the linear photo must be given a stretch to roughly match a logarithmic curve, which basically means the contrast is increased in an amount inversely proportional to the pixel value of the linear image. In other words, dark areas are brightened the most while bright areas are barely affected. The trouble with using the term “brighten” is that it may imply some kind of artificial enhancement is added to the image, but this is not the case - the detail is all there, and all we are doing is revealing it. This process compresses some of the data toward the “right side” of the histogram (and if overdone, will destroy details by clipping to white) but also reveals the background nebula. I’ll also note that this removes just about any scientific value the image may have had in its linear form, but hey, it’s all art anyway.
For Photoshop users this is often done as the first step, but PixInsight is able to handle many of the processes listed further down while the image is still in its linear state.
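To make the idea concrete, here is a minimal sketch in Python of a midtone-style stretch, similar in spirit to what auto-stretch tools do under the hood. The function names and the target_bg parameter are my own illustrative choices, not any particular program’s implementation:

```python
import numpy as np

def mtf(x, m):
    # Midtone transfer function: maps 0 -> 0, 1 -> 1, and midtone m -> 0.5
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def auto_stretch(img, target_bg=0.25):
    # img: linear image scaled to [0, 1]; dark areas get lifted the most
    img = np.clip(img, 0.0, 1.0)
    median = np.median(img)
    # Choose the midtone that lands the image median at target_bg;
    # conveniently, that solve is the MTF itself evaluated at the median
    m = mtf(median, target_bg)
    return mtf(img, m)
```

Note how the same curve is applied to every pixel: nothing is added to the image, the existing values are simply remapped.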
Gradient Reduction
In Part 1 I mentioned the use of Flat Frames to reduce gradients and flatten the field of an image, but this only corrects for gradients inherent to the telescope and camera - it does nothing against light pollution or airglow, since those are real photons arriving from streetlamps, moonlight, or the upper atmosphere itself (and even with good Flats, the image may retain some minor vignetting or unevenness, as Flats are not always perfectly captured or applied). However, much like using Flat Frames to model and correct for vignetting and dust motes, the background of the image itself can be used to counter any gradients present within it.
The process of Gradient Reduction usually starts by selecting portions of the image that contain empty background space (no nebula). From these selections, a model of the background is inferred and then subtracted from the Master Light image. Sometimes an offset is added back to avoid making the background excessively dark, but the result is that the gradients are removed. The specifics depend on the photo processing software in use. In Photoshop, clone-stamping out the deep sky object, running the Dust & Scratches filter on the stars, heavily blurring the now-empty image, and subtracting it from the original is usually effective with some experimentation, though the $60 plugin GradientXTerminator will do this automatically. For those using PixInsight, the Automatic and Dynamic Background Extraction processes (ABE/DBE) work much the same way, with DBE allowing the finest control via placement of sample points throughout the image.
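For those curious what the background modeling looks like in practice, here is a rough Python sketch of the polynomial-fit approach that ABE/DBE-style tools are built around. All the names here (background_model, the order parameter) are illustrative assumptions, not any tool’s actual code:

```python
import numpy as np

def background_model(img, samples, order=2):
    # samples: (row, col) points the user judged to be empty sky
    rows = np.array([p[0] for p in samples])
    cols = np.array([p[1] for p in samples])
    vals = img[rows, cols]
    # Normalize coordinates so the polynomial fit stays well-conditioned
    y = rows / img.shape[0]
    x = cols / img.shape[1]
    # All polynomial terms x^i * y^j with i + j <= order
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1) if i + j <= order]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)
    # Evaluate the fitted surface over the whole frame
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    yy, xx = yy / img.shape[0], xx / img.shape[1]
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))

# flattened = img - model + np.median(model)  # the offset keeps the sky from going black
```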
These processes are easy to run on objects with a small apparent size or well-defined edges, like galaxies, since much of the image is likely flat, dark, empty background space. The difficulty starts when there isn’t much true background in the frame at all, a common issue when shooting expansive emission complexes. The best bet here is to sample only the darkest regions of the image, taking care to avoid accidentally subtracting fainter background nebulosity.
Color Calibration / White Balancing
Thanks to the scattering of light in our atmosphere, gradients from local light pollution, and the camera’s varying efficiency at capturing photons of different wavelengths, many images require some sort of color correction. In Photoshop this can sometimes be done by setting a grey point on the background, since the blank expanse of outer space tends to be neutral in color. Other programs offer “smarter” options, including the use of plate solving. Using known star positions, PixInsight’s Photometric Color Calibration is able to determine the location of the image and then balance the color by targeting stars of a specified spectral type with a known color.
Just as sampling for gradient subtraction risks targeting background nebulosity, care must be taken in choosing sample points for balancing the image color. Emission nebulae are typically pinkish to burnt orange in color, and choosing a grey point on any of this faint nebulosity could skew the overall image color.
Color balancing is largely used when shooting in broadband or true color, meaning the camera was shooting the same wavelengths seen by our eyes. For false color images, color calibration typically does not offer much benefit since “accurate” color is not the goal of false color in the first place, though a grey point can still be useful in ensuring the background is neutral in color.
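As a rough illustration of the grey-point idea, the sketch below scales the red and blue channels so that a user-chosen patch of empty sky becomes neutral. This is a simplification of what a real tool does (Photometric Color Calibration uses actual star photometry instead), and the function and parameter names are my own:

```python
import numpy as np

def gray_point_balance(rgb, bg_box):
    # rgb: (H, W, 3) float image in [0, 1]
    # bg_box: (row0, row1, col0, col1) marking a patch of empty sky
    r0, r1, c0, c1 = bg_box
    bg = np.median(rgb[r0:r1, c0:c1].reshape(-1, 3), axis=0)
    # Scale R and B so the background patch matches the green level
    gains = bg[1] / bg
    return np.clip(rgb * gains, 0.0, 1.0)
```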
Narrowband Color Contribution
Many objects in space emit photons along emission lines, meaning they contain large amounts of ionized gas which produces light at specific wavelengths. Two of the most common emission lines (for amateur purposes, anyway) come from ionized Hydrogen and Oxygen. The advantage of shooting with narrowband filters is that they enable high-contrast captures, even in light-polluted areas. Because these filters isolate wavelengths of specific colors, they can also be used to boost the color channels of a typical broadband RGB image: Hα is typically applied to the Red channel, while Oiii usually contributes to Green and Blue. To learn more about filters and why astrophotographers use them, be sure to check out the Filters page on the FAQ.
As usual, the specifics depend on the software in use. In Photoshop, the process can be as simple as layering the narrowband image over a color channel and adjusting opacity until the desired result is achieved. However, narrowband images still contain visible stars, which can discolor every star in the combined image. One solution is to simply mask the stars out using various star-masking processes or StarNet.
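Here is a minimal sketch of that layering idea in Python, with a simple lighten-style blend standing in for Photoshop’s layer modes. The opacity and star_mask parameters are my own illustrative stand-ins:

```python
import numpy as np

def blend_ha_into_red(red, ha, opacity=0.4, star_mask=None):
    # "Lighten"-style blend, with opacity playing the role of
    # Photoshop's layer opacity slider
    blended = (1.0 - opacity) * red + opacity * np.maximum(red, ha)
    if star_mask is not None:
        # star_mask: 1 where stars are; keep the original red there
        # so the narrowband stars don't discolor the RGB stars
        blended = star_mask * red + (1.0 - star_mask) * blended
    return np.clip(blended, 0.0, 1.0)
```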
Deconvolution
The problem with photographing space from Earth is that we sit on the floor of an ocean. Our turbulent atmosphere distorts the light from space before it enters our telescope, blurring the resulting photo. Deconvolution is the process of reversing or reducing this blur. It should not be confused with sharpening, which can broadly (again, much of this is generalization to keep things simpler) be described as a series of small local contrast enhancements.
In somewhat simple terms, Deconvolution (Decon) works by comparing the light profile of a perfect star (a point of light, described by the Point Spread Function, or PSF) to the stars in the image, modeling the difference, and applying the correction to the entire image. The effect is that nebula detail appears much sharper, and some of the smaller stars may be both tightened and brightened. This process can be seen in the comparison below. As with color calibration the change can be subtle, but look closely at the smaller stars and the edges of nebula detail and you may notice an increase in contrast.
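For the curious, Richardson-Lucy is one of the classic deconvolution algorithms, and scikit-image ships an implementation. The sketch below uses an idealized Gaussian PSF as a stand-in; dedicated tools instead measure the PSF from stars in the image itself:

```python
import numpy as np
from skimage import restoration

def gaussian_psf(size=25, fwhm=4.0):
    # Stand-in PSF; fwhm would be estimated from stars in the image
    sigma = fwhm / 2.355  # convert FWHM to Gaussian sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# img: linear luminance data scaled to [0, 1]
# deconvolved = restoration.richardson_lucy(img, gaussian_psf(), 30)
```

Decon is typically run on linear data, which is one reason PixInsight users apply it before the non-linear stretch.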
Noise Reduction
Wait, didn’t we already get rid of the noise with Bias/Dark frames and image stacking? Well, technically yes but actually no. Noise can be reduced to minimal levels, but it will never reach zero. Noise reduction is often still needed, especially if the image was stretched hard enough to show every possible pixel of dim background nebula. At their most basic, noise reduction algorithms work like selective blurring, where specific parts of an image are targeted for more blurring than others. Smarter processes can sometimes tell where noise ends and detail begins by analyzing sudden changes in pixel values at certain scales: at middling scales, light and dark pixels next to each other may indicate the edge of a nebula, while contrast at the smallest scales likely indicates noise.
Noise reduction can easily be overdone. When applied too harshly or without sufficient masking, the image may become posterized, meaning contrast between pixel values is reduced enough that the photo takes on a plastic-like appearance. This can be controlled through masks which limit which parts of an image are most or least affected, and one of the more common methods of proportional protection is to copy the image itself as a mask and invert it. The inverted image automatically protects the bright (now dark) stars as well as the brighter areas of nebulosity.
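That inverted-mask method translates almost directly into code. Here is a minimal sketch using plain Gaussian blurring as the noise reducer - real tools use far smarter algorithms, and the strength parameter is my own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_denoise(img, strength=1.5):
    # Inverted copy of the image as the mask: bright stars and
    # nebulosity (now dark in the mask) receive the least blurring
    mask = gaussian_filter(1.0 - np.clip(img, 0.0, 1.0), 3.0)
    blurred = gaussian_filter(img, strength)
    return mask * blurred + (1.0 - mask) * img
```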
Sharpening
As mentioned in the Deconvolution section, sharpening basically works by comparing two pixel values and increasing one while decreasing the other to produce higher contrast. This can be applied at various scales: smaller pixel scales often include the edges of dust lanes in a nebula, the spiral arms of a galaxy, or the atmospheric banding of Jupiter, while larger pixel scales can be sharpened to enhance bigger features like entire areas of nebulosity against background space.
Care must be taken to protect the stars with a mask during this process. Since stars offer high contrast over small scales, many sharpening processes will mercilessly target them until they are given “raccoon eyes.” Noise can also be targeted: depending on how far the image was stretched, the contrast of the background noise itself may have been raised to noticeable levels, and sharpening - particularly at smaller pixel scales - will increase it further, so noise reduction is usually completed before sharpening.
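As a concrete example, here is a bare-bones unsharp mask with optional star protection, as described above. The radius parameter controls the pixel scale being sharpened; the names are illustrative rather than any program’s actual interface:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, amount=0.8, star_mask=None):
    # Boost the difference between the image and a blurred copy;
    # radius sets the pixel scale of the features being sharpened
    detail = img - gaussian_filter(img, radius)
    sharpened = img + amount * detail
    if star_mask is not None:
        # Exempt stars (mask = 1) so they don't get "raccoon eye" rings
        sharpened = star_mask * img + (1.0 - star_mask) * sharpened
    return np.clip(sharpened, 0.0, 1.0)
```

Running this twice with a small and a large radius is one simple way to sharpen at multiple scales.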
While most of the focus in these FAQ pages is on Deep Sky targets, sharpening is probably most noticeable when applied to high-speed capture of the Lunar surface:
Star Reduction
One of the final steps in image processing can be to reduce the sizes of the stars (particularly brighter ones). Artistically this allows any nebula in the image to stand out better, but it could be argued that it also makes the image more accurate.
The problem with bright stars is that they can quickly saturate the pixels on a camera sensor, and once the pixels covering the star reach the top end of their capacity, the signal can spill over into neighboring areas of the sensor. This is why stars appear in different sizes in astrophotos - it is not necessarily that a star is closer, just brighter. A good star mask is important to protect the rest of the image from being improperly affected, and thankfully these have become much easier to generate through the use of StarNet.
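A very simple way to see how mask-driven star reduction can work: erode (minimum-filter) the image to pull star edges inward, then blend the eroded copy back in only where the star mask allows. This is just an illustrative sketch under those assumptions, not how StarNet-based workflows actually operate:

```python
import numpy as np
from scipy.ndimage import grey_erosion, gaussian_filter

def reduce_stars(img, star_mask, iterations=1):
    # A minimum filter pulls the edges of each star profile inward
    shrunk = img
    for _ in range(iterations):
        shrunk = grey_erosion(shrunk, size=(3, 3))
    # Feather the mask, then apply the shrunken copy only on the stars
    mask = np.clip(gaussian_filter(star_mask, 1.5), 0.0, 1.0)
    return mask * shrunk + (1.0 - mask) * img
```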
Aesthetic Changes
The final steps of editing any astrophoto may involve small tweaks of contrast, saturation increases, reversals and re-edits of past steps, and a thousand more small changes. I’ve often spent hours - even a day or two - making small adjustments to the background color balance or the saturation of the stars, and one major reason is eye fatigue. After some time spent editing, stepping away and looking at something else is a good idea. If the image looks about the same after returning to the monitor, I like to refer to the quote:
“any difference that makes no difference - is no difference.”
I haven’t included any example photos for this final section since the changes needed or made range wildly. Every photo is going to be a little different in this regard. A star cluster may only require a subtle boost in coloration, while a bright and dynamic object like the Andromeda Galaxy may require special touches across the entire image to balance the dynamic range of the bright core, star coloration, nebula coloration, and more. At this point it all comes down to your own tastes, since each photographer has their own goals and aesthetic preferences. About the only thing left to do after this point is to save yourself a nice JPG to share on social media or to send to your local printing store so you can hang it on your wall.
As I said far up at the top, this is more of a high-level overview than an in-depth tutorial, but there are many resources from other astrophotographers online which go into much more depth on every step listed above. If you started reading on this page and would like to learn more about the process of taking these photos and the equipment in use, be sure to check out the other pages of the FAQ.