The Top 10 Common Errors Beginners Make in Astrophotography

( I know because I made them all! )

Recently, I passed the second anniversary of the date when I took my very first Astrophoto. You can see that image HERE. Over the past two years, I have undergone a significant learning curve, so it's safe to say that my perspective on Astrophotography now is quite different than it was when I first started.

When I decided to create this website, I knew that I would have to migrate all of my images (and their stories) over to it. This would be a significant effort, but it was necessary. In doing that, I revisited some of my very early images which I had not looked at for some time.

While reviewing those images, I could still remember how I felt about them at the time I took them. It was pure excitement and I was pretty darned proud of myself!

But as I look at them now, I cringe because they are in fact, pretty rough. I guess this should not be all that surprising. The journey into Astrophotography is about learning and growth - I see things very differently today than I did two years ago.

The problems I see in my early images are caused by a handful of issues. Over time I realized this and my images began to improve. But the problems I see in my early images are also problems that I often see in the work of many budding Astrophotographers who are in the early stages of their own journeys. I thought it might be helpful to call out some of the mistakes I made and encourage others to avoid them when they can.

I should emphasize that this list is inspired by looking at my early images and remembering back how I thought about things at the beginning - there are a lot of other common beginner mistakes that we could also talk about (selection of first gear, polar alignment, camera setup, etc.), but I am not going to try and cover those here (though it may make for an interesting future article).

So without further ado, I introduce to you my list of the top 10 most common beginner errors in Astrophotography.

#1. Integrations are too short

Most of my early efforts consisted of very short integrations of around an hour or less, some with exposures on my subframes that were also quite short.

Why only an hour? Frankly, I don’t think I trusted myself or the equipment. Going after more subs seemed like I would be pushing my luck and asking for trouble. But the truth is that if you have gotten far enough along that you can get 1-hour integrations that are looking good (i.e., nice round stars, etc.), going to 2, 3 or 4 hours is really not that much of a stretch. Extending your integration times will make a world of difference in your final images! Astrophotography is a game of signal vs. noise, and the best way to improve your SNR is to get as many good subs as you can. As a beginner, I often thought in terms of how many targets I could shoot in a night. Now I think about how many nights I can get on a target. Pushing for longer integrations may be the single most important piece of advice I could give you.
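If it helps to see the numbers, here is a tiny Python sketch of that signal-vs-noise argument. It assumes the simple shot-noise-limited case, where stacking N equal-length subs improves SNR by roughly the square root of N; the sub counts are just illustrative.

```python
import math

def relative_snr_gain(n_subs_before: int, n_subs_after: int) -> float:
    """Stacking N equal subs improves SNR roughly by sqrt(N) (shot-noise limited),
    so going from N1 subs to N2 subs gains about sqrt(N2 / N1)."""
    return math.sqrt(n_subs_after / n_subs_before)

# Example: going from a 1-hour integration (12 x 300 s) to 4 hours (48 x 300 s)
print(f"SNR gain: {relative_snr_gain(12, 48):.2f}x")   # ~2.00x
```

Doubling your SNR by quadrupling your integration time is a far bigger improvement than almost anything you can do in processing.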

Why the short subframe exposure times? The same reason basically. I was afraid that longer exposures on my subs were inviting problems so I tended to keep them short. I had no special reason why I picked the exposure times that I did, yet when you think about it, the exposure time you choose for your subs is one of those critical choices you make when you are setting up a sequence. If you choose too short a time, you may have a problem pulling the signal out of the read noise. You are just not allowing enough time for the signal to build up. Conversely, if you pick too long of an exposure time for your sub, you can blow out more and more stars. They will just saturate the sensor and be seen as clipped white blobs. So how do you determine the optimal exposure? There are articles you can search for that will suggest very analytical methods for calculating the read noise of your sensor and the optimal exposure level relative to that. This may be overwhelming to beginning astrophotographers. But there are a few simple things you can do.

You could search for images on Astrobin.com that were taken with the same or similar scope and camera combination that you are using. Look at images that have great quality - what exposure was used for those images? That might be a good starting point. Another thing you can do is take a few test exposures. Most sequence control applications provide some way to take a single exposure at a specified time and display the statistics from the resulting image. Some even allow you to roll the cursor over the image and see the values under the pixel the cursor is on. This would allow you to experiment with sub exposure times and probe the resulting images. Find one that seems to capture good detail on the faint structures in the target while not saturating too many stars.
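If you want to probe a test exposure outside of your capture software, a few lines of Python (using the astropy library) can report the same basic statistics. The file name and saturation level below are placeholders for your own setup.

```python
import numpy as np
from astropy.io import fits   # pip install astropy

def test_exposure_stats(path: str, saturation_adu: int = 65535) -> None:
    """Report basic statistics for a single test sub: min/median/max ADU and
    the fraction of pixels at (or above) the assumed saturation level."""
    data = fits.getdata(path).astype(np.float64)
    saturated = np.count_nonzero(data >= saturation_adu) / data.size
    print(f"min={data.min():.0f}  median={np.median(data):.0f}  "
          f"max={data.max():.0f}  saturated={saturated:.4%}")

# Hypothetical file name, used for illustration only
test_exposure_stats("test_300s_sub.fits")
```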

So - don't be afraid to push on your integration times - shoot for as many subs as you can for a given session. Also, be willing to experiment a bit around choosing your exposure times for your subs. These can have a huge impact on the quality of your captured image. Trust me - you will be happy that you did!

During processing, I had to really work at getting detail on the Witch's Broom. What I really needed was more integration time! (click to enlarge)

The same target taken a year later - this time the integration is 2.5 times longer and my image processing skills had developed. The extra integration was key though! (click to enlarge)

The Crescent Nebula has wonderful detail, but not if you have too little capture time on it! (click to enlarge)

This was taken a year later with a longer integration with a mono camera producing a bicolor image with Ha and OIII narrowband filters. (click to enlarge)

#2. Targets are centered but not composed

Platesolving is like magic. You point your scope at the sky, do a platesolve, sync your mount, and now you can point to almost anywhere with confidence. You can enter the name or designation of almost any target and most sequence control software packages can look up the coordinates and center the scope to those coordinates. They then confirm that your target is located in the center of the field and tweak positioning as needed.

For a long time, that was all I did. I would run a sequence with a target centered, and only later - after stacking - did I see what I ended up with. Any composition was done after the fact - usually in Photoshop. Of course, this is not an option if part of your target is outside the frame.

Finally, it dawned on me that if I was going to spend time capturing image data and processing it - perhaps it would be worth a bit of time and effort to ensure that my framing and composition were what I wanted them to be.

You can take a single test exposure and see how the target is framed in your camera field. You can determine how to rotate your camera relative to the scope to better frame the field, and you can take another exposure to confirm that framing. It’s really that simple!
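A quick back-of-the-envelope calculation of your pixel scale and field of view also helps when planning a composition. Here is a small sketch; the camera specs and focal length are illustrative assumptions, so plug in your own numbers.

```python
def pixel_scale_arcsec(pixel_size_um: float, focal_length_mm: float) -> float:
    """Plate scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def fov_degrees(n_pixels: int, pixel_size_um: float, focal_length_mm: float) -> float:
    """Field of view along one sensor axis, in degrees."""
    return pixel_scale_arcsec(pixel_size_um, focal_length_mm) * n_pixels / 3600.0

# Example: a camera with 3.8 um pixels and a 4656 x 3520 sensor on a 900 mm scope
print(f"{pixel_scale_arcsec(3.8, 900):.2f} arcsec/px")                       # ~0.87"/px
print(f"{fov_degrees(4656, 3.8, 900):.2f} x {fov_degrees(3520, 3.8, 900):.2f} deg")  # ~1.13 x 0.85 deg
```

Knowing roughly how your target fits in that field makes the rotation and framing decisions much easier.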

Later on, I began using Sequence Generator Pro (SGP) and learned about the ‘Framing and Mosaic Wizard.’ This would pull up survey plate images of the target you are interested in and allow you to rotate the field to set the framing you want. Cool! Then when SGP was centering the frame, it would prompt you to rotate the camera to get the framing you specified in the wizard. I found this to be very convenient. So much so that I installed a camera rotator so that this framing can be dialed in automatically and confirmed via platesolving. Even more convenient!

Proper framing is not hard and can greatly improve your final image. Take the time to get it the way you want it - it is well worth the effort!

The center of M33 is just where I wanted it and I was really happy with the detail I captured. The framing? Not so much. I really wish I had carefully composed the image before I started. (click to enlarge)

This is a later effort, where I took the time to frame the composition that I wanted. I am much happier with this version. (Click to enlarge)

#3. Focus Issues

When I first started, I focused the camera by eye. It was slow and painful - and usually wrong! I scrapped some of my very first images because the stars were huge soft blobs.

Then I read about using Bahtinov masks to focus a scope. Hmmmm - that sounded good! Lo and behold, my William Optics scope had one built into the lens cap. How handy is that? Let's use that! I still had a heck of a time getting used to doing this, and the result was that sometimes my focus was off. As a rookie, I found it hard to judge where the diffraction spikes needed to be for a good focus. Some sequence control software has tools that will help with the Bahtinov focusing process. APT has a Bahtinov helper screen that guides the process with metrics and a graphical representation of how far off from perfect focus you are. This was quite helpful.

Sequence Generator Pro has a star measure tool that will calculate and display the HFR (Half Flux Radius) of the stars at your current focus position. This provides an analytical method for guiding your manual focusing efforts.
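For the curious, here is a rough Python sketch of how an HFR-style measurement can be computed from a small cutout around a star: a flux-weighted mean distance from the star's centroid, which is one common way focusing tools approximate it. Lower values mean tighter stars and better focus.

```python
import numpy as np

def half_flux_radius(cutout: np.ndarray, background: float = 0.0) -> float:
    """Approximate a star's HFR as the flux-weighted mean distance of each pixel
    from the star's centroid. `cutout` is a small 2-D array centered on one star."""
    flux = np.clip(cutout.astype(np.float64) - background, 0.0, None)
    total = flux.sum()
    if total <= 0:
        return float("nan")
    ys, xs = np.indices(flux.shape)
    cy = (ys * flux).sum() / total            # flux-weighted centroid (row)
    cx = (xs * flux).sum() / total            # flux-weighted centroid (column)
    r = np.hypot(ys - cy, xs - cx)            # distance of each pixel from the centroid
    return float((r * flux).sum() / total)    # lower HFR = tighter star = better focus
```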

Good focus is a prerequisite to a good image. Take the time to learn how to do it right. Do it just before you start your sequence, and if you are now pushing the number of subs in your integrations (as recommended in Item #1), you should plan on pausing the sequence every hour or so to check your focus. As your scope cools in the night air, it will contract, and this is enough to throw things off.

Manual focus without any tools is hard to accomplish. This image of Messier 8 suffers from several problems but the focus is clearly a major issue here. (click to zoom)

Here is another image of Messier 8, taken in 2020. Focus is done well, the composition is what I wanted - I've avoided (mostly!) the problems in this list. (Click to enlarge)

#4. Waiting too long before leveraging calibration data

There is a lot to learn before you get to the point where you can take well-formed subs, and getting to that point is a real accomplishment. Once I got there, I was all about capturing the subs, but I did not really understand what I needed for cal frames. It's not hard to do - certainly simpler than getting everything else working together well. But it took a while before calibration exposures were a routine part of the capture process.

First, I started with Darks and Biases - they were easy to do - and they can be reused for different projects for a while.

It took me a bit longer before I had flats down - part of that was building an LED panel-based flat light source. But flats turned out to have the most significant impact on my images. They compensate for the center-to-edge fall-off in the optical system and for dust that may find its way onto the filters or the sensor window of the camera. This can make a big difference and can greatly simplify your processing.

Cal frames are not that hard to do - take the effort as early as possible to make cal frames an essential part of your capture process.
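Conceptually, the arithmetic is simple. Here is a minimal sketch of standard light-frame calibration - real stacking tools add refinements like bias handling, pedestals, and dark scaling, but the core idea is just subtract-and-divide:

```python
import numpy as np

def calibrate_light(light: np.ndarray,
                    master_dark: np.ndarray,
                    master_flat: np.ndarray,
                    master_flat_dark: np.ndarray) -> np.ndarray:
    """Basic frame calibration: subtract the master dark from the light, build a
    normalized flat (flat minus its own dark, scaled to a mean of 1.0), then
    divide to correct vignetting and dust shadows."""
    flat = master_flat.astype(np.float64) - master_flat_dark
    flat /= flat.mean()                      # normalize so the division preserves signal levels
    return (light.astype(np.float64) - master_dark) / flat
```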

This is one of my simple flat light sources. It uses an inexpensive LED tracing panel from Amazon. I packaged it in closed-cell foam to protect it and made a circular cut-out that can be removed so that I can hang it off the end of my scope. (click to enlarge)

The flat light source in position on my small scope platform. (click to enlarge)

 

Here is a flat from my AP130 platform - in this case using the ASI1600MM-Pro camera. You can clearly see the optical center-to-edge fall-off and the dust donuts that Flat calibration will deal with. (click to enlarge)

 

#5. Not taking flats for every session

This is related to the previous item. Flat frames can be used to compensate for dust and for fall-off. Right? Yes, however, keep the following in mind:

  • Dust moves. It can move when you bump your scope or when you are moving it around during setup. It can move when the scope is slewing or doing a meridian flip. What happens when you take a flat and then the dust moves? You will be correcting for the wrong dust patterns!

  • Cameras move. Or at least they can if you are doing good composition (see Item #2). If you take a flat and then rotate the camera, the scope fall-off patterns, and perhaps even some of the dust motes will move as well. Once that happens, flats taken before rotating your camera are no longer useful.

So what does this mean? Taking flats for every session is a good practice. Other cal frames like darks and biases, and even flat darks, can be captured once and used for a more extended period of time. They are tied to the electronic response of your system, and that does not change quickly or randomly. However, flats are related to the optical configuration of your imaging chain. They depend on the rotation angle of your camera and the dust on your optical surfaces. Since these can change over time, you really can't re-use them. Using flats taken a month ago may be convenient, but they will not do what you really need them to do.

The best practice here is to take flats for every session.

This flat is from my AP 130 mm scope and the ASI1600MM-Pro camera. (click to enlarge)

This flat was taken on the same platform but the camera was rotated to a different position and the dust has moved around. (click to enlarge)

#6. Not reviewing your subs carefully before processing

Once I had a collection of subs, I couldn't wait to see what the stacked image looked like. This meant that the very next morning, I would load all of the images into DeepSkyStacker, run it, take the final integrated image, jump into Photoshop, and never look back. Perhaps in some cases, I would do a very cursory inspection looking for satellite trails and such so that I could remove those frames (another mistake - see Item #7).

It's a shame because I missed a real opportunity there. You really should closely inspect your image data before beginning to work on the image.

Why is this important? The data we collect in our subs is the foundation of the final image. I should have taken the time to review every sub and every cal frame and learned more about the data I was collecting, how it looked, and how it varied under different conditions. How would you even know if you have a problem if you are unfamiliar with what the low-level data looks like? What are the min code values? What are the max code values? Do all the frames look the same? How are the star images - tight and round? Any tracking issues? Are there clouds coming through the images? Any gradients in the image? Do the cal images seem consistent? Are the mins, means, and maxes pretty similar across the frames collected? Are there consistent patterns, or patterns that change? Are there any light leaks getting through when the dark cal frames are taken?

This is your chance to understand where you have issues that can only be fixed by changing your capture methodology or setup. It’s also a chance to identify problems that may show up in your images that may have to be dealt with during preprocessing or processing.

So how do you go about inspecting this data? DeepSkyStacker provides tools to help you with this - once you load the subframes and cal frames, you can step through each one and view it. Metrics can be computed to highlight any frame issues.

In PixInsight, the Blink tool is a great way of inspecting your data. The SubframeSelector process calculates metrics and creates plots of those metrics across your subs. This is great for calling out subs that have specific issues compared to the other subs in the set.
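If you want to compute simple statistics yourself, a short Python loop over your subs (again using astropy, with a placeholder file pattern) is enough to make the odd frame stand out:

```python
import glob
import numpy as np
from astropy.io import fits   # pip install astropy

# Print quick per-sub statistics so outliers (clouds, gradients, light leaks)
# stand out before stacking. The file pattern is just an example.
for path in sorted(glob.glob("lights/*.fits")):
    data = fits.getdata(path).astype(np.float64)
    print(f"{path}: min={data.min():.0f} mean={data.mean():.1f} "
          f"median={np.median(data):.1f} max={data.max():.0f} std={data.std():.1f}")
```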

Take the time to know your data!

Blink of a group of 300-second Ha subs for Messier 16 from a recent project that spanned 3 nights.

#7. Throwing out hard-won frames just because they have a satellite or airplane trail in them

As I indicated in Item #6, if I looked at my data and saw anything wrong with it, I threw that frame away.

You work hard to capture frames, and even ones with trails have useful information that can still be used to enhance the signal-to-noise of the final integrated image. Let's say that you had 30 subs. Take any one pixel in the image at a given X, Y location. With 30 subs, you have 30 different measures of that pixel. In its simplest form, stacking just uses the average of those 30 samples as the value of that pixel in the final integrated master image. But those 30 values also form a statistical distribution - and you can run significance tests to determine whether any pixel value is an outlier that should not be considered a legitimate member of that distribution. This kind of computation is done to reject pixel values that seem odd or very dissimilar to the other measures for that specific pixel. Satellite or airplane trails are odd-ball events: you may see them in one frame and not in the others.

Rejection algorithms can cull the affected pixels from the averaging process, so don’t remove those frames!
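To make the idea concrete, here is a toy sigma-clipping stack in Python. It is a bare-bones sketch of what a real rejection algorithm (such as PixInsight's ImageIntegration) does in a far more sophisticated way: any pixel sample that strays too far from the per-pixel median is simply left out of the average.

```python
import numpy as np

def sigma_clipped_stack(subs: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Stack registered subs (shape: [n_subs, H, W]) with simple sigma clipping:
    samples that deviate from the per-pixel median by more than `sigma` standard
    deviations (satellite/airplane trails, cosmic rays) are excluded from the mean."""
    subs = subs.astype(np.float64)
    median = np.median(subs, axis=0)
    std = subs.std(axis=0)
    keep = np.abs(subs - median) <= sigma * std        # True = keep this sample
    # Mean over the surviving samples at each pixel location
    return np.where(keep, subs, 0.0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```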

 

This is a rejection image created by PixInsight's ImageIntegration process. It shows the pixels that were rejected during the integration process. Seeing them here means that they will not be showing up in your final image! (click to enlarge)

 

#8. Clipping your background sky to black

When I first started trying to process my images, I was using Photoshop. The first issue I wrestled with was the color balance of the image. It was often green and weird. I often dealt with it by adjusting the background sky to black. I just took the minimum code values for R, G, and B and mapped them to zero. The color balance looked better, and the background sky was black. Space is black - right? That's the ticket!

No - that most certainly is not the ticket.

The background sky has its own brightness, and it can vary across your frame. A good Astrophoto will always show the background sky as something other than black. Look at the sample images below. Which one looks more natural? The first one is clipped to black while the second one is not.

There is another reason to make the background black - and that's to hide things. Maybe you have a lot of noise in the background - driving it to black will clip and hide that noise! Maybe you did not take flat cal files, and now you have "dust donuts" in your background - no problem - bury them in the black! Right?

Again, no!

The best way to manage noise is smart application of the right noise reduction techniques. The best way to handle "dust donuts" is to remove them via flat calibration files. Burying your artifacts in black to hide them is often seen as a rookie move. It is best never to clip your data if you have that option!
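One quick sanity check you can run on a finished image is to count how many pixels sit at pure black. This is only a rough indicator, and the black level here is an assumption you should adjust for your own data range.

```python
import numpy as np

def clipped_fraction(image: np.ndarray, black_level: float = 0.0) -> float:
    """Fraction of pixels clipped to (or below) the black level. In a stretched
    astro image this should be essentially zero; a few percent or more usually
    means the background has been crushed to black."""
    return float(np.count_nonzero(image <= black_level) / image.size)
```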

The Fireworks Galaxy - I drove the background to black on this one. (click to enlarge)

This later version of the Fireworks Galaxy never clips the sky brightness to black, and I think it looks a lot more natural. (click to enlarge)

#9. Getting the color wrong

As suggested above, I had a lot of early problems with color. My master images often had color issues, and part of the problem was that I just did not know what the right color should be. Even when I did know, I was not always sure of the best way to correct the balance.

So let’s address both issues.

What should the color be? Well, you have some degree of freedom here in how you choose to create a look for your image, but there are some things that are right, and some things that are just plain wrong.

For example, One-Shot-Color cameras - often used by beginning astrophotographers - use a Bayer color filter pattern of RGGB. This means every block of 4 pixels has two measures of green, one of red, and one of blue. Since green is sampled twice as often as red and blue, is it any surprise that your master image looks green?

How many astronomical targets are actually green in color? Almost none! You can get some green in some planetary nebulae, but typically any Astroimage with a predominant green balance is just wrong.

Then how should it look? Background sky for the most part should be neutral, so that’s one cue.

You can also try seeing what other astrophotographers have done with that target. Astrobin.com is great for this. Of course, there will be variances in color balance - but you can get a feel of where you should be taking the image.

How do you adjust the image so that it has the right color? That depends on what software you are using for your image processing. There are many ways to do this, but shifting the R, G, and B histograms relative to one another until color biases are removed is one. If you are using PixInsight, there are a few good processes - and one magical one - designed to adjust the color of an image. The magical one is a process called Photometric Color Calibration. This routine identifies the stars in your image, looks up their actual photometric colors as measured by scientific plate surveys, compares the correct coloration for those stars with the colors that your image currently has, and comes up with a color transform to normalize the color. Now you have accurate colors - pretty cool!
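For a sense of what "shifting the histograms" means in practice, here is a very crude background-neutralization sketch in Python. It only removes a global color cast by equalizing the per-channel background estimates - nothing like what Photometric Color Calibration does, but it shows the idea. The use of the channel median as a background estimate is an assumption that only holds when most of the frame is background sky.

```python
import numpy as np

def neutralize_background(rgb: np.ndarray) -> np.ndarray:
    """Crude background neutralization for a linear RGB image (H, W, 3):
    shift each channel so its background estimate (the channel median) matches
    the mean of the three, removing a global color cast without changing the
    relative brightness of the target."""
    rgb = rgb.astype(np.float64)
    channel_bg = np.median(rgb, axis=(0, 1))     # per-channel background estimate
    offsets = channel_bg - channel_bg.mean()     # how far each channel sits from neutral
    return rgb - offsets                         # shift the histograms relative to one another
```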

Here is my first attempt at M51. It still looks pretty green here. (click to enlarge)

At some point, I recognized that the color was off. I went back in and tried to adjust things a bit. The color is a lot better here, but not perfect. (click to enlarge)

This is my most recent attempt. Photometric Color Calibration nailed the colors here. (click to enlarge)

#10. Processing is overdone, stars are blown out

I spent a good chunk of my career focused on developing digital enhancement algorithms for consumer imaging. I used to say that the best part of digital imaging was that you could modify an image in any way you want. I would then go on to say the worst part of digital imaging is that you can modify an image in any way you want! Going too heavy on image processing will often create an image that does not look natural.

Now don't get me wrong - you have to do a lot of image processing to tease the signal from the noise on an Astroimage in order to produce an image that is pleasing to the human eye.

But it is so very easy to just go too far:

  • Noise reduction. You do have to reduce noise. Go too far, and your image becomes unnaturally smooth - almost plastic-looking! The image will look much better if you leave some noise in.

  • Contrast too high or unnatural. It is easy to create non-linear transforms to emphasize different aspects of an image’s tone scale. However, going too far, or changing the slope of the tone curve too drastically over a narrow range of code values, can produce images that look weird and unnatural.

  • Color Saturation. For most astrophotos, you do need to boost color saturation to bring out the colors that are there. But you can go too far. Fluorescent colors can be overdone and garish. While this does enter into the realm of personal preference, there are boundaries. For example - I am a color guy - I tend to go for a strong color position (probably a leftover from my consumer imaging work), but I know other Astrophotographers who like a much more subdued look. So this is somewhat dependent on what your vision is for an image - but if you go too far, a large percentage of your audience will be turned off. Try to keep your images looking natural!

A lot of the interesting stuff in Astronomy is exceedingly faint. To bring those features out, you must scale the contrast greatly. However, stars can be quite bright and already near the top of the tone scale range for your image. Bright stars may have already been blown out during capture (i.e., sensor saturation). There is not a lot you can do about those stars. There will also be a lot of stars that are bright but not yet blown out.

But if you subject the entire frame to the tone scale boost you need to bring up the faint and fuzzy aspects of an image, you will likely scale your stars along with it, causing them to look blown-out and colorless even when they did not start out that way.

Early on, it is good to start thinking about protecting your stars when boosting the image's faint structures. This can be challenging. I know I still wrestle with it in every image I work on. But beginners are often somewhat oblivious to this. Just as many beginners bury problems by clipping them in the black portions of the tone scale, they will also tend to drive too many stars to become bloated and white - killing their aesthetics. It pays to really watch this and begin learning about the use of star masks to protect your stars from operations that might damage them.
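To illustrate the idea (and only the idea), here is a toy Python sketch of using a soft star mask to blend a gentle stretch over the stars with an aggressive stretch everywhere else. Real workflows build much better masks, and the threshold and stretch exponents here are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # pip install scipy

def stretch_protecting_stars(linear: np.ndarray,
                             star_threshold: float,
                             stretch_power: float = 0.25) -> np.ndarray:
    """Illustrative only: stretch the faint stuff hard while blending toward a
    gentler stretch wherever the image is star-bright. `linear` is a normalized
    0..1 mono image; `star_threshold` is in the same units."""
    strong = linear ** stretch_power              # aggressive stretch for faint nebulosity
    gentle = linear ** 0.6                        # milder stretch to keep star cores from blowing out
    mask = gaussian_filter((linear > star_threshold).astype(np.float64), sigma=2.0)
    mask = np.clip(mask, 0.0, 1.0)                # soft-edged star mask: 1 = star, 0 = background
    return mask * gentle + (1.0 - mask) * strong
```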

This image, taken in 2019, has all of the problems talked about above. Sky clipped to black, stars blown-out, colors exaggerated and extreme. (click to enlarge)

One year later in 2020, I shot M16 again and this time I corrected many of my errors and I was able to use some more sophisticated processing techniques. Far from perfect, but an improvement. (click to enlarge)

This most recent example was taken this year, 2021 - this time with a mono camera and narrowband filters. I did not get all of the integration that I wanted, but my image processing has evolved and allowed me to get a decent image from a limited data set. (click to enlarge)

Conclusion

This list was inspired by reviewing my early images and thinking back to how I was approaching things in the beginning. So maybe this list is very “me” specific. But I often see similar problems with images made by beginners so I suspect I am not the only person to make these mistakes.

Once you have gotten down the mechanics of reliably capturing decent subs, you start the next big learning curve, which involves the calibration and processing of that data.

As soon as you have gotten to this part, you have begun the next critical phase of your journey - and this is a phase that may never really end. You must work hard to develop your image processing skills. Knowing what to do, and how and when to do it, is critical here. In parallel with this, you will be developing and calibrating your inner “eye” - which will guide you in the creation of an image with your own signature look.

Developing your ability to process images is often what separates the ‘average’ Astrophotographer from the exceptional one. If you are to develop as an Astrophotographer, you need to understand the importance of this. You need to apply yourself towards developing these skills. There are many resources out there to assist you. There are some great free videos on YouTube that are excellent learning resources. There are books and websites dedicated to it. Many start using Photoshop or GIMP but soon realize that these tools do not have the power and scope necessary for serious astronomical image processing - and finally migrate to PixInsight. I know I went over to PixInsight kicking and screaming the whole way. I was a long-time Photoshop user, and I found the PixInsight environment strange and unintuitive. But I now wish I had jumped over to PixInsight much sooner and worked harder early on to develop my skills there. I can’t imagine processing my images now without using PixInsight.

You may notice that many astrophotographers tend to keep the entire data set they collected for the long run. Why do that if you are done processing an image? It’s because they realize that in another six months or a year, they will be able to do a better job with that data set having further developed their own image processing skills and methods.

Part of this is starting to overcome some of the early challenges and missteps discussed here.

The items on this list cannot be conquered overnight, but becoming aware of these issues is the first step in overcoming them in your work and an important part of the long-term goal of developing your skills as an astrophotographer.


I hope you found this post interesting and helpful!

Are there other rookie mistakes that I should add to this list? If so, leave a comment below.

Patrick A. Cosgrove

A retired technology geek leveraging his background and skills in Imaging Systems and Computers to pursue the challenging realm of Astrophotography. This has been a fascinating journey where Art and Technology confront the beauty and scale of a universe that boggles the mind…. It’s all about capturing ancient light - those whispering photons that have traveled long and far….

https://cosgrovescosmos.com/