[This is just one of many articles in the author’s Astronomy Digest.]
This is a somewhat tricky subject that depends on the camera, the darkness of the imaging site, the dynamic range of the object and whether or not a tracking mount is used.
A first point, which overrides anything that follows, is that one should not over-expose parts of the image being taken. This is a real problem when imaging M42, the Orion Nebula. The central region surrounding the 'Trapezium' is very bright, and one sees many images in which it is blown out. In fact, when using an ISO of 800, I had to use an exposure of just 12 seconds to prevent over-exposing this region, and thus had to take very many sub-frames (subs) to be stacked in order to bring out the far fainter nebulosity surrounding it. [It would have been better to use a lower ISO and take fewer, longer exposures, as with very short exposures the readout noise of the camera becomes significant. The alternative is to combine a number of images of different exposures, effectively making an 'HDR' (High Dynamic Range) image. The sensors in modern Nikon and Sony cameras are said to be ISO invariant, and there can be an advantage in using lower ISO values: an image taken with these cameras at ISO 100 and then brightened by 3 stops in post-processing shows little difference from one taken at ISO 800. The result can even be better, as the ISO 100 image can be 'stretched' so that fainter parts of the image are brightened more than the brighter regions, thus increasing the effective dynamic range of the image (see the article 'What ISO to use for Astrophotography').]
Using a fixed tripod.
Other than the point above, there is only one consideration to take account of: preventing star trailing (unless one is deliberately taking a star-trails image). The maximum exposure time to prevent star trails depends firstly on the effective focal length of the lens, taking into account the crop factor (for a Nikon APS-C sensor the crop factor is 1.5, for a Canon APS-C sensor 1.6 and for a Micro 4/3 sensor 2). The basic rule is to divide a fixed number by the effective focal length of the lens. This number is widely quoted as 500 but, as the resolution of camera sensors has improved over the years, a better number to use is 300. With the latest 24 megapixel cameras, some are now using 200 for a full frame camera or 133 for an APS-C camera. [Using these latter numbers, the results tend to agree with the calculator linked to below.] One would often downsize the resulting image from 24 megapixels to perhaps 6 megapixels (a reduction to 50% in each linear dimension) and then a 270 (APS-C) or 400 (full frame) rule would, I think, be adequate.
So, if using the 300 rule, simply divide 300 by the effective focal length of the lens. For example:
18 mm lens: 16.7 seconds
24 mm lens: 12.5 seconds
35 mm lens: 8.6 seconds
50 mm lens: 6 seconds
90 mm lens: 3.3 seconds
However, though not often mentioned, the declination of the region of sky being imaged is also a factor: the sky moves faster across the sensor at low declinations (for example the Orion region, centred at DEC 0) than at high declinations (for example the Plough, centred at DEC +65), so somewhat longer exposures can be used when imaging higher-declination regions of sky. One should then be able to increase the time calculated by the 300 (or other number) rule by the factor 1/cos(DEC), as tabulated below:
~40 degrees declination x 1.3
~50 degrees declination x 1.5
~60 degrees declination x 2
~70 degrees declination x 3
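The rule-number arithmetic and the declination factor above can be combined in a few lines. This is only a sketch of the calculation just described; the function name and its defaults are my own choices.

```python
import math

def max_exposure(focal_length_mm, crop_factor=1.0, rule=300, dec_degrees=0.0):
    """Longest untrailed exposure (seconds) on a fixed tripod.

    Divides the rule number (300 here; 500, 200 etc. are also used) by the
    effective focal length, then relaxes the result by 1/cos(declination)
    because stars drift more slowly across the frame at high declination.
    """
    effective_fl = focal_length_mm * crop_factor
    return (rule / effective_fl) / math.cos(math.radians(dec_degrees))
```

For example, a 24 mm lens on a full frame camera gives 12.5 seconds at the celestial equator, but the same lens pointed at DEC +60 allows about 25 seconds.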
An online calculator
This takes into account the pixel size of the camera (one selects from a very wide range of cameras) and, rather than using the declination of the object, uses the latitude of the observer, the direction of imaging and the angular height above the horizon. From these last three it calculates the declination of the imaging region and then uses the 1/cos(DEC) formula I have used above. The calculator even asks for the aperture of the lens in use so that it can calculate the size of the Airy diffraction pattern! I suspect that this formula aims to provide absolutely pin-point stars. [If your camera is not on the list, find a full frame or APS-C camera that has a similar number of pixels.]
Removing slight star trailing in Photoshop or GIMP
It is actually quite easy to eliminate a little star trailing, which makes stars look like small sausages. One should first increase the size of the image to 200%, as this makes the correction easier. The image should then be duplicated to give two layers and the blending mode of the upper layer set to 'Darken'. Having selected the 'Move' tool (which in Photoshop is at the top of the tools column), one can then use the up/down/left/right arrow keys to move the upper layer over the lower one a pixel at a time; it quickly becomes obvious how to use them. When the stars have become point-like, simply flatten the two layers and then reduce the image size to 50% to bring it back to its nominal size.
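For those who prefer to script it, the same trick can be sketched with NumPy: the 'Darken' blending mode is simply a per-pixel minimum of the image and a shifted copy of itself. This is my own illustration, not a Photoshop or GIMP internal; a real image would of course be loaded from file.

```python
import numpy as np

def remove_trailing(img, dx, dy):
    """Darken-blend an image with a shifted copy of itself.

    Mimics the Photoshop/GIMP technique above: the shifted layer is
    combined with the original using 'Darken' (a per-pixel minimum),
    trimming the bright tail off slightly trailed stars.  dx/dy are
    the trail offsets in pixels, found by trial just as with the
    arrow keys.  Note np.roll wraps at the edges, which is acceptable
    for a sketch but not for the border pixels of a real image.
    """
    shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
    return np.minimum(img, shifted)
```

Applied to a star trailed across two pixels, a one-pixel shift in the trail direction leaves a single point-like star, just as the layer method does.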
Having written all this, the best advice is to purchase a tracking mount. There is an article in the digest (‘Three Tracking Mounts’) which addresses their use.
Using a tracking mount
Things now get significantly more complicated!
A quick conclusion when using recent CMOS cameras is this: first, do not use the 'In Camera Long Exposure Noise Reduction' mode; then, in light-polluted locations use 30 second exposures, but in dark locations increase this to 1 to 2 minutes (providing that the tracking is good).
[This may not agree with much that you may have read. Longer exposures give better results when using CCD cameras, as their read noise is considerably higher (typically 8-10 times greater than with CMOS sensors) and their readout times much longer. Short sub-exposures increase the effect of this readout noise and reduce the efficiency of the imaging process. For example, my SBIG CCD camera takes ~10 seconds to read out its 8 megapixel sensor.]
Let's try to justify this conclusion by considering an example where one aims for a total exposure of 30 minutes. If the camera and tracking were perfect and there were no light glow (pollution) or airglow, then in theory it would not matter whether a single 30 minute exposure were taken (provided that this did not over-expose parts of the image) or 120 fifteen-second subs were taken instead and later stacked in 'Deep Sky Stacker' or 'Sequator'.
Neither of these extremes is sensible in the real world. It is unlikely that the tracking would be perfect over 30 minutes unless autoguiding were used. It is also highly likely that aircraft will fly across the imaging region, and their trails would need to be cloned out of the image. It is also possible that a gust of wind might upset the image, or one could bump the tripod. (Do not laugh, many of us have done it.)
Conversely, after each 15 second sub perhaps 2 seconds is required for the data, usually raw+JPEG, to be written to the SD card, so the 120 sub-exposures will take a further 4 minutes in total. The good thing about using short exposures is that if a plane (or the Space Station!) were visible in one or two frames, one could simply remove them from the stack without any real loss. This is why it is good to take both raw and JPEG files, as one can quickly run through the JPEGs to look for problems without having to process all the raw frames. An alternative in Deep Sky Stacker is to use the 'kappa-sigma' clipping mode. This finds the average value for each pixel in the stack of frames, rejects any value in a single frame that deviates from the mean by more than some threshold, and replaces that value with the mean value, so removing the intrusion.
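A minimal sketch of how kappa-sigma clipping works may help; this is my own NumPy illustration of the technique, not Deep Sky Stacker's actual code. At each pixel position, values further than kappa standard deviations from the mean are rejected and the mean recomputed.

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=3):
    """Kappa-sigma clipped mean of a stack of frames.

    frames: array of shape (n_frames, height, width).  Per pixel,
    values further than kappa * sigma from the mean are rejected and
    the mean recomputed, which removes aeroplane and satellite trails
    that appear in only a few frames.
    """
    data = np.asarray(frames, dtype=float)
    mask = np.ones_like(data, dtype=bool)        # True = value kept
    for _ in range(iterations):
        kept = np.where(mask, data, np.nan)
        mean = np.nanmean(kept, axis=0)
        sigma = np.nanstd(kept, axis=0)
        # small epsilon so identical values survive when sigma is zero
        mask = np.abs(data - mean) <= kappa * sigma + 1e-12
    kept = np.where(mask, data, np.nan)
    return np.nanmean(kept, axis=0)
```

Feeding it a stack of identical frames with one corrupted pixel (a 'plane trail' in a single frame) returns the clean value at every pixel, which is exactly the behaviour described above.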
There is another advantage if a DSLR or mirrorless camera is to be used with a telescope of medium to long focal length. In this case, if the atmosphere is somewhat turbulent, some of the captured frames can be slightly blurred compared to the majority, and a better result with tighter stars will result if these frames are removed from the stack provided, of course, that a reasonable number of ‘good’ frames are left.
To try to determine the optimum exposure length within these two extremes, one needs to consider the sources of noise in a real, rather than perfect, camera imaging under skies that may not be fully dark. There are then three sources of noise: sky noise (light glow and airglow), dark current and readout noise. These add vectorially into the final stacked image and, if one dominates, the other two can essentially be ignored. This is the case when imaging under light-polluted skies or partial moonlight. Sky noise builds up a pedestal in the stacked image which reduces the dynamic range of the image and masks out stars and (particularly) nebulae whose brightness is less than that of the light pollution. The only real solution is to find a really dark-sky location, but then the other sources of noise become significant.
There is one fundamental difference between taking a single long exposure and stacking a number of sub-frames in, for example, Deep Sky Stacker. A pixel well in the sensor can only hold a given number of electrons, and in a long exposure this can easily be overwhelmed by sky noise or dark current. If, however, short exposures are used which do not saturate the pixel wells and which are read out with 14 or 16 bit analogue-to-digital converters, with the data being stacked later, the stacking program can accumulate the values in what, I suspect, are 32-bit memory locations. If this is true, these could accept 16,000 or more sub-frames without over-filling! The average value of each pixel is then output with a precision of either 16 or 32 bits as a TIFF file.
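As a sanity check on that suspicion (the 32-bit accumulator is, like the author's, an assumption): a 32-bit unsigned location can hold 65,536 full-scale 16-bit samples before overflowing, comfortably above the 16,000 quoted.

```python
# Worst case: every sub delivers a full-scale ADC value.
bits_accumulator = 32
max_frames_16bit = 2 ** bits_accumulator // 2 ** 16  # 16 bit ADC
max_frames_14bit = 2 ** bits_accumulator // 2 ** 14  # 14 bit ADC
```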
A typical DSLR pixel well can hold 60,000 electrons, so if the sensor has a quantum efficiency of 50% it would saturate after 120,000 photons had fallen on it. Let us suppose, for example, that 30,000 of the photons that would be recorded in a 30 minute exposure were due to light pollution, so the light pollution would not overwhelm the desired image. This equates to 1,000 photons per minute. Were a single 30 minute exposure taken, the light pollution would show a random noise contribution equivalent to the square root of this total number, which is ~173. Now let's assume that 1 minute subs are used, so ~1,000 photons are captured in each frame. Each will have a noise level equivalent to its square root, which is ~31 photons. If 30 of these frames are stacked, the noise falls by a further factor of the square root of 30, which is 5.4, so the averaged light-pollution pedestal would have a noise level of 31/5.4, which is 5.7 photons, vastly less and effectively insignificant. So, to minimise the random noise in the light pollution, shorter subs are better. [The light pollution pedestal will still, however, fill 1/4 of the pixel well over the total exposure.]
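The square-root arithmetic above can be checked in a few lines; the numbers are exactly those used in the paragraph, nothing more.

```python
import math

total_photons = 30_000                      # light pollution over 30 minutes
noise_single = math.sqrt(total_photons)     # a single 30 minute frame: ~173

subs = 30                                   # thirty 1 minute subs instead
photons_per_sub = total_photons / subs      # 1,000 photons per sub
noise_per_sub = math.sqrt(photons_per_sub)  # ~31.6 photons per sub
# averaging the stack reduces the noise by a further factor of sqrt(30)
noise_of_average = noise_per_sub / math.sqrt(subs)  # ~5.8 photons
```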
A similar result applies to the sensor's dark current. In fact, with a single 30 minute exposure the sensor may well become saturated. In this case, however, the noise within the sensor is not quite random and may well have a fixed pattern along with some variability from sub to sub. This is one very good reason for 'dithering' the pointing of the camera during the set of subs so that the dark current is smoothed out somewhat.
The dark current increases by a factor of 2 for each ~6 degrees Celsius rise in temperature, which is why cameras should be kept as cool as possible. [I can mount a small icepack against the back wall of my Sony A5000 camera.] However, the latest CMOS sensors at room temperature have as low a dark current as older CCD cameras cooled to -20 Celsius and, if there is any significant light pollution present, it can probably be ignored. This is why, at the head of this section, I suggested that one should not normally use 'In Camera Long Exposure Noise Reduction', which would halve the time spent imaging the sky.
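The doubling-every-~6-degrees rule of thumb translates into a simple scaling (a sketch only; the function name is mine): cooling a sensor by 12 degrees Celsius should quarter its dark current.

```python
def dark_current_scale(delta_t_celsius, doubling_interval=6.0):
    """Relative change in dark current after a temperature change.

    Uses the rule of thumb quoted above: dark current doubles for
    roughly every 6 degrees Celsius rise, and halves for every fall.
    """
    return 2.0 ** (delta_t_celsius / doubling_interval)
```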
The final noise contribution is the readout noise, which adds some noise to the signal as it is read out from the sensor. For a given total exposure, the greater the number of sub-exposures used, the larger will be its contribution to the final image and, where there is little or no light pollution, this can become significant. So under very dark skies the lowest-noise images will be achieved with longer exposures, reducing the number of sensor readouts, and exposures of one to two minutes will be better than, say, the 30 seconds that I suggested under light-polluted skies. Honestly, there is no point in taking sub-exposures longer than this when using modern CMOS sensors.