Imaging the ‘Heart of Andromeda’: thoughts about Jpeg and raw files and the use of dark frames.

 

The Heart of Andromeda

In October 2015, I spent some time imaging the Andromeda Galaxy, M31, using my Canon 1100D DSLR (modified to be more sensitive to H-alpha emission) coupled to my Takahashi FS102, f/8, fluorite refractor.  As can be seen in the Field of View calculator image below, the field would not have encompassed the outer parts of the galaxy.

I took a set of 88 light frames, each of 100 seconds, and saved them as both raw (.CR2) and Jpeg files.  This was followed by 14 dark frames.  All were taken at a stable sensor temperature of ~18C.

Using Deep Sky Stacker (DSS) I processed the raw frames both with and without the dark frames, and also the Jpeg frames (with which dark frames cannot be used).  I found the results interesting and hope that you might as well.  First, some general thoughts about dark frames and the use of Jpeg files, followed by the results of stacking the three possibilities and, finally, how the output from DSS was processed in Adobe Photoshop.  (This used an interesting technique.)

Thoughts about Dark Frames

The first thing to point out is that, ideally, dark frames should be taken with the sensor at the same temperature as when the light frames were taken.  This is not a problem when a cooled CCD camera with ‘setpoint’ Peltier cooling is used, but it is a problem when a DSLR or Compact System Camera (CSC) is used.  I have written an extensive article about the use of dark frames with these cameras which could be well worth reading.  If a long sequence of light frames is taken, the sensor temperature will increase over a period of 30 minutes to an hour and will then stabilise at some temperature (perhaps 12 C above ambient).  So dark frames taken at the end of an imaging session can be used successfully if the sensor had stabilised before the light frames were taken.  Otherwise, the ‘Long Exposure Noise Reduction’ mode can be used, in which case a dark frame of the same exposure length is taken immediately after each light frame.  The problem is, of course, that this halves the time spent collecting photons from the heavens and, in winter, it may well be best to ignore this imaging mode.  What is less well known is that the dark frames taken in this mode actually add noise into each light frame, which is why one is, instead, recommended to take a number of dark frames, at least 10, which will be averaged in DSS to make a ‘Masterdark’ frame.
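The noise advantage of a ‘Masterdark’ over single dark frames can be shown in a few lines of NumPy.  This is a simplified sketch using simulated dark frames, not DSS’s actual calibration pipeline, and the pixel values are invented for illustration:

```python
import numpy as np

# Hypothetical example: simulate 14 dark frames as 2-D arrays of pixel
# values (in a real workflow these would come from decoded raw files).
rng = np.random.default_rng(0)
dark_frames = [1000 + rng.normal(0, 20, size=(100, 100)) for _ in range(14)]

# Averaging N dark frames reduces their random noise by ~sqrt(N), which
# is why a Masterdark adds far less noise to the calibrated lights than
# the single dark frame used by Long Exposure Noise Reduction.
masterdark = np.mean(dark_frames, axis=0)

single_noise = np.std(dark_frames[0])   # noise of one dark frame
master_noise = np.std(masterdark)       # noise of the averaged frame
print(single_noise / master_noise)      # roughly sqrt(14), i.e. ~3.7
```

With 14 frames the random noise in the Masterdark is reduced by a factor of about 3.7 compared with a single dark frame.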

It is obviously worthwhile keeping the camera as cool as possible.  Cameras with lifting rear screens help, as this reduces the heat path from the sensor and electronics to the outside.  In this case, one could attach a heat sink to the rear of the camera or (as described in my article about the Sony A5000 camera) enable a small icepack to be held against its rear.  I have even used a plastic box to surround the camera with ice packs within an insulated layer.  Any of these will help reduce the effects of the dark current that the use of dark frames is aimed to ameliorate.

The camera was controlled over a USB cable from AstroPhotography Tool (APT).  One nice feature is that it allows an internal camera temperature to be appended to the raw file names.  This temperature is associated with the processing electronics but, if stable, implies that the sensor temperature will be stable too.  As was the case here, a set of light frames was taken at a stable temperature – 18C – so it was possible to follow the light frames with a number of dark frames at the same temperature; a total of 14 dark frames were taken.  This telescope, with a focal ratio of f/8, barely vignettes towards the corners of the frame, so flat frames were not deemed necessary.  Though not as widely known as it might be, provided the dark frames are taken at the same temperature as the light frames (as is ideal) there is no need to take any bias frames, as the dark frames include the bias signal.  Deep Sky Stacker will only use bias frames to help estimate what dark frames at a given temperature would look like when dark frames taken at a different temperature are used.

Thoughts about the use of Jpeg files

One good thing about Jpeg images is that they can be quickly scanned through to see if any frames ought to be eliminated from the stack.

One might think that the use of raw frames, with a 12 or 14-bit depth, should give better results when stacked in DSS than Jpegs having only an 8-bit depth.  However, if many 8-bit frames are stacked where there is noise in the image (as there will be), the result (I promise you) is to increase the effective bit depth (see the appendix at the end of this essay), and so there may not be much advantage to be gained, if any, by using raw frames.  I know that this is somewhat heretical – but see below.

The images below show the DSS outputs for the three cases:

Raw frames without Dark Frames

The DSS output screenshot was very odd and it proved impossible to create an image from the output file.  I have absolutely no idea why this was the case.

Raw frames using the Dark Frames

The DSS output screenshot was quite normal and one could adjust the colour balance using the sliders.  The DSS output file could then be processed as described below.

Initial output from DSS

Having colour balanced using the sliders

Jpeg files

The DSS output was again normal and the output file could be processed in the same way as the output from the ‘raw frames’.  The final result was essentially the same as that derived from the raw files so giving some credence to my statement that the use of many Jpeg files can rival the results derived from raw files.

Processing the output from DSS using Adobe Photoshop

As usual, the output from DSS has to be stretched.  I made several applications of the ‘Curves’ function in Adobe Photoshop.  One could instead use the free program GIMP to do this, or do an initial stretch in IRIS and then output an 8-bit per channel (.BMP) image to process further in, say, Photoshop Elements.
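The effect of repeated gentle ‘Curves’ adjustments can be approximated in code.  A minimal sketch, assuming the DSS output has been loaded as a floating-point array scaled to 0–1 (the iteration count and gamma value here are illustrative, not the settings used on the real image):

```python
import numpy as np

def stretch(img, iterations=3, gamma=0.6):
    """Apply a mild non-linear stretch several times, mimicking
    repeated gentle 'Curves' adjustments in Photoshop."""
    out = np.clip(img, 0.0, 1.0)
    for _ in range(iterations):
        out = out ** gamma  # lifts faint values far more than bright ones
    return out

# Hypothetical pixels: a faint signal of 0.01 against a bright star of 0.9.
faint, bright = stretch(np.array([0.01, 0.9]))
# The faint value is raised dramatically while the bright value
# barely changes, which is exactly what stretching is for.
```

Each pass compresses the bright end only slightly while pulling the faint galaxy signal up out of the background.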

I then used a somewhat unusual method to continue: separating the galaxy and the foreground stars to give two separate images.  I could then enhance the galaxy image without affecting the star images.  The stars can then be added back in using the ‘Screen’ blending mode.

The ‘Dust and Scratches’ filter was applied to the stretched image with a radius of 10 pixels.  The filter thinks most stars are dust and removes them!  I did not want to use a larger radius as this would blur parts of the dust lanes in the galaxy.  I then selected the area of the image that did not contain the galaxies (by selecting the galaxy regions and then inverting the selection) and used a larger radius of 27 pixels for the filter, so removing the brighter stars and leaving just the three galaxies.  The ‘Gaussian Blur’ tool was then used to ‘clean and smooth’ the areas of the image away from the three galaxies.
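Photoshop’s ‘Dust and Scratches’ filter behaves much like a median filter (the real filter also has a threshold control, which this sketch omits).  A pure-NumPy approximation shows why isolated bright stars vanish while smooth galaxy gradients survive; the image here is invented for illustration:

```python
import numpy as np

def median_filter(img, radius):
    """A thresholdless approximation of 'Dust and Scratches': replace
    each pixel by the median of its (2*radius+1) square neighbourhood."""
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    k = 2 * radius + 1
    # Stack every shifted copy of the window, then take the median.
    windows = np.stack([pad[i:i + h, j:j + w]
                        for i in range(k) for j in range(k)])
    return np.median(windows, axis=0)

# Hypothetical frame: a smooth gradient (the 'galaxy') plus
# single-pixel 'stars'.
img = np.tile(np.linspace(0.1, 0.4, 64), (64, 1))
img[10, 10] = img[40, 25] = 1.0
starless = median_filter(img, radius=2)
# The spikes are outliers within their windows, so the median rejects
# them; smooth regions are left essentially unchanged.
```

This is why a larger radius removes brighter (larger) stars but also risks softening fine structure such as dust lanes.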

Using the ‘Dust and Scratches’ filter to remove the stars from the image

This image was saved as ‘Galaxy’.  It was copied (Ctrl-A, Ctrl-C) and then pasted (Ctrl-V) over the stretched image (stars and galaxy), which had been reopened.  The two images were then flattened with the ‘Difference’ blending mode to give an image of just the stars, called ‘Stars’.  The stars in the centre of the image were nice and circular (showing excellent tracking during the 100 second exposures), but towards the corners of the image they appeared somewhat elongated.  (I did not have a field flattener at this time.)  Areas where the stars were elongated in the same direction were selected and the image duplicated.  The ‘Darken’ blending mode was selected and small adjustments made to the position of the upper layer relative to the lower layer using the up/down/left/right keys.  This circularises the stars but tends to make them fainter so, after flattening the two layers, the ‘Brightness/Contrast’ tool can be used to restore their brightness.  The image was then saved as ‘Stars’.
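The ‘Difference’ blending mode is simply the absolute pixel-wise difference between the two layers, so subtracting the star-removed ‘Galaxy’ layer from the full stretched image leaves only the stars.  In NumPy, with hypothetical pixel values scaled 0–1:

```python
import numpy as np

# Hypothetical images: the full stretched frame and the star-removed
# 'Galaxy' frame produced by the Dust and Scratches step.
stretched = np.array([[0.3, 0.9, 0.3],
                      [0.3, 0.3, 0.3]])   # 0.9 is a star on background 0.3
galaxy = np.full((2, 3), 0.3)

# Photoshop's 'Difference' blending mode: |upper - lower|.
stars = np.abs(stretched - galaxy)
# Everywhere the two layers agree cancels to zero; only the star
# (and any residual left by the filter) survives.
```

Because the galaxy layer matches the background exactly, the result is a black frame containing just the stars, ready for the ‘Screen’ recombination later.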

The stars within the image

The ‘Galaxy’ image was opened and enhanced by use of the ‘Unsharp Mask’ filter with a large radius and small amount.  This filter applies ‘local contrast enhancement’ to the image and made the dust lanes stand out, with the radius and amount adjusted to give a suitable result.  The ‘Match Colour’ tool was then used to enhance the colour somewhat.
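An unsharp mask adds back a fraction of the difference between the image and a blurred copy, so a large radius with a small amount boosts large-scale contrast rather than sharpening fine detail.  A sketch using a simple box blur in place of Photoshop’s Gaussian blur (radius and amount values are illustrative only):

```python
import numpy as np

def blur(img, radius):
    """Separable box blur standing in for the Gaussian blur used
    inside the unsharp mask."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def local_contrast(img, radius=16, amount=0.3):
    """Unsharp mask: img + amount * (img - blurred).  A large radius
    and small amount exaggerate broad features such as dust lanes."""
    return img + amount * (img - blur(img, radius))

# Hypothetical frame: a step edge standing in for a dust-lane boundary.
img = np.zeros((64, 64))
img[:, 32:] = 0.5
out = local_contrast(img)
# Near the edge the dark side is pushed darker and the bright side
# brighter, which is the 'local contrast enhancement' effect.
```

The same formula explains why too large an amount produces visible halos around high-contrast boundaries.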

The enhanced image of the three galaxies

Finally, the ‘Stars’ image was copied and pasted over the enhanced galaxy image.  The two were then flattened using the ‘Screen’ blending mode to give the final result shown at the top of the essay.
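The ‘Screen’ blending mode used for this final recombination has a simple formula, result = 1 − (1 − a)(1 − b), which brightens without ever clipping values above 1.  In NumPy, with invented pixel values scaled 0–1:

```python
import numpy as np

def screen(a, b):
    """Photoshop 'Screen' blend: brightens like projecting two slides
    onto the same screen; stays within 0-1 for inputs in 0-1."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# Hypothetical pixels: galaxy background of 0.3, with a star of 0.6
# at the second position in the 'Stars' layer.
galaxy = np.array([0.3, 0.3])
stars = np.array([0.0, 0.6])
blended = screen(galaxy, stars)
# Where the stars layer is black the galaxy passes through unchanged;
# where a star is present the pixel is brightened, never clipped.
```

This is why the black background of the ‘Stars’ image leaves the enhanced galaxy untouched while the stars drop cleanly back in.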

This is, by no means, a brilliant image of M31, but I was actually quite pleased with it considering that a relatively old DSLR had been used to capture the images.

Perhaps the surprising result of this processing exercise was that DSS could not cope with .CR2 raw light frames without the addition of the dark frames.  I was not so surprised that an equally good result was obtained by simply using the Jpeg files without the use of any dark frames − there were, after all, 88 frames to stack.

One possible conclusion is that if one is using a DSLR and is not sure what temperature the sensor will have during the taking of the data, using a large number of short exposure Jpeg frames may well give the best results.

Appendix: increasing bit depth by averaging numbers of 8-bit Jpeg files.

Suppose an analogue to digital converter (there is one at the output of the sensor) has levels ending in 10, 11, 12 and 13 – so XXXX10, XXXX11, XXXX12, XXXX13 – and we have a signal whose actual value is XXXX11.5.  If there were no noise in the system, a single reading would always give the same one value, which might be XXXX11 or XXXX12.  Now let us add some noise into the signal, so the value measured by the converter might range both above and below the true value.  The converter will then give large numbers of the values XXXX11 and XXXX12 with, if the noise were Gaussian, lesser numbers of XXXX10 and XXXX13.  If one makes many readings, the numbers of XXXX11 and XXXX12 should be equal, with fewer but equal numbers of XXXX10 and XXXX13.  I trust that it is fairly obvious that the average of a large number of readings will be XXXX11.5.  Even by averaging a relatively small number of values one will gain one significant digit − equivalent to ~3 bits.  So it is not difficult for 8 bits to become 11 and, with a reasonable number of values to be averaged, that can easily become 14 bits − the same bit depth as many raw files.
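This dithering argument is easy to test numerically: quantise a noisy signal to integer levels and average many readings.  A sketch of the effect described above (the noise level and number of readings are arbitrary, not a model of any particular sensor):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 11.5   # lies exactly between ADC levels 11 and 12

# Add Gaussian read noise, then quantise to integer ADC levels,
# as the analogue to digital converter does.
readings = np.round(true_value + rng.normal(0.0, 1.0, size=10_000))

# A single noiseless reading could only ever be 11 or 12, but the
# average of many noisy, quantised readings recovers the half-level:
# the noise 'dithers' out the quantisation.
estimate = readings.mean()
```

The estimate comes out very close to 11.5, a value that no individual 8-bit-style reading could ever represent; that is the effective bit depth being increased by stacking.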