NOVEMBER 2021

Autoguiding or not – some thoughts

[This is just one of many articles in the author’s Astronomy Digest.]

The aim of autoguiding is to correct for tracking errors in the mount and so allow longer exposures to be taken. When astroimagers were using CCD cameras, their relatively high read noise meant that longer exposures gave a better result than a similar total exposure made up of a greater number of shorter exposures. There was a second problem: using a first generation USB connection, the upload times were quite long – 8 seconds for my CCD camera – and this meant that unless quite long exposures were used the actual time spent imaging would be reduced. So autoguiding was a very useful tool.

Things have changed. The latest CMOS cameras have a far lower read noise and, using a USB3 interface, upload images far faster, so using short exposures – say of 30 to 60 seconds – works pretty well and, depending on the sky glow (light pollution) levels, may even give a better result. [The ‘Brain’ in SharpCap can analyse an image of the sky background and suggest a suitable length of exposure. Looking north towards Manchester, it suggested an exposure of 10 seconds!]
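To make this trade-off concrete, here is a minimal Python sketch comparing the stacked signal to noise ratio of one hour's imaging split into long or short subframes. The target flux, sky glow and read noise figures are invented purely for illustration, not measurements:

    import math

    def stack_snr(total_time_s, sub_exposure_s, target_e_per_s,
                  sky_e_per_s, read_noise_e):
        """SNR of a stack of subframes totalling total_time_s.
        Shot noise from target and sky depends only on the total time;
        read noise is added once per subframe, which is what penalises
        splitting the hour into many short exposures."""
        n_subs = total_time_s / sub_exposure_s
        signal = target_e_per_s * total_time_s              # target electrons
        shot_variance = (target_e_per_s + sky_e_per_s) * total_time_s
        read_variance = n_subs * read_noise_e ** 2          # per-subframe penalty
        return signal / math.sqrt(shot_variance + read_variance)

    # One hour total; faint target 0.5 e-/s, suburban sky 5 e-/s (invented values).
    print(stack_snr(3600, 300, 0.5, 5.0, 10.0))   # CCD, 300 s subs  -> SNR ~12.4
    print(stack_snr(3600, 30, 0.5, 5.0, 1.5))     # CMOS, 30 s subs  -> SNR ~12.7

With 10 electrons of read noise, splitting the hour into 120 short subframes rather than 12 long ones would noticeably cut the SNR; with 1.5 electrons the read noise term is almost irrelevant, which is the point made above.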

So it may well be that, when using CMOS cameras, autoguiding will not be necessary, as most accurately aligned mounts can track sufficiently well to allow exposures of up to 60 seconds without introducing star trailing when used with telescopes of focal length less than 800 mm. In fact, in this case there can actually be some advantages to not using autoguiding. Unless the tracking and alignment of the mount is perfect (which it will not be unless one is using a very high performance mount, perhaps using absolute encoders) then during, say, a total of one hour's exposure the image will slowly drift across the camera sensor. This is a good thing.

If using an RGB sensor, it will help remove the effects of what Tony Hallas calls ‘Colour Mottling’ – slight variations in pixel sensitivity over regions of the sensor of order 20 pixels across, which would otherwise prevent the final image background being a uniform grey. If hot pixels are present, they will produce a trail across the image, but these will not integrate up in brightness as will the faint stars and so may well be lost in the noise background. [This is not to say that one should not take and use ‘Dark Frames’, as these will remove both hot pixels and ‘amp glow’.]

Because the image is being sampled with different combinations of the RGB colour matrix, the image quality can actually be improved when a one shot colour camera is being used and, as the movement across the sensor gives the effect of ‘dithering’, it can even be possible to improve the resolution of the image somewhat by using a ‘x2 drizzle’ mode when aligning and stacking the short exposure subframes. [This technique was used to enhance the HST images when the first wide field camera was undersampling the image.]

Finally, when short exposure subframes are used there is less chance of an individual exposure being ‘photobombed’ by a plane or satellite. [In Deep Sky Stacker, using the Kappa-Sigma stacking mode, satellite trails should be removed: if in one subframe a pixel is significantly different in brightness from the average value for that pixel in the resulting stack, its brightness is replaced with the average value – but this does require that a reasonable number of frames have been captured, so shorter exposures can help.]
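To illustrate the bracketed point, here is a minimal NumPy sketch of the general kappa-sigma idea – not Deep Sky Stacker's actual implementation – applied to a stack of already-aligned subframes, with invented toy data:

    import numpy as np

    def kappa_sigma_stack(frames, kappa=2.5, iterations=3):
        """Stack aligned subframes, rejecting outlier pixels.
        frames has shape (n_frames, height, width). At each iteration,
        pixels more than kappa standard deviations from that pixel's mean
        across the stack are masked out, so a satellite trail present in
        only one subframe is rejected."""
        data = np.ma.masked_invalid(np.asarray(frames, dtype=np.float64))
        for _ in range(iterations):
            mean = data.mean(axis=0)
            std = data.std(axis=0)
            data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
        # Rejected pixels simply drop out of each pixel's average, which has
        # the same effect as replacing them with the stack's mean value.
        return data.mean(axis=0).filled(np.nan)

    # Toy example: 20 flat frames, one of them 'photobombed' by a bright trail.
    rng = np.random.default_rng(0)
    frames = rng.normal(100, 5, size=(20, 64, 64))
    frames[7, 30, :] += 500                    # satellite trail in frame 7
    print(kappa_sigma_stack(frames)[30, 32])   # ~100, the trail is rejected

With only a handful of frames the per-pixel statistics are too noisy for the clipping to work reliably, hence the note above that a reasonable number of frames – and so shorter exposures – helps.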

Obviously the drift across the sensor must not be so much that star trailing becomes apparent. This does mean that polar alignment must be pretty accurate. iOptron with their iPolar camera and QHY with their PoleMaster camera can quickly reach a very good alignment, as can SharpCap using a guide scope and camera (or the iPolar and PoleMaster cameras). In fact, I find the SharpCap plate solving polar alignment software easier to use than the QHY software. Looking at a single captured frame one can easily see if star trailing is present, but one test is to add the first and last images of a sequence, each with 50% opacity. It will be obvious how far a star has shifted across the sensor and, by cropping the image to just contain the two stars, the movement in pixels can be judged. If this movement is divided by the total time of the imaging sequence in seconds, one finds the pixel movement per second. Multiplying this by the exposure time gives the movement per exposure and multiplying that by the image scale (in arc seconds per pixel) converts it into arc seconds. If this is less than ~2 arc seconds then there should be no problem.
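Expressed as a worked sketch (a hypothetical helper function, with the numbers invented purely for illustration):

    def drift_per_exposure_arcsec(shift_pixels, sequence_time_s,
                                  exposure_s, arcsec_per_pixel):
        """Drift accumulated during a single exposure, in arc seconds.
        shift_pixels is the star movement measured between the first and
        last frames; sequence_time_s is the elapsed time between them."""
        pixels_per_second = shift_pixels / sequence_time_s
        return pixels_per_second * exposure_s * arcsec_per_pixel

    # A star drifting 40 pixels over a 30 minute sequence, imaged at
    # 2 arcsec/pixel with 30 second exposures (invented numbers):
    print(drift_per_exposure_arcsec(40, 1800, 30, 2.0))  # 1.33 - under ~2, fine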

There is, however, one drawback to not autoguiding. The area of the image exposed by all of the subframes will be reduced, so the final image will need to be cropped somewhat – how much depending on how well the mount is tracking. [If the tracking is not that good, it may be necessary to realign the telescope at times during a long imaging session.]

As an example, when imaging M31 with a 90 mm telescope, I realigned the telescope on M31 after each 100 frames of the total of 317 frames. In fact my alignment was not that accurate and the shift across the frame was quite significant, but the movement in one 25 second exposure still corresponded to just 2 arc seconds.

To avoid the problems of colour mottling when autoguiding is employed, dithering is usually applied if the mount is being controlled by a computer using, perhaps, PHD2 guiding.

© Ian Morison