Are the colours in the NFO deep-sky images “real”?

I touched on the answer to this one earlier when I commented on the public’s misunderstanding of the use of Photoshop in image processing.  To recap very briefly: when digitally processing deep-sky images, nothing is “added” to the original data, with the possible exception of star spikes.  The skill of the digital processor lies in bringing out the faint detail that is present in the raw data without “blowing out” stars or other bright regions of the image, and in removing (note removing data rather than adding!) any light pollution or plane/satellite trails that might have been captured.  The deep-sky imager’s use of Photoshop is in stark contrast to the public’s understanding of its use in creating promotional images, where many liberties are often taken with the original data from the camera.
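To make that concrete, here is a minimal sketch (in Python, purely illustrative, and not the actual NFO workflow) of the kind of non-destructive stretch described above: an arcsinh stretch that lifts faint detail while compressing, rather than clipping, the bright stars.

```python
import numpy as np

def asinh_stretch(linear, softening=0.02):
    """Brighten faint detail in a linear frame without clipping bright stars.

    An arcsinh stretch compresses the high end smoothly, so nothing is
    'added' to the data: every output pixel is a monotonic function of
    the corresponding input pixel.
    """
    # Normalise to [0, 1] so the softening parameter is scale-independent.
    x = (linear - linear.min()) / (linear.max() - linear.min())
    return np.arcsinh(x / softening) / np.arcsinh(1.0 / softening)

# Stand-in for raw data: faint nebulosity is lifted well above the noise
# floor, while the brightest pixels reach (but never exceed) white.
frame = np.random.default_rng(0).random((4, 4)) ** 4
stretched = asinh_stretch(frame)
```

Because the mapping is monotonic, nothing is invented; the stretch only redistributes the brightness that was already recorded in the raw frame.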

The question of colour is considered slightly more controversial simply because (I believe) we are often not comparing like with like.  The short answer to “are the colours real?” is YES: the NFO images show objects in the colours representative of the wavelengths being emitted or reflected by the deep-sky object.  So the RED Hydrogen-alpha regions really are emitting red light at that specific wavelength, and the same goes for the other emission lines (SII, OIII, H-beta etc.): each line has a very well-defined wavelength, and therefore a very well-defined colour.
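To see just how definite those colours are, the standard laboratory wavelengths can be turned into approximate screen colours with a common piecewise wavelength-to-RGB fit. The sketch below is illustrative only; the fit is a crude approximation, but it makes the point that each line pins down a single colour.

```python
def wavelength_to_rgb(nm):
    """Approximate sRGB for a visible wavelength (a common piecewise fit).

    Crude, but enough to show each emission line has one definite colour.
    """
    if 380 <= nm < 440:  return ((440 - nm) / 60, 0.0, 1.0)
    if 440 <= nm < 490:  return (0.0, (nm - 440) / 50, 1.0)
    if 490 <= nm < 510:  return (0.0, 1.0, (510 - nm) / 20)
    if 510 <= nm < 580:  return ((nm - 510) / 70, 1.0, 0.0)
    if 580 <= nm < 645:  return (1.0, (645 - nm) / 65, 0.0)
    if 645 <= nm <= 780: return (1.0, 0.0, 0.0)
    return (0.0, 0.0, 0.0)  # outside the visible band

# Laboratory wavelengths of the main nebular emission lines:
lines = {"H-alpha": 656.3, "H-beta": 486.1, "OIII": 500.7, "SII": 671.6}
for name, nm in lines.items():
    print(f"{name:8s} {nm:6.1f} nm -> RGB {wavelength_to_rgb(nm)}")
```

Run it and H-alpha and SII both come out pure red, OIII a blue-green teal, and H-beta a blue-cyan, which is exactly how those regions appear in a “true colour” image.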

The controversy about “true colour” in deep-sky objects comes about through a basic misunderstanding of the physiology of the eye and the operation of the CCD.  We are all well aware that the human eye is poor at discerning colour under low-light conditions.  This is why, when we go out at night, everything drops into greyscale mode apart from bright objects (plenty of incoming photons), which retain their “true” colour identity: Sodium street lights (Sodium line emission) appear that characteristic “Sodium-yellow” colour to the human eye because there are lots of incoming photons to work with.  The CCD is like a very sensitive eye in that it doesn’t suffer from this loss of colour reception at low light levels, so one-shot colour CCD cameras accurately represent what the eye could see IF only it were sensitive to low-level photon fluxes (which it isn’t).  The fact that the eye is not sensitive to low photon fluxes, and therefore cannot see the “true” colour of deep-sky objects even through very large telescopes (which deliver many more photons to the retina than the naked eye can collect), is neither here nor there.  Would you consider a monochrome image of the Rosette Nebula to be a more accurate representation of the object because “that is what the eye sees”, or do you find the rich red hydrogen-alpha emission to be the more realistic representation?

I actually fail to understand how this “controversy” arises, because there is no actual controversy.  The problem, I think, lies at a fundamental level: the Physics (or rather the spectroscopy) involved is simply not understood or appreciated.  Where I do have a real problem is with the “false colour” images from telescopes like Hubble, which have neither the “true” colour representation of the object as defined by its emission wavelengths, nor any resemblance to what the eye would see if it were sensitive to low light intensities.
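For contrast, here is a sketch (again illustrative, assuming three registered and calibrated narrowband frames held as arrays) of how a false-colour composite of the Hubble-palette kind is assembled: the narrowband channels are simply re-mapped onto red, green and blue slots.

```python
import numpy as np

def hubble_palette(sii, ha, oiii):
    """Assemble an SHO ('Hubble palette') composite from narrowband frames.

    SII (672 nm) -> red, H-alpha (656 nm) -> green, OIII (501 nm) -> blue.
    Note the deliberate mismatch: H-alpha, which is physically red, is
    displayed as green, which is why such images are called 'false colour'.
    """
    return np.stack([sii, ha, oiii], axis=-1)

# Stand-ins for three narrowband frames of the same field:
rng = np.random.default_rng(1)
sii, ha, oiii = (rng.random((4, 4)) for _ in range(3))
rgb = hubble_palette(sii, ha, oiii)  # shape (4, 4, 3), channels remapped
```

A “true colour” narrowband composite would instead keep H-alpha and SII in the red channel and OIII split between green and blue, in line with their actual emission wavelengths.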
