I'm not quite sure why it was BBC website front page news (although I'm glad it was, and it's pretty cool, especially if you're astronomically inclined), but there's an interesting story about a couple of research groups producing high resolution images from ground based telescopes. The reason that this is cool is that normally from the ground the resolution of the images you can produce (i.e. the ability to distinguish between separate objects - like stars - or pick out detail) is limited by the turbulent motions of the atmosphere. Patches of air with different densities along the line of sight to the object will bend (refract) the light (like light passing through a glass prism) by different amounts, leading to the image at the telescope jittering around and varying in intensity (basically the same as the twinkling of stars when you look at them with your eye). To image faint objects you need fairly long camera exposure times; these variations in the atmosphere happen on far shorter timescales, so the jittering/brightness variations of the source will smear it out on the final image.

In general there's a rule which says that the bigger a telescope you have, the better resolution you can get, so an 8.2m telescope like one of those at the VLT should be able to tell two objects separated by ~0.02 arcseconds apart (i.e. two objects separated by about 30 metres on the Moon), but is in practice limited to a resolution of ~0.5 arcseconds by the atmosphere. That doesn't mean that bigger telescopes on the ground aren't better than smaller ones, as they still collect more light and can therefore see fainter objects. To get round the effects of the atmosphere the Hubble Space Telescope was built, which was able to achieve its full theoretical resolution of ~0.05 arcseconds with its 2.4m mirror. However, Hubble was very expensive and is hard to maintain - being in space and all - so people have been trying to think of ways to get around the atmospheric effects with ground based telescopes.
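If you want to see where those resolution numbers come from, here's a quick back-of-the-envelope check using the standard Rayleigh diffraction limit (θ ≈ 1.22 λ/D) and assuming visible light at ~500 nm - the wavelength and the print-out are my own illustrative additions, not figures from the article:

```python
import math

# Rough diffraction-limited resolution: theta ~ 1.22 * lambda / D (Rayleigh criterion).
# Assumes visible light at ~500 nm; values are illustrative.
wavelength = 500e-9                      # metres
rad_to_arcsec = 180.0 / math.pi * 3600   # radians -> arcseconds

for name, diameter in [("VLT unit telescope", 8.2), ("Hubble", 2.4)]:
    theta = 1.22 * wavelength / diameter          # radians
    print(f"{name} ({diameter} m): {theta * rad_to_arcsec:.3f} arcsec")

# VLT unit telescope (8.2 m): 0.015 arcsec   (~0.02, as quoted above)
# Hubble (2.4 m): 0.052 arcsec               (~0.05)
```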
The article above talks of two ways of doing this, which have both separately been around for a few years (the adaptive optics idea for longer), but which seem to have finally been used together. The first idea is adaptive optics, which basically monitors the effect of the atmospheric distortions on the image and then corrects for them by applying an opposite distortion to one of the secondary mirrors in the telescope, thereby cancelling out the atmospheric effects. The monitoring and corrections have to be performed on a millisecond timescale. The second idea, which is now also making use of very efficient CCD cameras, is Lucky Imaging. This basically consists of taking lots and lots of photos of the object with short exposure times (hence the need for the very efficient cameras, so as to catch as much light as possible in a short time). Some of these lucky images will have been taken when the atmospheric distortions were small, so you keep those and throw away the bad ones. You can then stack up the lucky images to build up a stronger image. It's actually rather simple!
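To give a flavour of how simple the select-and-stack idea is, here's a minimal sketch of it in Python - this isn't the groups' actual pipeline, just a toy version assuming the short exposures are already reduced 2D numpy arrays and using the brightest pixel as a crude "was the seeing good?" score (the function name, the 10% cut and the peak-alignment trick are all my own illustrative choices):

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Toy lucky-imaging sketch: score each short exposure by its peak
    brightness (sharper frames concentrate a star's light into fewer,
    brighter pixels), keep the best fraction, roughly align them on
    that peak and average the survivors into one deeper, sharper image.

    `frames` is assumed to be a sequence of 2D numpy arrays that have
    already been dark-subtracted and flat-fielded; values are illustrative.
    """
    frames = np.asarray(frames, dtype=float)
    scores = frames.max(axis=(1, 2))              # crude sharpness proxy per frame
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = frames[np.argsort(scores)[-n_keep:]]   # the "lucky" frames only

    # Shift each kept frame so its brightest pixel sits at the image centre,
    # then average the aligned stack.
    h, w = best.shape[1:]
    stacked = np.zeros((h, w))
    for frame in best:
        py, px = np.unravel_index(np.argmax(frame), frame.shape)
        stacked += np.roll(frame, (h // 2 - py, w // 2 - px), axis=(0, 1))
    return stacked / n_keep
```

The only real judgement call is the sharpness score and how many frames to keep: keep too many and you smear the image back out, keep too few and you throw away light.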
Anyway, that was a rather unexpected astronomy post, and the main reason I started it was so that I could show this cool movie of the Crab pulsar (which I do research on) taken using Lucky Imaging - you can see the flashing star with a bright pulse and then a fainter interpulse as radiation beamed from the star's poles intercepts the Earth once per rotation - the movie is slowed down from the actual rotation rate of 30 times per second.