
Section 3

CCD Imaging Overview

This section is intended only as a brief overview of CCDs and CCD imaging. If you are new to CCD imaging, there are a number of excellent books that you can use to gain a deeper understanding of the issues and techniques. Two well-regarded books that we recommend are:

• The New CCD Astronomy by Ron Wodaski

• The Handbook of Astronomical Image Processing by Richard Berry and Jim Burnell

How CCDs work

Charge Coupled Devices (CCDs) work by converting photons into electrons, which are then stored in individual pixels. A CCD is organized as a two-dimensional array of pixels. The CCDs used in the QSI 500 Series cameras at the time of printing range from roughly 400,000 pixels (768W x 512H) to 8.3 million pixels (3326W x 2504H).

Each pixel can hold some maximum number of electrons. CCDs currently used in the QSI 500 Series can hold from 25,500 to as many as 100,000 electrons depending on the specific model of CCD. While integrating (exposing) an image, photons strike individual pixels and are converted to electrons and stored in each pixel well. The effectiveness of this process is referred to as Quantum Efficiency (QE). The number of electrons stored in each pixel “well” is proportional to the number of photons that struck that pixel. This linear response is one of the key traits that make CCDs exceptionally well suited to astronomical imaging. A subject that is twice as bright will build up twice as many electrons in the CCD.

After an exposure is complete, the electrons in each pixel are shifted out of the CCD and converted to a number, indicating how dark or light each particular pixel was. Those brightness values for each pixel are then stored in the image file, typically a FITS file for astronomical imaging.
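To make the arithmetic concrete, here is a minimal sketch of that photon-to-pixel-value pipeline. The QE, full-well and gain numbers below are illustrative assumptions, not the specifications of any particular QSI model:

```python
import numpy as np

# Assumed example values -- not from any specific QSI 500 Series model
QE = 0.55            # quantum efficiency: fraction of photons converted to electrons
FULL_WELL = 100_000  # maximum electrons a pixel well can hold
GAIN = 1.5           # electrons per ADU (analog-to-digital unit)

def expose(photons):
    """Convert incident photons per pixel to the 16-bit value stored in the file."""
    electrons = np.minimum(photons * QE, FULL_WELL)   # linear response up to full well
    adu = electrons / GAIN                            # readout converts electrons to ADUs
    return np.clip(adu, 0, 65535).astype(np.uint16)

# Linearity: below saturation, a subject twice as bright yields roughly
# twice the pixel value.
print(expose(np.array([10_000, 20_000])))
```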

Types of CCDs

CCDs are available in a variety of designs and technologies. QSI 500 Series cameras currently employ two different types of CCDs, Full Frame and Interline Transfer, with numerous optional features.

Full-Frame CCDs

Full-Frame CCDs generally provide the highest sensitivity and the widest linear response range of these two types of CCDs. These characteristics make full-frame CCDs ideally suited to astronomical imaging. Full-frame CCDs must employ a mechanical shutter to prevent light from falling on the CCD surface while the image is being shifted out of the CCD.

Interline Transfer CCDs

Interline transfer CCDs work somewhat differently. In an interline transfer CCD, next to every column of pixels is a specialized storage column covered by a mask that prevents light from hitting the storage 'pixels' underneath. When an exposure is complete, the entire image is shifted in a single operation into these masked storage columns. The charge, now under the mask, stops accumulating and is shifted out of the CCD in the same fashion as in a full-frame CCD. Interline transfer CCDs give up some sensitivity because a sizable portion of the potential light-gathering surface of the CCD is occupied by the masked storage columns. The key benefit of interline transfer CCDs is that shifting the image into the masked storage columns acts like a very precise electronic shutter, allowing short, accurate exposures.

Anti-Blooming CCDs

CCDs are subject to an electronic artifact called “blooming” that produces bright vertical streaks extending from bright objects.

The 60-second image above shows a portion of M42, the great nebula in Orion. The stars that make up the center of the nebula are much brighter than the surrounding nebula. Taking an exposure long enough to show detail in the nebula causes the bright stars to bloom. Note that some of the other brighter stars around the image also show varying amounts of blooming.

Blooming occurs when taking images of bright objects because when a pixel reaches its full well capacity, say 100,000 electrons, the electrons literally overflow into adjoining pixels eventually causing them to fill and overflow as well. In a severely bloomed image, the bright blooming trail can lead all the way to the edge of the image. Data under a “bloom” is lost although there are a variety of processing techniques that can be used to hide pixel blooms in a final processed image.
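As an illustration of the mechanism only, here is a deliberately crude one-dimensional model of charge overflowing along a single CCD column. Real charge transport is more complex, and the full-well value is an assumption:

```python
import numpy as np

FULL_WELL = 100_000  # assumed full-well capacity in electrons

def bloom_column(column):
    """Crude model: excess charge in a saturated pixel spills into its
    vertical neighbors until every pixel is at or below full well."""
    col = column.astype(float).copy()
    while (col > FULL_WELL).any():
        for i in np.where(col > FULL_WELL)[0]:
            excess = col[i] - FULL_WELL
            col[i] = FULL_WELL
            # split the overflow between the pixels above and below
            if i > 0:
                col[i - 1] += excess / 2
            if i < len(col) - 1:
                col[i + 1] += excess / 2
    return col

# A very bright star centered on one pixel floods its neighbors, producing
# the characteristic vertical streak as charge spreads along the column.
print(bloom_column(np.array([100, 100, 300_000, 100, 100])))
```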

Anti-blooming is a feature available on many full-frame and most interline transfer CCDs. Anti-blooming technology limits the number of electrons that can accumulate in a pixel by draining off excess electrons before they exceed the capacity of the pixel. This can increase the dynamic range of the CCD by as much as 300 times or more. This increase in dynamic range greatly reduces the difficulty of imaging bright objects.

Anti-blooming CCDs make astrophotography more convenient, but with tradeoffs in quantum efficiency (QE) and linearity. Anti-blooming protection requires additional circuitry on the surface of the CCD, reducing the physical size and consequently the light gathering area of each pixel. Anti-blooming CCDs also have a non-linear response to light. This nonlinearity becomes significant as a pixel fills beyond 50%. The closer a pixel gets to full-well capacity, the greater the rate of electron drainage in order to prevent blooming. This generally isn’t a problem if your goal is producing great-looking pictures of the night sky, but anti-blooming CCDs are generally not appropriate for photometric and other scientific use where accurately recording the relative brightness of objects is important.

Microlenses

CCDs only record the light that hits the photosensitive portion of the CCD. Most CCDs are “front illuminated”, meaning that the light strikes the top surface of the integrated circuit forming the CCD. A portion of the surface of the CCD is covered with the electronic circuits that make a CCD work. Light striking a part of the CCD covered by a circuit will not get recorded by the CCD.

The surface of some CCDs is covered with microlenses, which redirect light that would otherwise fall on the circuits onto the photosensitive area of each pixel.

The amount of the CCD surface covered in circuits is one factor in determining the quantum efficiency (QE) of the CCD. QE is a measure of how efficiently the CCD converts photons striking the CCD into electrons stored in any given pixel. QE varies by type of CCD and by the wavelength of light. Adding microlenses to a front-illuminated CCD will raise the quantum efficiency of the CCD. Typical peak QE values for the CCDs used in QSI 500 Series cameras range from 35% to over 80%. Microlens models tend to have the highest QE, while anti-blooming gate models tend to have the lowest QE. Here is a graph showing the QE of the CCDs available in QSI 500 Series cameras at the time of printing.

Note that the non-anti-blooming, full-frame KAF-3200 and KAF-1603 have the highest QE, peaking toward the red end of the spectrum around 650nm. The anti-blooming, interline transfer KAI-2020 and KAI-04022 have the lowest QE, peaking toward the blue end of the visible spectrum around 450nm.

Single-shot color CCDs

CCDs are inherently monochrome devices with varying response to different frequencies of light. That varying response can be seen in the quantum efficiency graph above. Color images are normally produced with CCD cameras by taking three (or more) images through red, green and blue filters. The resulting images are then combined using computer image processing programs into a final color image.

Single-shot color CCDs, like those found in almost all general use digital cameras, are made by placing red, green and blue filters over adjacent pixels in the CCD. The image processing program then has to separate the three different color images and recombine them into a single color image.

Single-shot color CCDs use a “Bayer filter” with alternating red, green and blue filters covering adjacent pixels in a checkerboard pattern as shown in the image to the right. 50% of the pixels are covered in a green filter, 25% are covered in a blue filter and 25% are covered in a red filter. This arrangement is used because the human eye is most sensitive to green light. The green pixels correspond to luminance and record the greatest detail, while the red and blue pixels record chrominance.

(Bayer filter illustration courtesy of Wikipedia)

After the raw image is read from the CCD, a demosaicing algorithm must be applied to the image to produce a complete set of red, green and blue images by interpolating the missing pixel values. This is exactly what normal digital cameras do, but it’s all hidden inside the camera’s electronics. You only see the final processed image. With a CCD camera, the raw image is read into the camera control program and then processed on your computer. This has the advantage that you can directly manipulate the raw image to, for instance, vary the color balance.
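To show what demosaicing involves, here is a minimal bilinear demosaic assuming an RGGB Bayer layout. The layout is an assumption (real sensors differ in where the pattern starts), and camera software uses considerably more sophisticated algorithms:

```python
import numpy as np

def neighborhood_mean(values, mask):
    """Mean of the measured (masked) pixels in each 3x3 neighborhood."""
    h, w = values.shape
    v = np.pad(np.where(mask, values, 0.0), 1)
    m = np.pad(mask.astype(float), 1)
    total = np.zeros((h, w))
    count = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            total += v[dy:dy + h, dx:dx + w]
            count += m[dy:dy + h, dx:dx + w]
    return total / np.maximum(count, 1)

def demosaic_rggb(raw):
    """Bilinear demosaic assuming the pattern starts with red at (0, 0)."""
    h, w = raw.shape
    red = np.zeros((h, w), bool);   red[0::2, 0::2] = True
    green = np.zeros((h, w), bool); green[0::2, 1::2] = True; green[1::2, 0::2] = True
    blue = np.zeros((h, w), bool);  blue[1::2, 1::2] = True
    rgb = np.zeros((h, w, 3))
    for channel, mask in enumerate((red, green, blue)):
        interpolated = neighborhood_mean(raw.astype(float), mask)
        # keep measured values where we have them, interpolate the rest
        rgb[..., channel] = np.where(mask, raw, interpolated)
    return rgb
```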

Single-shot color models offer the easiest way to take color images of the night sky. The tradeoff is reduced QE and detail because of the demosaicing and pixel interpolation.

Signal versus noise

For an astronomer, “signal” is the photons coming from the stars in the night sky. In an ideal world, there would be a steady stream of photons from every bright object, and every photon striking a pixel would be converted into exactly one electron in the CCD. The number of electrons would then be precisely counted and converted to a number telling the photographer exactly how much light struck each pixel. Unfortunately, the process of converting light to pixel values in a CCD image is governed by some fundamental physical laws and other factors that introduce “noise” into an image. Noise is unwanted variation in pixel values that makes the image a less than exact representation of the original scene.

Noise in CCD images can manifest itself in multiple ways, including “graininess” in darker background areas, “hot” pixels, faint horizontal or vertical lines that become visible in low signal areas of the image, blotchy gradients between darker and lighter regions in a nebula, a gradient from dark to light from one corner or side of an image to the other, and especially as low-contrast images, the result of a reduced signal-to-noise ratio. Achieving high dynamic range, low noise images from a cooled CCD camera requires a basic understanding of how CCDs work and the different sources of noise that can reduce the quality of your images.
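The standard per-pixel CCD signal-to-noise estimate makes these noise sources concrete: shot noise from the object and sky, dark-current noise and read noise add in quadrature. The rates below are purely illustrative, not taken from a QSI specification sheet:

```python
import math

def ccd_snr(object_rate, sky_rate, dark_rate, read_noise, exposure_s):
    """Classic per-pixel CCD signal-to-noise estimate.
    Rates are in electrons/second; read_noise is in electrons RMS."""
    signal = object_rate * exposure_s
    noise = math.sqrt(signal                      # shot noise on the object itself
                      + sky_rate * exposure_s     # shot noise on the sky background
                      + dark_rate * exposure_s    # dark-current noise
                      + read_noise ** 2)          # read noise, added once per read
    return signal / noise

# Illustrative numbers only: a faint object in a 300-second exposure
print(ccd_snr(object_rate=5.0, sky_rate=2.0, dark_rate=0.1, read_noise=8.0, exposure_s=300))
```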


Reducing noise in CCD images

CCD imagers have developed a standard set of calibration techniques to reduce or eliminate different types of noise from CCD images. Calibrating CCD images requires taking some special kinds of exposures that are then applied to the “light frames” taken of the night sky. The calibration frames are called Dark Frames, Flat Fields and Bias Frames.

MaxIm LE and other CCD camera control software help gather these extra frames. After the frames are gathered, MaxIm allows you to calibrate your images either automatically or manually.

All the calibration frames should be collected during each imaging session with the CCD at the same temperature used for the light frames. This will ensure the best possible calibration of the final images. Many CCD imagers plan their night of observing to begin taking the calibration frames as dawn approaches. That way, you don’t waste precious dark time.

The image above is a single raw 6-minute image of the diffuse nebula M78 in Orion. Some bright stars are clearly visible along with some nebulosity but there are also scattered bright spots around the image caused by “hot” pixels.

Dark Frames

Dark frames are used to subtract the build-up of dark current from a CCD image. Dark current is caused by heat. Just as CCDs convert the energy from a photon into a stored electron, they also convert the energy from heat into stored electrons. CCDs build up “dark current” whether the CCD is being exposed to light or not. The rate at which dark current builds up depends on the temperature of the CCD and can be dramatically reduced by cooling the CCD: dark current builds up more slowly as the temperature of the CCD is reduced.

Most pixels on a CCD build up dark current at a constant rate but that rate will vary slightly from pixel to pixel. A subset of the pixels in a CCD will build up dark current at a dramatically different rate from the average. These pixels are called “hot pixels” or “dark pixels”. Hot pixels and dark pixels are both the result of slight imperfections introduced into the silicon substrate of the CCD during the manufacturing process. Hot pixels are very easy to see in a raw CCD image as a series of bright dots placed randomly around the image.


6-minute Dark Frame

Above is a 6-minute dark frame taken during the same imaging session as the image of M78 above. Notice the brighter pixels scattered randomly around the image.

Note: The pixel values in this image have been stretched significantly to show the variations in the dark frame. In reality this image is almost completely black with perhaps a few hundred “hot” pixels. This is completely normal and a natural consequence of how CCDs are manufactured.

MaxIm LE automatically scales the visible range of pixels to match the underlying data. In the dark frame shown above the average pixel value is just 203 out of a possible 16-bit dynamic range of 0-65,535. Seeing an automatically scaled dark frame or bias frame can be a bit disconcerting for a new imager. Fear not, this “noise” will be almost completely eliminated by subtracting a dark frame from your images.

Dark frames are subtracted from a light frame to remove the dark current from the image. This subtraction removes the slight differences in dark current build-up from pixel to pixel, along with the larger variations caused by hot or dark pixels.

In general, you’ll want to take at least 5 dark frames at each exposure length used for your light frames. If all your light frames were taken with 5-minute exposures, you’ll need to collect a set of 5-minute dark frames. If you took both 5-minute and 10-minute light frames, you’ll need a set of 5-minute dark frames and a set of 10-minute dark frames. There is a way to reduce the number of dark frames you collect by using a set of bias frames but, in general, you’ll achieve the best results taking dark frames with the same exposure as your light frames.
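A minimal sketch of building and applying a master dark, assuming your frames are already loaded as numpy arrays (the commented file names are hypothetical; FITS files are typically loaded with astropy.io.fits):

```python
import numpy as np

def master_dark(dark_frames):
    """Average a set of same-exposure dark frames. Averaging several darks
    keeps the subtraction from adding much random noise of its own."""
    return np.mean(np.stack(dark_frames), axis=0)

def dark_subtract(light, master):
    """Subtract the master dark in floating point, clipping at zero so
    pixels that read below the dark level don't wrap around."""
    return np.clip(light.astype(float) - master, 0, None)

# darks = [fits.getdata(f"dark_300s_{i}.fit") for i in range(5)]  # hypothetical names
# clean = dark_subtract(light_frame, master_dark(darks))
```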


Original image

Original image minus dark frame

Look at the two images above. The top image is the original image as it came out of the camera. The bottom image has had the average of 5 dark frames subtracted from it. Note that the bright pixels have been virtually eliminated leaving a smooth black sky background.

Flat Fields

Flat fields are used to correct for any irregularities in your optical system, such as vignetting or dust motes, and to adjust for any pixel non-uniformity inherent in the CCD. Pixels in a CCD all respond slightly differently to light, typically within 1% to 2% across a CCD.

All optical systems have a “signature” which gets recorded on the CCD. This unique signature is caused by how light travels through the telescope illuminating the CCD and how each pixel responds to that illumination.


The image above has been manipulated to highlight the effect of dust motes on a filter or CCD cover glass. Note the 3 darker circles. Because dust will tend to stay in one place over a night of imaging, the variation in pixel values caused by the dust can be easily eliminated by properly applying a Flat Field.

A flat field is created by taking an image of an evenly illuminated subject. There are four common ways to create flat fields.

• Lightbox flats – Using a lightbox is usually the easiest way to create good flat fields. There are a few commercial lightbox solutions, but many astronomers make their own. You can find plans in The New CCD Astronomy and The Handbook of Astronomical Image Processing as well as online. Search for “Telescope light box”.

• Twilight flats – There is a brief time after the sun sets or just before it rises when the sky is appropriate for creating flat frames. Too early and the sky is too bright. Too late and stars will begin to show up in the image.

• Dome flats – If your telescope is in an observatory, you can take dome flats. A dome flat is created by aiming your telescope at a white card placed somewhere on the inside of the dome.

• Sky flats – Taking sky flats requires taking dozens or hundreds of images with the telescope pointed at the sky with tracking turned off. All the images are combined into a master flat to remove the effect of any stars moving through the field. Sky flats require more time than the other three options, so few amateurs take sky flats. We recommend reading the section on Sky Flats in The Handbook of Astronomical Image Processing for additional details on this technique.

Good flat fields require an exposure time such that the pixel wells are filled to approximately half their full capacity. With a QSI 500 Series camera you should strive to achieve average pixel values between 20,000 and 30,000 out of a total of roughly 65,000. You should experiment with exposure times to yield that result. Pixel values are commonly called “ADUs”, short for Analog-to-Digital Units.

You’ll need to take enough flat fields to average out the noise and then take a series of dark frames (called flat-darks) using the same exposure you used for your flat fields. Just as with light frames, the flat-darks are subtracted from the flat fields to remove any contribution from dark current. Taking 16 flat fields and 16 flat-darks will yield excellent results. Luckily, because flat fields tend to use fairly short exposures, you can often take a full series of flat fields and flat-darks in just a few minutes.

The resulting master Flat Field is used to scale the pixel values in the light frame, eliminating the effects of pixel non-uniformity, optical vignetting and dust on the optical surfaces.
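A sketch of that calibration under the usual convention: the combined flat is flat-dark subtracted, normalized to a mean of 1.0 and then divided into the dark-subtracted light frame. All names are illustrative:

```python
import numpy as np

def master_flat(flats, flat_darks):
    """Combine flat fields: subtract the matching flat-dark, then
    normalize the result to a mean of 1.0."""
    flat_dark = np.mean(np.stack(flat_darks), axis=0)
    combined = np.mean(np.stack(flats), axis=0) - flat_dark
    return combined / combined.mean()

def calibrate(light, master_dark, flat):
    """Subtract the dark, then divide by the normalized flat. Division
    brightens pixels the optics under-illuminated (vignetting, dust motes)
    and dims over-responsive pixels."""
    return (light.astype(float) - master_dark) / flat
```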

Bias Frames

A Bias Frame is a zero-length (dark) exposure intended to measure the fixed offset differences between pixels, plus any additional noise added while reading the image from the CCD and converting it into a digital image file. Because the CCD pixels are emptied immediately before the image is read from the CCD, only a small amount of dark current has had a chance to build up, but that rate of accumulation varies slightly for every pixel. Also, reading an image from a CCD is not instantaneous. Pixels near the bottom of the CCD are read later than pixels closer to the top, so pixels toward the bottom tend to have slightly higher values.

One common use of bias frames is for scaling dark frames. By subtracting a bias frame from a dark frame, you end up with a “thermal frame.” A thermal frame contains pixel values showing just the effect of dark current. Because dark current in any given pixel accumulates at a constant rate, a thermal frame allows you to predict with reasonable accuracy how much dark current there would be for different length exposures. However, given the opportunity, you’re generally better off taking dark frames that match the exposure times of your light frames.
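A sketch of that scaling, assuming dark current grows linearly with exposure time (the exposure lengths in the comment are examples):

```python
import numpy as np

def scale_dark(dark, bias, dark_exposure_s, target_exposure_s):
    """Predict a dark frame for a different exposure length.
    dark - bias isolates the thermal (dark current) signal, which scales
    linearly with time; the bias offset itself does not scale."""
    thermal = dark.astype(float) - bias
    return bias + thermal * (target_exposure_s / dark_exposure_s)

# e.g. predict a 10-minute dark from a 5-minute dark and a master bias:
# dark_600 = scale_dark(dark_300, master_bias, 300, 600)
```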

Here is an example bias frame. Note again that this image has been automatically stretched to show variations in the pixel values. MaxIm LE does this automatically when you view an image file. All the pixel values in the original image fall between 181 and 221 out of a possible range of 0-65,535, meaning that the unstretched image would appear almost perfectly and uniformly black.

Bias frames can also be used to analyze the read noise in a CCD camera. You can learn more about that process on the QSI web site at http://www.qsimaging.com/ccd_noise.html.
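One common recipe, as a sketch: difference two bias frames so the fixed pattern cancels, leaving only random read noise, then divide the standard deviation by the square root of 2 because the two frames' noise contributions add in quadrature. The result is in ADUs; converting to electrons requires the camera gain, which you would need to know or measure:

```python
import numpy as np

def read_noise_adu(bias1, bias2):
    """Estimate camera read noise from a pair of bias frames.
    The fixed offset pattern cancels in the difference; the random read
    noise of the two frames adds in quadrature, hence the sqrt(2)."""
    diff = bias1.astype(float) - bias2.astype(float)
    return np.std(diff) / np.sqrt(2)

# read_noise_electrons = read_noise_adu(b1, b2) * gain_e_per_adu  # gain assumed known
```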

Taking bias frames is easy and takes only a couple of minutes. When you’re taking your dark frames and flat fields, also take a series of at least 16 bias frames. That completes your full set of calibration frames.


Stacking Images

After calibrating each of your raw images with dark frames, flat fields and bias frames, combining or “stacking” multiple sub-exposures can further reduce the noise in your images. Stacking multiple images with a pixel-by-pixel average or median combine increases the signal-to-noise ratio (SNR) of the combined image. Random variations in pixel values tend to cancel each other out when multiple images are combined, producing a smoother background, while the non-random pixels, the bright objects in the night sky you’re trying to photograph, reinforce each other, getting you closer to a true representation of the patch of sky you’re imaging. For random noise, averaging N frames improves the SNR by roughly the square root of N.
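Both combine methods are one line each with numpy, assuming a list of calibrated, aligned frames:

```python
import numpy as np

def average_combine(frames):
    """Pixel-by-pixel mean: the best noise reduction when all frames are clean."""
    return np.mean(np.stack(frames), axis=0)

def median_combine(frames):
    """Pixel-by-pixel median: rejects values that are extreme in only a few
    frames, such as cosmic-ray hits or satellite trails."""
    return np.median(np.stack(frames), axis=0)

# stacked = average_combine(calibrated_frames)  # e.g. the 9 M78 frames below
```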

The benefits of stacking images can be clearly seen by comparing an individual frame to a pixel-by-pixel average of multiple frames.

Individual dark-subtracted image of M78

Average combine of 9 dark-subtracted images of M78

Averaging 9 separate images increases the signal-to-noise ratio of the final image, allowing the faint nebulosity in M78 to become visible and smoothing the black sky background. Adding more frames would further improve the results, although with diminishing returns: because SNR grows roughly with the square root of the number of frames, combining 18 frames will not yield a final image twice as good as combining 9 frames.

Also note that in some cases, doing a “median combine” rather than an “average combine” may yield better results. A median combine is recommended if several of the individual frames have unique anomalies such as bright pixels caused by cosmic rays, satellites, airplanes, etc. With at least 5 images, a median combine completely eliminates extreme pixel values that occur in individual frames.

Color images

Unless you’re using a single-shot color camera such as the QSI 583c, producing color images requires taking separate exposures through different colored filters and then electronically combining the separate color channels. The most common method used by amateur astronomers for color imaging is called LRGB, where separate color images are taken through red, green and blue filters and combined with a set of “luminance” images taken through a luminance filter. The luminance filter is required because CCDs are generally responsive to frequencies of light that can’t be seen by the human eye. The luminance filter blocks the infrared (IR) and ultraviolet (UV) frequencies that fall outside the range of human vision.

The luminance filter transmits most of the visible light coming from the object. Because the individual frames taken through the red, green and blue filters each block roughly ⅔ of the total visible light, the luminance image will often reveal subtle details not apparent in the individual color frames. This actually works out quite well since the human eye is much more sensitive to changes in brightness than to changes in color. Combining a color-balanced RGB image with a luminance image yields an LRGB image that the human eye perceives as being very close to the true colors of the object, with more fine detail than is present in the RGB image on its own.
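One simple way to express the LRGB idea in code is to rescale the RGB masters so their brightness comes from the luminance master while the color ratios are preserved. This is just one of several LRGB methods (many imagers do the equivalent in Photoshop's Lab color mode), and every name below is illustrative:

```python
import numpy as np

def lrgb_combine(lum, red, green, blue, eps=1e-6):
    """Take brightness from the luminance master, color from the RGB masters.
    All inputs are aligned 2-D float arrays on a common scale."""
    rgb = np.stack([red, green, blue], axis=-1)
    brightness = rgb.mean(axis=-1, keepdims=True)   # crude luminance of the RGB image
    scale = lum[..., np.newaxis] / np.maximum(brightness, eps)
    return rgb * scale                              # fine detail now comes from lum
```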

MaxIm LE can be used with your QSI 500 Series camera to collect and catalog the various filtered images that you’ll need to create LRGB images. As with any images, you’ll want to collect multiple frames through each filter and then calibrate and combine them in order to reduce the major sources of noise. After calibration, you’ll have a master luminance image plus master red, green and blue images. Those master frames are combined into your final image. In addition to the color image tools in MaxIm, you can do further processing of your images in Adobe Photoshop or similar photo manipulation programs to yield impressive final results.

