Why CCDs are used

Obtaining the best images within the constraints imposed by a particular specimen or experiment typically requires a compromise among the criteria listed, which often exert contradictory demands. For example, capturing time-lapse sequences of live fluorescently-labeled specimens may require reducing the total exposure time to minimize photobleaching and phototoxicity.

Several methods can be utilized to accomplish this, although each involves a degradation of some aspect of imaging performance. If the specimen is exposed less frequently, temporal resolution is reduced; applying pixel binning to allow shorter exposures reduces spatial resolution; and increasing electronic gain compromises dynamic range and signal-to-noise ratio. Different situations often require completely different imaging rationales for optimum results.

In contrast to the previous example, in order to maximize dynamic range in a single image of a specimen that requires a short exposure time, the application of binning or a gain increase may accomplish the goal without significant negative effects on the image.

Performing efficient digital imaging requires the microscopist to be completely familiar with the crucial image quality criteria, and the practical aspects of balancing camera acquisition parameters to maximize the most significant factors in a particular situation. A small number of CCD performance factors and camera operational parameters dominate the major aspects of digital image quality in microscopy, and their effects overlap to a great extent. Factors that are most significant in the context of practical CCD camera use, and discussed further in the following sections, include detector noise sources and signal-to-noise ratio, frame rate and temporal resolution, pixel size and spatial resolution, spectral range and quantum efficiency, and dynamic range.

Camera sensitivity, in terms of the minimum detectable signal, is determined by both the photon statistical shot noise and electronic noise arising in the CCD. A conservative estimation is that a signal can only be discriminated from accompanying noise if it exceeds the noise by a factor of approximately 2.

The minimum signal that can theoretically yield a given SNR value is determined by random variations of the photon flux, an inherent noise source associated with the signal, even with an ideal noiseless detector.

This photon statistical noise is equal to the square root of the number of signal photons, and since it cannot be eliminated, it determines the maximum achievable SNR for a noise-free detector. If an SNR value of 2 is taken as the minimum for reliable detection, shot noise alone dictates a minimum signal of 4 photons, since the shot-noise-limited SNR equals the square root of the photon count. In practice, other noise components, which are not associated with the specimen photon signal, are contributed by the CCD and camera system electronics, and add to the inherent photon statistical noise. Once accumulated in the collection wells, charge arising from noise sources cannot be distinguished from photon-derived signal.
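The shot-noise relationship described above can be sketched in a few lines; the photon counts are illustrative:

```python
import math

def min_signal_for_snr(snr):
    """Minimum photon count N such that the shot-noise-limited
    SNR = N / sqrt(N) = sqrt(N) reaches the requested value."""
    return snr ** 2

def shot_limited_snr(photons):
    """Best achievable SNR for a perfect, noiseless detector."""
    return photons / math.sqrt(photons)

# An SNR of 2 requires at least 4 photons, even with an ideal detector.
print(min_signal_for_snr(2))   # 4
print(shot_limited_snr(100))   # 100 photons -> SNR of 10
```

This is why even a hypothetical noiseless camera cannot detect arbitrarily faint signals: the photon flux itself sets the floor.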

Most of the system noise results from readout amplifier noise and thermal electron generation in the silicon of the detector chip. The thermal noise is attributable to kinetic vibrations of silicon atoms in the CCD substrate that liberate electrons or holes even when the device is in total darkness, and which subsequently accumulate in the potential wells.

For this reason, the noise is referred to as dark noise, and represents the uncertainty in the magnitude of dark charge accumulation during a specified time interval. The rate of dark charge generation, termed dark current, is unrelated to photon-induced signal but is highly temperature dependent.

In similarity to photon noise, dark noise follows a statistical square-root relationship to dark current, and therefore it cannot simply be subtracted from the signal. Cooling the CCD dramatically reduces dark charge accumulation (roughly an order of magnitude for every 20 degrees Celsius of cooling), and high-performance cameras are usually cooled during use. Cooling even to 0 degrees Celsius is highly advantageous, and with further cooling, dark noise is reduced to a negligible value for nearly any microscopy application.
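The temperature dependence can be modeled with a simple sketch. The commonly quoted rule of thumb that dark current roughly halves for every ~6 degrees Celsius of cooling is assumed here; the exact factor is device specific, and the reference rate is invented for illustration:

```python
import math

# Assumed rule of thumb: dark current halves per ~6 C of cooling.
DOUBLING_INTERVAL_C = 6.0

def dark_current(rate_at_ref, ref_temp_c, temp_c):
    """Dark current (electrons/pixel/s) at temp_c, scaled from a
    reference measurement using the assumed doubling interval."""
    return rate_at_ref * 2.0 ** ((temp_c - ref_temp_c) / DOUBLING_INTERVAL_C)

def dark_noise(rate, exposure_s):
    """Dark noise is the square root of the accumulated dark charge."""
    return math.sqrt(rate * exposure_s)

# Illustrative: 300 e-/pixel/s at 20 C falls below 1 e-/pixel/s near -30 C.
print(dark_current(300, 20, -30))
```

Because the dependence is exponential, modest thermoelectric cooling already removes most of the dark-noise contribution.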

Provided that the CCD is cooled, the remaining major electronic noise component is read noise, which primarily originates in the on-chip preamplifier during the process of converting charge carriers into a voltage signal. Although the read noise is added uniformly to every pixel of the detector, its magnitude cannot be precisely determined, but only approximated by an average value, in units of electrons root-mean-square (rms) per pixel.

Some types of readout amplifier noise are frequency dependent, and in general, read noise increases with the speed of measurement of the charge in each pixel. The increase in noise at high readout and frame rates is partially a result of the greater amplifier bandwidth required at higher pixel clock rates.

Cooling the CCD reduces the readout amplifier noise to some extent, although not to an insignificant level. A number of design enhancements are incorporated in current high-performance camera systems that greatly reduce the significance of read noise, however. One strategy for achieving high readout and frame rates without increasing noise is to electrically divide the CCD into two or more segments in order to shift charge in the parallel register toward multiple output amplifiers located at opposite edges or corners of the chip.

This procedure allows charge to be read out from the array at a greater overall speed without excessively increasing the read rate and noise of the individual amplifiers. Cooling the CCD in order to reduce dark noise provides the additional advantage of improving the charge transfer efficiency (CTE) of the device.

This performance factor has become increasingly important due to the large pixel-array sizes employed in many current CCD imagers, as well as the faster readout rates required for investigations of rapid dynamic processes.

With each shift of a charge packet along the transfer channels during CCD readout, a small portion may be left behind. While individual transfer losses at each pixel are minuscule in most cases, the large number of transfers required, especially in megapixel sensors, can result in significant losses for pixels at the greatest distance from the CCD readout amplifier(s) unless the charge transfer efficiency is extremely high.

The occurrence of incomplete charge transfer can lead to image blurring due to the intermixing of charge from adjacent pixels. In addition, cumulative charge loss at each pixel transfer, particularly with large arrays, can result in the phenomenon of image shading, in which regions of the image farthest from the CCD output amplifier appear dimmer than those adjacent to the serial register. Charge transfer efficiency values in cooled CCDs can be 0.99999 or greater, reducing such losses to insignificance in most applications.
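The cumulative effect of per-transfer loss is easy to quantify; the CTE values and array size below are illustrative:

```python
def charge_retained(cte, n_transfers):
    """Fraction of a pixel's charge surviving n_transfers shifts,
    each with the given charge transfer efficiency."""
    return cte ** n_transfers

# The farthest corner pixel of a 2048 x 2048 sensor undergoes roughly
# 4096 transfers (parallel plus serial shifts).
print(charge_retained(0.9999, 4096))   # ~0.66: a third of the charge lost
print(charge_retained(0.99999, 4096))  # ~0.96: losses become modest
```

The compounding explains why a seemingly tiny per-transfer loss matters so much for large arrays, and why CTE specifications carry so many nines.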

Both hardware and software methods are available to compensate for image intensity shading. A software correction is implemented by capturing an image of a uniform-intensity field, which is then utilized by the imaging system to generate a pixel-by-pixel correction map that can be applied to subsequent specimen images to eliminate nonuniformity due to shading.
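A minimal sketch of the software correction just described, assuming a previously captured uniform-field ("flat") image; the pixel values are invented for illustration:

```python
def shading_correct(image, flat):
    """Flat-field correction: scale each pixel by mean(flat) / flat[pixel]
    so that nonuniformity recorded in the flat image is cancelled."""
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[img_px * mean_flat / flat_px
             for img_px, flat_px in zip(img_row, flat_row)]
            for img_row, flat_row in zip(image, flat)]

# A uniform specimen imaged through shading (dim right column) is restored
# to a uniform corrected image.
flat = [[100, 100, 50], [100, 100, 50]]
image = [[200, 200, 100], [200, 200, 100]]
print(shading_correct(image, flat))
```

Real acquisition software applies the same pixel-by-pixel map to every subsequent frame, usually after dark-frame subtraction.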

Software correction techniques are generally satisfactory in systems that require only modest correction factors relative to the local intensity. Larger corrections, up to approximately fivefold, can be handled by hardware methods through the adjustment of gain factors for individual pixel rows.

The required gain adjustment is determined by sampling signal intensities in five or six masked reference pixels located outside the image area at the end of each pixel row. Voltage values obtained from the columns of reference pixels at the parallel register edge serve as controls for charge transfer loss, and produce correction factors for each pixel row that are applied to voltages obtained from that row during readout. Correction factors are large in regions of some sensors, such as areas distant from the output amplifier in video-rate cameras, and noise levels may be substantially increased for these image areas.

Although the hardware correction process removes shading effects without apparent signal reduction, it should be realized that the resulting signal-to-noise ratio is not uniform over the entire image. In many applications, an image capture system capable of providing high temporal resolution is a primary requirement. For example, if the kinetics of a process being studied necessitates video-rate imaging at moderate resolution, a camera capable of delivering superb resolution is, nevertheless, of no benefit if it only provides that performance at slow-scan rates, and performs marginally or not at all at high frame rates.

Full-frame slow-scan cameras do not deliver high resolution at video rates, requiring approximately one second per frame for a large pixel array, depending upon the digitization rate of the electronics.

If specimen signal brightness is sufficiently high to allow short exposure times (on the order of 10 milliseconds), the use of binning and subarray selection makes it possible to acquire about 10 frames per second, at reduced resolution and frame size, with cameras having electromechanical shutters. Faster frame rates generally necessitate the use of interline-transfer or frame-transfer cameras, which do not require shutters and typically can also operate at higher digitization rates.
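The frame-rate arithmetic behind these trade-offs can be sketched as follows, assuming readout time is dominated by pixel digitization at the stated clock rate; the array size and rate are hypothetical:

```python
def frame_time_s(width, height, pixel_rate_hz, binning=1, exposure_s=0.01):
    """Rough frame time: exposure plus the time to digitize every
    (possibly binned) pixel at the given readout clock rate."""
    pixels_read = (width // binning) * (height // binning)
    return exposure_s + pixels_read / pixel_rate_hz

full = frame_time_s(1000, 1000, 1_000_000)               # ~1 s per frame
binned = frame_time_s(1000, 1000, 1_000_000, binning=4)  # 16x fewer pixels read
print(full, binned)
```

Binning and subarray readout both raise the frame rate by reducing the number of charge packets that must be digitized, at the cost of spatial resolution or field of view.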

The latest generation of high-performance cameras of this design can capture full-frame, high-bit-depth images at near video rates.

The spatial resolution of CCD imaging systems is coupled directly to pixel size, and has improved consistently as technological advances have allowed CCD pixels to be made increasingly smaller while maintaining the imagers' other performance characteristics. In comparison to typical film grain sizes (approximately 10 micrometers), the pixels of many CCD cameras employed in biological microscopy are smaller and provide more than adequate resolution when coupled with commonly used high-magnification objectives, which project relatively large-radius diffraction (Airy) disks onto the CCD surface.

Interline-transfer scientific-grade CCD cameras are now available having pixels smaller than 5 micrometers, making them suitable for high-resolution imaging even with low-magnification objectives. The relationship of detector element size to relevant optical resolution criteria is an important consideration in choosing a digital camera if the spatial resolution of the optical system is to be maintained.

The Nyquist sampling criterion is commonly utilized to determine the adequacy of detector pixel size with regard to the resolution capabilities of the microscope optics. Nyquist's theorem specifies that the smallest diffraction disk radius produced by the optical system must be sampled by at least two pixels in the imaging array in order to preserve the optical resolution and avoid aliasing.
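The pixel-size requirement implied by the Nyquist criterion can be computed directly from the Rayleigh expression for the Airy disk radius; the wavelength, numerical aperture, and magnification below are illustrative values, not recommendations:

```python
def airy_radius_on_ccd_um(wavelength_um, na, magnification):
    """Radius of the Airy disk projected onto the CCD:
    r = 0.61 * wavelength / NA, scaled by the objective magnification."""
    return 0.61 * wavelength_um * magnification / na

def max_pixel_for_nyquist_um(wavelength_um, na, magnification):
    """Nyquist: the smallest Airy disk radius must span >= 2 pixels."""
    return airy_radius_on_ccd_um(wavelength_um, na, magnification) / 2.0

# Illustrative case: green light through a 100x / 1.4 NA oil objective.
print(max_pixel_for_nyquist_um(0.55, 1.4, 100))  # ~12 micrometers
```

This is why small-pixel interline sensors remain Nyquist-compliant even with low-magnification objectives, where the projected Airy disk shrinks proportionally.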

As an example, consider a CCD whose pixel size is small enough that the smallest Airy disk radius projected by the optics is sampled by more than two pixels. At this sampling frequency, sufficient margin is available that the Nyquist criterion is nearly satisfied even with 2 x 2 pixel binning.

Detector quantum efficiency (QE) is a measure of the likelihood that a photon of a particular wavelength will be captured in the active region of the device, enabling the liberation of charge carriers.

The parameter represents the effectiveness of a CCD imager in generating charge from incident photons, and is therefore a major determinant of the minimum detectable signal for a camera system, particularly when performing low-light-level imaging.

No charge is generated if a photon never reaches the semiconductor depletion layer or if it passes completely through without transfer of significant energy. The nature of interaction between a photon and the detector depends upon the photon's energy and corresponding wavelength, and is directly related to the detector's spectral sensitivity range.

Although conventional front-illuminated CCD detectors are highly sensitive and efficient, none approach 100 percent quantum efficiency at any wavelength. Image sensors typically employed in fluorescence microscopy can detect photons over roughly the 400 to 1100 nanometer range, with peak sensitivity normally in the green-to-red portion of the spectrum. Maximum QE values are modest in conventional designs, although the newest designs may reach 80 percent efficiency. Figure 10 illustrates the spectral sensitivity of a number of popular CCDs in a graph that plots quantum efficiency as a function of incident light wavelength.

Most CCDs used in scientific imaging are of the interline-transfer type, and because the interline mask severely limits the photosensitive surface area, many older versions exhibit very low QE values. With the advent of surface microlens technology to direct more incident light to the photosensitive regions between transfer channels, newer interline sensors are much more efficient, and many now have substantially higher quantum efficiency values.

Sensor spectral range and quantum efficiency are further enhanced in the ultraviolet, visible, and near-infrared wavelength regions by various additional design strategies in several high-performance CCDs. Because aluminum surface transfer gates absorb or reflect much of the blue and ultraviolet wavelengths, many newer designs employ other materials, such as indium-tin oxide, to improve transmission and quantum efficiency over a broader spectral range.

Even higher QE values can be obtained with specialized back-thinned CCDs, which are constructed to allow illumination from the rear side, avoiding the surface electrode structure entirely. To make this possible, most of the silicon substrate is removed by etching, and although the resulting device is delicate and relatively expensive, quantum efficiencies of approximately 90 percent can routinely be achieved. Other surface treatments and construction materials may be utilized to gain additional spectral-range benefits.

Performance of back-thinned CCDs in the ultraviolet wavelength region is enhanced by the application of specialized antireflection coatings. Modified semiconductor materials are used in some detectors to improve quantum efficiency in the near-infrared.

Sensitivity to wavelengths outside the normal spectral range of conventional front-illuminated CCDs can be achieved by the application of wavelength-conversion phosphors to the detector face. Phosphors for this purpose are chosen to absorb photon energy in the spectral region of interest and emit light within the spectral sensitivity region of the CCD. As an example of this strategy, if a specimen or fluorophore of interest emits in a spectral region where CCD sensitivity is minimal, a conversion phosphor can be employed on the detector surface that absorbs efficiently at the emission wavelength and re-emits within the peak sensitivity range of the CCD.

The dynamic range of a CCD detector expresses the maximum signal intensity variation that can be quantified by the sensor.

The quantity is specified numerically by most CCD camera manufacturers as the ratio of pixel full well capacity (FWC) to read noise, with the rationale that this value represents the limiting condition in which intrascene brightness ranges from regions just at pixel saturation level to regions barely distinguishable from noise. The sensor dynamic range determines the maximum number of resolvable gray-level steps into which the detected signal can be divided. To take full advantage of a CCD's dynamic range, the analog-to-digital converter's bit depth should be matched to the dynamic range in order to allow discrimination of as many gray-scale steps as possible.

Analog-to-digital converters with bit depths of 10 and 11 are capable of discriminating 1,024 and 2,048 gray levels, respectively. As stated previously, because a computer bit can only assume one of two possible states, the number of intensity steps that can be encoded by a digital processor (ADC) reflects its resolution (bit depth), and is equal to 2 raised to the value of the bit depth specification.
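The matching of ADC bit depth to sensor dynamic range can be computed directly; the full well capacity and read noise below are hypothetical sensor figures:

```python
import math

def gray_levels(bit_depth):
    """Number of intensity steps an ADC of the given bit depth encodes."""
    return 2 ** bit_depth

def dynamic_range(full_well_e, read_noise_e):
    """Sensor dynamic range: full well capacity over read noise."""
    return full_well_e / read_noise_e

def adc_bits_needed(full_well_e, read_noise_e):
    """Smallest ADC bit depth whose step count covers the dynamic range."""
    return math.ceil(math.log2(dynamic_range(full_well_e, read_noise_e)))

# Hypothetical sensor: 40,000 e- full well, 10 e- read noise -> DR of 4000,
# which a 12-bit ADC (4,096 levels) matches well.
print(adc_bits_needed(40_000, 10))  # 12
```

Digitizing with fewer bits than the dynamic range supports discards resolvable gray levels, while using many more bits than the dynamic range merely digitizes noise.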

Therefore, 8-, 10-, 12-, and 14-bit processors can encode a maximum of 256, 1,024, 4,096, or 16,384 gray levels, respectively.

When a photon generates a photoelectron in the silicon, a positive charge (a hole) is generated as well. If nothing is done, the hole and the electron will recombine and release energy in the form of heat. Small thermal fluctuations are very difficult to measure, and it is thus preferable to gather the electrons in the place they were generated and count them to create an image.

This is accomplished by positively biasing discrete areas to attract the electrons generated as photons strike the surface. The substrate of a CCD is made of silicon, but photons entering from above the gate strike the epitaxial layer (essentially silicon doped with other elements) and generate photoelectrons.

The gate is held at a positive charge in relation to the rest of the device, which attracts the electrons. The figure to the right shows how electrons are held in place and moved to where they can be quantified.

The top black line represents the potential well for the electrons (shown in blue), which is low, or "downhill," where the potential is high, since opposite charges attract. Electrons are shifted in two directions on a CCD, called the parallel and serial directions. One parallel shift occurs from right to left (shown at left). The serial shift is performed from top to bottom and directs the electron packets to the measurement electronics. In the example to the left, the image is split into two and then four different sections and read out.

The method of reading this voltage is called dual slope integration (DSI), and is used when the absolute lowest noise possible is required. Generally speaking, the faster a pixel is read, the more noise is introduced into the measurement. If the gain of the measurement is known, the ADU number generated for each pixel can be directly correlated to the number of electrons found in that pixel. All Spectral Instruments cameras come with a detailed test report showing the gain at a given readout speed.
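The count-to-electron conversion is a single multiplication; the gain value below is a made-up example of the kind reported in a camera's test sheet:

```python
# Converting digitized counts (ADUs) back to electrons, assuming the
# camera gain (electrons per ADU) reported for the selected readout speed.
def adu_to_electrons(adu, gain_e_per_adu):
    return adu * gain_e_per_adu

ADC_BITS = 16
MAX_ADU = 2 ** ADC_BITS - 1  # a 16-bit converter saturates at 65,535 ADUs

print(adu_to_electrons(1000, 2.5))  # 2500 electrons
print(MAX_ADU)                      # 65535
```

Working in electrons rather than raw counts makes measurements comparable across readout speeds and cameras, since the gain changes with the selected digitization rate.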

Thus, a 16-bit camera can never show more than 65,535 ADUs in any given pixel.

Linearity

On the whole, the eye is not a linear detector: it has a roughly logarithmic response, except over very small variations in intensity. An important consideration for a detector is its ability to respond in direct proportion to the light it receives, so that doubling the incident intensity doubles the output signal. In such a situation, we say that the detector has a linear response. Such a response is very useful because no additional processing is needed to determine the "true" relative intensity of different objects in an image.

Noise

One of the most important aspects of CCD performance is its noise response. There are a number of contributions to the noise performance of a CCD, briefly listed here. Dark current - thermally generated charge that accumulates in the pixels even in the absence of light. At room temperature, dark current can amount to thousands of electrons per pixel per second. Consequently, the full well capacity of each pixel will be reached in a few seconds and the CCD will be saturated. Dark current can be massively reduced by cooling.
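The claim that an uncooled sensor saturates within seconds follows directly from the well capacity and the dark current rate; the figures below are illustrative:

```python
# Time for dark current alone to fill a pixel's well (illustrative numbers).
def seconds_to_saturate(full_well_e, dark_current_e_per_s):
    return full_well_e / dark_current_e_per_s

# A 40,000 e- well filling at 10,000 e-/pixel/s saturates in a few seconds,
# which is why uncooled operation rules out long exposures.
print(seconds_to_saturate(40_000, 10_000))  # 4.0
```

Cooling the same sensor by a few tens of degrees stretches this saturation time from seconds to hours.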

For example, dark current can be reduced from thousands of electrons per pixel per second at room temperature to only tens of electrons per pixel per second with moderate cooling. By cooling to sufficiently low temperatures, dark current can be virtually eliminated (substantially below one electron per pixel per second). Sensor design techniques can also reduce dark current to relatively low levels (a few hundred electrons per pixel per second) even at room temperature.

Readout noise - the ultimate noise limit of the CCD. The magnitude of this noise depends on the size of the output node. A large amount of effort has been dedicated to reducing CCD readout noise, as this value ultimately determines the dynamic range and should be as low as possible, particularly when detecting very faint sources (for example, detecting photons at X-ray energies, as in the XMM-Newton mission). Noise values of a few electrons rms (root mean square) are now typical for many CCDs, and some companies have recently claimed noise of under 1 electron rms.

When the CCD is used as part of a camera for astronomical imaging, other sources of noise must also be included, such as the random shot noise present on the image itself, along with noise introduced by the camera electronics. However, these noise sources are discussed elsewhere.

Power

CCDs themselves consume very little power.

In dispersive spectroscopy, the spectrum is spread along the CCD so that each detector element records a different spectral position: one edge element detects light from one end of the spectrum, the second element detects light from the next spectral position, and so on.

The last element will detect light from the high-wavenumber (cm-1) edge of the spectrum. CCDs require some degree of cooling to make them suitable for high-grade spectroscopy. This is typically done using either Peltier (thermoelectric) cooling, suitable for moderate sub-zero temperatures, or liquid nitrogen cryogenic cooling. Most Raman systems use Peltier-cooled detectors, but liquid nitrogen cooled detectors still have advantages for certain specialized applications.

An electron-multiplying CCD (EMCCD) is able to detect single-photon events without an image intensifier, using a unique electron-multiplying structure built into the chip. EMCCD cameras are designed to overcome a fundamental physical constraint in order to deliver high sensitivity at high speed. Traditional CCD cameras offered high sensitivity, with low readout noise, but at the expense of slow readout. EMCCDs overcome this by amplifying the signal before readout.
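The benefit of amplifying before readout can be sketched as follows. This is an illustrative model, not a specific camera's specification: the read noise and EM gain values are invented, and the commonly cited excess noise factor of about sqrt(2) for the multiplication register is assumed:

```python
import math

def effective_read_noise(read_noise_e, em_gain):
    """Multiplying the charge before readout divides the effective
    read noise, referred to the input, by the EM gain."""
    return read_noise_e / em_gain

def emccd_snr(photons, qe, read_noise_e, em_gain, excess_noise=math.sqrt(2)):
    """SNR including amplified shot noise (excess noise factor) and the
    gain-reduced read noise."""
    signal = photons * qe
    shot = excess_noise * math.sqrt(signal)
    read = effective_read_noise(read_noise_e, em_gain)
    return signal / math.sqrt(shot ** 2 + read ** 2)

# With 50 e- read noise at high frame rates, an EM gain of 500 makes the
# read-noise contribution negligible (0.1 e- effective).
print(effective_read_noise(50, 500))  # 0.1
```

The trade-off is the excess noise factor: multiplication effectively doubles the shot-noise variance, so EM gain pays off mainly in the read-noise-limited, low-light regime.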


