A New Color Filter Array With Optimal Properties for Noiseless and Noisy Color Image Acquisition



ABSTRACT:

Digital color cameras acquire color images by means of a sensor on which a color filter array (CFA) is overlaid. The Bayer CFA dominates the consumer market, but there has recently been renewed interest in the design of CFAs [2]–[6]. However, robustness to noise is often neglected in the design, though it is crucial in practice. In this paper, we present a new 2×3-periodic CFA which provides, by construction, the optimal tradeoff between robustness to aliasing, chrominance noise, and luminance noise. Moreover, a simple and efficient linear demosaicking algorithm is described, which fully exploits the spectral properties of the CFA. Practical experiments confirm the superiority of our design, in both noiseless and noisy scenarios.

EXISTING SYSTEM:

So far, emphasis in CFA design and demosaicking has been put on minimizing the aliasing artifacts caused by spectral overlap of the modulated color channels in the mosaicked image. But with the ever-increasing resolution of sensors, aliasing has become a minor issue. In most cases, the optical system is the limiting factor, so that the scene sampled by the sensor is bandlimited and moiré artifacts never appear. On the other hand, high-end digital single-lens reflex cameras equipped with expensive, high-quality lenses have an anti-aliasing filter, typically a layer of birefringent material, overlaid on the sensor to get rid of aliasing issues. Still, robustness to aliasing is an important criterion in CFA design, not so much because of potential moiré artifacts, but because it determines the intrinsic resolution of the imaging system.

PROPOSED SYSTEM:

We argue that robustness to noise is more important than robustness to aliasing. High sensitivity makes it possible, when acquiring a given picture, to reduce the exposure time (for less blur due to camera shake), to reduce the aperture (for increased depth of field, hence less out-of-focus blur), or to use a lower ISO setting and a less destructive denoising process. This is particularly important for photography in low-light environments. Hence, there is a real need for new CFAs with improved sensitivity, so that the maximum energy of the color scene is packed into the mosaicked image.

Hardware Requirements & Software Requirements:

Hardware Requirements

  • System           : Pentium IV, 2.4 GHz
  • Hard Disk        : 40 GB
  • Floppy Drive     : 1.44 MB
  • Monitor          : 15-inch VGA colour
  • Mouse            : Logitech
  • RAM              : 256 MB
  • Keyboard         : 110-key enhanced

Software Requirements

  • Operating System : Windows XP Professional
  • Front End        : Microsoft Visual Studio .NET 2005
  • Coding Language  : C# 2.0

Modules 
  • Load Image/Save Image
  • Image processing techniques
  • Color Filters
  • HSL Color Space
  • Binarization
  • Morphology
  • Convolution and Correlation
  • Edge Detectors
  • Histogram
  • Gamma Correction filter

Module Description

Load Image/Save Image
This module loads the selected image into a bitmap for processing: an open-file dialog is shown and the user picks the image file to work on. After the image has been altered, it can be saved back to disk through a save dialog.
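
A minimal sketch of this module is shown below, assuming a Windows Forms front end; the class and member names (ImageIO, LoadImage, SaveImage, currentImage) are illustrative only.

// Minimal Load/Save sketch, assuming a Windows Forms context.
using System.Drawing;
using System.Windows.Forms;

public class ImageIO
{
    private Bitmap currentImage;

    // Let the user pick an image file and load it into a Bitmap.
    public void LoadImage()
    {
        OpenFileDialog dialog = new OpenFileDialog();
        dialog.Filter = "Image files|*.bmp;*.jpg;*.png";
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            currentImage = new Bitmap(dialog.FileName);
        }
    }

    // Save the (possibly altered) image back to disk.
    public void SaveImage()
    {
        if (currentImage == null) return;
        SaveFileDialog dialog = new SaveFileDialog();
        dialog.Filter = "Bitmap|*.bmp|JPEG|*.jpg|PNG|*.png";
        if (dialog.ShowDialog() == DialogResult.OK)
        {
            currentImage.Save(dialog.FileName);
        }
    }
}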

Image processing techniques
Various processing techniques are included in the project (invert, grayscale, brightness, contrast, gamma and color adjustments).
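
As an illustration, a sketch of the invert operation is given below; GetPixel/SetPixel are used for clarity, although a production filter would use LockBits for speed.

// Illustrative point operation: invert every pixel of a bitmap.
using System.Drawing;

public static class PointFilters
{
    public static void Invert(Bitmap image)
    {
        for (int y = 0; y < image.Height; y++)
        {
            for (int x = 0; x < image.Width; x++)
            {
                Color c = image.GetPixel(x, y);
                image.SetPixel(x, y, Color.FromArgb(c.A, 255 - c.R, 255 - c.G, 255 - c.B));
            }
        }
    }
}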

Color Filters
The color filters are filters placed over the pixel sensors of an image sensor to capture color information. Color filters are needed because the typical photosensors detect light intensity with little or no wavelength specificity, and therefore cannot separate color information. The color filters filter the light by wavelength range, such that the separate filtered intensities include information about the color of light. For example, the Bayer filter gives information about the intensity of light in red, green, and blue (RGB) wavelength regions.

The raw image data captured by the image sensor is then converted to a full-color image (with intensities of all three primary colors represented at each pixel) by a demosaicing algorithm which is tailored for each type of color filter. The spectral transmittance of the CFA elements along with the demosaicing algorithm jointly determine the color rendition. The sensor's passband quantum efficiency and the span of the CFA's spectral responses are typically wider than the visible spectrum, so all visible colors can be distinguished. The responses of the filters do not generally correspond to the CIE color matching functions, so a color translation is required to convert the tristimulus values into a common, absolute color space.
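
To make the idea concrete, the sketch below simulates the sampling a Bayer CFA performs, assuming an RGGB tiling: each pixel keeps only one color channel, and a demosaicing step must later estimate the two missing values. The class and method names are illustrative.

// Simulate Bayer (RGGB) sampling: one color channel survives per pixel.
using System.Drawing;

public static class BayerMosaic
{
    public static Bitmap Mosaic(Bitmap source)
    {
        Bitmap result = new Bitmap(source.Width, source.Height);
        for (int y = 0; y < source.Height; y++)
        {
            for (int x = 0; x < source.Width; x++)
            {
                Color c = source.GetPixel(x, y);
                Color sample;
                if (y % 2 == 0)
                    sample = (x % 2 == 0) ? Color.FromArgb(c.R, 0, 0)   // red site
                                          : Color.FromArgb(0, c.G, 0);  // green site
                else
                    sample = (x % 2 == 0) ? Color.FromArgb(0, c.G, 0)   // green site
                                          : Color.FromArgb(0, 0, c.B);  // blue site
                result.SetPixel(x, y, sample);
            }
        }
        return result;
    }
}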

HSL Color Space:
HSL and HSV are the two most common cylindrical-coordinate representations of points in an RGB color model, which rearrange the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation. They are used for color pickers, in color-modification tools in image editing software, and less commonly for image analysis and computer vision.

HSL stands for hue, saturation, and lightness, and is often also called HLS. HSV stands for hue, saturation, and value, and is also often called HSB (B for brightness). A third model, common in computer vision applications, is HSI, for hue, saturation, and intensity. Unfortunately, while typically consistent, these definitions are not standardized, and any of these abbreviations might be used for any of these three or several other related cylindrical models.
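
In .NET, the HSL coordinates of a pixel can be read directly from System.Drawing.Color, as the short sketch below shows (despite its name, GetBrightness returns the HSL lightness). The class and method names are illustrative.

// Read the HSL coordinates of a color.
using System.Drawing;

public static class HslExample
{
    public static void PrintHsl(Color c)
    {
        float h = c.GetHue();         // hue in degrees, 0..360
        float s = c.GetSaturation();  // HSL saturation, 0..1
        float l = c.GetBrightness();  // HSL lightness, 0..1
        System.Console.WriteLine("H={0} S={1} L={2}", h, s, l);
    }
}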

Binarization:

Image binarization converts an image of up to 256 gray levels to a black and white image. Frequently, binarization is used as a pre-processor before OCR. In fact, most OCR packages on the market work only on bi-level (black & white) images.

The simplest way to perform image binarization is to choose a threshold value and classify all pixels with values above this threshold as white and all other pixels as black. The problem then is how to select the correct threshold. In many cases, finding a single threshold that suits the entire image is very difficult, and sometimes even impossible. Therefore, adaptive image binarization is needed, where an optimal threshold is chosen for each image region.
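
A minimal global-threshold sketch follows; the class name Binarizer is illustrative, and GetPixel/SetPixel are again used only for clarity.

// Global-threshold binarization: brighter than the threshold -> white, else black.
using System.Drawing;

public static class Binarizer
{
    public static void Threshold(Bitmap image, int threshold)
    {
        for (int y = 0; y < image.Height; y++)
        {
            for (int x = 0; x < image.Width; x++)
            {
                Color c = image.GetPixel(x, y);
                int gray = (c.R + c.G + c.B) / 3;
                image.SetPixel(x, y, gray > threshold ? Color.White : Color.Black);
            }
        }
    }
}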

Morphology:

Morphological operators often take a binary image and a structuring element as input and combine them using a set operator (intersection, union, inclusion, complement). They process objects in the input image based on characteristics of their shape, which are encoded in the structuring element.
Usually, the structuring element is sized 3×3 and has its origin at the center pixel. It is shifted over the image and at each pixel of the image its elements are compared with the set of the underlying pixels. If the two sets of elements match the condition defined by the set operator (e.g. if the set of pixels in the structuring element is a subset of the underlying image pixels), the pixel underneath the origin of the structuring element is set to a pre-defined value (0 or 1 for binary images). A morphological operator is therefore defined by its structuring element and the applied set operator.
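
The sketch below shows binary erosion with a full 3×3 structuring element under these conventions: the output pixel is white only if every underlying input pixel is white. The input is assumed to be a black-and-white image, and the class name is illustrative.

// Binary erosion with a full 3x3 structuring element.
using System.Drawing;

public static class Morphology
{
    public static Bitmap Erode(Bitmap input)
    {
        Bitmap output = new Bitmap(input.Width, input.Height);
        for (int y = 1; y < input.Height - 1; y++)
        {
            for (int x = 1; x < input.Width - 1; x++)
            {
                bool allWhite = true;
                for (int dy = -1; dy <= 1 && allWhite; dy++)
                    for (int dx = -1; dx <= 1 && allWhite; dx++)
                        if (input.GetPixel(x + dx, y + dy).R < 128)
                            allWhite = false;
                output.SetPixel(x, y, allWhite ? Color.White : Color.Black);
            }
        }
        return output;
    }
}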

Convolution and Correlation:
Convolution is a very important operation in image processing. It basically involves calculating the weighted sum of a neighbourhood of pixels, with the weights taken from a convolution kernel. Each value from the neighbourhood of pixels is multiplied by the value at the opposite position in the kernel; for example, the top-left pixel of the neighbourhood is multiplied by the bottom-right element of the kernel. All these values are summed up, and this sum is the result of the convolution.
This operation can be mathematically represented as:
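(in the standard discrete 2D form, with f the input image, k the kernel and g the result)

    g(x, y) = \sum_{i} \sum_{j} k(i, j) \, f(x - i, \, y - j)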


Correlation is nearly identical to convolution bar one minor difference:
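(with the same notation)

    g(x, y) = \sum_{i} \sum_{j} k(i, j) \, f(x + i, \, y + j)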

Spot the difference? Instead of multiplying each pixel by the value at the opposite position in the kernel, you multiply it by the value at the equivalent position (top-left multiplied by top-left), so the kernel is effectively used without being flipped. For symmetric kernels the two operations coincide; in general they give different results.
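
A sketch of a 3×3 convolution on a grayscale bitmap is given below; the kernel indices are mirrored relative to the neighbourhood exactly as described above, and dropping the minus signs would turn it into correlation. The class name Convolver is illustrative.

// 3x3 convolution of a grayscale bitmap with a user-supplied kernel.
using System;
using System.Drawing;

public static class Convolver
{
    public static Bitmap Convolve3x3(Bitmap input, double[,] kernel)
    {
        Bitmap output = new Bitmap(input.Width, input.Height);
        for (int y = 1; y < input.Height - 1; y++)
        {
            for (int x = 1; x < input.Width - 1; x++)
            {
                double sum = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        // kernel element (i, j) multiplies the pixel at (x - i, y - j):
                        // the neighbourhood is mirrored, as convolution requires.
                        sum += kernel[j + 1, i + 1] * input.GetPixel(x - i, y - j).R;
                int v = Math.Max(0, Math.Min(255, (int)Math.Round(sum)));
                output.SetPixel(x, y, Color.FromArgb(v, v, v));
            }
        }
        return output;
    }
}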

Edge Detectors:

The edge detection module performs feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The same problem of finding discontinuities in 1D signals is known as step detection.
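
As an illustration, the sketch below applies the Sobel operator, a common edge detector built from two 3×3 gradient kernels; the gradient magnitude marks edge pixels. The class name is illustrative.

// Sobel edge detector: horizontal and vertical gradient kernels, magnitude output.
using System;
using System.Drawing;

public static class EdgeDetector
{
    private static readonly int[,] Gx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    private static readonly int[,] Gy = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    public static Bitmap Sobel(Bitmap input)
    {
        Bitmap output = new Bitmap(input.Width, input.Height);
        for (int y = 1; y < input.Height - 1; y++)
        {
            for (int x = 1; x < input.Width - 1; x++)
            {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++)
                {
                    for (int i = -1; i <= 1; i++)
                    {
                        int gray = input.GetPixel(x + i, y + j).R;
                        gx += Gx[j + 1, i + 1] * gray;
                        gy += Gy[j + 1, i + 1] * gray;
                    }
                }
                int mag = Math.Min(255, (int)Math.Sqrt(gx * gx + gy * gy));
                output.SetPixel(x, y, Color.FromArgb(mag, mag, mag));
            }
        }
        return output;
    }
}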

Histogram:

An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.

The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the number of pixels in that particular tone. The left side of the horizontal axis represents the black and dark areas, the middle represents medium grey and the right hand side represents light and pure white areas. The vertical axis represents the size of the area that is captured in each one of these zones. Thus, the histogram for a very bright image with few dark areas and/or shadows will have most of its data points on the right side and center of the graph. Conversely, the histogram for a very dark image will have the majority of its data points on the left side and center of the graph.
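
Computing such a histogram is straightforward, as the sketch below shows: one counter per tonal value, incremented for each pixel's gray level. The class name is illustrative.

// Build a 256-bin luminance histogram of a bitmap.
using System.Drawing;

public static class HistogramBuilder
{
    public static int[] Compute(Bitmap image)
    {
        int[] histogram = new int[256];
        for (int y = 0; y < image.Height; y++)
        {
            for (int x = 0; x < image.Width; x++)
            {
                Color c = image.GetPixel(x, y);
                int gray = (c.R + c.G + c.B) / 3;
                histogram[gray]++;
            }
        }
        return histogram;
    }
}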

Gamma Correction filter:
Luminance of each of the linear-light red, green, and blue (tristimulus) components is transformed to a nonlinear video signal by gamma correction, which is universally done at the camera. The Rec. 709 transfer function takes linear-light tristimulus value (here L) to a nonlinear component (here E'), for example, voltage in a video system:
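In the form published in Rec. 709, with L normalized to [0, 1]:

    E' = \begin{cases} 4.5\,L, & 0 \le L < 0.018 \\ 1.099\,L^{0.45} - 0.099, & 0.018 \le L \le 1 \end{cases}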

The linear segment near black minimizes the effect of sensor noise in practical cameras and scanners; plotted over a signal range from zero to unity, the transfer function is dominated by the power-law segment, with the linear portion confined to values just above black.

An idealized monitor inverts the transform:
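In published form:

    L = \begin{cases} E'/4.5, & 0 \le E' < 0.081 \\ \left(\frac{E' + 0.099}{1.099}\right)^{1/0.45}, & 0.081 \le E' \le 1 \end{cases}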

Real monitors are not as exact as this equation suggests, and have no linear segment, but the precise definition is necessary for accurate intermediate processing in the linear-light domain. In a color system, an identical transfer function is applied to each of the three tristimulus (linear-light) RGB components.
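
A gamma-correction filter over 8-bit data can be implemented with a 256-entry lookup table, as sketched below; a pure power law is used here for brevity (no linear segment, unlike Rec. 709), and the class name is illustrative.

// Gamma correction via a precomputed lookup table (pure power law).
using System;
using System.Drawing;

public static class GammaFilter
{
    public static void Apply(Bitmap image, double gamma)
    {
        // Map each 8-bit level to its gamma-corrected value once, up front.
        byte[] lut = new byte[256];
        for (int v = 0; v < 256; v++)
            lut[v] = (byte)Math.Min(255, (int)(255.0 * Math.Pow(v / 255.0, 1.0 / gamma) + 0.5));

        for (int y = 0; y < image.Height; y++)
        {
            for (int x = 0; x < image.Width; x++)
            {
                Color c = image.GetPixel(x, y);
                image.SetPixel(x, y, Color.FromArgb(lut[c.R], lut[c.G], lut[c.B]));
            }
        }
    }
}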

REFERENCE:

Laurent Condat, "A New Color Filter Array With Optimal Properties for Noiseless and Noisy Color Image Acquisition", IEEE Transactions on Image Processing, Vol. 20, No. 8, August 2011.