A SURVEY ON FEATURES AND TECHNIQUES IN CONTENT BASED IMAGE RETRIEVAL

Content-based image retrieval (CBIR) is a widely adopted method for finding images in vast collections. As image collections grow at a rapid rate, the demand for efficient and effective tools for retrieving images that match a query has increased significantly. CBIR systems have become very popular for browsing, searching and retrieving images from large databases of digital images because they require relatively little human intervention. The need for CBIR is further driven by the tremendous growth in the volume of images as well as its widespread application in many fields. Texture, color, shape, contours, etc. are the important entities used to represent and search images. These features are extracted from images and used for similarity checks among them. In this paper, we survey CBIR techniques and approaches and their usage in various domains.


ISSN 2277-3061
INTRODUCTION
Content-based image retrieval, a technique which uses visual contents to search images from large-scale image databases according to users' interests, has been an active and fast-advancing research area since the 1990s [2]. During the past decade, remarkable progress has been made in both theoretical research and system development; with the emergence and advancement of this field, it has become possible to represent an image by low-level features instead of keywords. However, many challenging research problems remain that continue to attract researchers from multiple disciplines. Content-based image retrieval [1] uses the visual contents of an image such as color, shape, texture, and spatial layout to represent and index the image. In typical content-based image retrieval systems (Figure 1), the visual contents of the images in the database are extracted and described by multi-dimensional feature vectors. The feature vectors of the images in the database form a feature database. To retrieve images, users provide the retrieval system with example images or sketched figures. The system then converts these examples into its internal representation of feature vectors. The similarities/distances between the feature vectors of the query example or sketch and those of the images in the database are then calculated, and retrieval is performed with the aid of an indexing scheme. The indexing scheme provides an efficient way to search the image database. Recent retrieval systems have incorporated users' relevance feedback to modify the retrieval process in order to generate perceptually and semantically more meaningful retrieval results.
Still, there are many unsolved problems in the area which continue to attract the attention of researchers from various fields [1]. The deployment of huge image databases for various applications has now become feasible as processors become more powerful and memory becomes cheaper.
Professional areas such as architecture, geography, medicine, publishing, databases of art works and satellite imagery are attracting more and more users who need to access and use images. Strong applications of CBIR technology include architectural design [2], art and craft museums [3], archaeology [4], medical imaging and geographic information systems [5,6], trademark databases and classification [7,11], image search over the Internet [3], and the remote sensing field, where biomedical images are indexed by content [4,5].

Figure 1. Content Based Image Retrieval System
Content-based retrieval uses the contents of images to represent and access images in a large database. A typical content-based retrieval system operates in two stages: off-line feature extraction and on-line image retrieval. Figure 1 shows the architecture of a content-based image retrieval system. In the off-line stage, the system automatically extracts visual attributes (color, shape and texture) of each image in the database based on its pixel values and stores them in a separate database within the system called the feature vector database. The feature data [8] (also known as the image signature or image features) for the visual attributes of each image is much smaller in size than the image data itself; thus the feature database contains a compact form of the images in the image database. Significant compression is achieved by representing the image database with feature vectors rather than the original pixel values.
In on-line image retrieval, the user submits a query image to the CBIR system in search of desired images. The system represents this query image with a feature vector. The similarities between the feature vector of the query example and those of the images in the feature database are then computed and ranked. Retrieval [6] applies an indexing scheme to provide an efficient way of searching the image database. Finally, the system ranks the results and returns the images that are most similar to the query image.
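As a minimal sketch of this on-line matching step — with toy three-dimensional feature vectors and Euclidean distance standing in for whatever features and similarity measure a real system uses:

```python
import numpy as np

# Toy feature database: each row is the feature vector of one stored image.
# The values are illustrative only, not from any real CBIR system.
feature_db = np.array([
    [0.9, 0.1, 0.0],   # image 0
    [0.2, 0.7, 0.1],   # image 1
    [0.8, 0.2, 0.0],   # image 2
])

def retrieve(query_vec, db, top_k=2):
    """Rank database images by Euclidean distance to the query vector."""
    dists = np.linalg.norm(db - query_vec, axis=1)
    order = np.argsort(dists)          # nearest first
    return list(order[:top_k])

# A query whose features match image 0 exactly ranks it first.
ranked = retrieve(np.array([0.9, 0.1, 0.0]), feature_db)
```

A real system would replace the exhaustive distance scan with the indexing scheme mentioned above to keep the search efficient on large databases.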
Feature extraction techniques affect the retrieval rate of the CBIR system. A feature vector is a set of numeric parameters describing an image [9]. Most such vectors represent one image feature, such as the color, texture, or shape of an object. Feature vectors generated by the same algorithm form a feature vector space. Text annotations for image description are classified as high-level features. Features such as color and texture are called low-level features. Shapes of objects in the image, which can be obtained by analyzing the regions present in the image, are also classified as low-level features.
The important issues in a content-based image retrieval system are: 1. selection of the image database, 2. similarity measurement, 3. performance evaluation of the retrieval process and 4. extraction of low-level image features. Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. The most common evaluation measures used in CBIR are precision and recall [10].
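These two measures can be computed directly from the set of images a system returns and the set of images known to be relevant — a small sketch with hypothetical image IDs:

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved images that are relevant.
    Recall: fraction of all relevant images that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 4 images returned, of which 2 are among the 3 relevant ones.
p, r = precision_recall(retrieved=[1, 3, 5, 7], relevant=[1, 2, 3])
# p = 2/4 = 0.5, r = 2/3
```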

IMAGE CONTENT DESCRIPTORS
Generally speaking, image content may include both visual and semantic content. Visual content can be very general or domain specific. General visual content includes color, texture, shape, spatial relationships, etc. Domain-specific visual content, like human faces, is application dependent and may involve domain knowledge. Semantic content is obtained either by textual annotation or by complex inference [11] procedures based on visual content. A good visual content descriptor should be invariant to the accidental variance introduced by the imaging process (e.g., variation in the illumination of the scene). However, there is a tradeoff between invariance and the discriminative power of visual features, since a very wide class of invariance loses the ability to discriminate between essential differences. Invariant description has been largely investigated in computer vision (e.g., for object recognition) but is relatively new in image retrieval [8]. A visual content descriptor can be either global or local. A global descriptor uses the visual features of the whole image, whereas a local descriptor uses the visual features of regions or objects to describe the image content. To obtain local visual descriptors, an image is often divided into parts first. The simplest way of dividing an image is to use a partition, which cuts the image into tiles of equal size and shape. A simple partition does not generate perceptually meaningful regions, but it is a way of representing the global features of the image at a finer resolution. A better method is to divide the image into homogeneous regions according to some criterion, using the region segmentation algorithms that have been extensively investigated in computer vision. A more complex way of dividing an image is to undertake a complete object segmentation to obtain semantically meaningful objects (like a ball, car, or horse).
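The simple-partition idea above can be sketched in a few lines — a hypothetical helper that cuts a grayscale image into equal tiles and uses the per-tile mean intensity as a crude stand-in for a local descriptor:

```python
import numpy as np

def tile_descriptors(img, rows=2, cols=2):
    """Partition img into rows x cols equal tiles and return the mean
    intensity of each tile - a minimal stand-in for local descriptors
    computed per region."""
    h, w = img.shape
    th, tw = h // rows, w // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            tile = img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            feats.append(tile.mean())
    return np.array(feats)

img = np.zeros((4, 4))
img[:2, :2] = 1.0              # bright top-left quadrant
desc = tile_descriptors(img)   # one value per tile, row-major order
```

As the text notes, such tiles are not perceptually meaningful regions; segmentation-based regions would replace the fixed grid in a stronger system.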

FEATURE EXTRACTION
Most systems perform feature extraction as a preprocessing step, obtaining global image features like color histograms or local descriptors like shape and texture. A region-based dominant color descriptor, indexed in 3-D space along with its percentage coverage within the regions, is proposed in [12], and is shown to be more computationally efficient in similarity-based retrieval than traditional color histograms. The authors argue that this compact representation is more efficient than high-dimensional histograms in terms of search and retrieval, and that it also avoids some of the drawbacks associated with earlier propositions such as dimension reduction and color moment descriptors. In [15], a multi-resolution histogram capturing spatial image information has been shown to be effective in retrieving textured images, while retaining the typical advantages of histograms. In [14], Gaussian mixture vector quantization (GMVQ) is used to extract color histograms and is shown to yield better retrieval than uniform quantization and vector quantization with squared error. A set of color and texture descriptors rigorously tested for inclusion in the MPEG-7 standard, and well suited to natural images and video, is described in [13]. These include histogram-based descriptors, dominant color descriptors, spatial color descriptors and texture descriptors suited for browsing and retrieval. Texture features have been modeled on the marginal distribution of wavelet coefficients using generalized Gaussian distributions. Low-level feature extraction is the milestone of a CBIR system. Feature extraction can be done on a region or on the entire image. Mostly, users are concerned with a particular region within the image rather than the whole image. In general, CBIR algorithms are region specific. Representation of an image at the region level is closer to the human perceptual system. Retrieval based on global features is comparatively simpler. This paper focuses on region-based image retrieval.
First, image segmentation is performed; then color, texture, shape or spatial location features can be extracted from the segmented regions. Some proposed CBIR systems exploit two different levels of features, global and local, to achieve better accuracy.

EXTRACTION OF COLOR
Color is the most extensively used visual content for image retrieval. Its three-dimensional values give it greater discriminative power than the single-dimensional gray values of images. Before selecting an appropriate color description, the color space must be determined first.
The color feature [16] has been widely used in CBIR systems because of its easy and fast computation. Color is also an intuitive feature and plays an important role in image matching. The extraction of color features from digital images depends on an understanding of the theory of color and the representation of color in digital images. The color histogram is one of the most commonly used color feature representations in image retrieval.
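A minimal sketch of such a histogram, assuming a simple uniform quantization of each RGB channel (real systems often use finer bins or non-uniform quantization):

```python
import numpy as np

def color_histogram(img, bins=4):
    """Quantize each RGB channel into `bins` levels, count pixels per
    (r, g, b) bin, and normalize the counts into a feature vector."""
    q = (img.astype(int) * bins) // 256                    # 0..bins-1 per channel
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

# A 2x2 image: three pure-red pixels and one pure-blue pixel.
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
hist = color_histogram(img)   # 3/4 of the mass in the red bin, 1/4 in blue
```

Because the histogram ignores where colors occur, two very different images can share a histogram — one motivation for the spatial color descriptors mentioned elsewhere in this survey.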

Color Space
Each pixel of the image can be represented as a point in a 3D color space. Commonly used color spaces for image retrieval include RGB, Munsell, CIE L*a*b*, CIE L*u*v*, HSV (or HSL, HSB), and the opponent color space. There is no agreement on which is best. However, one of the desirable characteristics of an appropriate color space for image retrieval is uniformity. Uniformity means that two color pairs that are equal in similarity distance in a color space are perceived as equally similar by viewers. In other words, the measured proximity between colors must be directly related to the psychological similarity between them. RGB space is a widely used color space for image display. It is composed of three color components: red, green, and blue. These components are called "additive primaries" since a color in RGB space is produced by adding them together. In contrast, CMY space is a color space primarily used for printing. Its three components, cyan, magenta, and yellow, are called "subtractive primaries" since a color in CMY space is produced through light absorption. Both RGB [17] and CMY space are device-dependent and perceptually non-uniform. HSV (or HSL, or HSB) space is widely used in computer graphics and is a more intuitive way of describing color. Its three components are hue, saturation, and value (brightness). The hue is invariant to changes in illumination and camera direction and hence is better suited to object retrieval. RGB coordinates can be easily translated to HSV (or HLS, or HSB) coordinates by a simple formula.
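That RGB-to-HSV formula is available off the shelf; Python's standard-library `colorsys` module implements it for channel values normalized to [0, 1]:

```python
import colorsys

# Pure red: hue 0 (start of the color circle), full saturation and value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# Pure green sits one third of the way around the hue circle.
h_green, _, _ = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
```

Note that `colorsys` returns hue as a fraction of a full turn; multiply by 360 if degrees are needed.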

TEXTURE FEATURE EXTRACTION
Texture is another important property of images. Various texture representations have been investigated in pattern recognition and computer vision. Basically, texture representation methods can be classified into two categories: structural and statistical. Structural methods, including morphological operators and adjacency graphs, describe texture by identifying structural primitives and their placement rules. They tend to be most effective when applied to textures that are very regular. Statistical methods, including Fourier power spectra, co-occurrence matrices, shift-invariant principal component analysis (SPCA), Tamura features, Wold decomposition, Markov random fields [19], fractal models, and multi-resolution filtering techniques such as the Gabor and wavelet transforms, characterize texture by the statistical distribution of the image intensity. In this section, we introduce a number of texture representations which have been used frequently and have proved to be effective in content-based image retrieval. Like color, texture is a powerful low-level feature for image search and retrieval applications. Much work has been done on texture analysis, classification, and segmentation over the last four decades, yet there is still a lot of potential for research. So far, there is no unique definition of texture; however, an encapsulating scientific definition can be stated as: "Texture is an attribute representing the spatial arrangement of the grey levels of the pixels in a region or image". The commonly known texture descriptors are the wavelet transform, Gabor filters, co-occurrence matrices and Tamura features. We have used the wavelet transform, which decomposes an image into orthogonal components, because of its better localization and its computationally inexpensive properties.
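Of the statistical descriptors named above, the co-occurrence matrix is the easiest to sketch: it counts how often pairs of gray levels occur at a fixed spatial offset. A minimal version for the horizontal-neighbor offset (0, 1):

```python
import numpy as np

def cooccurrence(img, levels=4):
    """Count how often gray level i appears immediately to the left of
    level j, then normalize; the result is a simple statistical texture
    descriptor in the co-occurrence family."""
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    return glcm / glcm.sum()

# Horizontal stripes: every horizontal neighbour pair has equal levels,
# so all probability mass lands on the matrix diagonal.
stripes = np.array([[0, 0, 0],
                    [1, 1, 1],
                    [2, 2, 2]])
glcm = cooccurrence(stripes)
```

In practice, scalar statistics (contrast, energy, homogeneity) are derived from the matrix over several offsets and used as the actual feature vector.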

SHAPE BASED IMAGE RETRIEVAL
Shape features of objects or regions have been used in many content-based image retrieval systems [16]. Compared with color and texture features, shape features are usually described only after images have been segmented into regions or objects. Since robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available. The state-of-the-art methods for shape description can be categorized as either boundary-based (rectilinear shapes, polygonal approximation, finite element models, and Fourier-based shape descriptors) [18] or region-based (statistical moments). A good shape representation for an object should be invariant to translation, rotation and scaling. In this section, we briefly describe some of the shape features that have been commonly used in image retrieval applications, and point the reader to the literature for a concise, comprehensive introductory overview of shape matching techniques.
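The region-based, moment-style descriptors can be illustrated with a central moment: because it is computed about the shape's centroid, it is unchanged when the shape is translated (rotation and scale invariance require further normalization, as in the Hu moments). A small sketch on binary masks:

```python
import numpy as np

def central_moment(mask, p, q):
    """Central moment mu_pq of a binary shape mask, computed about the
    centroid - hence invariant to where the shape sits in the image."""
    ys, xs = np.nonzero(mask)
    xbar, ybar = xs.mean(), ys.mean()
    return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()

# The same 2x2 square placed at two different positions
# yields an identical second-order moment mu_20.
a = np.zeros((6, 6), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((6, 6), dtype=int); b[3:5, 2:4] = 1
m_a = central_moment(a, 2, 0)
m_b = central_moment(b, 2, 0)
```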

RELEVANCE FEEDBACK (RF)
Relevance feedback is an important technique which attempts to reduce the gap between the two levels of features, high and low. Through query-by-example or sketch, the system provides initial retrieval results. The user then judges whether, and how much, each returned image is similar (positive) or dissimilar (negative) to the query image. Finally, a machine learning algorithm is applied to learn from this feedback. Using negative examples as well makes the RF technique more robust and gives more options for modifying the query. One approach treats image retrieval as a semi-supervised learning problem and proposes a Discriminant Expectation-Maximization algorithm: Gaussian mixture (GM) models of the images are built from a universal GM model in a Bayesian manner, and information is extracted from the entire database. A fast KL-divergence approximation is used as a distance measure between GMs, and an SVM classifier with an appropriate kernel function is employed in each RF round to perform the feedback task. Related methods include the naive Bayesian technique; the Greedy EM algorithm [20] avoids the strong dependence of the solution on parameter initialization by incrementally adding components to the mixture until the desired number of components has been reached. Many current systems estimate the parameters of the ideal query using only low-level image features, without using image semantic content. A system will give more accurate results if the feature vector defines the query well; otherwise, the most relevant results are not returned. GM models use the standard EM algorithm, and information is extracted from the entire database. Relevance feedback can also focus on feature selection, merging a probabilistic formulation that uses both positive and negative examples. The algorithm learns which image features matter through user interaction, yielding considerable improvement in the performance of the CBIR system.
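The simplest query-modification scheme of this kind is the classical Rocchio update from text retrieval — used here only as an illustrative stand-in for the learning methods discussed above. The query vector is pulled toward the mean of the images marked relevant and pushed away from the mean of those marked irrelevant:

```python
import numpy as np

def rocchio_update(query, positives, negatives,
                   alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style relevance feedback: move the query vector toward
    positive examples and away from negative ones. The weights alpha,
    beta, gamma are conventional defaults, not tuned values."""
    q = alpha * np.asarray(query, dtype=float)
    if len(positives):
        q = q + beta * np.mean(positives, axis=0)
    if len(negatives):
        q = q - gamma * np.mean(negatives, axis=0)
    return q

q0 = np.array([0.5, 0.5])
new_q = rocchio_update(q0,
                       positives=[np.array([1.0, 0.0])],
                       negatives=[np.array([0.0, 1.0])])
# new_q = [1.25, 0.25]: shifted toward the positive example
```

Iterating this update over several feedback rounds progressively refines what the feature vector "means" to the user, which is exactly the gap-narrowing role RF plays in a CBIR system.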

IMAGE SEGMENTATION
Automatic image segmentation methods have been developed, such as curve evolution, graph partitioning and energy diffusion. For images containing only similar color regions, methods that cluster directly in color space perform better, but such retrieval systems work with colors only. With integrated region matching, one region of an image can be matched several times to regions of another image; the overall similarity of two images, with the heterogeneous color and texture ranges of natural images, is defined on a significance matrix in a manner similar to the EMD. Texture is a main factor in defining high-level concepts. Estimating the parameters of a texture model is a problematic task, yet it is required by the majority of texture segmentation algorithms. These drawbacks are overcome by JSEG segmentation, in which, instead of estimating a model of the given color and texture, similarity is tested directly. First, the image colors are quantized into classes and each pixel is replaced by the label of its color class, producing a class map. Spatial segmentation is then performed on this class map. As a result, regions with similar color and texture are obtained, which benefits various systems [12]. Blobworld segmentation [13] is a widely implemented segmentation algorithm in which pixels are grouped in a joint color-texture-position feature space. First, a joint model of the color, texture and position features is constructed as a mixture of Gaussian distributions. Second, the parameters of the model are estimated by the expectation-maximization algorithm. The resulting clusters of pixels provide the segmentation of the image. For object segmentation from images with a connectivity constraint, an extension of the k-means algorithm can be used, in which a new centroid is defined for each participating region.
Field Programmable Gate Arrays (FPGAs) can also be applied to the retrieval problem [14], as they provide dedicated functional blocks to perform complex image processing operations. A novel image segmentation algorithm, Fuzzy Edge Detection and Segmentation (FEDS), has been built into an FPGA [15].
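The clustering step underlying several of the methods above (Blobworld's EM fitting, the constrained k-means variant) reduces, in its plainest form, to k-means on pixel colors. A self-contained sketch, without the connectivity constraint or texture/position features of the full algorithms:

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=10, seed=0):
    """Plain k-means on pixel colours: assign each pixel to the nearest
    of k colour centroids, then recompute centroids; repeating this
    yields a crude colour-based segmentation (no spatial constraint)."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two well-separated colour clusters: dark pixels and bright pixels.
pixels = np.array([[10, 10, 10], [12, 11, 10],
                   [250, 250, 250], [248, 251, 249]])
labels = kmeans_colors(pixels)   # dark pair and bright pair get distinct labels
```

Blobworld replaces the hard assignments with a Gaussian mixture fitted by EM, and adds texture and position to the feature vector, but the grouping intuition is the same.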

WAVELET BASED CBIR
The discrete wavelet transform (DWT) is used to transform an image from the spatial domain into the frequency domain. The wavelet transform represents a function as a superposition of a family of basis functions called wavelets. Wavelet transforms extract information from a signal at different scales by passing the signal through low-pass and high-pass filters. Wavelets provide multi-resolution capability and good energy compaction. They are robust with respect to color intensity shifts and can capture both texture and shape information efficiently. Wavelet transforms can be computed in linear time, thus allowing for very fast algorithms [28]. The DWT decomposes a signal into a set of basis and wavelet functions [18]. The wavelet transform of a two-dimensional image is likewise computed with a multi-resolution approach, which applies recursive filtering and sub-sampling. At each level (scale), the image is decomposed into four frequency sub-bands, LL, LH, HL, and HH, where L denotes low frequency and H denotes high frequency.
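One level of this decomposition can be sketched with the Haar wavelet, the simplest member of the family: pairwise averages act as the low-pass filter and pairwise differences as the high-pass filter, applied first along rows and then along columns (the LH/HL labeling convention varies between texts):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: average/difference
    along rows, then along columns, yielding the four sub-bands."""
    img = img.astype(float)
    # Row pass: low = pairwise average, high = pairwise difference.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2
    hi = (img[:, 0::2] - img[:, 1::2]) / 2
    # Column pass on both row-pass outputs.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2   # coarse approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / 2   # horizontal detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2   # vertical detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2   # diagonal detail
    return LL, LH, HL, HH

# A constant image has all of its energy in the LL band.
img = np.full((4, 4), 8.0)
LL, LH, HL, HH = haar_dwt2(img)
```

Recursing on the LL band produces the multi-resolution pyramid described above; texture features for retrieval are then typically statistics (e.g., energy) of the detail sub-bands at each level.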

CONCLUSIONS
The purpose of this survey is to provide an overview of the functionality of content-based image retrieval systems. This paper has surveyed the essential concepts of CBIR systems and attempts to introduce both the theory and the practical applications of CBIR techniques. Using a hybrid feature vector that combines the color, texture and shape of regions to match images can give better results. Most systems use color and texture features, fewer systems use shape features, and still fewer use layout features. Most systems are products of research and therefore emphasize one aspect of content-based retrieval; sometimes this is the sketching capability in the user interface, sometimes it is a new indexing technique, etc. Efficient multidimensional indexing techniques are required to make retrieval systems fast and scalable. The development of fast, inexpensive and powerful processors coupled with fast memory devices has contributed a lot to this field, and an immense range of future applications of CBIR is guaranteed by this development. Such progress also provides strong support for bridging the 'semantic gap' between low-level features and the perceptual knowledge present in images, with all the richness of human semantics.