Open Access

Local quantized extrema patterns for content-based natural and texture image retrieval

Human-centric Computing and Information Sciences 2015, 5:26

https://doi.org/10.1186/s13673-015-0044-z

Received: 13 December 2014

Accepted: 11 August 2015

Published: 2 September 2015

Abstract

This paper proposes a novel feature descriptor, named local quantized extrema patterns (LQEP), for content-based image retrieval. The standard local quantized patterns (LQP) collect the directional relationship between the center pixel and its surrounding neighbors, and the directional local extrema patterns (DLEP) collect the directional information based on local extrema in the 0°, 45°, 90°, and 135° directions for a given center pixel in an image. In this paper, the concepts of LQP and DLEP are integrated to propose the LQEP for image retrieval applications. First, the directional quantized information is collected from the given image. Then, the directional extrema are collected from the quantized information. Finally, the RGB color histogram is integrated with the LQEP for feature vector generation. The performance of the proposed method is tested by conducting three experiments on the Corel-1K, Corel-5K and MIT VisTex databases for natural and texture image retrieval. The performance is evaluated in terms of precision, recall, average retrieval precision and average retrieval rate on these benchmark databases. The results show a considerable improvement in terms of these evaluation measures as compared to the existing methods on the respective databases.

Keywords

Local quantized patterns (LQP) · Directional local extrema patterns (DLEP) · Texture · Image retrieval · Pattern recognition · Databases

Background

Data mining is an active area of research for mining or retrieving data/information from a large database or library. Image retrieval is the part of data mining in which visual information (images) is retrieved from a large database or library. Earlier, text-based retrieval was used for retrieving the information. In this process, the images were annotated with text, and text-based database management systems were then used to perform image retrieval. Many advances, such as data modelling, multidimensional indexing, and query evaluation, have been made along this research direction. However, there exist two major difficulties, especially when the size of image collections is large (tens or hundreds of thousands). One is the vast amount of labor required to annotate the images manually. The other difficulty, which is more essential, arises from the rich content in the images and the subjectivity of human perception; that is, different people may perceive the same image content differently. To address these issues, content-based image retrieval (CBIR) came into existence. CBIR utilizes visual content such as color, texture, and shape for indexing and retrieving images from the database. A comprehensive and extensive literature on feature extraction for CBIR is available in [1–5].

Color not only adds beauty to images/video but also provides more information about the scene. This information is used as a feature for retrieving images. Various color-based image search schemes have been proposed; some of these are discussed in this section. Swain and Ballard [6] introduced the color histogram feature and the histogram intersection distance metric to measure the distance between the histograms of images. The global color histogram is extensively utilized for retrieval; it gives the probability of occurrence of each unique color in the image on a global level. It is a fast method and is invariant to translation, rotation and scale, but it lacks spatial information, which yields false retrieval results. Stricker et al. [7] proposed two new color indexing schemes. In the first approach they used the cumulative color histogram. In the second, instead of storing the complete color distributions, the first three moments of each color channel of the image are used. Idris and Panchanathan [8] used vector quantization to compress the image and, from the codewords of each image, obtained a histogram of codewords that was used as the feature vector. In the same manner, Lu et al. [9] proposed a feature for color image retrieval by combining the discrete cosine transform (DCT) with vector quantization. A well-known image compression method, block truncation coding (BTC), is used in [10] for extracting two features: the block color co-occurrence matrix (BCCM) and the block pattern histogram (BPH); image descriptors are again generated with the help of vector quantization. Since the global histogram lacks spatial information, Huang et al. [11] proposed the color correlogram, which includes the local spatial distribution of color information for image retrieval. Pass and Zabih [12] proposed color coherence vectors (CCV), a histogram-based approach that incorporates some spatial information. Rao et al. [13] modified the color histogram to capture spatial information and, for this purpose, proposed three spatial color histograms: annular, angular and hybrid color histograms. Chang et al. [14] proposed a method which accounts for changes in color due to changes of illumination, the orientation of the surface, and the viewing geometry of the camera, with a shorter feature vector than the color correlogram.

Texture is another important feature for CBIR. Smith et al. used the mean and variance of the wavelet coefficients as texture features for CBIR [15]. Moghaddam et al. proposed the Gabor wavelet correlogram (GWC) for CBIR [16]. Ahmadian et al. used the wavelet transform for texture classification [17]. Subrahmanyam et al. proposed the correlogram algorithm for image retrieval using wavelets and rotated wavelets (WC + RWC) [18]. Ojala et al. proposed the local binary pattern (LBP) features for texture description [19], and these LBPs were converted to rotational-invariant LBPs for texture classification [20]. Pietikainen et al. proposed rotational-invariant texture classification using feature distributions [21]. Ahonen et al. [22] and Zhao et al. [23] used the LBP operator for facial expression analysis and recognition. Heikkila et al. proposed background modeling and detection using LBP [24]. Huang et al. proposed the extended LBP for shape localization [25]. Heikkila et al. used the LBP for interest region description [26]. Li et al. used the combination of Gabor filters and LBP for texture segmentation [27]. Zhang et al. proposed the local derivative pattern for face recognition [28]. They considered LBP as a nondirectional first-order local pattern, i.e. the binary result of the first-order derivative in images. A block-based texture feature which uses the LBP texture feature as the source of image description is proposed in [29] for CBIR. The center-symmetric local binary pattern (CS-LBP), a modified version of the well-known LBP feature, is combined with the scale invariant feature transform (SIFT) in [30] to describe regions of interest. Yao et al. [31] proposed two types of local edge pattern (LEP) histograms: LEPSEG for image segmentation and LEPINV for image retrieval. LEPSEG is sensitive to variations in rotation and scale; on the contrary, LEPINV is resistant to such variations. The local ternary pattern (LTP) [32] has been introduced for face recognition under different lighting conditions. Subrahmanyam et al. have proposed various pattern-based features such as local maximum edge binary patterns (LMEBP) [33], local tetra patterns (LTrP) [34] and directional local extrema patterns (DLEP) [35] for natural/texture image retrieval, and directional binary wavelet patterns (DBWP) [36], local mesh patterns (LMeP) [37] and local ternary co-occurrence patterns (LTCoP) [38] for biomedical image retrieval. Reddy et al. [39] extended the DLEP features by adding the magnitude information of the local gray values of an image. Hussain and Triggs [40] proposed the local quantized patterns (LQP) for visual recognition.

Recently, the integration of color and texture features has been proposed for image retrieval. Jhanwar et al. [41] proposed the motif co-occurrence matrix (MCM) for content-based image retrieval. They also proposed the color MCM, calculated by applying the MCM to the individual red (R), green (G), and blue (B) color planes. Lin et al. [42] combined a color feature, the k-means color histogram (CHKM), with texture features, the motif co-occurrence matrix (MCM) and the difference between pixels of a scan pattern (DBPSP). Vadivel et al. [43] proposed the integrated color and intensity co-occurrence matrix (ICICM) for image retrieval. They first analyzed the properties of the HSV color space and then suggested suitable weight functions for estimating the relative contributions of the color and gray levels of an image pixel. Vipparthi et al. [44] proposed the local quinary patterns for image retrieval.

The concepts of LQP [40] and DLEP [35] have motivated us to propose the local quantized extrema patterns (LQEP) for image retrieval. The main contributions of this work are summarized as follows. (a) The proposed method collects the directional quantized extrema information from the query/database image by integrating the concepts of LQP and DLEP. (b) To improve the performance of the CBIR system, the LQEP operator is combined with the RGB color histogram. (c) The performance of the proposed method is tested on benchmark databases for natural and texture image retrieval.

The paper is organized as follows: “Background” gives a brief review of image retrieval and related work. “Review of existing local patterns” presents a concise review of existing local pattern operators. The proposed descriptor is presented in “Local quantized extrema patterns (LQEP)”, and the system framework and similarity distance measures are described in “Proposed image retrieval system” and “Query matching”. Experimental results and discussion are given in “Experimental results and discussion”. The conclusions and future scope are given in “Conclusions”.

Review of existing local patterns

Local binary patterns (LBP)

The LBP operator was introduced by Ojala et al. [19] for texture classification. Success in terms of speed (no need to tune any parameters) and performance has been reported in many research areas, such as texture classification [18–21], face recognition [22, 23], object tracking [33], image retrieval [33–39] and fingerprint recognition. Given a center pixel in a 3 × 3 pattern, the LBP value is calculated by comparing its gray-scale value with those of its neighboring pixels, based on Eqs. (1) and (2):
$$ LBP_{P,R} = \sum_{p = 1}^{P} 2^{(p - 1)} \times f_{1}\bigl(I(g_{p}) - I(g_{c})\bigr) $$
(1)
$$ f_{1}(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{else} \end{cases} $$
(2)
where \( I(g_{c} ) \) denotes the gray value of the center pixel, \( I(g_{p} ) \) represents the gray value of its neighbors, \( P \) stands for the number of neighbors and \( R \), the radius of the neighborhood.
After computing the LBP pattern for each pixel (j, k), the whole image is represented by building a histogram as shown in Eq. (3).
$$ H_{LBP}(l) = \sum_{j = 1}^{N_{1}} \sum_{k = 1}^{N_{2}} f_{2}\bigl(LBP(j,k), l\bigr); \quad l \in [0, 2^{P} - 1] $$
(3)
$$ f_{2}(x,y) = \begin{cases} 1, & x = y \\ 0, & \text{else} \end{cases} $$
(4)
where the size of input image is \( N_{1} \times N_{2} \).
Figure 1 shows an example of obtaining an LBP from a given 3 × 3 pattern. The histograms of these patterns contain the information on the distribution of edges in an image.
Fig. 1

Calculation of LBP
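
For concreteness, the following is a minimal NumPy sketch of Eqs. (1)–(3) for the P = 8, R = 1 case. The neighbor enumeration is an assumption: any fixed ordering of the eight neighbors yields a valid LBP code, as long as it is used consistently.

```python
import numpy as np

def lbp_3x3(img):
    # LBP of Eqs. (1)-(2) for P = 8, R = 1; img is a 2-D gray-scale array.
    img = img.astype(np.int32)
    rows, cols = img.shape
    center = img[1:-1, 1:-1]
    # One fixed (assumed) enumeration of the eight neighbors g_1..g_8.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    code = np.zeros_like(center)
    for p, (dr, dc) in enumerate(offsets):
        neighbor = img[1 + dr:rows - 1 + dr, 1 + dc:cols - 1 + dc]
        code |= (neighbor >= center).astype(np.int32) << p  # f_1, one bit per neighbor
    return code

def pattern_histogram(pattern_map, bins):
    # Histogram of Eq. (3): occurrence count of each pattern value l in [0, bins-1].
    return np.bincount(pattern_map.ravel(), minlength=bins)

# Usage: H_LBP = pattern_histogram(lbp_3x3(gray_image), 2 ** 8)
```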

Block based local binary patterns (BLK_LBP)

Takala et al. [29] proposed the block-based LBP for CBIR. The block division method is a simple approach that relies on subimages to address the spatial properties of images. It can be used together with any histogram descriptor similar to LBP. The method works in the following way: first, it divides the model images into square blocks that are arbitrary in size and overlap. Then it calculates the LBP distributions for each of the blocks and combines the histograms into a single vector of sub-histograms representing the image.

Center-symmetric local binary patterns (CS_LBP)

Instead of comparing each pixel with the center pixel, Heikkila et al. [30] have compared center-symmetric pairs of pixels for CS_LBP as shown in Eq. (5):
$$ CS\_LBP_{P,R} = \sum_{p = 1}^{P/2} 2^{(p - 1)} \times f_{1}\bigl(I(g_{p}) - I(g_{p + (P/2)})\bigr) $$
(5)

After computing the CS_LBP pattern for each pixel (j, k), the whole image is represented by building a histogram, similar to LBP.
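
A corresponding sketch of Eq. (5), reusing pattern_histogram from the previous subsection; with P = 8, only the four center-symmetric pairs are compared, so the code takes one of 2^4 = 16 values.

```python
import numpy as np

def cs_lbp_3x3(img):
    # CS_LBP of Eq. (5) for P = 8, R = 1: compare neighbor g_p with g_{p+4}.
    img = img.astype(np.int32)
    rows, cols = img.shape
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    code = np.zeros((rows - 2, cols - 2), dtype=np.int32)
    for p in range(4):  # the P/2 center-symmetric pairs
        dr1, dc1 = offsets[p]
        dr2, dc2 = offsets[p + 4]
        a = img[1 + dr1:rows - 1 + dr1, 1 + dc1:cols - 1 + dc1]
        b = img[1 + dr2:rows - 1 + dr2, 1 + dc2:cols - 1 + dc2]
        code |= (a >= b).astype(np.int32) << p
    return code  # histogram it with 16 bins, as for LBP
```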

Directional local extrema patterns (DLEP)

Subrahmanyam et al. [35] proposed directional local extrema patterns (DLEP) for CBIR. DLEP describes the spatial structure of the local texture based on the local extrema of the center gray pixel \( g_{c} \).

In DLEP, for a given image the local extrema in the 0°, 45°, 90°, and 135° directions are obtained by computing the local differences between the center pixel and its neighbors, as shown below:
$$ I'(g_{i}) = I(g_{c}) - I(g_{i}); \quad i = 1, 2, \ldots, 8 $$
(6)
The local extrema are obtained by Eq. (7).
$$ \hat{I}_{\alpha}(g_{c}) = f_{3}\bigl(I'(g_{j}), I'(g_{j + 4})\bigr); \quad j = (1 + \alpha/45), \; \forall\, \alpha = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ} $$
(7)
$$ f_{3}\bigl(I'(g_{j}), I'(g_{j + 4})\bigr) = \begin{cases} 1, & I'(g_{j}) \times I'(g_{j + 4}) \ge 0 \\ 0, & \text{else} \end{cases} $$
(8)
The DLEP is defined for α = 0°, 45°, 90°, and 135° as follows:
$$ \left. DLEP(I(g_{c})) \right|_{\alpha} = \bigl\{ \hat{I}_{\alpha}(g_{c}); \hat{I}_{\alpha}(g_{1}); \hat{I}_{\alpha}(g_{2}); \ldots; \hat{I}_{\alpha}(g_{8}) \bigr\} $$
(9)
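
As an illustration of Eqs. (6)–(8), the sketch below computes the center-pixel extremum bit \( \hat{I}_{\alpha}(g_{c}) \) for one direction; the full DLEP of Eq. (9) additionally concatenates the same bit evaluated at the eight neighbors.

```python
import numpy as np

def dlep_center_bit(img, alpha):
    # Extremum bit of Eqs. (6)-(8) for direction alpha in {0, 45, 90, 135} degrees.
    img = img.astype(np.int32)
    rows, cols = img.shape
    center = img[1:-1, 1:-1]
    # Neighbors g_1..g_8 enumerated counter-clockwise from 0 degrees, so that
    # (g_j, g_{j+4}) are the two opposite neighbors along direction alpha.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

    def local_diff(k):  # I'(g_k) = I(g_c) - I(g_k), Eq. (6)
        dr, dc = offsets[k]
        return center - img[1 + dr:rows - 1 + dr, 1 + dc:cols - 1 + dc]

    j = alpha // 45  # 0-based index of g_j, with j = 1 + alpha/45 in Eq. (7)
    return (local_diff(j) * local_diff(j + 4) >= 0).astype(np.int32)  # f_3, Eq. (8)
```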

Local quantized patterns (LQP)

Hussain and Triggs [40] proposed the LQP operator for visual recognition. The LQP collects directional geometric features in horizontal (H), vertical (V), diagonal (D) and antidiagonal (A) strips of pixels; combinations of these, such as horizontal-vertical (HV), diagonal-antidiagonal (DA) and horizontal-vertical-diagonal-antidiagonal (HVDA); and traditional circular and disk-shaped regions. Figure 2 illustrates the possible directional quantized geometric structures for the LQP operator. More details about LQP are available in [40].
Fig. 2

The possible directional LQP structures

Local quantized extrema patterns (LQEP)

The operators DLEP [35] and LQP [40] have motivated us to propose the LQEP for image retrieval. The LQEP integrates the concepts of LQP and DLEP. First, the possible geometric structures are extracted from the given pattern. Then, the extrema operation is performed on the directional geometric structures. Figure 3 illustrates the calculation of LQEP for a given 7 × 7 pattern. For ease of understanding, the 7 × 7 pattern in Fig. 3 is indexed with pixel positions, arranged so that the four directional extrema calculations can be written down directly. In this paper, the HVDA7 geometric structure is used for feature extraction. A brief description of the LQEP feature extraction follows.
Fig. 3

The LQEP calculation for a given 7 × 7 pattern using HVDA7 geometric structure

For a given center pixel \( g_{c} \) in an image I, the HVDA7 geometric structure is collected as shown in Fig. 3. Then the four directional extrema (DE) in the 0°, 45°, 90°, and 135° directions are obtained as follows.
$$ \left. DE(I(g_{c})) \right|_{0^{\circ}} = \bigl\{ f_{4}(I(g_{1}), I(g_{4}), I(g_{c})); f_{4}(I(g_{2}), I(g_{5}), I(g_{c})); f_{4}(I(g_{3}), I(g_{6}), I(g_{c})) \bigr\} $$
(10)
$$ \left. DE(I(g_{c})) \right|_{45^{\circ}} = \bigl\{ f_{4}(I(g_{13}), I(g_{16}), I(g_{c})); f_{4}(I(g_{14}), I(g_{17}), I(g_{c})); f_{4}(I(g_{15}), I(g_{18}), I(g_{c})) \bigr\} $$
(11)
$$ \left. DE(I(g_{c})) \right|_{90^{\circ}} = \bigl\{ f_{4}(I(g_{7}), I(g_{10}), I(g_{c})); f_{4}(I(g_{8}), I(g_{11}), I(g_{c})); f_{4}(I(g_{9}), I(g_{12}), I(g_{c})) \bigr\} $$
(12)
$$ \left. DE(I(g_{c})) \right|_{135^{\circ}} = \bigl\{ f_{4}(I(g_{19}), I(g_{22}), I(g_{c})); f_{4}(I(g_{20}), I(g_{23}), I(g_{c})); f_{4}(I(g_{21}), I(g_{24}), I(g_{c})) \bigr\} $$
(13)
where
$$ f_{4}(x, y, c) = \begin{cases} 1, & (x > c) \text{ and } (y > c) \\ 1, & (x < c) \text{ and } (y < c) \\ 0, & \text{else} \end{cases} $$
(14)
The LQEP is defined by Eqs. (10)–(13) as follows.
$$ LQEP = \left[ {\left. {DE(I(g_{c} ))} \right|_{0^\circ } ,\left. {DE(I(g_{c} ))} \right|_{45^\circ } ,\left. {DE(I(g_{c} ))} \right|_{90^\circ } ,\left. {DE(I(g_{c} ))} \right|_{135^\circ } } \right] $$
(15)
Eventually, the given image is converted to an LQEP map with values ranging from 0 to 4095. After the LQEP calculation, the whole image is represented by building a histogram as shown in Eq. (16).
$$ H_{LQEP}(l) = \sum_{j = 1}^{N_{1}} \sum_{k = 1}^{N_{2}} f_{2}\bigl(LQEP(j,k), l\bigr); \quad l \in [0, 4095] $$
(16)
where LQEP(j, k) represents the LQEP map value at pixel (j, k), ranging from 0 to 4095.
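
The sketch below is one possible reading of Eqs. (10)–(16). It assumes that the HVDA7 structure of Fig. 3 pairs the pixels at distances 1, 2 and 3 on opposite sides of the center along each of the four strips, giving the three f_4 terms per direction and hence the 12-bit code.

```python
import numpy as np

def lqep_map(img):
    # 12-bit LQEP of Eqs. (10)-(15) over an HVDA7 (7x7) structure (assumed layout).
    img = img.astype(np.int32)
    rows, cols = img.shape
    c = img[3:-3, 3:-3]  # centers that have a full 7x7 neighborhood
    steps = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    code = np.zeros_like(c)
    bit = 0
    for alpha in (0, 45, 90, 135):
        dr, dc = steps[alpha]
        for k in (1, 2, 3):  # center-symmetric pair at distance k along the strip
            x = img[3 + k * dr:rows - 3 + k * dr, 3 + k * dc:cols - 3 + k * dc]
            y = img[3 - k * dr:rows - 3 - k * dr, 3 - k * dc:cols - 3 - k * dc]
            # f_4 of Eq. (14): 1 when the center is a directional extremum.
            extremum = ((x > c) & (y > c)) | ((x < c) & (y < c))
            code |= extremum.astype(np.int32) << bit
            bit += 1
    return code  # values in [0, 4095]

def lqep_histogram(img):
    # Histogram of Eq. (16) over the 4096 possible LQEP values.
    return np.bincount(lqep_map(img).ravel(), minlength=4096)
```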

Proposed image retrieval system

In this paper, we integrate the concepts of DLEP and LQP for image retrieval. First, the image is loaded and converted into gray scale if it is RGB. Second, the four-directional HVDA7 structure is collected using the LQP geometric structures. Then, the four directional extrema in the 0°, 45°, 90°, and 135° directions are collected. Finally, the LQEP feature is generated by constructing the histograms. Further, to improve the performance of the proposed method, we integrate the LQEP with the RGB color histogram for image retrieval.

Figure 4 depicts the flowchart of the proposed technique, and the algorithm is given as follows; a code sketch follows the list.
Fig. 4

Proposed image retrieval system framework

Algorithm:

Input: Image; Output: Retrieval result
1. Load the image and convert it into gray scale (if it is RGB).
2. Collect the HVDA7 structure for a given center pixel.
3. Calculate the local extrema in the 0°, 45°, 90°, and 135° directions.
4. Compute the 12-bit LQEP from the four directional extrema.
5. Construct the histogram for the 12-bit LQEP.
6. Construct the RGB histogram from the RGB image.
7. Construct the feature vector by concatenating the RGB and LQEP histograms.
8. Compare the query image with the images in the database using Eq. (20).
9. Retrieve the images based on the best matches.
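
A minimal sketch of steps 1 and 5–7 of the algorithm, building on lqep_histogram from the previous section. The gray-scale conversion weights, the 64-bin per-channel RGB quantization and the histogram normalization are assumptions; the paper only states that an RGB color histogram is concatenated with the LQEP histogram.

```python
import numpy as np

def feature_vector(rgb_img, bins=64):
    # Step 1: gray-scale conversion (assumed luminance weights).
    gray = (0.299 * rgb_img[..., 0] + 0.587 * rgb_img[..., 1]
            + 0.114 * rgb_img[..., 2]).astype(np.int32)
    # Step 6: per-channel RGB histogram (assumed 64 bins per channel).
    rgb = rgb_img.astype(np.int32)
    color = np.concatenate([np.bincount(rgb[..., ch].ravel() * bins // 256,
                                        minlength=bins) for ch in range(3)])
    # Steps 5 and 7: concatenate the RGB and 4096-bin LQEP histograms.
    feat = np.concatenate([color, lqep_histogram(gray)]).astype(np.float64)
    return feat / feat.sum()  # normalized so images of different sizes compare
```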

     

Query matching

After feature extraction, the query image Q is represented by the feature vector \( f_{Q} = (f_{Q_{1}}, f_{Q_{2}}, \ldots, f_{Q_{Lg}}) \). Similarly, each image in the database is represented by a feature vector \( f_{DB_{j}} = (f_{DB_{j1}}, f_{DB_{j2}}, \ldots, f_{DB_{jLg}}); \; j = 1, 2, \ldots, |DB| \). The goal is to select the n images that best resemble the query image. This involves selecting the n top-matched images by measuring the distance between the query image and each image in the database \( DB \). To match the images, we use four different similarity distance measures as follows.
$$ \text{Manhattan distance measure:} \quad D(Q, I_{1}) = \sum_{i = 1}^{Lg} \left| f_{DB_{ji}} - f_{Q,i} \right| $$
(17)
$$ \text{Euclidean distance measure:} \quad D(Q, I_{1}) = \left( \sum_{i = 1}^{Lg} (f_{DB_{ji}} - f_{Q,i})^{2} \right)^{1/2} $$
(18)
$$ \text{Canberra distance measure:} \quad D(Q, I_{1}) = \sum_{i = 1}^{Lg} \frac{\left| f_{DB_{ji}} - f_{Q,i} \right|}{\left| f_{DB_{ji}} \right| + \left| f_{Q,i} \right|} $$
(19)
$$ d_{1} \text{ distance measure:} \quad D(Q, I_{1}) = \sum_{i = 1}^{Lg} \left| \frac{f_{DB_{ji}} - f_{Q,i}}{1 + f_{DB_{ji}} + f_{Q,i}} \right| $$
(20)
where Lg represents the feature vector length, \( I_{1} \) is a database image, and \( f_{DB_{ji}} \) is the ith feature of the jth image in the database \( DB \).
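
As an example of the query-matching step, the following sketch implements the d1 measure of Eq. (20), which the experiments below find to perform best, and returns the indices of the n top matches.

```python
import numpy as np

def d1_distance(f_q, f_db):
    # d1 measure of Eq. (20) between the query and one database feature vector.
    return np.sum(np.abs((f_db - f_q) / (1.0 + f_db + f_q)))

def top_n_matches(f_q, db_features, n=20):
    # Rank the |DB| database images by distance and keep the n best matches.
    distances = np.array([d1_distance(f_q, f) for f in db_features])
    return np.argsort(distances)[:n]
```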

Experimental results and discussion

The performance of the proposed method is analyzed by conducting three experiments on benchmark databases: the Corel-1K [45], Corel-5K [45] and MIT VisTex [46] databases.

In all experiments, each image in the database is used as the query image. For each query, the system collects n database images \( X = (x_{1}, x_{2}, \ldots, x_{n}) \) with the shortest image-matching distance, computed using Eq. (20). If a retrieved image \( x_{i}, \; i = 1, 2, \ldots, n \), belongs to the same category as the query image, then the system has correctly identified the expected image; otherwise, it has failed to find the expected image.

The performance of the proposed method is measured in terms of average precision/average retrieval precision (ARP) and average recall/average retrieval rate (ARR), as shown below:

For the query image \( I_{q} \), the precision is defined as follows:
$$ \text{Precision:} \quad P(I_{q}) = \frac{\text{Number of relevant images retrieved}}{\text{Total number of images retrieved}} $$
(21)
$$ \text{Average retrieval precision:} \quad ARP = \frac{1}{|DB|} \sum_{i = 1}^{|DB|} P(I_{i}) $$
(22)
$$ \text{Recall:} \quad R(I_{q}) = \frac{\text{Number of relevant images retrieved}}{\text{Total number of relevant images in the database}} $$
(23)
$$ \text{Average retrieval rate:} \quad ARR = \frac{1}{|DB|} \sum_{i = 1}^{|DB|} R(I_{i}) $$
(24)
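
The evaluation loop of Eqs. (21)–(24) can be sketched as follows, reusing top_n_matches from “Query matching”: every database image serves once as the query, and the per-query precision and recall are averaged over the whole database.

```python
import numpy as np

def arp_arr(features, labels, n=20):
    # ARP and ARR of Eqs. (22) and (24): mean precision/recall over all queries.
    precisions, recalls = [], []
    for q in range(len(features)):
        retrieved = top_n_matches(features[q], features, n)
        hits = sum(1 for i in retrieved if labels[i] == labels[q])  # relevant retrieved
        n_relevant = sum(1 for lab in labels if lab == labels[q])   # category size
        precisions.append(hits / n)          # Eq. (21)
        recalls.append(hits / n_relevant)    # Eq. (23)
    return np.mean(precisions), np.mean(recalls)
```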

Experiment #1

In this experiment, the Corel-1K database [45] is used. This database consists of a large number of images of various contents, ranging from animals to outdoor sports to natural images. These images have been pre-classified by domain professionals into different categories of 100 images each. Some researchers consider that the Corel database meets all the requirements for evaluating an image retrieval system, due to its large size and heterogeneous content. For experimentation we selected 1000 images collected from 10 different domains, with 100 images per domain. The performance of the proposed method is measured in terms of ARP and ARR, as shown in Eqs. (21)–(24). Figure 5 illustrates the sample images of the Corel-1K database.
Fig. 5

Sample images of Corel-1K database

Table 1 and Fig. 6 illustrate the retrieval results of the proposed method and other existing methods in terms of ARP on the Corel-1K database. Table 2 and Fig. 7 illustrate the retrieval results in terms of ARR on the Corel-1K database. From Tables 1 and 2 and Figs. 6 and 7, it is clear that the proposed method shows a significant improvement over the other existing methods in terms of precision, ARP, recall and ARR on the Corel-1K database. Figure 8a, b illustrate the analysis of the proposed method (LQEP) with various similarity distance measures on the Corel-1K database in terms of ARP and ARR, respectively. From Fig. 8, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARP and ARR on the Corel-1K database. Figure 9 illustrates the query results of the proposed method on the Corel-1K database.
Table 1

The retrieval results of the proposed method and various other existing methods in terms of ARP (%) at n = 20 on the Corel-1K database

| Category  | Jhanwar et al. | Lin et al. | CC    | Vadivel et al. | Subrahmanyam et al. | LBP   | LTP   | LDP   | LTrP  | Reddy et al. | LQP   | Proposed method |
|-----------|----------------|------------|-------|----------------|---------------------|-------|-------|-------|-------|--------------|-------|-----------------|
| Africans  | 53.15          | 68.3       | 80.40 | 78.25          | 69.75               | 52.46 | 57.20 | 55.35 | 60.90 | 61.30        | 61.95 | 75.15           |
| Beaches   | 43.85          | 54         | 41.25 | 44.25          | 54.25               | 51.33 | 43.60 | 52.05 | 53.90 | 51.25        | 53.9  | 57.65           |
| Buildings | 48.7           | 56.2       | 55.65 | 59.10          | 63.95               | 55.66 | 63.35 | 62.25 | 63.40 | 57.85        | 61.95 | 74.7            |
| Buses     | 82.8           | 88.8       | 76.70 | 86.05          | 89.65               | 96.33 | 95.50 | 95.8  | 96.55 | 94.40        | 97.4  | 94.3            |
| Dinosaurs | 95             | 99.3       | 99.00 | 98.70          | 98.70               | 95.23 | 96.8  | 94.5  | 98.00 | 97.85        | 98.75 | 98.95           |
| Elephants | 34.85          | 65.8       | 56.20 | 59.00          | 48.80               | 42.03 | 46.05 | 43.35 | 46.10 | 48.90        | 50    | 56.55           |
| Flowers   | 88.35          | 89.1       | 92.90 | 85.35          | 92.30               | 85.63 | 91.40 | 85.20 | 86.60 | 89.10        | 91.85 | 95.45           |
| Horses    | 59.35          | 80.3       | 76.50 | 74.95          | 89.45               | 65.30 | 64.75 | 69.40 | 72.15 | 66.20        | 77.3  | 86.65           |
| Mountains | 30.8           | 52.2       | 33.70 | 36.55          | 47.30               | 35.93 | 34.55 | 33.55 | 36.10 | 39.40        | 43.75 | 45.9            |
| Food      | 50.4           | 73.3       | 70.60 | 64.40          | 70.90               | 70.36 | 70.65 | 76.15 | 75.05 | 75.35        | 78.65 | 82              |
| Total     | 58.72          | 72.7       | 68.29 | 68.66          | 72.5                | 65.03 | 66.38 | 66.76 | 68.87 | 68.16        | 71.55 | 76.73           |

Fig. 6

Comparison of proposed method with other existing methods in terms of ARP on Corel-1K database

Table 2

The retrieval results of the proposed method and various other existing methods in terms of ARR (%) at n = 20 on the Corel-1K database

| Category  | Jhanwar et al. | Lin et al. | CC    | Vadivel et al. | Subrahmanyam et al. | LBP   | LTP   | LDP   | LTrP  | Reddy et al. | LQP   | Proposed method |
|-----------|----------------|------------|-------|----------------|---------------------|-------|-------|-------|-------|--------------|-------|-----------------|
| Africans  | 32.21          | 42.1       | 46.29 | 48.41          | 43.58               | 38.1  | 32.9  | 38.1  | 38.6  | 39.25        | 41.23 | 44.38           |
| Beaches   | 29.04          | 32.1       | 25.29 | 25.85          | 35.77               | 35.4  | 29.4  | 36.2  | 38.3  | 33.82        | 34.68 | 38.32           |
| Buildings | 27.7           | 36.5       | 35.01 | 37.05          | 34.89               | 33.7  | 35    | 36.5  | 34.9  | 31.96        | 33.55 | 44.84           |
| Buses     | 48.66          | 61.7       | 60.97 | 66.52          | 63.39               | 70.5  | 69.9  | 74.2  | 73.4  | 73.57        | 77.85 | 73              |
| Dinosaurs | 81.44          | 94.1       | 89.59 | 78.11          | 92.78               | 75.1  | 87.5  | 77.2  | 83.7  | 90.28        | 92.16 | 92.55           |
| Elephants | 21.42          | 33.1       | 34.14 | 35.66          | 30.31               | 25.4  | 27.8  | 28.5  | 29.5  | 30.53        | 30.91 | 32.78           |
| Flowers   | 63.53          | 75         | 77.69 | 57.73          | 64.59               | 65.6  | 71.3  | 62.2  | 65.8  | 69.32        | 76.5  | 79.99           |
| Horses    | 35.84          | 47.6       | 36.13 | 41.47          | 66.55               | 42.2  | 40.4  | 44.3  | 43.1  | 36.16        | 47.8  | 50.39           |
| Mountains | 21.75          | 27.7       | 21.02 | 24.37          | 32.09               | 26.9  | 23.6  | 24.6  | 27.5  | 29.35        | 29.55 | 32.32           |
| Food      | 29.02          | 49         | 39.27 | 38.24          | 45.12               | 37.2  | 40.5  | 47.9  | 52.2  | 45.3         | 52.25 | 54.54           |
| Total     | 39.06          | 49.89      | 46.54 | 45.34          | 50.91               | 44.9  | 45.8  | 46.9  | 48.7  | 47.95        | 51.64 | 54.31           |

Fig. 7

Comparison of proposed method with other existing methods in terms of ARR on Corel-1K database

Fig. 8

Comparison of proposed method with various distance measures in terms of ARP and ARR on Corel-1K database

Fig. 9

Query results of proposed method on Corel-1K database

Experiment #2

In this experiment, the Corel-5K database [45] is used for image retrieval. The Corel-5K database consists of 5000 images collected from 50 different domains, with 100 images per domain. The performance of the proposed method is measured in terms of ARP and ARR, as shown in Eqs. (21)–(24).

Table 3 illustrates the retrieval results of the proposed method and other existing methods on the Corel-5K database in terms of precision and recall. Figure 10a, b show the category-wise performance of the methods in terms of precision and recall on the Corel-5K database. The performance of all techniques in terms of ARP and ARR on the Corel-5K database can be seen in Fig. 10c, d, respectively. From Table 3 and Fig. 10, it is clear that the proposed method shows a significant improvement over the other existing methods in terms of their evaluation measures on the Corel-5K database. The performance of the proposed method is also analyzed with various distance measures on the Corel-5K database, as shown in Fig. 11. From Fig. 11, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARP and ARR on the Corel-5K database. Figure 12 illustrates the query results of the proposed method on the Corel-5K database.
Table 3

Results of various methods in terms of precision and recall on the Corel-5K database

| Database | Performance   | CS_LBP | LEPSEG | LEPINV | BLK_LBP | LBP  | DLEP | Reddy et al. | LQP  | PM   |
|----------|---------------|--------|--------|--------|---------|------|------|--------------|------|------|
| Corel-5K | Precision (%) | 32.9   | 41.5   | 35.1   | 45.7    | 43.6 | 48.8 | 54.4         | 50.7 | 62.2 |
| Corel-5K | Recall (%)    | 14.0   | 18.3   | 14.8   | 20.3    | 19.2 | 21.1 | 24.1         | 22.4 | 29.6 |
PM proposed method; BLK_LBP block-based LBP [29]

Fig. 10

Comparison of proposed method with other existing methods on Corel–5K. a Category wise performance in terms of precision, b category wise performance in terms of recall, c total database performance in terms of ARP and d total database performance in terms of ARR

Fig. 11

Comparison of proposed method with various distance measures in terms of ARP on Corel-5K database

Fig. 12

Query results of proposed method on Corel-5K database (top left image is the query image)

Experiment #3

In this experiment, the MIT VisTex database is considered, which consists of 40 different textures [46]. The size of each texture is 512 × 512, and each is divided into sixteen 128 × 128 non-overlapping sub-images, thus creating a database of 640 (40 × 16) images. In this experiment, each image in the database is used as the query image. The average retrieval recall or average retrieval rate (ARR) given in Eq. (24) is the benchmark for comparing results of this experiment.

Figures 13 and 14 illustrate the performance of various methods in terms of ARR and ARP on the MIT VisTex database. From Figs. 13 and 14, it is clear that the proposed method shows a significant improvement over the other existing methods in terms of ARR and ARP on the MIT VisTex database. Figure 15 illustrates the performance of the proposed method with different similarity distance measures in terms of ARR on the MIT VisTex database. From Fig. 15, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARR on the MIT VisTex database. Figure 16 illustrates the query results of the proposed method on the MIT VisTex database.
Fig. 13

Comparison of proposed method with other existing methods in terms of ARR on MIT VisTex database

Fig. 14

Comparison of proposed method with other existing methods in terms of ARP on MIT VisTex database

Fig. 15

Comparison of proposed method with various distance measures in terms of ARR on MIT VisTex database

Fig. 16

Query results of the proposed method on the MIT VisTex database (top left image is the query image)

Conclusions

A new feature descriptor, named local quantized extrema patterns (LQEP), is proposed in this paper for natural and texture image retrieval. The proposed method integrates the concepts of local quantized geometric structures and local extrema for extracting features from a query/database image for retrieval. Further, the performance of the proposed method is improved by combining it with the standard RGB histogram. The performance of the proposed method was tested by conducting three experiments on benchmark image databases, and the retrieval results show a significant improvement in terms of the evaluation measures as compared to other existing methods on the respective databases.

Declarations

Authors’ contributions

All authors contributed equally to the current research work. All authors read and approved the final manuscript.

Acknowledgements

The authors declare that they have no acknowledgements for the current research work.

Compliance with ethical guidelines

Competing interests The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electronics and Communication Engineering, JNTU Hyderabad
(2)
Narasaraopet Institute of Technology

References

  1. Aura C, Castro EMMM (2003) Image mining by content. Expert Syst Appl 23:377–383
  2. Subrahmanyam M, Maheshwari RP, Balasubramanian R (2012) Expert system design using wavelet and color vocabulary trees for image retrieval. Expert Syst Appl 39:5104–5114
  3. Vipparthi SK, Nagar SK (2014) Expert image retrieval system using directional local motif XoR patterns. Expert Syst Appl 41(17):8016–8026
  4. Rui Y, Huang TS (1999) Image retrieval: current techniques, promising directions and open issues. J Vis Commun Image Represent 10:39–62
  5. Smeulders AWM, Worring M, Santini S, Gupta A, Jain R (2000) Content-based image retrieval at the end of the early years. IEEE Trans Pattern Anal Mach Intell 22(12):1349–1380
  6. Swain MJ, Ballard DH (1991) Color indexing. Int J Comput Vision 7:11–32
  7. Stricker M, Orengo M (1995) Similarity of color images. In: Proceedings of SPIE storage and retrieval for image and video databases III, San Jose, pp 381–392
  8. Idris F, Panchanathan S (1997) Image and video indexing using vector quantization. Mach Vis Appl 10:43–50
  9. Lu ZM, Burkhardt H (2005) Colour image retrieval based on DCT domain vector quantisation index histograms. Electron Lett 41:956–957
  10. Qiu G (2003) Color image indexing using BTC. IEEE Trans Image Process 12:93–101
  11. Huang J, Kumar S, Mitra M, Zhu W, Zabih R (1997) Image indexing using color correlograms. In: Proceedings of computer vision and pattern recognition, San Juan, Puerto Rico, pp 762–768
  12. Pass G, Zabih R (1996) Histogram refinement for content-based image retrieval. In: Proceedings of the IEEE workshop on applications of computer vision, pp 96–102
  13. Rao A, Srihari RK, Zhang Z (1999) Spatial color histograms for content-based image retrieval. In: Proceedings of the eleventh IEEE international conference on tools with artificial intelligence, Chicago, IL, USA, pp 183–187
  14. Chang MH, Pyun JY, Ahmad MB, Chun JH, Park JA (2005) Modified color co-occurrence matrix for image retrieval. Lect Notes Comput Sci 3611:43–50
  15. Smith JR, Chang SF (1996) Automated binary texture feature sets for image retrieval. In: Proceedings of the IEEE international conference on acoustics, speech and signal processing, New York, pp 2239–2242
  16. Moghaddam HA, Khajoie TT, Rouhi AH (2003) A new algorithm for image indexing and retrieval using wavelet correlogram. In: International conference on image processing, Tehran, Iran, vol 2, pp 497–500
  17. Ahmadian A, Mostafa A (2003) An efficient texture classification algorithm using Gabor wavelet. In: 25th annual international conference of the IEEE EMBS, Cancun, Mexico, pp 930–933
  18. Murala S, Maheshwari RP, Balasubramanian R (2011) A correlogram algorithm for image indexing and retrieval using wavelet and rotated wavelet filters. Int J Signal Imaging Syst Eng 4(1):27–34
  19. Ojala T, Pietikainen M, Harwood D (1996) A comparative study of texture measures with classification based on feature distributions. J Pattern Recogn 29(1):51–59
  20. Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
  21. Pietikainen M, Ojala T, Scruggs T, Bowyer KW, Jin C, Hoffman K, Marques J, Jacsik M, Worek W (2000) Overview of the face recognition using feature distributions. J Pattern Recogn 33(1):43–52
  22. Ahonen T, Hadid A, Pietikainen M (2006) Face description with local binary patterns: applications to face recognition. IEEE Trans Pattern Anal Mach Intell 28(12):2037–2041
  23. Zhao G, Pietikainen M (2007) Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans Pattern Anal Mach Intell 29(6):915–928
  24. Heikkila M, Pietikainen M (2006) A texture based method for modeling the background and detecting moving objects. IEEE Trans Pattern Anal Mach Intell 28(4):657–662
  25. Huang X, Li SZ, Wang Y (2004) Shape localization based on statistical method using extended local binary patterns. In: Proceedings of the international conference on image and graphics, pp 184–187
  26. Heikkila M, Pietikainen M, Schmid C (2009) Description of interest regions with local binary patterns. Pattern Recogn 42:425–436
  27. Li M, Staunton RC (2008) Optimum Gabor filter design and local binary patterns for texture segmentation. J Pattern Recogn 29:664–672
  28. Zhang B, Gao Y, Zhao S, Liu J (2010) Local derivative pattern versus local binary pattern: face recognition with higher-order local pattern descriptor. IEEE Trans Image Process 19(2):533–544
  29. Takala V, Ahonen T, Pietikainen M (2005) Block-based methods for image retrieval using local binary patterns. In: SCIA 2005, LNCS 3450, pp 882–891
  30. Heikkila M, Pietikainen M, Schmid C (2009) Description of interest regions with local binary patterns. Pattern Recogn 42:425–436
  31. Yao CH, Chen SY (2003) Retrieval of translated, rotated and scaled color textures. Pattern Recogn 36:913–929
  32. Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19(6):1635–1650
  33. Murala S, Maheshwari RP, Balasubramanian R (2012) Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking. Signal Process 92:1467–1479
  34. Murala S, Maheshwari RP, Balasubramanian R (2012) Local tetra patterns: a new feature descriptor for content based image retrieval. IEEE Trans Image Process 21(5):2874–2886
  35. Murala S, Maheshwari RP, Balasubramanian R (2012) Directional local extrema patterns: a new descriptor for content based image retrieval. Int J Multimedia Inf Retrieval 1(3):191–203
  36. Murala S, Maheshwari RP, Balasubramanian R (2012) Directional binary wavelet patterns for biomedical image indexing and retrieval. J Med Syst 36(5):2865–2879
  37. Murala S, Jonathan Wu QM (2013) Local mesh patterns versus local binary patterns: biomedical image indexing and retrieval. IEEE J Biomed Health Inform 18(3):929–938
  38. Murala S, Jonathan Wu QM (2013) Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval. Neurocomputing 119(7):399–412
  39. Reddy PVB, Reddy ARM (2014) Content based image indexing and retrieval using directional local extrema and magnitude patterns. Int J Electron Commun (AEÜ) 68(7):637–643
  40. ul Hussain S, Triggs B (2012) Visual recognition using local quantized patterns. In: ECCV 2012, Part II, LNCS 7573, pp 716–729
  41. Jhanwar N, Chaudhuri S, Seetharaman G, Zavidovique B (2004) Content-based image retrieval using motif co-occurrence matrix. Image Vision Comput 22:1211–1220
  42. Lin CH, Chen RT, Chan YK (2009) A smart content-based image retrieval system based on color and texture feature. Image Vision Comput 27:658–665
  43. Vadivel A, Sural S, Majumdar AK (2007) An integrated color and intensity co-occurrence matrix. Pattern Recognit Lett 28:974–983
  44. Vipparthi SK, Nagar SK (2014) Color directional local quinary patterns for content based indexing and retrieval. Hum Centric Comput Inf Sci 4:6. doi:10.1186/s13673-014-0006-x
  45. Corel 1000 and Corel 10000 image database. [Online]. Available: http://wang.ist.psu.edu/docs/related.shtml
  46. MIT Vision and Modeling Group, Vision Texture. [Online]. Available: http://vismod.media.mit.edu/pub/

Copyright

© Koteswara Rao and Venkata Rao. 2015