Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Mr. Kommu Naveen, Dr. R. M. S Parvathi
DOI Link: https://doi.org/10.22214/ijraset.2021.39172
In a Content Based Image Retrieval (CBIR) system, the task is to retrieve similar images from a large database given a query image. The usual procedure is to extract some useful features from the query image and retrieve images which have a similar set of features. For this purpose, a suitable similarity measure is chosen, and images with high similarity scores are retrieved. Naturally, the choice of these features plays a very important role in the success of this system, and high-level features are required to reduce the "semantic gap". In this paper, we propose to use features derived from pre-trained network models from a deep learning convolution network trained for a large image classification problem. This approach appears to produce vastly superior results for a variety of databases, and it outperforms many contemporary CBIR systems. We analyse the retrieval time of the method, and also propose a pre-clustering of the database based on the above-mentioned features which yields comparable results in a much shorter time in most cases.
I. INTRODUCTION
Given a query image, similar images often need to be retrieved from a large database. This is called Content Based Image Retrieval. The standard procedure is to find similar images based on some features extracted from the images. Ideally, these features should describe the content information of the images. That is why high-level features are needed, and low-level features such as raw pixel values are not very useful.
IBM developed the first commercial CBIR system, QBIC (Query By Image Content) [24], in 1995. It allows users to query by user-constructed sketches, example images and drawings. This system uses a combination of texture, shape and colour. The colour co-occurrence matrix (CCM) is used to extract low-level features from images, and has been widely used in many works [34, 15, 26] in the area of CBIR. Bose et al. [7] used some visual descriptors from the MPEG-7 standard [18, 21] along with CCM features, which achieved some improvement.
Lohite et al. [20] worked with the widely used colour, texture and edge features of the images, and optimized the result using an SVM (Support Vector Machine) classifier. Mehmood et al. [23] presented a CBIR method named WATH (weighted average of triangular histograms) of visual words. This method adds image spatial elements to the inverted index of the BoVW (bag-of-visual-words) model, corrects the overfitting problem on larger dictionary sizes and tries to bridge the semantic gap between low-level and high-level features.
Rashno et al. [30] proposed a new scheme which transforms the input RGB image into three subsets in the neutrosophic (NS) domain. For each segment, statistical components, histograms, colour features including the dominant colour descriptor (DCD), and wavelet features are extracted. These features are then used to retrieve images.
Kumar et al. [33] introduced a new feature descriptor called the local mean differential excitation pattern (LMDeP), which can produce robust features. Sarwar et al. [32] recommended a method based on the bag-of-words (BoW) model, which integrates visual words with the local intensity order pattern (LIOP) feature and the local binary pattern variance (LBPV) feature to reduce the semantic gap and enhance CBIR performance. Rana et al. [29] proposed image retrieval by combining colour and shape features with nonparametric ranklet-transformed texture features. Yusuf et al. [41] gained improvement in CBIR performance on the basis of visual-words fusion of scale invariant feature transform (SIFT) and local intensity order pattern (LIOP) descriptors. Sharif et al. [35] came up with another feature descriptor called binary robust invariant scalable keypoints (BRISK), used along with SIFT. Ashraf et al. [4] developed a method which retrieves images using the YCbCr colour scheme with a canny edge histogram and the discrete wavelet transform. In Obulesu et al.'s work [25], two extended versions of motif co-occurrence matrices (MCM) are calculated and combined to improve CBIR performance.
After the introduction and evolution of Deep Learning Neural Networks, the performance of CBIR has received a boost, because with the help of deep models we can finally extract higher-level features along with the low-level features from the image, reducing the semantic gap mentioned above. Khokhar et al. [17] described how a Back-propagation Feedforward Neural Network (BFNN) can be used for classification in CBIR after exploiting some features of images, e.g. geometric, colour and texture. Ashraf et al. [5] presented a bandlet-transform-based image representation technique which reliably returns information about the major objects present in the image; finally, an Artificial Neural Network is used to retrieve images. Xu et al. [40] proposed part-based weighting aggregation (PWA) for CBIR. This PWA utilizes discriminative filters of deep convolutional layers as part detectors. Several other recent CBIR techniques can be found in the review paper [43].
In this paper, features derived from a pre-trained network model from a deep learning convolution neural network trained for a large image classification problem have been used for retrieval of similar images. The resulting algorithm appears to achieve remarkable success in terms of retrieval accuracy, and outperforms many contemporary CBIR methods. The algorithm is quite fast; nevertheless, to reduce retrieval time further, a concept of pre-clustering the database has also been introduced, which works faster without sacrificing retrieval performance. The paper is organized as follows: Section 2 describes the motivation behind this approach, presents a review of the key ingredients and explores the characteristics of the features derived from a pre-trained network model. In Section 3, we present the details regarding the pre-trained models, similarity measures and evaluation of performance that have been used in this work. Section 4 contains the results of extensive experiments, while Section 5 discusses the time complexity. In Section 6, we introduce a concept of pre-clustering of the database, and in Section 7 we present the concluding remarks and some future directions.
II. PROPOSED METHOD USING FEATURES DERIVED BY DEEP LEARNING
In a Content-Based Image Retrieval system, images are represented by a set of low-level and/or high-level features. This is called feature encoding, where an image in RGB or HSV space is encoded into an n-dimensional feature vector. In this paper we propose to derive the feature vector of an image with the help of pre-trained deep learning models. For this purpose we first present a brief description of the key concepts, such as neural networks and pre-trained models.
A. Neural Network
Neural Networks are set up as collections of neurons connected in an acyclic graph. These models are often organized into separate layers of neurons. The most common type is the fully-connected layer, in which neurons between two adjacent layers are fully pairwise connected, while there is no connection between neurons within a single layer. Two examples of fully connected Neural Networks are shown in Figure 1 [11].
B. Convolutional Neural Network (CNN)
Convolutional Neural Networks take images as input and handle this structure in a more sensible way in their architecture. The layers of a CNN have neurons arranged in 3 dimensions: width, height and depth, where depth refers to the third dimension of a layer's activation volume. The neurons in a layer are connected only to a small region of the preceding CNN layer, unlike the full connectivity of a fully-connected neural network. The visualization is shown in Figure 2.
As mentioned above, a simple CNN is a sequence of layers, and every layer of a CNN transforms one volume of activations into another through a differentiable function. There are mainly three types of layers in CNN architectures: the Convolutional Layer, the Pooling Layer and the Fully-Connected Layer (shown in Figure 1). These layers are stacked on top of each other to form a fully operational CNN architecture [11]. Figure 2 shows the CNN model architecture for a classification problem. This network consists of convolution, pooling and fully connected layers together with some activation layers (e.g. ReLU, softmax).
Figure 1: Left: A two-layer Neural Network (one hidden layer of four neurons and one output layer with two neurons) with three inputs. Right: A three-layer neural network with two hidden layers of four neurons each, one output layer with a single neuron and three inputs [11]
Figure 2: Every layer of a CNN converts the 3D input volume to a 3D output volume of activations. In the given example, the image of the bird is the input layer, so its height and width are the dimensions of the image, and the depth is three (Red, Green, Blue channels)
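As a concrete illustration of this layer stacking, the following is a minimal sketch of a small CNN built from convolution, pooling, fully connected and softmax layers, mirroring the structure described above. The framework (Keras/TensorFlow), the input size and the layer widths are our own assumptions for illustration; this is not the architecture used in this paper.

```python
# Minimal illustrative CNN sketch (Keras assumed); sizes are arbitrary examples.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),              # RGB input volume: width x height x depth
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolutional layer
    layers.MaxPooling2D((2, 2)),                    # pooling layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # unwrap the last volume into a 1-D vector
    layers.Dense(128, activation="relu"),           # fully connected (dense) layer
    layers.Dense(10, activation="softmax"),         # class-probability output
])
model.summary()
```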
C. Pre-trained Neural Network Model
Transfer learning [39] is a method where, instead of starting the training of a model from scratch, we use the learned weights of an already trained model to solve a different but similar problem. In this way we leverage previous learning through the learned weights and save a considerable amount of time. Usually much better results are also achieved compared to training from scratch.
In computer vision, transfer learning is usually performed with pre-trained models. A pre-trained model [22] is a model that was trained on a large benchmark dataset to solve a problem similar to the one we want to solve. Because of the high computational cost of training such deep learning models, it is common practice to import and use models from a published architecture (e.g. VGG, ResNet, Xception). A comprehensive review of pre-trained models' performance on computer vision problems using data from the ImageNet [12] challenge is presented by Canziani et al. [8].
Motivated by this, we have used a pre-trained Neural Network model trained on the ImageNet dataset. This dataset contains more than 14 million images belonging to more than 20,000 classes. It also provides bounding box annotations for around 1 million images, which can be used in object localization tasks. Multiple convolutional layers, average pooling layers, max pooling layers, etc. are stacked in different combinations to build up the model architectures. The final volume is then unwrapped into an n-dimensional vector, which feeds the dense (or fully connected) layer; several fully connected layers can be stacked on top of it. Finally, a softmax activation layer whose dimension equals the number of classes is placed on the final dense layer to obtain the probability of each class for classification problems; for regression problems, a single-dimensional linear activation layer is placed instead to predict the regressed value, and other kinds of output layers are used according to the problem to be solved.
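A hedged sketch of how such a published, ImageNet-pre-trained architecture can be imported, assuming a Keras/TensorFlow implementation (the paper does not name a framework). With `include_top=True` the model keeps its final dense and 1000-way softmax layers:

```python
# Importing a published architecture with ImageNet weights (Keras assumed).
from tensorflow.keras.applications import InceptionResNetV2

classifier = InceptionResNetV2(weights="imagenet", include_top=True)  # full 1000-class model
classifier.summary()   # lists the stacked convolution, pooling, dense and softmax layers
```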
D. Visualizing What CNNs Learn
Deep-learning models are often referred to as "black boxes". For certain types of deep-learning models this may be true, but for CNNs it is not entirely true. The representations and features learned by CNNs are highly amenable to visualization, in large part because they are representations of visual concepts. Since 2013, different techniques have been developed for visualizing and interpreting these feature representations of CNNs. Some methods to visualize what a CNN learns are given below [10].
E. Visualizing Intermediate CNN outputs (Intermediate Activations)
For the sample image shown in Figure 3 (taken from the ImageDB2000 dataset), the first few intermediate activations are shown in Figure 4, and the last few intermediate layer activations are shown in Figure 5.
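Intermediate activations of this kind can be obtained by wrapping the convolutional base in a model that returns the outputs of selected early layers, in the spirit of [10]. The sketch below assumes Keras with the InceptionResNetV2 base and an illustrative image path; the layer selection is only an example.

```python
# Sketch: extract activations of the first few convolutional layers (Keras assumed).
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

base = InceptionResNetV2(weights="imagenet", include_top=False)

# Keep the outputs of the convolutional layers among the first few layers of the stem.
layer_outputs = [l.output for l in base.layers[1:9] if "conv" in l.name]
activation_model = Model(inputs=base.input, outputs=layer_outputs)

img = image.load_img("sample.jpg", target_size=(299, 299))   # path is illustrative
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
activations = activation_model.predict(x)   # one stack of feature maps per selected layer
```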
There are a few things to note here:
F. Visualizing Intermediate Convnet Outputs (Adaptive Deconvolutional Networks)
This model produces an over-complete image feature representation that can be used as input to standard neural network object classifiers. The model is learned from natural images and, given a new image, requires inference to compute. It decomposes an image in a hierarchical fashion using multiple alternating layers of convolutional sparse coding (deconvolution [42]) and max-pooling. Each of these deconvolution layers attempts to minimize the reconstruction error of the input image under a sparsity constraint on an over-complete set of feature maps. For the sample image shown in Figure 6, some of the shallow-level (low-level) convolution features are shown in Figure 7, and some of the deep-level (high-level) convolution features are shown in Figure 8.
Here we can actually see that the model represents edge- and texture-type low-level features in the first layers, while in the last layers it learns to represent higher-level features. For example, in Figure 8 the deep layers of the model represent concepts such as fish fins and fish bodies.
Visualizing convnet filters is clearly described in chapter 5 of Francois Chollet's book [10]. It has been shown that the filters in the earlier layers encode directional edges, colours and textures. As we visualize the filters in deeper layers, we find that they tend to encode textures found in natural images: feathers, eyes, leaves, etc.
G. Proposed method of Feature Extraction by Deep Learning
From the above section it is evident that, as the model architecture goes deeper, it starts to learn high-level features built on the low-level ones. We propose to use these higher-level features as the feature representation of images in a CBIR system. So we remove the last softmax activation layer, which is used for computing class probabilities, and select the preceding fully connected layer as our feature vector representation for CBIR. As this vector comes from the deepest layer of the model, it represents the most refined high-level features learned. We encode (predict) the images of our CBIR database through the pre-trained model and obtain an n-dimensional feature vector for each image. The flowchart of this process is shown in Figure 10. The value of n varies with the choice of deep learning network architecture.
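A minimal sketch of this encoding step, assuming Keras and the InceptionResNetV2 backbone selected later in the paper: the classification head is dropped and the pooled output of the deepest layer (1536-dimensional for this backbone) is kept as the image's feature vector. The file paths are illustrative.

```python
# Sketch: encode database images into n-dimensional feature vectors (Keras assumed).
import numpy as np
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

# Pre-trained backbone without the softmax head; pooling="avg" yields a 1536-d vector.
encoder = InceptionResNetV2(weights="imagenet", include_top=False, pooling="avg")

def encode(path):
    """Encode one image into an n-dimensional feature vector (n = 1536 here)."""
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return encoder.predict(x)[0]

# Build the feature bank for the whole CBIR database (paths are illustrative).
database_paths = ["db/img_0001.jpg", "db/img_0002.jpg"]
feature_bank = np.stack([encode(p) for p in database_paths])
```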
III. DETAILS OF THE PROPOSED ALGORITHM AND PERFORMANCE EVALUATION
A. Pre-trained Models Used
We have tried the following network architectures all pre-trained on the ImageNet dataset:
The main advantage of this method of feature extraction is that we can extract higher-level features without training on our own database. This matters because, from a practical point of view, a CBIR system usually starts from a dump of images without any class information; the user simply tries to find images from the database similar to his or her query image. Since the dumped dataset has no pre-defined classes, training a model on it is not feasible unless we manually assign a class to each of the millions or billions of images in the dump, which is very time consuming and prone to subjective labelling error. To avoid this problem, we use a Neural Network model pre-trained on a large separate dataset (ImageNet) to perform feature extraction on our CBIR datasets, rather than training the model on the CBIR datasets themselves [17, 16].
B. Database Used
The CBIR methods were applied on the following image databases, which vary in number as well as types of images.
C. Similarity Measure
The similarity (or dissimilarity) between a query image (Q) given by the user and a database image (I) stored in the system is measured by some distance metric. It is assumed that this distance accurately measures the dissimilarity (or similarity) between the images; a smaller distance implies greater similarity. The similarity between two images can differ according to the user's way of perceiving the images. Broadly speaking, there are two types of similarity measures, geometric and probabilistic [28]. In the first case, the similarity is based on the distance between the feature vectors; the most widely used family is the Minkowski distance, of which the L1 and L2 norms are the most popular. In the probabilistic type, a Gaussian classifier is often used to measure the relevance between the query image and the database image, so that pairs with a high likelihood ratio are classified as relevant and pairs with a low likelihood ratio as irrelevant. Although past research [3] shows that probabilistic methods perform significantly better than geometric ones, they are computationally expensive. Throughout this paper we use the L1 or L2 norm as the dissimilarity measure.
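For completeness, a small sketch of the geometric ranking used here: the database is ordered by the L1 (Manhattan) or L2 (Euclidean) distance between each stored feature vector and the query feature vector, and the closest images are retrieved. Function and variable names are our own.

```python
# Sketch: rank the feature bank by L1 or L2 distance to the query vector.
import numpy as np

def retrieve(query_vec, feature_bank, scope=20, norm="l2"):
    """Return indices of the `scope` database images closest to the query."""
    diff = feature_bank - query_vec
    if norm == "l1":
        dists = np.abs(diff).sum(axis=1)           # Minkowski L1 norm
    else:
        dists = np.sqrt((diff ** 2).sum(axis=1))   # Minkowski L2 norm
    return np.argsort(dists)[:scope]               # smaller distance = more similar
```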
D. Evaluation of Performance
The most commonly used measure for evaluating the performance of a CBIR system is Precision, which is defined as follows:

Precision = (Number of relevant images retrieved) / (Number of retrieved images)    (3)
Generally, the number of images retrieved by any CBIR method is a pre-specified positive integer, called the scope of the system. The precision value is calculated for each image in the database, and these values are averaged over all images in the database. Usually, the greater the scope, the larger the number of relevant images retrieved, which typically leads to decreasing values of precision.
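A short sketch of how precision at a given scope can be computed and averaged over the database, assuming each image carries a category label and reusing the `retrieve` function from the earlier sketch. Whether the query image itself is excluded from its own result list is not specified in the paper; this sketch keeps it.

```python
# Sketch: precision at a fixed scope, averaged over all images used as queries.
import numpy as np

def precision_at_scope(retrieved_labels, query_label):
    """Fraction of retrieved images that share the query's category."""
    return float((np.asarray(retrieved_labels) == query_label).mean())

def mean_precision(feature_bank, labels, scope=20):
    """Average precision over all database images used as queries."""
    labels = np.asarray(labels)
    scores = []
    for i, q in enumerate(feature_bank):
        idx = retrieve(q, feature_bank, scope=scope)   # from the earlier sketch
        scores.append(precision_at_scope(labels[idx], labels[i]))
    return float(np.mean(scores))
```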
IV. RESULTS
In this section we present all the results, including the selection of the best deep learning network architecture, retrieval results of our CBIR system for sample query images taken from our datasets, category-wise image retrieval precision for our datasets, and a comparison of precision between the proposed method and some other recent CBIR methods.
A. Selection of best Deep Learning Network architecture
We did a comparative study to select the best deep learning architecture, as described in section 2.5, by calculating the average precision of each architecture for a scope of 20 on the DBCorel dataset. Euclidean distance (L2 norm) is used as the dissimilarity metric. From the results given in Figure 11 it is clearly seen that InceptionResNetV2 is the winner, with an average precision of 96.115%. Therefore we use the InceptionResNetV2 architecture for all subsequent experiments.
B. Image Retrieval of Sample Query Images
Using the InceptionResNetV2 architecture on the Corel dataset with a scope of 20, the 20 results retrieved for the example query image shown in Figure 12 are given in Figure 13. From Figure 13 we find that the query image belongs to the "bus" category and all 20 results are relevant to the query image, so the precision for this query image is 1.
For another query image from the Corel dataset, shown in Figure 14, the retrieved results are shown in Figure 15. Here the query image belongs to the "African People" category, and 13 of the 20 retrieved results are relevant to the query image, so for this specific image the precision value is 13/20 = 0.65.
C. Category wise Precision Calculation
In this subsection we report the category-wise average precision on DBCorel and DB2000 for a scope of 20 using Euclidean distance as the dissimilarity metric. Figure 16 and Table 1 illustrate the results. From the results we see that in DBCorel, for the Bus, Dinosaurs and Elephant categories, the pre-trained InceptionResNetV2 model retrieves all the images with 100% precision, but it performs comparatively poorly for the African People category, with 79.35% precision. The overall average precision for this dataset is 96.115%. For DB2000 the best retrieved category is Airplane (precision: 99.975%) and the worst is Leaf (precision: 91.125%), with an overall average precision of 97%.
D. Result Comparison with other Recently Proposed Algorithms
For DB2000, we select Bose et al.'s paper [7] as the baseline. That paper [7] extracts features from the images in two ways: features from the colour co-occurrence matrix and features from MPEG-7. As our paper does not use Relevance Feedback [7], we only compare against the precision without relevance feedback reported in [7]. Figure 17 shows that our proposed method outperforms all the methods discussed in [7].
In recent years, many researchers have worked [17, 4, 35, 41, 1, 32, 2, 5, 30, 23, 20, 6] on the DBCorel dataset using different kinds of features and similarity distances. We present two types of comparison with these recent papers on the DBCorel dataset: category-wise precision comparison (Figure 18) and average precision comparison (Table 2). They show that our proposed method outperforms all the other methods published in these recent papers. Lohite et al. [20] calculated category-wise precision for a scope of 50 instead of 20; we compared with this algorithm too in Figure 19 and show that, except for the African People category, our proposed method works better even at scope 50.
For the DBCaltech (Caltech101) dataset, we compare with the two algorithms given in [7, 29]. Figure 20 compares the average precision. Multiple CBIR techniques are described in both [7] and [29]; hence, we picked the average precision of the best method from each. Clearly the proposed method outperforms both of the other methods.
V. REAL TIME CBIR
We have already seen that the introduction of a very deep neural network model (InceptionResNetV2) improves our results by a significant amount, but it raises the question of image retrieval time. Whatever model we use, our ultimate goal is to retrieve images in real time. In this section we discuss the time complexity of our CBIR system and show that, despite using a deep model, our system can retrieve images in real time for the DBCaltech dataset. We also show that applying Principal Component Analysis [36] makes image retrieval even faster without sacrificing precision.
Because of their small image counts, we did not calculate image retrieval time for DB2000 and DBCorel, as it will always be low.
A. Principal Component Analysis
Principal Component Analysis (PCA) [36] is a dimensionality-reduction method generally used to reduce the dimensionality of large datasets by converting a large set of variables into a smaller one that still contains most of the information of the original dataset.
Methods                 Average Precision (%)
Proposed Method         96.115
Ashraf et al. [4]       73.5
Sharif et al. [35]      84.39
Yousuf et al. [41]      87.3
Ahmed et al. [2]        83.5
Sarwar et al. [32]      89.58
Ahmed et al. [1]        76.5
Ashraf et al. [5]       82
Rashno et al. [30]      65.95
Mehmood et al. [23]     87.85
Khokhar et al. [17]     94.3
Ahamed et al. [6]       82
Table 2: Comparison of average precision between our proposed method and recent papers’ methods for scope of 20 on DBCorel
Figure 20: Comparison of average precision among our proposed method, Bose et al.[7] and Rana et al.[29] methods for scope of 20 on DBCaltech.
Reducing the number of variables of a dataset truncates its information, but the trick in dimensionality reduction is to trade a little information for simplicity. The reason is that smaller datasets are easier to handle and visualize, and analyzing data becomes much easier and faster for machine learning algorithms. So, to sum up, the idea of PCA is simple: reduce the number of variables of a dataset while preserving as much information as possible.
B. PCA on the Encoded Features
The encoded feature vector dimension for InceptionResNetV2 is 1536, which is rather large. So we applied Principal Component Analysis to the 1536-dimensional feature vectors to reduce their dimension, and chose the number of principal components (M) for which the average precision is maximum. For the DBCaltech dataset we take roughly 100 PCs to calculate the precision. Taking only the first handful of PCs gives almost the same, or sometimes better, average precision compared to the full 1536 features: the average precision with PCA is 82.54%, and without PCA 82.02%. In summary, PCA slightly increases precision and saves computational time, since we are working in a much reduced dimension.
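A sketch of this reduction step using scikit-learn (our assumption; the paper does not name a library), fitting PCA on the 1536-dimensional feature bank from the earlier sketch and keeping roughly the first 100 principal components:

```python
# Sketch: reduce 1536-d CNN features to M principal components (scikit-learn assumed).
from sklearn.decomposition import PCA

M = 100                                            # number of principal components retained
pca = PCA(n_components=M)
reduced_bank = pca.fit_transform(feature_bank)     # (num_images, 1536) -> (num_images, M)

# A query image is encoded with the CNN and projected with the same fitted PCA.
reduced_query = pca.transform(encode("query.jpg").reshape(1, -1))[0]   # path is illustrative
```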
C. Approach
Here, we calculate the average query image retrieval time for a scope of 20 on the DBCaltech dataset. This experiment is done with two configurations: with PCA and without PCA.
In the experiment with PCA, we first feed all of our database images through the CBIR model (InceptionResNetV2 without the last softmax layer) and then through PCA, and store the extracted 100-dimensional feature of each image in memory as a feature bank. When a query image arrives, it is passed through the CBIR model and PCA in the same way. We then compare the features extracted from the query image with each entry in the feature bank and retrieve the images whose features are closest to the query features under some similarity metric, e.g. Manhattan or Euclidean distance. The time between feeding the query image and retrieving similar images is the image retrieval time, and Figure 21 shows this average image retrieval time. We use the term "average" because every image in the database was used as a query image, the retrieval time was calculated for each of them, and the mean was taken. This process is tested on two machines:
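A sketch of how such an average can be measured, reusing the `retrieve` function from the earlier sketch. Note that this times only the ranking step over the in-memory feature bank; the full retrieval time described above would also include the CNN (and PCA) forward pass for the query image.

```python
# Sketch: average wall-clock time to rank the feature bank for each image used as a query.
import time
import numpy as np

def average_retrieval_time(feature_bank, scope=20):
    """Mean time to retrieve the closest `scope` images for every query in the bank."""
    times = []
    for q in feature_bank:
        start = time.perf_counter()
        _ = retrieve(q, feature_bank, scope=scope)   # from the earlier sketch
        times.append(time.perf_counter() - start)
    return float(np.mean(times))
```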
D. Our Local Machine
E. GPU Machine
Since GPUs are highly specialized for parallel computing, the time required for image retrieval is much lower on the GPU machine than on our local machine. This can be clearly seen in Figure 21. It can also be seen that using PCA reduces the retrieval time a little.
DBCaltech has 9144 images with 1536-dimensional features (without PCA) and DB2000 has 2000 images with 1536-dimensional features. We can see that our GPU machine, and to some extent our local machine as well, can retrieve images from these datasets in real time. Image retrieval time depends on both the database size and the feature dimension. The dimension stays the same as long as we use the same architecture (InceptionResNetV2 in our case), but if we use a dataset with a very large number of images (say millions), the image retrieval time naturally increases. In cases where searching the whole database for relevant images takes too long, we could retrieve the 20 images from a random sample of, say, 10,000 or 20,000 images drawn from the database of millions or billions.
VI. FAST IMAGE RETRIEVAL WITH IMAGE CLUSTERING
As mentioned above, as the size of the database increases, the image retrieval time also increases. So we propose a method to further improve the image retrieval time: it clusters the database images and then searches for images only within a specified cluster.
A. Approach
The pre-trained InceptionResNetV2 model used in our CBIR method was originally trained to predict 1000 classes. Earlier we omitted the last softmax layer and chose the last dense layer for feature extraction; this time we use both the last dense layer output and the softmax probability output. The method is explained step by step below. It has been applied on the Caltech dataset.
Note: The precision value with this fast retrieval method becomes 81.48%, while with the previous method it was 82.02%. So the precision value hardly drops while the image retrieval time is reduced by around a factor of 2.5. The reason the results stay almost the same even when searching only a small subset is that the InceptionResNetV2 model tends to predict similar images into the same class, so clusters of similar images form within the classes.
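A sketch of this pre-clustering idea under the following assumptions (Keras; `encoder`, `feature_bank` and `database_paths` come from the earlier sketches): the full classifier assigns each database image to one of the 1000 ImageNet classes, images are grouped by predicted class, and at query time only the query's own cluster is ranked.

```python
# Sketch: cluster database images by predicted ImageNet class, then search within the cluster.
import numpy as np
from collections import defaultdict
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

classifier = InceptionResNetV2(weights="imagenet", include_top=True)   # keeps the softmax head

def load(path):
    img = image.load_img(path, target_size=(299, 299))
    return preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Offline: assign every database image to its highest-probability ImageNet class.
clusters = defaultdict(list)
for i, path in enumerate(database_paths):                 # paths from the earlier sketch
    clusters[int(np.argmax(classifier.predict(load(path))))].append(i)

# Online: predict the query's class, then rank only the images in that cluster.
def retrieve_clustered(query_path, scope=20):
    x = load(query_path)
    candidates = clusters[int(np.argmax(classifier.predict(x)))]
    q = encoder.predict(x)[0]                              # 1536-d feature, encoder as above
    diff = feature_bank[candidates] - q
    order = np.argsort(np.sqrt((diff ** 2).sum(axis=1)))[:scope]
    return [candidates[j] for j in order]
```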
VII. CONCLUSION
This paper shows that using pre-trained deep learning features gives better precision results than features derived by traditional methods, e.g. CCM, wavelets, etc. However, the results can be improved further for a specific dataset by introducing user feedback, known as Relevance Feedback: feedback from users after each retrieval regarding which results are relevant to the query image and which are not. Using this feedback, the CBIR system can learn and gradually improve its results. One limitation of features derived by deep learning is that they are not rotation-invariant. This means that if we retrieve similar images for the same query image presented at different orientation angles, the retrieval results will change significantly. Building a rotation-invariant CBIR system may be a next step of improvement over this approach.
[1] K. T. Ahmed et al. "Convolution, Approximation and Spatial Information Based Object and Color Signatures for Content Based Image Retrieval". In: 2019 International Conference on Computer and Information Sciences (ICCIS). Apr. 2019, pp. 1-6. DOI: 10.1109/ICCISci.2019.8716437.
[2] Khawaja Ahmed, Shahida, and Muhammad Iqbal. "Content Based Image Retrieval using Image Features Information Fusion". In: Information Fusion 51 (Nov. 2018). DOI: 10.1016/j.inffus.2018.11.004.
[3] S. Aksoy and R. M. Haralick. "Probabilistic vs. geometric similarity measures for image retrieval". In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000 (Cat. No.PR00662). Vol. 2. June 2000, pp. 357-362. DOI: 10.1109/CVPR.2000.854847.
[4] Rehan Ashraf et al. "Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform". In: Journal of Medical Systems 42 (Mar. 2018). DOI: 10.1007/s10916-017-0880-7.
[5] Rehan Ashraf et al. "Content Based Image Retrieval Using Embedded Neural Networks with Bandletized Regions". In: Entropy 17 (June 2015), pp. 3552-3580. DOI: 10.3390/e17063552.
[6] Mohamed Uvaze Ahamed Ayoobkhan, Eswaran C, and Kannan Ramakrishnan. "CBIR system based on prediction errors". In: Journal of Information Science and Engineering 33 (Mar. 2017), pp. 347-365. DOI: 10.1688/JISE.2017.33.2.5.
[7] Smarajit Bose et al. "Improved Content-Based Image Retrieval via Discriminant Analysis". In: International Journal of Machine Learning and Computing 7 (June 2017), pp. 44-48. DOI: 10.18178/ijmlc.2017.7.3.618.
[8] Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. "An Analysis of Deep Neural Network Models for Practical Applications". In: arXiv e-prints, arXiv:1605.07678 (May 2016). arXiv: 1605.07678 [cs.CV].
[9] François Chollet. "Xception: Deep Learning with Depthwise Separable Convolutions". In: arXiv e-prints, arXiv:1610.02357 (Oct. 2016). arXiv: 1610.02357 [cs.CV].
[10] Francois Chollet. Deep Learning with Python. 1st ed. Greenwich, CT, USA: Manning Publications Co., 2017. ISBN: 9781617294433.
[11] CS231n: Convolutional Neural Networks for Visual Recognition. URL: http://cs231n.stanford.edu/ (accessed: 23.12.2019).
[12] J. Deng et al. "ImageNet: A large-scale hierarchical image database". In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. June 2009, pp. 248-255. DOI: 10.1109/CVPR.2009.5206848.
[13] Kaiming He et al. "Deep Residual Learning for Image Recognition". In: arXiv e-prints, arXiv:1512.03385 (Dec. 2015). arXiv: 1512.03385 [cs.CV].
[14] Gao Huang et al. "Densely Connected Convolutional Networks". In: arXiv e-prints, arXiv:1608.06993 (Aug. 2016). arXiv: 1608.06993 [cs.CV].
[15] Jing Huang. "Color-spatial Image Indexing and Applications". AAI9838769. PhD thesis. Ithaca, NY, USA, 1998. ISBN: 0-591-92418-8.
[16] Safia Jabeen et al. "An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model". In: PLoS ONE 13 (Mar. 2018). DOI: 10.1371/journal.pone.0194526.
[17] Suman Khokhar and Satya Verma. "Content Based Image Retrieval with Multi-Feature Classification by Back-propagation Neural Network". In: International Journal of Computer Applications Technology and Research 6 (July 2017), pp. 278-284. DOI: 10.7753/IJCATR0607.1002.
[18] Harald Kosch. Distributed Multimedia Database Technologies Supported by MPEG-7 and MPEG-21. CRC Press, 2003. ISBN: 9780849318542.
[19] Li Fei-Fei, R. Fergus, and P. Perona. "Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories". In: 2004 Conference on Computer Vision and Pattern Recognition Workshop. June 2004, pp. 178-178. DOI: 10.1109/CVPR.2004.383.
[20] Yogen Mahesh Lohite and Sushant J. Pawar. "A novel method for Content Based Image retrieval using Local features and SVM classifier". In: International Research Journal of Engineering and Technology (IRJET) 04.07 (2017).
[21] B. S. Manjunath et al. "Color and texture descriptors". In: IEEE Transactions on Circuits and Systems for Video Technology 11.6 (June 2001), pp. 703-715. ISSN: 1558-2205. DOI: 10.1109/76.927424.
[22] Pedro Marcelino. Transfer learning from pre-trained models. URL: https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751 (accessed: 23.12.2019).
[23] Zahid Mehmood, Toqeer Mahmood, and Muhammad Arshad Javid. "Content-based Image Retrieval and Semantic Automatic Image Annotation Based on the Weighted Average of Triangular Histograms Using Support Vector Machine". In: Applied Intelligence 48.1 (Jan. 2018), pp. 166-181. ISSN: 0924-669X. DOI: 10.1007/s10489-017-0957-5.
[24] Wayne Niblack et al. "The QBIC Project: Querying Images by Content, Using Color, Texture, and Shape". In: Vol. 1908. Jan. 1993, pp. 173-187.
[25] A. Obulesu, Vakulabharanam Vijaya Kumar, and Sumalatha Lingamgunta. "Content based Image Retrieval Using Multi Motif Co-Occurrence Matrix". In: International Journal of Image, Graphics and Signal Processing 10 (Apr. 2018), pp. 59-72. DOI: 10.5815/ijigsp.2018.04.07.
[26] T. Ojala et al. "Semantic image retrieval with hsv correlograms". In: (Jan. 2001).
[27] Michael Ortega-Binderberger. Corel Image Features Data Set. URL: https://archive.ics.uci.edu/ml/datasets/corel+image+features (accessed: 23.12.2019).
[28] Jing Peng, Bir Bhanu, and Shan Qing. "Probabilistic Feature Relevance Learning for Content-Based Image Retrieval". In: Computer Vision and Image Understanding 75 (1999), pp. 150-164.
[29] Soumya Rana, Maitreyee Dey, and Siarry Patrick. "Boosting Content Based Image Retrieval Performance Through Integration of Parametric & Nonparametric Approaches". In: Journal of Visual Communication and Image Representation 58 (Nov. 2018). DOI: 10.1016/j.jvcir.2018.11.015.
[30] A. Rashno and S. Sadri. "Content-based image retrieval with color and texture features in neutrosophic domain". In: 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA). Apr. 2017, pp. 50-55. DOI: 10.1109/PRIA.2017.7983063.
Copyright © 2022 Mr. Kommu Naveen, Dr. R. M. S Parvathi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET39172
Publish Date : 2021-11-30
ISSN : 2321-9653
Publisher Name : IJRASET