QCluster: relevance feedback using adaptive clustering for content-based image retrieval

Deok-Hwan Kim, Chin-Wan Chung
DOI: 10.1145/872757.872829 | Citations: 50
Learning-enhanced relevance feedback has been one of the most active research areas in content-based image retrieval in recent years. However, few relevance feedback methods are currently available for processing relatively complex queries on large image databases. For complex image queries, the feature space and the distance function of the user's perception usually differ from those of the system. This difference leads to a query being represented by multiple clusters (i.e., regions) in the feature space, so disjunctive queries in the feature space must be handled.

In this paper, we propose a new content-based image retrieval method that uses adaptive classification and cluster-merging to find the multiple clusters of a complex image query. When the measures of a retrieval method are invariant under linear transformations, the method achieves the same retrieval quality regardless of the shapes of the clusters of a query; since our method uses such measures, it attains this high retrieval quality. Extensive experiments show that the result of our method converges quickly to the user's true information need, and that its retrieval quality is about 22% better in recall and 20% better in precision than the query expansion approach, and about 34% better in recall and 33% better in precision than the query point movement approach, in MARS.
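The paper itself specifies the adaptive classification and cluster-merging procedure; purely as an illustration of the ideas in the abstract, the sketch below shows one way a disjunctive, multi-cluster query could be scored with a linear-transformation-invariant measure (Mahalanobis distance) and how nearby clusters might be merged. The class names, the merging rule, and the threshold value are assumptions made for this example, not the authors' actual QCluster algorithm.

```python
import numpy as np

class QueryCluster:
    """One region of a disjunctive query: mean and covariance of its feedback images."""
    def __init__(self, points):
        self.points = np.atleast_2d(points)
        self.mean = self.points.mean(axis=0)
        # Small regularization keeps the covariance invertible for small clusters.
        self.cov = np.cov(self.points, rowvar=False) + 1e-6 * np.eye(self.points.shape[1])

    def mahalanobis(self, x):
        """Mahalanobis distance: invariant under linear transformations of the feature space."""
        d = x - self.mean
        return float(np.sqrt(d @ np.linalg.inv(self.cov) @ d))

def query_distance(clusters, x):
    """Disjunctive query: an image matches if it is close to ANY cluster (minimum distance)."""
    return min(c.mahalanobis(x) for c in clusters)

def merge_close_clusters(clusters, threshold=2.0):
    """Illustrative merging rule (assumed): fuse a cluster into an existing one whose
    Mahalanobis distance to its center is below the threshold, pooling their points."""
    merged = []
    for c in clusters:
        for i, m in enumerate(merged):
            if m.mahalanobis(c.mean) < threshold:
                merged[i] = QueryCluster(np.vstack([m.points, c.points]))
                break
        else:
            merged.append(c)
    return merged

# Example: two feedback regions in a 2-D feature space, scored against a candidate image.
a = QueryCluster(np.random.randn(20, 2))
b = QueryCluster(np.random.randn(20, 2) + 10.0)
print(query_distance(merge_close_clusters([a, b]), np.array([9.5, 10.2])))
```

Because the Mahalanobis distance is computed from each cluster's own covariance, rescaling or linearly transforming the feature axes does not change the ranking it induces, which is the invariance property the abstract relies on.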