MOBILE VISUAL CLOTHING SEARCH

George A. Cushen and Mark S. Nixon
University of Southampton
{gc505, msn}@ecs.soton.ac.uk

ABSTRACT

We present a mobile visual clothing search system whereby a smart phone user can either choose a social networking photo or take a new photo of a person wearing clothing of interest and search for similar clothing in a retail database. From the query image, the person is detected, clothing is segmented, and clothing features are extracted and quantized. The information is sent from the phone client to a server, where the feature vector of the query image is used to retrieve similar clothing products from online databases. The phone's GPS location is used to re-rank results by retail store location. State of the art work focusses primarily on the offline recognition of a diverse range of clothing and pays little attention to practical applications. Evaluated on a challenging dataset, the system is relatively fast and achieves promising results.

Index Terms— Clothes Search, Mobile Search, Image Retrieval

1. INTRODUCTION

Clothing was the fastest growing segment in US e-commerce last year, predicted to have grown by 20% to $40.9 billion from 2011 to 2012, and it is also expected to have been the second biggest segment by overall revenue [1]. An efficient mobile application that automatically recognizes clothing in photos of people and retrieves similar clothing items available for sale from retailers could therefore transform the way we shop, whilst offering retailers great potential for commercial gain. Closely connected to this is the potential for an efficient clothing retrieval system to be employed for highly targeted mobile advertising, which learns what clothing a person may wish to purchase from their social networking photos.
The problem of efficient and practical mobile clothing search appears relatively unexplored in the literature. Recently, the fields of clothing segmentation, recognition and parsing have started to gain considerable attention. Gallagher and Chen designed a graph-cuts approach to segment clothing [2] in order to aid person recognition. Various priors for segmenting clothing have been proposed by Hasan and Hogg [3] and by Wang and Ai [4]. Meanwhile, Yang and Yu [5] proposed integrating tracking and clothing segmentation to recognize clothing in surveillance videos. Although their method is fast, they capture their dataset in a controlled lab setting with a simple white background. The clothing retrieval problem has been studied less extensively; one scenario is presented in [6, 7]. State of the art work focusses primarily on clothes parsing and semantic classification [8, 9]. Although Yamaguchi et al. achieve good performance, they only briefly demonstrate retrieval and their method is very computationally intensive.

Current mobile image retrieval systems include Google Goggles, Kooaba, and LookTel (google.com/mobile/goggles, kooaba.com, looktel.com). However, these systems are developed for image retrieval on general objects in a scene. When applied to clothes search, they can provide visually and categorically less relevant results than our method for retrieving products based on a dressed person, and can have significantly longer response times.

The main contributions of this paper are as follows: (1) we present a novel mobile client-server framework for automatic visual clothes searching; (2) we propose an extension of GrabCut for the purpose of clothing segmentation; (3) we propose a dominant colour descriptor for the efficient and compact representation of clothing; and (4) we evaluate our approach on query images from a fashion social network dataset together with a clothing product dataset, showing promising retrieval results with a relatively fast response time. The contributions of this paper thus reside in a mobile system for automated clothes search with proven capability.
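Contribution (2) builds on GrabCut; the specific extension is detailed later in the paper, beyond this excerpt. As a rough illustration of the underlying idea only, the sketch below seeds OpenCV's standard GrabCut with a torso rectangle derived from a person detection. The HOG detector, the rectangle proportions and the iteration count are assumptions chosen for illustration, not the authors' parameters.

```python
# Minimal sketch (not the paper's exact extension): seed OpenCV's GrabCut
# with a rectangle derived from a detected person to isolate upper-body clothing.
import cv2
import numpy as np

def segment_upper_body_clothing(image_bgr, grabcut_iters=5):
    """Return a binary mask of likely upper-body clothing pixels, or None."""
    # 1. Detect the person (HOG pedestrian detector; an assumption for this sketch).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(image_bgr, winStride=(8, 8))
    if len(rects) == 0:
        return None
    x, y, w, h = max(rects, key=lambda r: r[2] * r[3])  # keep largest detection

    # 2. Heuristic torso rectangle: roughly below the head, above the waist.
    torso = (int(x + w // 8), int(y + h // 5), int(3 * w // 4), int(2 * h // 5))

    # 3. Run GrabCut initialised from the torso rectangle.
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, torso, bgd_model, fgd_model,
                grabcut_iters, cv2.GC_INIT_WITH_RECT)

    # 4. Keep definite and probable foreground as the clothing mask.
    clothing_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return clothing_mask.astype(np.uint8)
```

A full implementation would typically also need to suppress skin and hair pixels within the torso region; how the paper's extension departs from standard GrabCut is described beyond this excerpt.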
2. SYSTEM OVERVIEW

The pipeline of our mobile visual clothing search system for retrieving similar clothing products in nearby retail stores is shown in Figure 1. A smart phone user can either capture a photo of a person wearing clothing of interest or choose an existing photo, such as one from a social network. The person is then detected in the image, and our clothing segmentation method is applied to select, as far as possible, only the clothing pixels for the next step of feature extraction. Note that we only consider searching upper body clothing since the images in our social
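Following segmentation, clothing features are extracted and quantized on the phone before the compact descriptor is sent to the server, as outlined in the abstract. The paper's dominant colour descriptor (contribution 3) is defined later in the paper; the sketch below only illustrates one common way of computing dominant colours, by k-means clustering of the segmented clothing pixels. The CIELAB colour space and k = 4 clusters are assumptions for illustration, not the authors' settings.

```python
# Illustrative sketch of a dominant colour feature (assumed CIELAB, k=4);
# the paper's actual descriptor may differ in detail.
import cv2
import numpy as np

def dominant_colour_descriptor(image_bgr, clothing_mask, k=4):
    """Quantize masked clothing pixels into k dominant colours.

    Returns (centres, weights): k Lab colour centres and the fraction of
    clothing pixels assigned to each, sorted by decreasing weight.
    """
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    pixels = lab[clothing_mask > 0].astype(np.float32)
    if pixels.shape[0] < k:
        return None

    # k-means with k-means++ seeding, 3 restarts.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centres = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)

    weights = np.bincount(labels.flatten(), minlength=k) / float(len(labels))
    order = np.argsort(weights)[::-1]  # most dominant colour first
    return centres[order], weights[order]
```

On the server side, products could then be ranked by a distance between such descriptors, with the phone's GPS position used afterwards to re-rank the results by retail store proximity, as the abstract describes.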