426 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 14, NO. 2, MARCH/APRIL 2008

Mesh-Guided Optimized Retexturing for Image and Video

Yanwen Guo, Hanqiu Sun, Member, IEEE, Qunsheng Peng, and Zhongding Jiang, Member, IEEE

Abstract—This paper presents a novel approach for replacing textures of specified regions in the input image and video using stretch-based mesh optimization. The retexturing results have similar distortion and shading effects conforming to the unknown underlying geometry and lighting conditions. For replacing textures in a single image, two important steps are developed: the stretch-based mesh parameterization incorporating the recovered normal information is deduced to imitate the perspective distortion of the region of interest; the Poisson-based refinement process is exploited to account for texture distortion at fine scale. The luminance of the input image is preserved through color transfer in YCbCr color space. Our approach is independent of the replaced textures. Once the input image is processed, any new texture can be applied to efficiently generate the retexturing results. For video retexturing, we propose key-frame-based texture replacement extended and generalized from the image retexturing. Our approach repeatedly propagates the replacement results of key frames to the rest of the frames. We develop a local motion optimization scheme to deal with the inaccuracies and errors of robust optical flow when tracking moving objects. Visibility shifting and texture drifting are effectively alleviated using the graphcut segmentation algorithm and a global optimization that smooths the trajectories of the tracked points over the temporal domain. Our experimental results show that the proposed approach can generate visually pleasing results for retextured images and video.

Index Terms—Texture replacement, parameterization, Poisson equation, graphcut segmentation.
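The luminance-preserving color transfer in YCbCr space mentioned in the abstract can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the function names are my own, and it assumes full-range ITU-R BT.601 conversion on float RGB images in [0, 1]. The idea is to keep the chroma (Cb, Cr) of the warped new texture while copying the luma (Y) of the original image, so the original shading survives the replacement.

```python
import numpy as np

# Full-range ITU-R BT.601 RGB -> YCbCr matrix (Cb/Cr centered at 0).
RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                      [-0.168736, -0.331264,  0.5     ],
                      [ 0.5,      -0.418688, -0.081312]])
YCBCR2RGB = np.linalg.inv(RGB2YCBCR)

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (..., 3) in [0, 1]."""
    return rgb @ RGB2YCBCR.T

def ycbcr_to_rgb(ycbcr):
    return ycbcr @ YCBCR2RGB.T

def transfer_luminance(original_rgb, texture_rgb):
    """Keep the chroma of the new texture, but replace its luma (Y)
    with the luma of the original image, preserving shading."""
    y_orig = rgb_to_ycbcr(original_rgb)[..., 0]
    tex_ycc = rgb_to_ycbcr(texture_rgb)
    tex_ycc[..., 0] = y_orig          # overwrite the luminance channel
    return ycbcr_to_rgb(tex_ycc)
```

In practice the texture image would first be warped by the mesh parameterization; the luma transfer is then a purely per-pixel operation.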
1 INTRODUCTION

EDITING the contents of photos/footage by changing the appearances of some regions with new textures is a common task for creating visual effects. This process is commonly referred to as retexturing or texture replacement. The key issue of texture replacement is how to preserve the original shading effect and texture distortion without knowing the underlying geometry and lighting conditions. Retexturing objects in images and video clips has wide applications in digital entertainment, virtual exhibition, art, and industrial design.

For retexturing an image, two fundamental issues must be addressed: how to deform the new texture to conform to the scene geometry, and how to keep the shading effect encoded in the original image for consistent lighting conditions. One possible solution to the first issue is recovering the 3D surface geometry using shape-from-shading techniques and then establishing a parameterization between the surface and the texture. Unfortunately, shape-from-shading techniques cannot accurately and efficiently recover 3D geometry from a single image. Even with multiple images, full recovery of 3D geometry is still an open problem in the computer vision community. For the second issue, relighting techniques can be adopted to change the intensities of the pixels of the new texture when the properties of the light sources and surface appearances are known beforehand. However, accurate recovery of these properties from a real-world image is more difficult than geometry recovery. Hence, relighting techniques are impractical for texture replacement.

For generating plausible visual effects, full recovery of 3D geometry and lighting conditions can be relaxed in practice. Fang and Hart proposed a normal-guided texture synthesis method that produces compelling replacement effects [1]. This method works well when a 3D surface is untextured, nearly diffuse, and illuminated by a single directional light source.
One limitation of this texture synthesis approach is that the synthesis process must be repeated whenever a new texture is applied. For regular/near-regular textures, which are popular in the real world, Liu et al. suggested an interactive scheme to extract the deformation field of a texture image with respect to the original sample [2]. The extraction of lighting information can also benefit from this restriction through a Markov process [3]. Nevertheless, this approach usually needs tedious user interaction with high accuracy.

A video clip is an image sequence in the time domain, which usually contains dynamic objects and lighting changes. Retexturing video is more challenging than retexturing an image due to these dynamic phenomena. In particular, keeping the texture coherent over time is more challenging. Temporal coherence demands that the new texture appear perceptually fixed on the 3D surface when an object or the camera moves. To achieve temporal coherence, the key-frame-based methods [4], [5] consist of two steps. First, a few video frames are selected as key frames on which

. Y. Guo is with the National Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, P.R. China. E-mail: ywguo@nju.edu.cn.
. H. Sun is with the Department of Computer Science and Engineering, the Chinese University of Hong Kong, Shatin, N.T., Hong Kong. E-mail: hanqiu@cse.cuhk.edu.hk.
. Q. Peng is with the State Key Lab of CAD&CG, Zhejiang University, Hangzhou 310058, P.R. China. E-mail: peng@cad.zju.edu.cn.
. Z. Jiang is with the Software School, Fudan University, Shanghai 201203, P.R. China. E-mail: zdjiang@fudan.edu.cn.

Manuscript received 26 Dec. 2006; revised 11 July 2007; accepted 27 Aug. 2007; published online 17 Sept. 2007. Recommended for acceptance by J. Dorsey.
For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org, and reference IEEECS Log Number TVCG-0231-1206. Digital Object Identifier no. 10.1109/TVCG.2007.70438.

1077-2626/08/$25.00 © 2008 IEEE. Published by the IEEE Computer Society.
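The global optimization that smooths the trajectories of tracked points over the temporal domain, mentioned in the abstract, can be sketched as follows. This is a hypothetical minimal formulation, not the paper's actual energy: it balances a data term against a second-difference (acceleration) penalty for one coordinate of one tracked point, solving the resulting linear system directly.

```python
import numpy as np

def smooth_trajectory(track, lam=5.0):
    """Globally smooth a tracked point's 1D coordinate sequence by minimizing
        sum_t (x_t - track_t)^2 + lam * sum_t (x_{t+1} - 2*x_t + x_{t-1})^2,
    i.e. a data-fidelity term plus an acceleration penalty over time.
    The optimum solves the linear system (I + lam * D^T D) x = track."""
    track = np.asarray(track, dtype=float)
    n = len(track)
    # Second-difference operator D of shape (n-2, n).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, track)
```

For the 2D trajectories of tracked feature points, each coordinate would be smoothed independently; for long videos a sparse solver would replace the dense `np.linalg.solve`.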