image that is deformed to match the evolution of the normal field throughout the video. For the synthesis method, a minimum advection tree is constructed to deal with the visibility issues caused by the dynamic motion of moving objects. This tree determines the initial frame for each image cluster and the advection of clusters among frames. The main challenge of video texture replacement is how to stably track the moving objects and their interior regions. At present, accurately tracking moving objects in dynamic video is an open problem, and the replaced textures drift in the experimental results of [4].

To stably track moving objects and their interior parts, Scholz and Magnor presented a video texture replacement system [5] that uses color-coded patterns. The deformation of the texture throughout the video clip can be extracted accurately, and since the deformation is accurate, compelling results can be achieved. However, videos are usually captured by off-the-shelf cameras without the color-coded patterns, so the system is not applicable to them. Our approach is designed for videos in which such special patterns are unavailable.

Recently, White and Forsyth proposed another video retexturing method [12]. At the coarse scale, the old texture is replaced with a new one by tracking the deforming surface in 2D. At the fine scale, the local irradiance is estimated to preserve the structure information under the real lighting environment. Since local irradiance estimation is difficult and unreliable, the approach is limited to screen printing with a finite number of colors. Our method can be applied to video sequences with rich color details.

2.3 Object Tracking

Object tracking is the process of locating a moving object throughout the whole image sequence taken by a video camera. For general object motion, nonparametric algorithms such as optical flow [13] can be applied. When the motion can be described using simple models, methods based on feature points and parametric models are preferable [14]. For instance, Jin et al. presented a combined model of geometry and photometry to track features and detect outliers in video [15]. Contour tracking can be more effective than isolated point tracking for nonrigid objects. Agarwala et al. [16] and Wang et al. [17] introduced frameworks for tracking the contours of moving objects in video sequences, based on spatiotemporal optimization and user assistance. Chuang et al. described a method for accurately tracking a specified trimap for video matting [18]. A trimap is a labeling image in which 0 stands for background, 1 stands for foreground, and the rest is the unknown region to be labeled. Stable tracking of the trimap is carried out based on a robust optical flow algorithm [19].
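As a concrete illustration of this kind of flow-based label propagation, the Python sketch below advects a trimap from one frame to the next. It is only a minimal sketch of the general idea: OpenCV's Farneback dense flow is substituted for the robust optical flow of [19], and the function name, the 0/0.5/1 label encoding, and the backward-warping strategy are illustrative assumptions rather than the actual method of [18].

import cv2
import numpy as np

def advect_trimap(prev_frame, next_frame, trimap):
    """Warp a trimap defined on prev_frame into the coordinates of next_frame.

    trimap: float32 array in which 0 marks background, 1 marks foreground,
    and an intermediate value (here 0.5) marks the unknown region, following
    the labeling convention described above.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Backward flow (next -> prev): for each pixel of the next frame, estimate
    # where it came from in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Sample the previous trimap at those source locations (backward warping),
    # using nearest-neighbor interpolation so labels are not blended.
    h, w = trimap.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_x = grid_x + flow[..., 0]
    map_y = grid_y + flow[..., 1]
    return cv2.remap(trimap, map_x, map_y, cv2.INTER_NEAREST)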
3 IMAGE RETEXTURING

The key issue of image texture replacement is how to preserve the distortion effect of the texture, as well as the shading effect encoded in the original image. Texture distortion is mainly caused by the undulation of the underlying surface of the object in the given image. We assume that the surface on which texture replacement is performed is nearly developable; otherwise, the surface can be divided into several nearly developable parts, each of which is handled by the texture replacement separately. Based on this assumption, the basic idea is to convert the reconstruction of the underlying 3D surface of the ROI into the computation of its corresponding mesh in texture space. Using projective geometry, we further formulate the retexturing task as a stretch-based mesh parameterization problem. After the parameterization is completed, the result is further refined with a Poisson-based refinement process.

3.1 Mesh Generation

We first generate an initial mesh on the concerned region and make its shape consistent with the underlying geometry of this region. Mesh generation for images was addressed in motion compensation for video compression [20], where a content-based mesh was computed by extracting a set of feature points followed by Delaunay triangulation. Our algorithm, described below, shares the same idea as [20]. First, the concerned region is specified interactively by outlining the boundary of the ROI using snakes. To reduce user intervention, our approach also supports extracting the ROI using up-to-date segmentation techniques [21], [9]. Second, we employ an edge detection operator, for example, the Canny operator, to extract feature points inside the ROI. To keep the points uniform, some auxiliary points are usually added. Finally, the constrained Delaunay triangulation algorithm is adopted to generate a feature-consistent mesh. Fig. 1 shows an example of mesh generation.

Fig. 1. Mesh generation. (a) The input image. (b) The generated mesh. The yellow dots are detected by the Canny operator, whereas the green ones are added automatically with a distance threshold to maintain mesh uniformity.
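The following Python sketch walks through the same three steps: Canny feature points restricted to the ROI, auxiliary points added with a distance threshold for uniformity, and a constrained Delaunay triangulation that keeps the ROI boundary as constraint segments. It assumes the boundary polygon is already available (from the snake or a segmentation tool) and uses OpenCV for edge detection and the triangle package (a binding of Shewchuk's Triangle) for the constrained triangulation; the thresholds and function names are illustrative choices, not those used in the paper.

import cv2
import numpy as np
import triangle  # Python binding of Shewchuk's Triangle

def build_roi_mesh(image, roi_polygon, min_dist=12, canny_lo=50, canny_hi=150):
    """Generate a feature-consistent triangle mesh inside a region of interest.

    roi_polygon: (N, 2) array of boundary vertices in image (x, y) coordinates,
    e.g., obtained from a snake or an interactive segmentation tool.
    """
    h, w = image.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    cv2.fillPoly(mask, [roi_polygon.astype(np.int32)], 255)

    # Step 1: feature points from Canny edges, restricted to the ROI interior.
    edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), canny_lo, canny_hi)
    edges[mask == 0] = 0
    ys, xs = np.nonzero(edges)
    candidates = np.column_stack([xs, ys]).astype(np.float64)

    # Simple greedy thinning: accept a candidate only if it stays at least
    # min_dist away from every point accepted so far.
    points = [p for p in roi_polygon.astype(np.float64)]
    for p in candidates:
        if all(np.hypot(*(p - q)) >= min_dist for q in points):
            points.append(p)

    # Step 2: auxiliary points on a coarse grid fill large empty areas so that
    # the triangles do not become overly elongated.
    for y in range(0, h, 2 * min_dist):
        for x in range(0, w, 2 * min_dist):
            p = np.array([x, y], np.float64)
            if mask[y, x] and all(np.hypot(*(p - q)) >= min_dist for q in points):
                points.append(p)

    # Step 3: constrained Delaunay triangulation with the ROI boundary edges as
    # segments, so that no triangle crosses the outline.
    n_bnd = len(roi_polygon)
    segments = np.column_stack([np.arange(n_bnd), (np.arange(n_bnd) + 1) % n_bnd])
    mesh = triangle.triangulate({'vertices': np.array(points),
                                 'segments': segments}, 'p')
    return mesh['vertices'], mesh['triangles']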
3.2 Mesh Parameterization

Let M denote the generated mesh of the input image. M is a 2D mesh that represents the 2D projection of the 3D surface of the ROI. If the normal vector of every mesh point of M is recovered, the normal field of M encodes the geometric shape of the underlying surface. To obtain the distortion effect of the new texture, it is feasible to first parameterize M and then map the new texture onto the ROI. Since M is a 2D mesh, parameterizing M onto the texture space is a 2D-to-2D mapping, which can be computed using the geometric information of M.

Let M' be the parameterized mesh in texture space. Theoretically, M' can be completely determined by the lengths of all edges and the topology of M. To avoid artifacts, the topology of M' should be the same as that of M, and the length of each edge in M' should ideally be equal to the 3D length of its corresponding edge on the underlying surface.
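As a minimal sketch of the constraint stated here (identical topology, edge lengths of M' matching prescribed 3D lengths), the Python snippet below solves for the texture-space vertex positions by nonlinear least squares. It assumes that the per-edge target lengths have already been estimated from the recovered normal field; the function names and the plain least-squares solver are illustrative stand-ins for the paper's stretch-based optimization.

import numpy as np
from scipy.optimize import least_squares

def mesh_edges(triangles):
    """Unique undirected edges (i, j) with i < j of a triangle mesh."""
    e = np.vstack([triangles[:, [0, 1]], triangles[:, [1, 2]], triangles[:, [2, 0]]])
    return np.unique(np.sort(e, axis=1), axis=0)

def parameterize_mesh(vertices_2d, triangles, target_lengths):
    """Solve for texture-space positions M' whose edge lengths approach the
    prescribed target lengths while keeping the topology of M.

    vertices_2d:    (n, 2) image-space positions of M, used as the initial guess.
    triangles:      (m, 3) vertex indices shared by M and M'.
    target_lengths: per-edge target lengths, ordered as mesh_edges(triangles),
                    e.g., estimates of the 3D edge lengths of the underlying surface.
    """
    edges = mesh_edges(triangles)
    i, j = edges[:, 0], edges[:, 1]

    def residuals(flat_uv):
        uv = flat_uv.reshape(-1, 2)
        d = np.linalg.norm(uv[i] - uv[j], axis=1)
        return d - target_lengths  # zero when every edge reaches its target length

    # Start from the image-space mesh and refine; the solution is defined only
    # up to a rigid motion, so re-center it afterwards.
    uv = least_squares(residuals, vertices_2d.astype(np.float64).ravel()).x.reshape(-1, 2)
    return uv - uv.mean(axis=0)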