434 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 14, NO. 2, MARCH/APRIL 2008

Fig. 8. (b), (c), and (d) The replacement results for a screenshot image (a) of a 3D cube. This example illustrates that our approach retains the perspective distortion convincingly.

Fig. 9. Image retexturing result of clothes.

Fig. 10. Image retexturing result of curtains.

Fig. 11. Results with variant texton scales. The key vectors specified are shown in (a). (b) shows the computed mesh in texture space. From top left to bottom right, texton scales in (c) and (d) vary progressively from small to big along the horizontal and vertical directions.

replaced texture with the means of simulating texture distortions. With the stretch-based parameterization, however, our retexturing approach can create such effects with ease (Figs. 11c and 11d).

In our approach, we only need to specify a few key vectors over the input image (Fig. 11a) to describe texton scales. The scale of each mesh edge, computed by inverse distance interpolation, is integrated into the objective function:

E_l = \sum_{(i,j)\in\mathrm{edges}} w_{ij} \, \frac{\bigl(\|Q_i - Q_j\|^2 - (1/s_{ij}^2)\, l_{ij}^2\bigr)^2}{l_{ij}^2},   (25)

where 1/s_{ij} represents the ideal scale interpolated for edge e(Q_i Q_j); the smaller the edge scale is, the bigger the texton scale. Minimizing the function yields the parameterized mesh in texture space (Fig. 11b).
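As a rough illustration, the edge-scale interpolation and the minimization of Eq. (25) can be sketched as follows. The function names, the inverse-distance weighting exponent, and the gradient-descent solver are all our own assumptions; the paper specifies only the energy itself and that edge scales come from inverse distance interpolation of the user's key vectors.

```python
import numpy as np

def idw_edge_scales(edge_midpoints, key_points, key_scales, power=2.0, eps=1e-12):
    # Inverse-distance-weighted interpolation of user-specified key scales
    # to each edge midpoint (hypothetical helper; `power` is an assumption).
    d = np.linalg.norm(edge_midpoints[:, None, :] - key_points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ key_scales) / w.sum(axis=1)

def energy(Q, edges, w, s, l):
    # E_l from Eq. (25): sum over edges of
    #   w_ij * (||Q_i - Q_j||^2 - (1/s_ij^2) * l_ij^2)^2 / l_ij^2
    i, j = edges[:, 0], edges[:, 1]
    d2 = np.sum((Q[i] - Q[j]) ** 2, axis=1)
    return float(np.sum(w * (d2 - l**2 / s**2) ** 2 / l**2))

def minimize_El(Q0, edges, w, s, l, steps=3000, lr=1e-3):
    # Plain gradient descent on E_l; the paper does not name a solver,
    # so this optimizer choice is an assumption.
    Q = Q0.astype(float).copy()
    i, j = edges[:, 0], edges[:, 1]
    for _ in range(steps):
        diff = Q[i] - Q[j]
        d2 = np.sum(diff ** 2, axis=1)
        # dE/dQ_i = 4 w_ij (d2 - l^2/s^2) / l^2 * (Q_i - Q_j)
        coef = 4.0 * w * (d2 - l**2 / s**2) / l**2
        grad = np.zeros_like(Q)
        np.add.at(grad, i, coef[:, None] * diff)
        np.add.at(grad, j, -coef[:, None] * diff)
        Q -= lr * grad
    return Q
```

At the optimum each edge length approaches l_ij / s_ij, so a smaller interpolated scale s_ij stretches the edge in texture space, which is what makes the textons appear larger there.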
Results with self-occlusion. In a self-occluded region, texture patches may be covered by their neighboring ones, so the replaced textures may appear discontinuous there. This is mainly caused by discontinuities in the visual appearance of the underlying 3D surface. Unlike previous methods, which cannot create such effects easily, our mesh-guided retexturing approach can generate them with convenient user interaction.

In our approach, visual discontinuities are handled during the stretch-based parameterization. We interactively specify relevant points around the boundary of the occluded region (for example, the red points in Fig. 12a). Each of these points is accompanied by a virtual point. The original point and its virtual counterpart represent the adjacent invisible and visible triangles, respectively. During mesh parameterization, a virtual distance is assigned between each original specified point and its virtual counterpart. Fig. 12b shows the generated mesh in texture space; the holes in the mesh produce discontinuities of the replaced result in the self-occluded region. Figs. 12c and 12d show two replacement results.

Fig. 13 compares our method with Textureshop [1]. The input image is the inset in Fig. 13b. Fig. 13a is our retexturing result, and Fig. 13b is copied directly from the result generated by Textureshop [1, Fig. 7]. Texture discontinuities are created in the drapes of the sculpture in both results. The result generated by our mesh-guided approach is visually comparable to that
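The virtual-point construction can be illustrated on a toy mesh: each user-marked boundary vertex is duplicated, and the triangles on the invisible side are re-indexed to the duplicate, so the two sides separate during parameterization and a hole opens in texture space. This is a minimal sketch under an assumed indexed-triangle layout; the function name and arguments are hypothetical, not the paper's API.

```python
import numpy as np

def split_self_occlusion_seam(vertices, triangles, seam_ids, invisible_tris):
    # Duplicate each user-marked seam vertex as a "virtual" point and make the
    # invisible-side triangles reference the duplicate. A stretch-based
    # parameterization can then pull each original/virtual pair apart by a
    # virtual distance, opening a hole (texture discontinuity) along the seam.
    verts = [tuple(v) for v in vertices]
    tris = [list(t) for t in triangles]
    virtual_of = {}
    for v in seam_ids:
        virtual_of[v] = len(verts)
        verts.append(verts[v])          # virtual copy starts coincident
    for t in invisible_tris:
        tris[t] = [virtual_of.get(v, v) for v in tris[t]]
    return np.asarray(verts, float), np.asarray(tris, int), virtual_of

# Two triangles sharing the edge (0, 2); triangle 1 lies on the invisible side.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
T = np.array([[0, 1, 2], [0, 2, 3]])
V2, T2, dup = split_self_occlusion_seam(V, T, seam_ids=[0, 2], invisible_tris=[1])
```

After the split, the two triangles share no vertices, so nothing forces their texture coordinates to agree along the former shared edge, which is exactly what produces the discontinuity seen in Figs. 12c and 12d.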