Fig.3. Cleaning effects comparison for the facial image. (a) Image with size 528×644 pixels. (b) and (c) Results obtained by bilateral filtering with different parameters. (d) Result of Matsui et al. [12] (e) Mask painted by the user to specify the region of cleaning. (f) Result of our method. Apparently, the bumps as well as the freckles are removed thoroughly, while the pores on the face are retained. (g)∼(k) Five zoomed-in skin patches selected from the same position (the rectangle) of Figs. 3(a)∼3(d) and 3(f), respectively.

Subsequently, we tested our face cleaning approach with a set of facial images. For color images, we first convert them from RGB space to YCrCb space, and then perform the cleaning operation on the Y component (a sketch of this luminance-channel pipeline is given at the end of this section). Finally, we convert the YCrCb space back to RGB space to produce the resulting image. In the following experimental results, when computing the IPD with our algorithm, λ is set to 0.4.

Fig.3 shows a comparison of our cleaning method, bilateral filtering, and the technique of Matsui et al. [12] The original image to be processed is shown in Fig.3(a), and the mask image we used is shown in Fig.3(e). Our ALNEMD decomposes the original image into three IMFs (K = 2) together with a residual. In the original image, there are a number of pronounced scars and bumps, especially the scars on the left part of the face, making the task of cleaning challenging. However, as can be seen from our processed image (Fig.3(f), with parameter h = 5), the pores on the face are well preserved, while the large rough part of the face (at bottom left) has been repaired. Figs. 3(b) and 3(c) show the results of bilateral filtering with different parameters. For Fig.3(b), the geometric spread σd = 1 and the photometric spread σr = 1000; for Fig.3(c), σd = 2 and σr = 3000. It should be noted that in Fig.3(b) a small σd is adopted in an attempt to preserve pores, and a large σr is adopted to smooth the edges of bumps and scars, but the small bumps (at top left) are not removed completely in the result. In another bilateral filtering experiment, we increase σd to remove the bumps (at top left); it appears that the pores are removed while the scars (at bottom left) are still present, as shown in Fig.3(c). Since a large σr would blur the edges of the face, when performing bilateral filtering on Fig.3(a) we also use the mask image (see Fig.3(e)) to keep the eyes and mouth out of the filtering in Figs. 3(b) and 3(c). Fig.3(d) is the result of Matsui et al. [12] with ε1 = 31, ε2 = 29, ε3 = 10, a = 3. As can be seen, part of the skin pores are retained, but the spots and the scars on the left side of the face are left behind. Figs. 3(g)∼3(k) are five zoomed-in skin patches selected from the same position (the rectangle) of Figs. 3(a)∼3(d) and 3(f), respectively.

Since the IPD is derived from the normalized local energy, it reflects the relative energy of image pixels and works in the case of noisy, blurry, and different-resolution images. Fig.4(a) is the original image part and Fig.4(b) is the corresponding IPD. Fig.4(c) is a noisy version of Fig.4(a) with additive Gaussian white noise of variance 25, and Fig.4(d) is the IPD of Fig.4(c). As can be seen, the IPD can still represent the visual perception of face images. Fig.4(e) is a motion-blurred version of Fig.4(a) with a translation of 20 pixels and a rotation of 10 degrees, and Fig.4(f) is the corresponding IPD. We can see that in Fig.4(e) the pores are blurred, leaving behind some scars, which are represented with high IPD values in Fig.4(f). Fig.4(g) is an image with a
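The degraded test inputs used in the Fig.4 robustness check above can be approximated as in the following sketch. This is an illustration only: OpenCV and NumPy are assumptions rather than the paper's tooling, the helper names add_gaussian_noise and linear_motion_blur are hypothetical, the blur kernel models only the translational component of the described motion, and the IPD computation itself (which depends on the paper's normalized local energy definition) is not reproduced.

```python
import cv2
import numpy as np

def add_gaussian_noise(gray, variance=25.0):
    # Additive white Gaussian noise of the stated variance (std = sqrt(25) = 5),
    # as used to produce the noisy version in Fig.4(c).
    noise = np.random.normal(0.0, np.sqrt(variance), gray.shape)
    return np.clip(gray.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def linear_motion_blur(gray, length=20, angle_deg=10.0):
    # Linear motion-blur kernel of the given length, rotated by angle_deg.
    # This only approximates the translational part of the blur in Fig.4(e);
    # the rotational component of the motion is not modelled here.
    kernel = np.zeros((length, length), dtype=np.float64)
    kernel[length // 2, :] = 1.0
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= kernel.sum()                 # normalize so brightness is preserved
    return cv2.filter2D(gray, -1, kernel)  # same depth as the input image
```

Feeding the outputs of these helpers to the IPD computation would reproduce the kind of degraded inputs shown in Figs. 4(c) and 4(e).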
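As referenced earlier, for color images the cleaning operates on the luminance channel only, and a user-painted mask keeps the eyes and mouth out of the filtering. The sketch below shows this luminance-channel pipeline with bilateral filtering standing in for the smoothing step; it is not the paper's ALNEMD-based cleaning. The use of OpenCV/NumPy, the function name clean_luminance, and the masked compositing are illustrative assumptions.

```python
import cv2
import numpy as np

def clean_luminance(rgb, mask, sigma_d=1.0, sigma_r=1000.0):
    # rgb: H x W x 3 uint8 image; mask: H x W, nonzero inside the region to clean.
    # RGB -> YCrCb: the cleaning is applied to the Y (luminance) channel only.
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_RGB2YCrCb)
    y = ycrcb[:, :, 0]

    # Stand-in smoother: bilateral filtering with the geometric spread sigma_d
    # and photometric spread sigma_r quoted in the comparison above.
    y_clean = cv2.bilateralFilter(y, d=0, sigmaColor=sigma_r, sigmaSpace=sigma_d)

    # Composite: cleaned luminance inside the mask, original luminance elsewhere,
    # so that eyes and mouth (outside the mask) are left untouched.
    ycrcb[:, :, 0] = np.where(mask > 0, y_clean, y)

    # YCrCb -> RGB to produce the resulting image.
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2RGB)
```

For example, clean_luminance(img, mask, sigma_d=2, sigma_r=3000) corresponds to the second bilateral setting used for Fig.3(c).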