FIGURE 17.5 Overview of an image compression system (image coder; error control coding, encryption, multiplexing, etc.; modulator; channel; demodulator; inverse processing; image decoder)

number is the sampled value of the image at a pixel (picture element) location. These numbers are represented with finite precision using a fixed number of bits. Until recently, the dominant image size was 512 × 512 pixels with 8 bits, or 1 byte, per pixel. The total storage size for such an image is 512² ≈ 0.25 × 10⁶ bytes, or 0.25 Mbytes. When digital image processing first emerged in the 1960s, this was considered a formidable amount of data, and so interest in developing ways to reduce this storage requirement arose immediately. Since that time, image compression has continued to be an active area of research. The recent emergence of standards for image coding algorithms and the commercial availability of very large scale integration (VLSI) chips that implement image coding algorithms are indicative of the present maturity of the field, although research activity continues apace.

With declining memory costs and increasing transmission bandwidths, 0.25 Mbytes is no longer considered the large amount of data that it once was. This might suggest that the need for image compression is not as great as it was previously. Unfortunately (or fortunately, depending on one's point of view), this is not the case, because our appetite for image data has also grown enormously over the years. The old 512 × 512 pixels × 1 byte per pixel "standard" was a consequence of the spatial and gray-scale resolution of sensors and displays that were commonly available until recently. At this time, displays with more than 10³ × 10³ pixels and 24 bits/pixel to allow full color representation (8 bits each for red, green, and blue) are becoming commonplace. Thus, our 0.25-Mbyte standard image size has grown to 3 Mbytes. This is just the tip of the iceberg, however. For example, in desktop printing applications, a 4-color (cyan, magenta, yellow, and black) image of an 8.5 × 11 in.² page sampled at 600 dots per in. requires 134 Mbytes. In remote sensing applications, a typical hyperspectral image contains terrain irradiance measurements in each of 200 10-nm-wide spectral bands at 25-m intervals on the ground. Each measurement is recorded with 12-bit precision. Such data are acquired from aircraft or satellites and are used in agriculture, forestry, and other fields concerned with the management of natural resources. Storage of these data from just a 10 × 10 km² area requires 4800 Mbytes.

Figure 17.5 shows the essential components of an image compression system. At the system input, the image is encoded into its compressed form by the image coder. The compressed image may then be subjected to further digital processing, such as error control coding, encryption, or multiplexing with other data sources, before being used to modulate the analog signal that is actually transmitted through the channel or stored in a storage medium. At the system output, the image is processed step by step to undo each of the operations that was performed on it at the system input. At the final step, the image is decoded into its original uncompressed form by the image decoder. Because of the role of the image encoder and decoder in an image compression system, image coding is often used as a synonym for image compression. If the reconstructed image is identical to the original image, the compression is said to be lossless; otherwise, it is lossy.

Image compression algorithms depend for their success on two separate factors: redundancy and irrelevancy. Redundancy refers to the fact that each pixel in an image does not take on all possible values with equal probability, and the value that it does take on is not independent of that of the other pixels in the image.
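The raw storage figures quoted above follow directly from pixel count × bytes per pixel. A minimal sketch that reproduces the arithmetic (the helper `raw_bytes` is illustrative, not from the text; "Mbyte" here means 10⁶ bytes, as in the text, so the classic 512 × 512 image works out to 262,144 bytes, the text's 0.25 Mbytes):

```python
def raw_bytes(width, height, bits_per_pixel, planes=1):
    """Raw (uncompressed) storage in bytes for a width x height image."""
    return width * height * planes * bits_per_pixel // 8

# Classic 512 x 512 image, 8 bits (1 byte) per pixel.
legacy = raw_bytes(512, 512, 8)                         # 262,144 bytes
# 10^3 x 10^3 full-color display, 24 bits/pixel (8 each for R, G, B).
display = raw_bytes(1000, 1000, 24)                     # 3,000,000 bytes
# 8.5 x 11 in. page at 600 dots/in., 4 color planes (CMYK), 8 bits each.
page = raw_bytes(int(8.5 * 600), 11 * 600, 8, planes=4)

print(f"512 x 512 x 8 bit  : {legacy / 1e6:.2f} Mbytes")
print(f"1000 x 1000 x 24 bit: {display / 1e6:.2f} Mbytes")
print(f"8.5 x 11 in. CMYK   : {page / 1e6:.1f} Mbytes")
```

The 600-dpi page comes out at 134.6 Mbytes, matching the 134 Mbytes cited in the text.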
If this were not true, the image would appear as a white noise pattern such as that seen when a television receiver is tuned to an unused channel. From an information-theoretic point of view, such an image contains the

© 2000 by CRC Press LLC
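The white-noise comparison above can be made concrete with a zero-order entropy estimate: when every 8-bit value is equally likely and independent (the "unused channel" pattern), close to 8 bits/pixel are needed, while a typical image with a skewed histogram needs far fewer. A minimal sketch, with synthetic stand-in data (the `image` and `noise` arrays are illustrative, not from the text):

```python
import math
import random

def entropy_bits(pixels):
    """Zero-order (histogram) entropy of a pixel sequence, in bits per pixel."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
# "Unused channel" pattern: all 256 gray levels equally likely, independently.
noise = [random.randrange(256) for _ in range(65536)]
# Crude stand-in for a real image: mostly uniform background plus a small
# darker region using only a handful of gray levels.
image = [200] * 58000 + [random.choice((40, 45, 50, 55)) for _ in range(7536)]

print(f"white noise   : {entropy_bits(noise):.2f} bits/pixel")  # close to 8
print(f"typical image : {entropy_bits(image):.2f} bits/pixel")  # far below 8
```

Note that zero-order entropy captures only the first kind of redundancy (unequal value probabilities); inter-pixel dependence lets a good coder do better still.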