Digital Image Processing, Part 16
Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt. Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)

16. IMAGE FEATURE EXTRACTION

An image feature is a distinguishing primitive characteristic or attribute of an image. Some features are natural in the sense that they are defined by the visual appearance of an image, while other, artificial features result from specific manipulations of an image. Natural features include the luminance of a region of pixels and gray scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features.

Image features are of major importance in the isolation of regions of common property within an image (image segmentation) and the subsequent identification or labeling of such regions (image classification). Image segmentation is discussed in Chapter 17. References 1 to 4 provide information on image classification techniques. This chapter describes several types of image features that have been proposed for image segmentation and classification. Before introducing them, however, methods of evaluating their performance are discussed.

16.1. IMAGE FEATURE EVALUATION

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, that which results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a
segmentation procedure using a test set of features. Then the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt (5) have utilized the Bhattacharyya distance (3) figure of merit for texture feature evaluation. The method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

$$B(S_1, S_2) = -\ln \int \left[ p(\mathbf{x} \mid S_1)\, p(\mathbf{x} \mid S_2) \right]^{1/2} d\mathbf{x} \tag{16.1-1}$$

where $\mathbf{x}$ denotes a vector containing individual image feature measurements with conditional density $p(\mathbf{x} \mid S_i)$. It can be shown (3) that the B-distance is related monotonically to the Chernoff bound for the probability of classification error using a Bayes classifier. The bound on the error probability is

$$P \le \left[ P(S_1)\, P(S_2) \right]^{1/2} \exp\{ -B(S_1, S_2) \} \tag{16.1-2}$$

where $P(S_i)$ represents the a priori class probability. For future reference, the Chernoff error bound is tabulated in Table 16.1-1 as a function of B-distance for equally likely feature classes. For Gaussian densities, the B-distance becomes

$$B(S_1, S_2) = \frac{1}{8} (\mathbf{u}_1 - \mathbf{u}_2)^T \left[ \frac{\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2}{2} \right]^{-1} (\mathbf{u}_1 - \mathbf{u}_2) + \frac{1}{2} \ln \left\{ \frac{\left| \dfrac{\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2}{2} \right|}{|\boldsymbol{\Sigma}_1|^{1/2}\, |\boldsymbol{\Sigma}_2|^{1/2}} \right\} \tag{16.1-3}$$

where $\mathbf{u}_i$ and $\boldsymbol{\Sigma}_i$ represent the feature mean vector and the feature covariance matrix of the classes, respectively.
Calculation of the B-distance for other densities is generally difficult. Consequently, the B-distance figure of merit is applicable only for Gaussian-distributed feature data, which fortunately is the common case. In practice, features to be evaluated by Eq. 16.1-3 are measured in regions whose class has been determined independently. Sufficient feature measurements need be taken so that the feature mean vector and covariance matrix can be estimated accurately.
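The Gaussian B-distance of Eq. 16.1-3 and the Chernoff bound of Eq. 16.1-2 are straightforward to compute numerically. The following sketch (function names are illustrative; NumPy is assumed available) implements both expressions directly:

```python
import numpy as np

def bhattacharyya_distance(u1, cov1, u2, cov2):
    """B-distance between two Gaussian feature classes (Eq. 16.1-3)."""
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    avg = 0.5 * (cov1 + cov2)        # (Sigma1 + Sigma2) / 2
    d = u1 - u2
    term1 = 0.125 * d @ np.linalg.inv(avg) @ d
    term2 = 0.5 * np.log(np.linalg.det(avg) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def chernoff_bound(b, p1=0.5, p2=0.5):
    """Upper bound on Bayes classification error (Eq. 16.1-2)."""
    return np.sqrt(p1 * p2) * np.exp(-b)
```

For equally likely classes, `chernoff_bound(1.0)` reproduces the first entry of Table 16.1-1 (about 1.84 × 10⁻¹), and identical class statistics give a B-distance of zero, as expected.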
TABLE 16.1-1. Relationship of Bhattacharyya Distance and Chernoff Error Bound

    B(S1, S2)    Error Bound
    1            1.84 × 10⁻¹
    2            6.77 × 10⁻²
    4            9.16 × 10⁻³
    6            1.24 × 10⁻³
    8            1.68 × 10⁻⁴
    10           2.27 × 10⁻⁵
    12           3.07 × 10⁻⁶

16.2. AMPLITUDE FEATURES

The most basic of all image features is some measure of image amplitude in terms of luminance, tristimulus value, spectral value, or other units. There are many degrees of freedom in establishing image amplitude features. Image variables such as luminance or tristimulus values may be utilized directly, or alternatively, some linear, nonlinear, or perhaps noninvertible transformation can be performed to generate variables in a new amplitude space. Amplitude measurements may be made at specific image points [e.g., the amplitude $F(j,k)$ at pixel coordinate $(j,k)$], or over a neighborhood centered at $(j,k)$. For example, the average or mean image amplitude in a $W \times W$ pixel neighborhood is given by

$$M(j,k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} F(j+m,\, k+n) \tag{16.2-1}$$

where $W = 2w + 1$. An advantage of a neighborhood, as opposed to a point measurement, is a diminishing of noise effects because of the averaging process. A disadvantage is that object edges falling within the neighborhood can lead to erroneous measurements.

The median of pixels within a $W \times W$ neighborhood can be used as an alternative amplitude feature to the mean measurement of Eq. 16.2-1, or as an additional feature. The median is defined to be that pixel amplitude in the window for which one-half of the pixels are equal or smaller in amplitude, and one-half are equal or greater in amplitude. Another useful image amplitude feature is the neighborhood standard deviation, which can be computed as

$$S(j,k) = \frac{1}{W} \left[ \sum_{m=-w}^{w} \sum_{n=-w}^{w} \left[ F(j+m,\, k+n) - M(j+m,\, k+n) \right]^2 \right]^{1/2} \tag{16.2-2}$$
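The neighborhood mean, median, and standard deviation features can be sketched as below. This is a minimal illustration (function name and looping strategy are my own): it processes only interior pixels that have a full window, and the standard deviation here is the simpler variant that subtracts the window mean, whereas Eq. 16.2-2 subtracts the moving mean $M(j+m, k+n)$ evaluated at each neighbor.

```python
import numpy as np

def neighborhood_features(image, w):
    """Mean (Eq. 16.2-1), median, and standard deviation of each
    interior pixel's (2w+1) x (2w+1) neighborhood.

    Returns arrays of shape (rows - 2w, cols - 2w); border pixels
    lacking a full window are omitted for simplicity.
    """
    image = np.asarray(image, dtype=float)
    W = 2 * w + 1
    rows, cols = image.shape
    out_shape = (rows - 2 * w, cols - 2 * w)
    mean = np.empty(out_shape)
    median = np.empty(out_shape)
    std = np.empty(out_shape)
    for j in range(out_shape[0]):
        for k in range(out_shape[1]):
            window = image[j:j + W, k:k + W]
            mean[j, k] = window.mean()
            median[j, k] = np.median(window)
            # population form (1/W^2), subtracting the window mean
            std[j, k] = window.std()
    return mean, median, std
```

On a 3 × 3 window holding the values 1 through 9, the mean and median both evaluate to 5, and the standard deviation is small where the window is smooth, large where it straddles an edge, which is exactly the dispersion behavior exploited in Figure 16.2-1.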
FIGURE 16.2-1. Image amplitude features of the washington_ir image: (a) original; (b) 7 × 7 pyramid mean; (c) 7 × 7 standard deviation; (d) 7 × 7 plus median.

In the literature, the standard deviation image feature is sometimes called the image dispersion. Figure 16.2-1 shows an original image and the mean, median, and standard deviation of the image computed over a small neighborhood.

The mean and standard deviation of Eqs. 16.2-1 and 16.2-2 can be computed indirectly in terms of the histogram of image pixels within a neighborhood. This leads to a class of image amplitude histogram features. Referring to Section 5.7, the first-order probability distribution of the amplitude of a quantized image may be defined as

$$P(b) = P_R[F(j,k) = r_b] \tag{16.2-3}$$

where $r_b$ denotes the quantized amplitude level for $0 \le b \le L - 1$. The first-order histogram estimate of $P(b)$ is simply
$$P(b) \approx \frac{N(b)}{M} \tag{16.2-4}$$

where $M$ represents the total number of pixels in a neighborhood window centered about $(j,k)$, and $N(b)$ is the number of pixels of amplitude $r_b$ in the same window.

The shape of an image histogram provides many clues as to the character of the image. For example, a narrowly distributed histogram indicates a low-contrast image. A bimodal histogram often suggests that the image contains an object with a narrow amplitude range against a background of differing amplitude. The following measures have been formulated as quantitative shape descriptors of a first-order histogram (6).

Mean:
$$S_M \equiv \bar{b} = \sum_{b=0}^{L-1} b\, P(b) \tag{16.2-5}$$

Standard deviation:
$$S_D \equiv \sigma_b = \left[ \sum_{b=0}^{L-1} (b - \bar{b})^2 P(b) \right]^{1/2} \tag{16.2-6}$$

Skewness:
$$S_S = \frac{1}{\sigma_b^3} \sum_{b=0}^{L-1} (b - \bar{b})^3 P(b) \tag{16.2-7}$$

Kurtosis:
$$S_K = \frac{1}{\sigma_b^4} \sum_{b=0}^{L-1} (b - \bar{b})^4 P(b) - 3 \tag{16.2-8}$$

Energy:
$$S_N = \sum_{b=0}^{L-1} [P(b)]^2 \tag{16.2-9}$$

Entropy:
$$S_E = -\sum_{b=0}^{L-1} P(b) \log_2\{P(b)\} \tag{16.2-10}$$
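The six first-order descriptors of Eqs. 16.2-5 to 16.2-10 translate directly into code. The sketch below (the function name is mine; NumPy is assumed) estimates $P(b)$ from a window of integer amplitudes via Eq. 16.2-4 and then evaluates each measure:

```python
import numpy as np

def histogram_features(window, levels):
    """First-order histogram shape descriptors, Eqs. 16.2-5 to 16.2-10.

    `window` holds integer amplitudes in [0, levels - 1].
    """
    window = np.asarray(window).ravel()
    counts = np.bincount(window, minlength=levels)
    p = counts / window.size                        # Eq. 16.2-4
    b = np.arange(levels, dtype=float)
    mean = np.sum(b * p)                            # S_M
    sd = np.sqrt(np.sum((b - mean) ** 2 * p))       # S_D
    skew = np.sum((b - mean) ** 3 * p) / sd ** 3    # S_S
    kurt = np.sum((b - mean) ** 4 * p) / sd ** 4 - 3.0  # S_K
    energy = np.sum(p ** 2)                         # S_N
    nz = p[p > 0]                                   # avoid log2(0)
    entropy = -np.sum(nz * np.log2(nz))             # S_E
    return {"mean": mean, "sd": sd, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```

As a sanity check, a two-level window split evenly between amplitudes 0 and 1 gives mean 0.5, entropy 1 bit, energy 0.5, zero skewness, and kurtosis −2, consistent with the normalization discussed below Eq. 16.2-10.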
FIGURE 16.2-2. Relationship of pixel pairs: pixel $F(m,n)$ is separated from pixel $F(j,k)$ by radial distance $r$ at angle $\theta$.

The factor of 3 inserted in the expression for the kurtosis measure normalizes $S_K$ to zero for a zero-mean, Gaussian-shaped histogram. Another useful histogram shape measure is the histogram mode, which is the pixel amplitude corresponding to the histogram peak (i.e., the most commonly occurring pixel amplitude in the window). If the histogram peak is not unique, the peak closest to the mean is usually chosen as the histogram shape descriptor.

Second-order histogram features are based on the definition of the joint probability distribution of pairs of pixels. Consider two pixels $F(j,k)$ and $F(m,n)$ that are located at coordinates $(j,k)$ and $(m,n)$, respectively, and, as shown in Figure 16.2-2, are separated by $r$ radial units at an angle $\theta$ with respect to the horizontal axis. The joint distribution of image amplitude values is then expressed as

$$P(a,b) = P_R[F(j,k) = r_a,\ F(m,n) = r_b] \tag{16.2-11}$$

where $r_a$ and $r_b$ represent quantized pixel amplitude values. As a result of the discrete rectilinear representation of an image, the separation parameters $(r, \theta)$ may assume only certain discrete values. The histogram estimate of the second-order distribution is

$$P(a,b) \approx \frac{N(a,b)}{M} \tag{16.2-12}$$

where $M$ is the total number of pixels in the measurement window and $N(a,b)$ denotes the number of occurrences for which $F(j,k) = r_a$ and $F(m,n) = r_b$.

If the pixel pairs within an image are highly correlated, the entries in $P(a,b)$ will be clustered along the diagonal of the array. Various measures, listed below, have been proposed (6,7) to specify the energy spread about the diagonal of $P(a,b)$.

Autocorrelation:
$$S_A = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a\, b\, P(a,b) \tag{16.2-13}$$
Covariance:
$$S_C = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - \bar{a})(b - \bar{b})\, P(a,b) \tag{16.2-14a}$$

where

$$\bar{a} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a\, P(a,b) \tag{16.2-14b}$$

$$\bar{b} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} b\, P(a,b) \tag{16.2-14c}$$

Inertia:
$$S_I = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - b)^2\, P(a,b) \tag{16.2-15}$$

Absolute value:
$$S_V = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} |a - b|\, P(a,b) \tag{16.2-16}$$

Inverse difference:
$$S_F = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} \frac{P(a,b)}{1 + (a - b)^2} \tag{16.2-17}$$

Energy:
$$S_G = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} [P(a,b)]^2 \tag{16.2-18}$$

Entropy:
$$S_T = -\sum_{a=0}^{L-1} \sum_{b=0}^{L-1} P(a,b) \log_2\{P(a,b)\} \tag{16.2-19}$$

The utilization of second-order histogram measures for texture analysis is considered in Section 16.6.
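The second-order measures of Eqs. 16.2-13 to 16.2-19 can be sketched as follows. This illustration (function name mine; NumPy assumed) accumulates $N(a,b)$ for a fixed, non-negative row/column offset and then evaluates each measure on the normalized estimate $P(a,b)$ of Eq. 16.2-12:

```python
import numpy as np

def second_order_features(image, dr, dc, levels):
    """Second-order histogram measures (Eqs. 16.2-13 to 16.2-19)
    for pixel pairs separated by the non-negative offset (dr, dc)."""
    image = np.asarray(image)
    rows, cols = image.shape
    # accumulate N(a, b) over all pairs F(j,k)=a, F(j+dr,k+dc)=b
    n = np.zeros((levels, levels))
    for j in range(rows - dr):
        for k in range(cols - dc):
            n[image[j, k], image[j + dr, k + dc]] += 1
    p = n / n.sum()                                  # Eq. 16.2-12
    a = np.arange(levels)[:, None]
    b = np.arange(levels)[None, :]
    abar = np.sum(a * p)
    bbar = np.sum(b * p)
    return {
        "autocorrelation": np.sum(a * b * p),                   # S_A
        "covariance": np.sum((a - abar) * (b - bbar) * p),      # S_C
        "inertia": np.sum((a - b) ** 2 * p),                    # S_I
        "absolute_value": np.sum(np.abs(a - b) * p),            # S_V
        "inverse_difference": np.sum(p / (1 + (a - b) ** 2)),   # S_F
        "energy": np.sum(p ** 2),                               # S_G
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),       # S_T
    }
```

For a constant image, all mass sits on the diagonal of $P(a,b)$, so the inertia and entropy vanish while the energy and inverse difference equal 1, matching the clustering argument above.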
16.3. TRANSFORM COEFFICIENT FEATURES

The coefficients of a two-dimensional transform of a luminance image specify the amplitude of the luminance patterns (two-dimensional basis functions) of the transform such that the weighted sum of the luminance patterns is identical to the image. By this characterization of a transform, the coefficients may be considered to indicate the degree of correspondence of a particular luminance pattern with an image field. If a basis pattern is of the same spatial form as a feature to be detected within the image, detection can be performed simply by monitoring the value of the transform coefficient. The problem, in practice, is that objects to be detected within an image are often of complex shape and luminance distribution, and hence do not correspond closely to the more primitive luminance patterns of most image transforms.

Lendaris and Stanley (8) have investigated the application of the continuous two-dimensional Fourier transform of an image, obtained by a coherent optical processor, as a means of image feature extraction. The optical system produces an electric field radiation pattern proportional to

$$\mathcal{F}(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x,y) \exp\{-i(\omega_x x + \omega_y y)\}\, dx\, dy \tag{16.3-1}$$

where $(\omega_x, \omega_y)$ are the image spatial frequencies. An optical sensor produces an output

$$M(\omega_x, \omega_y) = |\mathcal{F}(\omega_x, \omega_y)|^2 \tag{16.3-2}$$

proportional to the intensity of the radiation pattern. It should be observed that $\mathcal{F}(\omega_x, \omega_y)$ and $F(x,y)$ are unique transform pairs, but $M(\omega_x, \omega_y)$ is not uniquely related to $F(x,y)$. For example, $M(\omega_x, \omega_y)$ does not change if the origin of $F(x,y)$ is shifted. In some applications, the translation invariance of $M(\omega_x, \omega_y)$ may be a benefit. Angular integration of $M(\omega_x, \omega_y)$ over the spatial frequency plane produces a spatial frequency feature that is invariant to translation and rotation.
Representing $M(\omega_x, \omega_y)$ in polar form, this feature is defined as

$$N(\rho) = \int_0^{2\pi} M(\rho, \theta)\, d\theta \tag{16.3-3}$$

where $\theta = \arctan\{\omega_y / \omega_x\}$ and $\rho^2 = \omega_x^2 + \omega_y^2$. Invariance to changes in scale is an attribute of the feature

$$P(\theta) = \int_0^{\infty} M(\rho, \theta)\, d\rho \tag{16.3-4}$$
FIGURE 16.3-1. Fourier transform feature masks.

The Fourier domain intensity pattern $M(\omega_x, \omega_y)$ is normally examined in specific regions to isolate image features. As an example, Figure 16.3-1 defines regions for the following Fourier features:

Horizontal slit:
$$S_1(m) = \int_{-\infty}^{\infty} \int_{\omega_y(m)}^{\omega_y(m+1)} M(\omega_x, \omega_y)\, d\omega_y\, d\omega_x \tag{16.3-5}$$

Vertical slit:
$$S_2(m) = \int_{\omega_x(m)}^{\omega_x(m+1)} \int_{-\infty}^{\infty} M(\omega_x, \omega_y)\, d\omega_y\, d\omega_x \tag{16.3-6}$$

Ring:
$$S_3(m) = \int_{\rho(m)}^{\rho(m+1)} \int_0^{2\pi} M(\rho, \theta)\, d\theta\, d\rho \tag{16.3-7}$$

Sector:
$$S_4(m) = \int_0^{\infty} \int_{\theta(m)}^{\theta(m+1)} M(\rho, \theta)\, d\theta\, d\rho \tag{16.3-8}$$
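A discrete analog of the ring and sector features can be sketched as below. This is an illustrative implementation under my own conventions (function name, bin edges, and the folding of the symmetric spectrum onto $[0, \pi)$ are assumptions, not the book's masks): it centers the discrete spectrum, forms the intensity $M = |\mathcal{F}|^2$, and sums $M$ over radial annuli and angular wedges.

```python
import numpy as np

def ring_sector_features(image, n_rings, n_sectors):
    """Discrete ring (cf. Eq. 16.3-7) and sector (cf. Eq. 16.3-8)
    features from the centered Fourier intensity M = |F|^2."""
    f = np.fft.fftshift(np.fft.fft2(np.asarray(image, float)))
    m = np.abs(f) ** 2
    rows, cols = m.shape
    y, x = np.indices((rows, cols))
    cy, cx = rows // 2, cols // 2
    rho = np.hypot(y - cy, x - cx)
    # fold angles onto [0, pi): M is symmetric for real images
    theta = np.mod(np.arctan2(y - cy, x - cx), np.pi)
    ring_edges = np.linspace(0.0, rho.max() + 1e-9, n_rings + 1)
    sector_edges = np.linspace(0.0, np.pi, n_sectors + 1)
    rings = np.array(
        [m[(rho >= ring_edges[i]) & (rho < ring_edges[i + 1])].sum()
         for i in range(n_rings)])
    sectors = np.array(
        [m[(theta >= sector_edges[i]) & (theta < sector_edges[i + 1])].sum()
         for i in range(n_sectors)])
    return rings, sectors
```

Because the rings and the sectors each partition the whole frequency plane, both feature vectors sum to the total spectral energy, which is a convenient consistency check.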
FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays: (a) rectangle; (b) rectangle transform; (c) ellipse; (d) ellipse transform; (e) triangle; (f) triangle transform.

For a discrete image array $F(j,k)$, the discrete Fourier transform

$$\mathcal{F}(u,v) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j,k) \exp\left\{ \frac{-2\pi i}{N} (uj + vk) \right\} \tag{16.3-9}$$
for $u, v = 0, \ldots, N - 1$ can be examined directly for feature extraction purposes. Horizontal slit, vertical slit, ring, and sector features can be defined analogously to Eqs. 16.3-5 to 16.3-8. This concept can be extended to other unitary transforms, such as the Hadamard and Haar transforms. Figure 16.3-2 presents discrete Fourier transform log magnitude displays of several geometric shapes.

16.4. TEXTURE DEFINITION

Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas, the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments.

Several authors have attempted qualitatively to define texture. Pickett (9) states that "texture is used to describe two-dimensional arrays of variations... The elements and rules of spacing or arrangement may be arbitrarily manipulated, provided a characteristic repetitiveness remains." Hawkins (10) has provided a more detailed description of texture: "The notion of texture appears to depend upon three ingredients: (1) some local 'order' is repeated over a region which is large in comparison to the order's size, (2) the order consists in the nonrandom arrangement of elementary parts and (3) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region." Although these descriptions of texture seem perceptually reasonable, they do not immediately lead to simple quantitative textural measures in the sense that the description of an edge discontinuity leads to a quantitative description of an edge in terms of its location, slope angle, and height.

Texture is often qualitatively described by its coarseness, in the sense that a patch of wool cloth is coarser than a patch of silk cloth under the same viewing conditions.
The coarseness index is related to the spatial repetition period of the local structure. A large period implies a coarse texture; a small period implies a fine texture. This perceptual coarseness index is clearly not sufficient as a quantitative texture measure, but can at least be used as a guide for the slope of texture measures; that is, small numerical texture measures should imply fine texture, and large numerical measures should indicate coarse texture.

It should be recognized that texture is a neighborhood property of an image point. Therefore, texture measures are inherently dependent on the size of the observation neighborhood. Because texture is a spatial property, measurements should be restricted to regions of relative uniformity. Hence it is necessary to establish the boundary of a uniform textural region by some form of image segmentation before attempting texture measurements.

Texture may be classified as being artificial or natural. Artificial textures consist of arrangements of symbols, such as line segments, dots, and stars, placed against a neutral background. Several examples of artificial texture are presented in Figure 16.4-1 (9). As the name implies, natural textures are images of natural scenes containing semirepetitive arrangements of pixels. Examples include photographs of brick walls, terrazzo tile, sand, and grass. Brodatz (11) has published an album of photographs of naturally occurring textures. Figure 16.4-2 shows several natural texture examples obtained by digitizing photographs from the Brodatz album.
FIGURE 16.4-1. Artificial texture.
FIGURE 16.4-2. Brodatz texture fields: (a) sand; (b) grass; (c) wool; (d) raffia.

16.5. VISUAL TEXTURE DISCRIMINATION

A discrete stochastic field is an array of numbers that are randomly distributed in amplitude and governed by some joint probability density (12). When converted to light intensities, such fields can be made to approximate natural textures surprisingly well by control of the generating probability density. This technique is useful for generating realistic-appearing artificial scenes for applications such as airplane flight simulators. Stochastic texture fields are also an extremely useful tool for investigating human perception of texture as a guide to the development of texture feature extraction methods.

In the early 1960s, Julesz (13) attempted to determine the parameters of stochastic texture fields of perceptual importance. This study was extended later by Julesz et al. (14–16). Further extensions of Julesz's work have been made by Pollack (17),
FIGURE 16.5-1. Stochastic texture field generation model.

Purks and Richards (18), and Pratt et al. (19). These studies have provided valuable insight into the mechanism of human visual perception and have led to some useful quantitative texture measurement methods.

Figure 16.5-1 is a model for stochastic texture generation. In this model, an array of independent, identically distributed random variables $W(j,k)$ passes through a linear or nonlinear spatial operator $O\{\cdot\}$ to produce a stochastic texture array $F(j,k)$. By controlling the form of the generating probability density $p(W)$ and the spatial operator, it is possible to create texture fields with specified statistical properties.

Consider a continuous amplitude pixel $x_0$ at some coordinate $(j,k)$ in $F(j,k)$. Let the set $\{z_1, z_2, \ldots, z_J\}$ denote neighboring pixels, not necessarily nearest geometric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fashion. The conditional probability density of $x_0$ conditioned on the state of its neighbors is given by

$$p(x_0 \mid z_1, \ldots, z_J) = \frac{p(x_0, z_1, \ldots, z_J)}{p(z_1, \ldots, z_J)} \tag{16.5-1}$$

The first-order density $p(x_0)$ employs no conditioning, the second-order density $p(x_0 \mid z_1)$ implies that $J = 1$, the third-order density implies that $J = 2$, and so on.

16.5.1. Julesz Texture Fields

In his pioneering texture discrimination experiments, Julesz utilized Markov process state methods to create stochastic texture arrays independently along rows of the array. The family of Julesz stochastic arrays is defined below.

1. Notation. Let $x_n = F(j, k - n)$ denote a row neighbor of pixel $x_0$, and let $P(m)$, for $m = 1, 2, \ldots, M$, denote a desired probability generating function.

2. First-order process. Set $x_0 = m$ for a desired probability function $P(m)$. The resulting pixel probability is

$$P(x_0) = P(x_0 = m) = P(m) \tag{16.5-2}$$
3. Second-order process. Set $F(j, 1) = m$ for $P(m) = 1/M$, and set $x_0 = (x_1 + m)\,\mathrm{MOD}\{M\}$, where the modulus function $p\,\mathrm{MOD}\{q\} \equiv p - [q \times (p \div q)]$ for integers $p$ and $q$. This gives a first-order probability

$$P(x_0) = \frac{1}{M} \tag{16.5-3a}$$

and a transition probability

$$p(x_0 \mid x_1) = P[x_0 = (x_1 + m)\,\mathrm{MOD}\{M\}] = P(m) \tag{16.5-3b}$$

4. Third-order process. Set $F(j, 1) = m$ for $P(m) = 1/M$, and set $F(j, 2) = n$ for $P(n) = 1/M$. Choose $x_0$ to satisfy $2x_0 = (x_1 + x_2 + m)\,\mathrm{MOD}\{M\}$. The governing probabilities then become

$$P(x_0) = \frac{1}{M} \tag{16.5-4a}$$

$$p(x_0 \mid x_1) = \frac{1}{M} \tag{16.5-4b}$$

$$p(x_0 \mid x_1, x_2) = P[2x_0 = (x_1 + x_2 + m)\,\mathrm{MOD}\{M\}] = P(m) \tag{16.5-4c}$$

This process has the interesting property that pixel pairs along a row are independent, and consequently, the process is spatially uncorrelated.

Figure 16.5-2 contains several examples of Julesz texture field discrimination tests performed by Pratt et al. (19). In these tests, the textures were generated according to the presentation format of Figure 16.5-3. In these and subsequent visual texture discrimination tests, the perceptual differences are often small. Proper discrimination testing should be performed using high-quality photographic transparencies, prints, or electronic displays. The following moments were used as simple indicators of differences between generating distributions and densities of the stochastic fields:

$$\eta = E\{x_0\} \tag{16.5-5a}$$

$$\sigma^2 = E\{[x_0 - \eta]^2\} \tag{16.5-5b}$$

$$\alpha = \frac{E\{[x_0 - \eta][x_1 - \eta]\}}{\sigma^2} \tag{16.5-5c}$$

$$\theta = \frac{E\{[x_0 - \eta][x_1 - \eta][x_2 - \eta]\}}{\sigma^3} \tag{16.5-5d}$$
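The second-order Julesz process above can be sketched in a few lines. This is an illustrative generator (function name, seeding, and the uniform row start are my conventions; NumPy assumed): each row begins with a uniform draw over $\{0, \ldots, M-1\}$, and each subsequent pixel is $x_0 = (x_1 + m)\,\mathrm{MOD}\{M\}$ with $m$ drawn from the generating function $P(m)$, so that $P(m)$ controls the transition statistics while the first-order marginal stays uniform.

```python
import numpy as np

def julesz_second_order(rows, cols, pm, seed=0):
    """Generate a Julesz second-order stochastic field (Eqs. 16.5-3).

    `pm` is the generating probability function P(m) over m = 0..M-1.
    """
    rng = np.random.default_rng(seed)
    M = len(pm)
    field = np.empty((rows, cols), dtype=int)
    field[:, 0] = rng.integers(0, M, size=rows)   # uniform row start
    for k in range(1, cols):
        m = rng.choice(M, size=rows, p=pm)        # increment ~ P(m)
        field[:, k] = (field[:, k - 1] + m) % M   # x0 = (x1 + m) MOD M
    return field
```

With the degenerate choice $P(m)$ concentrated on a single increment, the process becomes a deterministic modular ramp along each row, which makes the transition rule of Eq. 16.5-3b easy to verify.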
FIGURE 16.5-2. Field comparison of Julesz stochastic fields; $\eta_A = \eta_B = 0.500$: (a) different first order, $\sigma_A = 0.289$, $\sigma_B = 0.204$; (b) different second order, $\sigma_A = \sigma_B = 0.289$, $\alpha_A = 0.250$, $\alpha_B = -0.250$; (c) different third order, $\sigma_A = \sigma_B = 0.289$, $\alpha_A = \alpha_B = 0.000$, $\theta_A = 0.058$, $\theta_B = -0.058$.

The examples of Figure 16.5-2a and b indicate that texture field pairs differing in their first- and second-order distributions can be discriminated. The example of Figure 16.5-2c supports the conjecture, attributed to Julesz, that differences in third-order, and presumably higher-order, distribution texture fields cannot be perceived, provided that their first-order and second-order distributions are pairwise identical.
FIGURE 16.5-3. Presentation format for visual texture discrimination experiments.

16.5.2. Pratt, Faugeras, and Gagalowicz Texture Fields

Pratt et al. (19) have extended the work of Julesz et al. (13–16) in an attempt to study the discriminability of spatially correlated stochastic texture fields. A class of Gaussian fields was generated according to the conditional probability density

$$p(x_0 \mid z_1, \ldots, z_J) = \frac{\left[(2\pi)^{J+1} |\mathbf{K}_{J+1}|\right]^{-1/2} \exp\left\{-\tfrac{1}{2}(\mathbf{v}_{J+1} - \boldsymbol{\eta}_{J+1})^T \mathbf{K}_{J+1}^{-1} (\mathbf{v}_{J+1} - \boldsymbol{\eta}_{J+1})\right\}}{\left[(2\pi)^{J} |\mathbf{K}_{J}|\right]^{-1/2} \exp\left\{-\tfrac{1}{2}(\mathbf{v}_{J} - \boldsymbol{\eta}_{J})^T \mathbf{K}_{J}^{-1} (\mathbf{v}_{J} - \boldsymbol{\eta}_{J})\right\}} \tag{16.5-6a}$$

where

$$\mathbf{v}_J = \begin{bmatrix} z_1 \\ \vdots \\ z_J \end{bmatrix} \tag{16.5-6b}$$

$$\mathbf{v}_{J+1} = \begin{bmatrix} x_0 \\ \mathbf{v}_J \end{bmatrix} \tag{16.5-6c}$$

The covariance matrix of Eq. 16.5-6a is of the parametric form
FIGURE 16.5-4. Row correlation factors for stochastic field generation: (a) constrained second-order density; (b) constrained third-order density. Dashed line, field A; solid line, field B.

$$\mathbf{K}_{J+1} = \sigma^2 \begin{bmatrix} 1 & \alpha & \beta & \gamma & \cdots \\ \alpha & & & & \\ \beta & & \sigma^{-2}\mathbf{K}_J & & \\ \gamma & & & & \\ \vdots & & & & \end{bmatrix} \tag{16.5-7}$$

where $\alpha, \beta, \gamma, \ldots$ denote correlation lag terms. Figure 16.5-4 presents an example of the row correlation functions used in the texture field comparison tests described below.

Figures 16.5-5 and 16.5-6 contain examples of Gaussian texture field comparison tests. In Figure 16.5-5, the first-order densities are set equal, but the second-order nearest neighbor conditional densities differ according to the covariance function plot of Figure 16.5-4a. Visual discrimination can be made in Figure 16.5-5, in which the correlation parameter differs by 20%. Visual discrimination has been found to be marginal when the correlation factor differs by less than 10% (19). The first- and second-order densities of each field are fixed in Figure 16.5-6, and the third-order
FIGURE 16.5-5. Field comparison of Gaussian stochastic fields with different second-order nearest neighbor densities; $\eta_A = \eta_B = 0.500$, $\sigma_A = \sigma_B = 0.167$: (a) $\alpha_A = 0.750$, $\alpha_B = 0.900$; (b) $\alpha_A = 0.500$, $\alpha_B = 0.600$.

conditional densities differ according to the plan of Figure 16.5-4b. Visual discrimination is possible. The test of Figure 16.5-6 seemingly provides a counterexample to the Julesz conjecture. In this test, $p^A(x_0) = p^B(x_0)$ and $p^A(x_0, x_1) = p^B(x_0, x_1)$, but $p^A(x_0, x_1, x_2) \ne p^B(x_0, x_1, x_2)$. However, the general second-order density pairs $p^A(x_0, z_j)$ and $p^B(x_0, z_j)$ are not necessarily equal for an arbitrary neighbor $z_j$, and therefore the conditions necessary to disprove Julesz's conjecture are violated.

FIGURE 16.5-6. Field comparison of Gaussian stochastic fields with different third-order nearest neighbor densities; $\eta_A = \eta_B = 0.500$, $\sigma_A = \sigma_B = 0.167$, $\alpha_A = \alpha_B = 0.750$: (a) $\beta_A = 0.563$, $\beta_B = 0.600$; (b) $\beta_A = 0.563$, $\beta_B = 0.400$.

To test the Julesz conjecture for realistically appearing texture fields, it is necessary to generate a pair of fields with identical first-order densities, identical Markovian-type second-order densities, and differing third-order densities for every
pair of similar observation points in both fields. An example of such a pair of fields is presented in Figure 16.5-7 for a non-Gaussian generating process (19). In this example, the texture appears identical in both fields, thus supporting the Julesz conjecture.

FIGURE 16.5-7. Field comparison of correlated Julesz stochastic fields with identical first- and second-order densities, but different third-order densities; $\eta_A = \eta_B = 0.500$, $\sigma_A = \sigma_B = 0.167$, $\alpha_A = \alpha_B = 0.850$, $\theta_A = 0.040$, $\theta_B = -0.027$.

Gagalowicz has succeeded in generating a pair of texture fields that disprove the Julesz conjecture (20). However, the counterexample, shown in Figure 16.5-8, is not very realistic in appearance. Thus, it seems likely that if a statistically based texture measure can be developed, it need not utilize statistics greater than second order.

FIGURE 16.5-8. Gagalowicz counterexample.