Image Databases - Chapter 4
4. SEGMENTATION AND EDGE DETECTION

4.1 Region Operations

Discovering regions can be a very simple exercise, as illustrated in 4.1.1. However, more often than not, regions are required that cover a substantial area of the scene rather than a small group of pixels.

4.1.1 Crude edge detection

USE. To reconsider an image as a set of regions.

OPERATION. There is no real operation involved here. The regions are simply identified as containing pixels of the same gray level; the boundaries of the regions (contours) lie at the cracks between the pixels rather than at pixel positions. Such region detection may give far too many regions to be useful (unless the number of gray levels is relatively small), so a simple refinement is to group pixels into ranges of near values (quantizing or bunching). The ranges can be chosen by considering the image histogram in order to identify good bunching, but this merges regions based on overall gray-level statistics rather than on the gray levels of pixels that are geographically near one another.

4.1.2 Region merging

It is often useful to do the rough gray-level split and then apply some technique to the cracks between the regions – not to enhance edges but to identify when whole regions are worth combining – thus reducing the number of regions produced by the crude region detection above.

USE. Reduce the number of regions, combining fragmented regions and determining which regions are really part of the same area.

OPERATION. Let s be the crack difference, i.e. the absolute difference in gray levels between two adjacent (above, below, left, right) pixels. Then, given a threshold value T, we assign to each crack

w = 1 if s < T, w = 0 otherwise

i.e. w is 1 if the crack difference is below the threshold (suggesting that the two sides are likely to belong to the same region), and 0 if it is at or above the threshold. Now measure the full boundary lengths of the two regions that meet at the crack; call these b1 and b2. Sum the w values along the common boundary between the regions and calculate

sum(w) / min(b1, b2)

If this ratio is greater than a further threshold, deduce that the two regions should be joined. Effectively this takes the number of cracks that suggest the regions should be merged and divides by the length of the smaller region boundary.
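A minimal sketch of this merge test in Python/NumPy, assuming a label image produced by the crude detector above; the helper names (boundary_length, should_merge) and the ratio threshold of 0.5 are illustrative assumptions, not from the text:

```python
import numpy as np

def boundary_length(mask):
    """Count the cracks between a region mask and everything outside it
    (cracks on the image border are ignored in this sketch)."""
    n = np.count_nonzero(mask[:, :-1] != mask[:, 1:])   # vertical cracks
    n += np.count_nonzero(mask[:-1, :] != mask[1:, :])  # horizontal cracks
    return n

def should_merge(img, labels, r1, r2, T, ratio_threshold=0.5):
    """The 4.1.2 test: sum of w over the common boundary of regions
    r1 and r2, divided by min(b1, b2), compared with a further threshold."""
    m1, m2 = labels == r1, labels == r2
    w_sum = 0
    # cracks between horizontal neighbours (x, y) and (x+1, y)
    between = (m1[:, :-1] & m2[:, 1:]) | (m2[:, :-1] & m1[:, 1:])
    diffs = np.abs(img[:, :-1].astype(int) - img[:, 1:].astype(int))
    w_sum += np.count_nonzero(between & (diffs < T))    # w = 1 where s < T
    # cracks between vertical neighbours (x, y) and (x, y+1)
    between = (m1[:-1, :] & m2[1:, :]) | (m2[:-1, :] & m1[1:, :])
    diffs = np.abs(img[:-1, :].astype(int) - img[1:, :].astype(int))
    w_sum += np.count_nonzero(between & (diffs < T))
    return w_sum / min(boundary_length(m1), boundary_length(m2)) > ratio_threshold
```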
Of course, a particularly irregular region may have a very long boundary relative to a small area; in that case it may be preferable to measure areas instead (count how many pixels each region contains). Dividing by the smaller of the two whole-region boundaries is better than dividing by the length of the common boundary between the regions, since it takes into account the sizes of the regions involved: if one region is very small, it will readily be added to a larger region, whereas if both regions are large, the evidence for combining them has to be much stronger.

4.1.3 Region splitting

Just as it is possible to start from many regions and merge them into fewer, larger regions, it is also possible to consider the image as one region and split it into more and more regions. One way of doing this is to examine the gray-level histogram. If the image is in color, better results can be obtained by examining the three color-value histograms.

USE. Sensibly subdivide an image, or part of an image, into regions of similar type.

OPERATION. Identify significant peaks in the gray-level histogram and look in the valleys between the peaks for possible threshold values. Some peaks will be more substantial than others: find splits between the "best" peaks first. Regions are identified as containing gray levels between the thresholds. With color images, there are three histograms to choose from. The algorithm halts when no peak is significant.

LIMITATION. This technique relies on the overall histogram giving good guidance as to sensible regions. If the image is a chessboard, then region splitting works nicely. If the image is of 16 chessboards well spaced apart on a white background sheet, then instead of identifying 17 regions, one for each chessboard and one for the background, it identifies 16 x 32 black squares, which is probably not what we wanted.
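A sketch of one histogram split, assuming an integer gray-scale image; the smoothing width and the significance test (a peak must hold at least a fraction min_peak of all pixels) are illustrative assumptions:

```python
import numpy as np

def split_threshold(img, min_peak=0.01):
    """Find one split threshold: locate the two strongest significant
    histogram peaks and return the deepest valley between them, or
    None when no significant peak pair exists (the halting condition)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    h = np.convolve(hist, np.ones(5) / 5, mode="same")   # light smoothing
    peaks = [i for i in range(1, 255) if h[i] >= h[i - 1] and h[i] >= h[i + 1]]
    peaks = [i for i in peaks if h[i] > min_peak * img.size]   # significance
    if len(peaks) < 2:
        return None
    best = sorted(peaks, key=lambda i: h[i])[-2:]        # two strongest peaks
    lo, hi = min(best), max(best)
    return lo + int(np.argmin(h[lo:hi + 1]))             # deepest valley
```

Pixels below the returned threshold form one region and pixels above it another; the same test can then be applied recursively to each side until it returns None.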
4.2 Basic Edge Detection

The edges of an image hold much of the information in that image. The edges tell where objects are, their shape and size, and something about their texture. An edge is where the intensity of an image moves from a low value to a high value, or vice versa.

There are numerous applications for edge detection, which is often used for various special effects. Digital artists use it to create dazzling image outlines. The output of an edge detector can be added back to an original image to enhance the edges. Edge detection is often the first step in image segmentation. Image segmentation, a field of image analysis, is used to group pixels into regions to determine an image's composition. A common example of image segmentation is the "magic wand" tool in photo editing software: the user selects a pixel in an image and the software draws a border around the pixels of similar value. The user may select a pixel in a sky region, and the magic wand would draw a border around the complete sky region in the image. The user may then edit the color of the sky without worrying about altering the color of the mountains or whatever else may be in the image.

Edge detection is also used in image registration, which aligns two images that may have been acquired at separate times or from different sensors.

Figure 4.1 Different edge profiles: step edge, ramp edge, line edge, roof edge.

There is an infinite number of edge orientations, widths and shapes (Figure 4.1). Some edges are straight while others are curved with varying radii. There are many edge detection techniques to go with all these edges, each having its own strengths. Some edge detectors may work well in one application and perform poorly in others. Sometimes it takes experimentation to determine the best edge detection technique for an application.

The simplest and quickest edge detectors determine the maximum value from a series of pixel subtractions. The homogeneity operator subtracts each of the 8 surrounding pixels from the center pixel of a 3 x 3 window, as in Figure 4.2. The output of the operator is the maximum of the absolute values of the differences. For example, for the window

13 15 11
16 11 16
11 12 11

new pixel = max{|11−13|, |11−15|, |11−11|, |11−16|, |11−16|, |11−11|, |11−12|, |11−11|} = 5

Figure 4.2 How the homogeneity operator works.

Similar to the homogeneity operator is the difference edge detector. It operates more quickly because it requires only four subtractions per pixel, as opposed to the eight needed by the homogeneity operator. The subtractions are upper left − lower right, middle left − middle right, lower left − upper right, and top middle − bottom middle. In the example of Figure 4.3,

new pixel = max{|11−11|, |13−12|, |15−16|, |11−16|} = 5

Figure 4.3 How the difference operator works.
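A sketch of both operators, assuming an integer gray-scale image; border pixels are handled here by replicating the image edge, a choice the text does not specify:

```python
import numpy as np

def homogeneity(img):
    """Maximum absolute difference between the centre pixel and each of
    its 8 neighbours, per 3x3 window."""
    img = img.astype(int)
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")          # replicate borders
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                nb = p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                out = np.maximum(out, np.abs(img - nb))
    return out

def difference_edge(img):
    """Four subtractions per 3x3 window: the two diagonals, the middle
    row ends, and the middle column ends."""
    img = img.astype(int)
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")
    pairs = [((-1, -1), (1, 1)),    # upper left - lower right
             ((0, -1), (0, 1)),     # middle left - middle right
             ((1, -1), (-1, 1)),    # lower left - upper right
             ((-1, 0), (1, 0))]     # top middle - bottom middle
    out = np.zeros_like(img)
    for (ay, ax), (by, bx) in pairs:
        a = p[1 + ay:1 + ay + H, 1 + ax:1 + ax + W]
        b = p[1 + by:1 + by + H, 1 + bx:1 + bx + W]
        out = np.maximum(out, np.abs(a - b))
    return out
```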
4.2.1 First order derivative for edge detection

If we are looking for horizontal edges, it seems sensible to calculate the difference between one pixel value and the next pixel value down from the first (called the crack difference), i.e. assuming a top-left origin,

Hc = Y_difference(x, y) = value(x, y) − value(x, y + 1)

In effect this is equivalent to convolving the image with a 2 x 1 template

 1
−1

Likewise

Hr = X_difference(x, y) = value(x, y) − value(x − 1, y)

uses the template

−1  1

Hc and Hr are the column and row detectors. Occasionally it is useful to plot both X_difference and Y_difference, combining them to create the gradient magnitude (i.e. the strength of the edge). Combining them by simply adding them could mean two edges canceling each other out (one positive, one negative), so it is better to sum the absolute values (ignoring the sign), or to sum their squares and then, possibly, take the square root of the result. It is also possible to divide the Y_difference by the X_difference and identify a gradient direction (the angle of the edge between the regions):

gradient_direction = tan⁻¹( Y_difference(x, y) / X_difference(x, y) )

The amplitude can be determined by computing the vector sum of Hc and Hr:

H(x, y) = sqrt( Hr(x, y)² + Hc(x, y)² )
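A sketch of the crack differences and their combination; arctan2 is used instead of a bare division so that vertical edges (X_difference = 0) do not raise an error, a robustness tweak not spelled out in the text:

```python
import numpy as np

def first_order_gradient(img):
    """Crack differences Hc and Hr, plus gradient magnitude and direction."""
    v = img.astype(float)
    Hc = np.zeros_like(v)
    Hr = np.zeros_like(v)
    Hc[:-1, :] = v[:-1, :] - v[1:, :]    # value(x, y) - value(x, y+1)
    Hr[:, 1:] = v[:, 1:] - v[:, :-1]     # value(x, y) - value(x-1, y)
    magnitude = np.sqrt(Hr**2 + Hc**2)   # vector sum of the two responses
    direction = np.arctan2(Hc, Hr)       # edge angle, in radians
    return magnitude, direction
```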
Sometimes, for computational simplicity, the magnitude is computed as

H(x, y) = |Hr(x, y)| + |Hc(x, y)|

The edge orientation can be found by

θ = tan⁻¹( Hc(x, y) / Hr(x, y) )

In real images, the lines are rarely so well defined; more often the change between regions is gradual and noisy. The following 6 x 10 image represents a typical real edge. A larger template is needed to average the gradient over a number of pixels, rather than looking at only two:

0 0 0 0 0 0 2 0 3 3
0 0 0 1 0 0 0 2 4 2
0 0 2 0 3 4 3 3 2 3
0 0 1 3 3 4 3 3 3 3
0 1 0 4 3 3 2 4 3 2
0 0 1 2 3 3 4 4 4 3

4.2.2 Sobel edge detection

The Sobel operator is more sensitive to diagonal edges than to vertical and horizontal edges. The Sobel 3 x 3 templates are normally given as:

X-direction:
−1 −2 −1
 0  0  0
 1  2  1

Y-direction:
−1  0  1
−2  0  2
−1  0  1
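A sketch of the Sobel step, computing only the valid interior (no border handling); the threshold default of 12 is taken from the worked example that follows:

```python
import numpy as np

SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])   # templates as given in the text
SOBEL_Y = np.array([[-1,  0,  1],
                    [-2,  0,  2],
                    [-1,  0,  1]])

def convolve3(img, mask):
    """Plain 3x3 convolution over the valid region of the image."""
    m = np.flipud(np.fliplr(mask)).astype(float)   # true convolution flips
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += m[i, j] * img[i:i + H - 2, j:j + W - 2]
    return out

def sobel(img, threshold=12):
    a = convolve3(img.astype(float), SOBEL_X)
    b = convolve3(img.astype(float), SOBEL_Y)
    mag = np.abs(a) + np.abs(b)        # absA + absB, as in the worked example
    return mag, (mag >= threshold).astype(int)
```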
Applying both Sobel templates to the 6 x 10 image above and summing the absolute values of the two responses (absA + absB) gives:

 4  6  4 10 14 12 14  4
 6  8 10 20 16 12  6  0
 4 10 14 10  2  4  2  4
 2 12 12  2  2  4  8  8

Thresholding at 12 gives:

0 0 0 0 1 1 1 0
0 0 0 1 1 1 0 0
0 0 1 0 0 0 0 0
0 1 1 0 0 0 0 0

4.2.3 Other first order operators

The Roberts operator has a smaller effective area than the other masks, making it more susceptible to noise:

Hr =
 0  0 −1
 0  1  0
 0  0  0

Hc =
−1  0  0
 0  1  0
 0  0  0

The Prewitt operator is more sensitive to vertical and horizontal edges than to diagonal edges:

Hr =
−1 −1 −1
 0  0  0
 1  1  1

Hc =
 1  0 −1
 1  0 −1
 1  0 −1

The Frei-Chen masks weight the middle neighbors by √2:

Hr =
−1 −√2 −1
 0   0  0
 1  √2  1

Hc =
 1  0 −1
√2  0 −√2
 1  0 −1
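The three operator pairs can be dropped into the same machinery as the Sobel sketch above; here they are as arrays, together with the simpler |Hr| + |Hc| magnitude from 4.2.1 (the convolve3 helper is repeated so the block stands alone):

```python
import numpy as np

ROBERTS = (np.array([[0, 0, -1], [0, 1, 0], [0, 0, 0]]),
           np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]]))
PREWITT = (np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]),
           np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]))
S = np.sqrt(2.0)
FREI_CHEN = (np.array([[-1, -S, -1], [0, 0, 0], [1, S, 1]]),
             np.array([[1, 0, -1], [S, 0, -S], [1, 0, -1]]))

def convolve3(img, mask):
    m = np.flipud(np.fliplr(mask)).astype(float)
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += m[i, j] * img[i:i + H - 2, j:j + W - 2]
    return out

def gradient_magnitude(img, pair=PREWITT):
    """|Hr| + |Hc|, the computationally simple magnitude from 4.2.1."""
    hr, hc = pair
    v = img.astype(float)
    return np.abs(convolve3(v, hr)) + np.abs(convolve3(v, hc))
```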
4.3 Second Order Detection

In many applications, edge width is not a concern; in others, such as machine vision, it is a great concern. The gradient operators discussed above produce a large response across an area where an edge is present. This is especially true for slowly ramping edges. Ideally, an edge detector should indicate an edge at the center of the edge; this is referred to as localization. If an edge detector creates an image map with edges several pixels wide, it is difficult to locate the centers of the edges, and it becomes necessary to employ a process called thinning to reduce the edge width to one pixel. Second order derivative edge detectors provide better edge localization.

For example, in an image such as

1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9

the basic Sobel vertical edge operator described above,

−1  0  1
−2  0  2
−1  0  1

yields a value right across the image:

8 8 8 8 8 8 8
8 8 8 8 8 8 8
8 8 8 8 8 8 8

Applying the same template to this "all eight" image yields

0 0 0 0 0

This is not unlike applying the differentiation operator to a straight line: if y = 3x − 2, then dy/dx = 3 and d²y/dx² = 0. Once we have the gradient, if the gradient is then differentiated and the result is zero, it shows that the original line was straight.

Images often come with a gray-level "trend" on them, i.e. one side of a region is lighter than the other, but there is no "edge" to be discovered in the region; the shading is even, indicating a light source that is stronger at one end, or a gradual color change over the surface.

Another advantage of second order derivative operators is that the detected edge contours are closed curves, which is very important in image segmentation. Also, there is no response to areas of smooth linear variation in intensity.

The Laplacian is a good example of a second order derivative operator. It is distinguished from the other operators because it is omnidirectional: it highlights edges in all directions. The Laplacian operator produces sharper edges than most other techniques, and the highlights include both positive and negative intensity slopes. The Laplacian of an image can be found by convolving with masks such as

 0 −1  0        −1 −1 −1
−1  4 −1   or   −1  8 −1
 0 −1  0        −1 −1 −1

The Laplacian set of operators is widely used. Since it effectively removes the general gradient of lighting or coloring from an image, it discovers and enhances only much more discrete changes than, for example, the Sobel operator.
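A sketch of the Laplacian step; the response is signed, which is what the zero-crossing test in the next subsection relies on:

```python
import numpy as np

LAPLACIAN_4 = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]])
LAPLACIAN_8 = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])

def laplacian(img, mask=LAPLACIAN_4):
    """Convolve with a Laplacian mask; both masks are symmetric,
    so no mask flip is needed. Output has positive and negative
    lobes on either side of each edge."""
    m = mask.astype(float)
    v = img.astype(float)
    H, W = v.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += m[i, j] * v[i:i + H - 2, j:j + W - 2]
    return out
```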
It does not produce any information on direction, which is seen as a function of gradual change. It enhances noise, though larger Laplacian operators and similar families of operators tend to ignore noise.

Determining zero crossings

The method of determining zero crossings with some desired threshold is to pass a 3 x 3 window across the image, determining the maximum and minimum values within that window. If the difference between the maximum and minimum values exceeds the predetermined threshold, an edge is present. A smaller threshold yields a larger number of edges; note also that the edges found this way are all one pixel wide.

A second order derivative edge detector that is less susceptible to noise is the Laplacian of Gaussian (LoG). The LoG edge detector performs Gaussian smoothing before application of the Laplacian; both operations can be performed by convolving with a mask of the form

LoG(x, y) = (1 / (πσ⁴)) · [1 − (x² + y²) / (2σ²)] · e^(−(x² + y²) / (2σ²))

where x, y are the row and column coordinates and σ is the dispersion, which controls the effective spread. Due to its shape, the function is also called the Mexican hat filter. Figure 4.4 shows the cross section of the LoG edge operator with different values of σ. The wider the function, the wider the edge that will be detected; a narrow function detects sharp edges and more detail.

Figure 4.4 Cross section of the LoG with various σ.

The greater the value of σ, the wider the convolution mask needed. The first zero crossing of the LoG function is at √2·σ, and the width of the positive center lobe is twice that. To have a convolution mask that contains all the nonzero values of the LoG function requires a width three times the width of the positive center lobe (about 8.49σ).

Edge detection based on the Gaussian smoothing function reduces the noise in an image. That reduces the number of false edges detected, but also widens the detected edges. Most edge detector masks are seldom greater than 7 x 7; due to the shape of the LoG operator, much larger mask sizes are required. The initial work in developing the LoG operator was done with a mask size of 35 x 35.
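A sketch that samples the LoG formula above into a mask and applies the windowed zero-crossing test; forcing the mask to sum to zero and requiring a sign change inside the window are practical safeguards this sketch adds, not steps from the text:

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Sample LoG(x, y) on a square grid; the default width follows
    the ~8.49*sigma rule, rounded up to an odd size."""
    if size is None:
        size = int(np.ceil(8.49 * sigma)) | 1    # force odd
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = (x**2 + y**2) / (2.0 * sigma**2)
    k = (1.0 / (np.pi * sigma**4)) * (1 - s2) * np.exp(-s2)
    return k - k.mean()    # zero-sum mask: no response to flat areas

def zero_crossings(response, threshold):
    """Edge wherever max - min of a 3x3 window of the signed LoG
    response exceeds the threshold and the signs disagree."""
    H, W = response.shape
    out = np.zeros((H, W), dtype=bool)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            w = response[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = (w.max() - w.min() > threshold) and w.min() < 0 < w.max()
    return out
```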
Because of the large computation requirements of the LoG operator, the Difference of Gaussians (DoG) operator can be used as an approximation to the LoG:

DoG(x, y) = e^(−(x² + y²) / (2σ1²)) / (2πσ1²) − e^(−(x² + y²) / (2σ2²)) / (2πσ2²)

The DoG operator is performed by convolving an image with a mask that is the result of subtracting two Gaussian masks with different σ values. The ratio σ2/σ1 = 1.6 results in a good approximation of the LoG. Figure 4.5 compares a LoG function (σ = 12.35) with a DoG function (σ1 = 10, σ2 = 16).

Figure 4.5 LoG vs. DoG functions.

One advantage of the DoG is the ability to specify the width of the edges to detect by varying the values of σ1 and σ2. Here are a couple of sample masks; the 9 x 9 mask will detect wider edges than the 7 x 7 mask.

For a 7 x 7 mask, try:

 0  0 −1 −1 −1  0  0
 0 −2 −3 −3 −3 −2  0
−1 −3  5  5  5 −3 −1
−1 −3  5 16  5 −3 −1
−1 −3  5  5  5 −3 −1
 0 −2 −3 −3 −3 −2  0
 0  0 −1 −1 −1  0  0

For a 9 x 9 mask, try:

 0  0  0 −1 −1 −1  0  0  0
 0 −2 −3 −3 −3 −3 −3 −2  0
 0 −3 −2 −1 −1 −1 −2 −3  0
−1 −3 −1  9  9  9 −1 −3 −1
−1 −3 −1  9 19  9 −1 −3 −1
−1 −3 −1  9  9  9 −1 −3 −1
 0 −3 −2 −1 −1 −1 −2 −3  0
 0 −2 −3 −3 −3 −3 −3 −2  0
 0  0  0 −1 −1 −1  0  0  0
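A sketch that builds a DoG mask directly from the formula by subtracting two sampled Gaussians; the default σ values are taken from the Figure 4.5 comparison, and the mask width (covering three standard deviations of the wider Gaussian) is an assumption of this sketch:

```python
import numpy as np

def dog_kernel(sigma1=10.0, sigma2=16.0, size=None):
    """Difference of two sampled Gaussians; sigma2/sigma1 = 1.6
    approximates a LoG with sigma of about 12.35."""
    if size is None:
        size = int(np.ceil(3 * sigma2)) * 2 + 1   # cover the wider Gaussian
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    def gauss(s):
        return np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return gauss(sigma1) - gauss(sigma2)
```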
Color edge detection

The method of detecting edges in color images depends on your definition of an edge. One definition of an edge is a discontinuity in the image's luminance; edge detection would then be done on the intensity channel of a color image in HSI space. Another definition claims an edge exists only if it is present in the red, green, and blue channels. Edge detection can be done by performing it on each of the color components; after combining the color components, the resulting image is still color, see Figure 4.6.

Figure 4.6 (a) original image; (b) red channel; (c) green channel; (d) blue channel; (e) red channel edges; (f) green channel edges; (g) blue channel edges. (This picture is taken from Figure 3.24, Chapter 3, [2])

Edge detection can also be done on each color component and then the components can be summed to create a gray-scale edge map. Alternatively, the color components can be vector summed to create the gray-scale edge map:

G(x, y) = sqrt( G_red² + G_green² + G_blue² )

It has been shown that the large majority of edges found in the color elements of an image are also found in the intensity component, which would imply that edge detection done on the intensity component alone suffices. However, there is the case of low-contrast images where edges are not detected in the luminance component but are found in the chromatic components.
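A minimal sketch of the per-channel, vector-summed variant; `detect` stands for any single-channel edge detector, such as the Sobel sketch earlier in this chapter:

```python
import numpy as np

def color_edge_map(rgb, detect):
    """Run a gray-scale edge detector on each of the R, G, B channels
    and vector-sum the results: G = sqrt(Gr^2 + Gg^2 + Gb^2)."""
    g = [detect(rgb[..., c].astype(float)) for c in range(3)]
    return np.sqrt(g[0]**2 + g[1]**2 + g[2]**2)
```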
The best color edge detector again depends on the application.

4.4 Pyramid Edge Detection

Often the significant edges in an image are well spaced apart from each other and relatively easy to identify. However, there may be a number of other strong edges in the image that are not significant (from the user's point of view) because they are short or unconnected. The problem is how to enhance the substantial edges but ignore the shorter ones.

USE. To enhance substantial (strong and long) edges but to ignore the weak or short edges.

THEORY. The image is cut down to a quarter of its area by halving the lengths of its sides (both horizontally and vertically). Each pixel in the new quarter-size image is the average of the four corresponding pixels in the full-size image. This is repeated until an image is created where the substantial edges are still visible but the other edges have been lost. Now the pyramid is traversed in the other direction: an edge detector is applied to the small image, and where edge pixels have been found, the edge detector is applied to the corresponding four pixels in the next larger image, and so on down to the full-size image.

OPERATION. Let the original image be of size m x n. Create a second image of size m/2 x n/2 by evaluating, for each even i and j with 0 ≤ i < m and 0 ≤ j < n,

newI(i/2, j/2) = [ I(i, j) + I(i + 1, j) + I(i, j + 1) + I(i + 1, j + 1) ] / 4

i.e. each square of four elements in the original image is averaged to give one value in the new image. This is repeated (possibly recursively) some number of times, and each generated image is kept. (The generated images will not be larger, in total, than the original image, so only one extra image plane is required to hold them all.) Now, with the smallest image, perform some edge detection operation, such as Sobel. In pixels where edges are discovered (some threshold is required to identify an "edge" pixel), perform the edge detection operation on the group of four corresponding pixels in the next larger image. Continue following the best edges down through the pyramid of images until the main edges in the original image have been discovered.
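A sketch of both traversals, assuming a hypothetical per-pixel detector `detect_at(img, y, x)` that returns an edge strength (for example, a Sobel magnitude computed at one position):

```python
import numpy as np

def build_pyramid(img, levels):
    """Repeatedly average 2x2 blocks, keeping every generated image."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]                          # drop odd row/column if any
        pyr.append((a[0::2, 0::2] + a[1::2, 0::2] +
                    a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return pyr

def refine_edges(pyr, detect_at, threshold):
    """Traverse back down: where the smallest image has an edge pixel,
    re-test only the four corresponding pixels one level larger."""
    small = pyr[-1]
    edges = {(y, x)
             for y in range(small.shape[0]) for x in range(small.shape[1])
             if detect_at(small, y, x) > threshold}
    for level in reversed(pyr[:-1]):
        edges = {(2 * y + dy, 2 * x + dx)
                 for (y, x) in edges for dy in (0, 1) for dx in (0, 1)
                 if 2 * y + dy < level.shape[0]
                 and 2 * x + dx < level.shape[1]
                 and detect_at(level, 2 * y + dy, 2 * x + dx) > threshold}
    return edges                               # edge pixels in the full image
```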
4.5 Crack Edge Relaxation

Crack edge relaxation is another popular and effective method of edge enhancement. It involves allocating to each crack between pixels a likelihood value expressing whether the two pixels lie on either side of an edge. For example, given the 3 x 3 image

6 8 7
7 7 4
3 2 3

with a gray-level range of 0–9, the crack values (the absolute differences between adjacent pixels, in ninths) are, interleaved here with the pixel values:

6 2 8 1 7
1   1   3
7 0 7 3 4
4   5   1
3 1 2 1 3

Thresholding at 2 gives the edge: the cracks whose values are bigger than 2 (a sketch of this computation follows Figure 4.7 below).

Crack edge relaxation

USE. Find substantial edges from an original image. Depending on the number of iterations, which can be selected by the user, edges are found not only by simple statistics on a small local group, but by making sensible decisions about edges being connected to one another.

OPERATION. Determine the values of the cracks between the pixels. This is I(x, y) − I(x + 1, y) for the vertical cracks and I(x, y) − I(x, y + 1) for the horizontal cracks. Then classify every crack according to how many of the cracks connected to it at each end are likely to be "significant" cracks, i.e. likely to represent real edges in the picture. Since there are three continuation cracks at each end of every crack, each crack can be classified as having 0, 1, 2 or 3 significant cracks hanging off it at each end. Figure 4.7 shows a selection of crack edge types: (3,3), (3,2), (0,0), (3,0), (3,1), (2,2) and so on.

Figure 4.7 A selection of crack edge types.
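A sketch of the crack-value computation, reproducing the 3 x 3 worked example above:

```python
import numpy as np

def crack_values(img):
    """Absolute differences across the cracks between adjacent pixels."""
    v = img.astype(int)
    vert = np.abs(v[:, :-1] - v[:, 1:])    # cracks between horizontal neighbours
    horiz = np.abs(v[:-1, :] - v[1:, :])   # cracks between vertical neighbours
    return vert, horiz

img = np.array([[6, 8, 7],
                [7, 7, 4],
                [3, 2, 3]])
vert, horiz = crack_values(img)
print(vert)              # [[2 1] [0 3] [1 1]]
print(horiz)             # [[1 1 3] [4 5 1]]
print(vert > 2, horiz > 2)   # thresholding at 2 marks the edge cracks
```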
If a, b, c are the values of the hanging-off cracks at one end of the crack being classified, ordered such that a ≥ b ≥ c, and m = max(a, b, c, N/10), where N is the number of gray levels supported by the system, then calculate:

(m − a)(m − b)(m − c)   likelihood of 0 "significant" cracks
a(m − b)(m − c)         likelihood of 1 "significant" crack
ab(m − c)               likelihood of 2 "significant" cracks
abc                     likelihood of 3 "significant" cracks

Choose the most likely number of cracks, i.e. the one with the highest likelihood value. Do this for both ends, allocating a class such as (3, 2) to the crack being considered. Increment the crack value if the crack is of type (1,1), (1,2), (2,1), (1,3) or (3,1); intuitively, these are probably parts of an edge. Decrement the crack value if the crack is of type (0,0), (0,1), (0,2), (2,0) or (3,0). Do nothing for the other types. Repeat this enhancement process until adequate edge detection has been performed. Finally, create an edge-detected image by allocating to each pixel a value dependent on the value of the crack above it and the crack to its right; this could be a simple sum, the maximum of the two, or a binary value from some combined threshold.

This is edge enhancement that uses the cracks between the pixels as the initial estimate of the edges; it then weakens the unlikely ones and enhances the more likely ones.
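A sketch of the per-end classification and the update rule for one relaxation pass; N defaults to 10 to match the 0–9 example above:

```python
def end_class(a, b, c, N=10):
    """Classify one end of a crack: how many of its three hanging-off
    cracks (values a, b, c) are 'significant'? Returns 0, 1, 2 or 3."""
    a, b, c = sorted((a, b, c), reverse=True)   # ensure a >= b >= c
    m = max(a, b, c, N / 10)
    likelihood = [(m - a) * (m - b) * (m - c),  # 0 significant cracks
                  a * (m - b) * (m - c),        # 1
                  a * b * (m - c),              # 2
                  a * b * c]                    # 3
    return max(range(4), key=lambda k: likelihood[k])

INCREMENT = {(1, 1), (1, 2), (2, 1), (1, 3), (3, 1)}   # probably edge parts
DECREMENT = {(0, 0), (0, 1), (0, 2), (2, 0), (3, 0)}   # probably not edges

def relax(value, cls):
    """Update one crack value given its (end1, end2) class."""
    if cls in INCREMENT:
        return value + 1
    if cls in DECREMENT:
        return value - 1
    return value
```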
4.6 Edge Following

If it is known that an object in an image has a discrete edge all around it, then once a position on the edge has been found, it is possible to follow the edge around the object and back to the beginning. Edge following is a very useful operation, particularly as a stepping stone to making decisions by discovering region positions in images; it is effectively the dual of segmentation by region detection. There are a number of edge-following techniques, with many levels of sophistication, and the reader may well see how sophistication can be added to the simple technique described here.

Simple edge following

USE. Knowing that a pixel is on an edge, the edge is followed so that the object is outlined. This is useful prior to calculating the area of a particular shape. It is also useful if the enclosed region is made up of many regions that the user wishes to combine.

OPERATION. It is assumed that a position on the edge of a region has been identified; call it (x, y). Now flag this position as "used" (so that it is not used again) and evaluate the 3 x 3 (or larger) Sobel gradient values centered on each of the eight pixels surrounding (x, y). Choose the three pixels with the greatest absolute gradient magnitude. Put the three pixel positions in a three-column array, one column for each position, ordered within the row by gradient magnitude, and choose the one with the greatest gradient magnitude. This pixel will be in one of the directions 0–7 with respect to the pixel (x, y), given by the following map, where * is the position of pixel (x, y):

0 1 2
7 * 3
6 5 4

For example, if the maximum gradient magnitude was found from the Sobel operator centered on the pixel (x + 1, y), then the direction is 3. Call the direction of travel d. Assuming that the shape is not very irregular, repeat the above algorithm, but instead of looking at all the pixels around the new pixel, look only in the directions d, (d + 1) mod 8 and (d − 1) mod 8. If no suitably high value of gradient magnitude is found, remove the pixel from the list and choose the next one of the three sorted candidates. If all three have been removed from the list, move up a row and choose the next best from the previous row. Stop when the path reaches the original pixel, when the excursion has gone on too long, or when the number of rows in the list becomes very large.

As suggested in the description of the technique, the problem may be the amount of time taken to reach a conclusion. Various heuristic techniques, including adding weights and creating more substantial search trees, can be included.
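A simplified, greedy sketch of this follower: it keeps the direction map and the three-direction look-ahead, but drops the sorted backtracking list described above, so it simply stops at dead ends. It assumes a precomputed gradient-magnitude image `grad` (e.g. from the Sobel sketch); `min_grad` and `max_steps` are illustrative parameters:

```python
import numpy as np

# Direction map around the current pixel *, as given in the text:
#   0 1 2
#   7 * 3
#   6 5 4
OFFSET = {0: (-1, -1), 1: (-1, 0), 2: (-1, 1), 3: (0, 1),
          4: (1, 1), 5: (1, 0), 6: (1, -1), 7: (0, -1)}

def follow_edge(grad, start, min_grad=1.0, max_steps=10000):
    y, x = start
    path, used = [start], {start}
    dirs = range(8)                       # first step: look in all directions
    for _ in range(max_steps):
        best = None
        for d in dirs:
            dy, dx = OFFSET[d]
            ny, nx = y + dy, x + dx
            if not (0 <= ny < grad.shape[0] and 0 <= nx < grad.shape[1]):
                continue
            closing = (ny, nx) == start and len(path) > 2
            if grad[ny, nx] >= min_grad and ((ny, nx) not in used or closing):
                if best is None or grad[ny, nx] > grad[best[1], best[2]]:
                    best = (d, ny, nx)
        if best is None:
            break                         # dead end; the full method backtracks
        d, y, x = best
        path.append((y, x))
        used.add((y, x))
        dirs = (d, (d + 1) % 8, (d - 1) % 8)   # keep travelling roughly ahead
        if (y, x) == start:
            break                         # contour closed
    return path
```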