Algorithms (Part 40)

Shared by: Tran Anh Phuong | Date: | File type: PDF | Pages: 10


ELEMENTARY GRAPH ALGORITHMS

actually visited in the order A F E G D C B H I J K L M. Each connected component leads to a tree, called the depth-first search tree. It is important to note that this forest of depth-first search trees is simply another way of drawing the graph; all vertices and edges of the graph are examined by the algorithm.

Solid lines in the diagram indicate that the lower vertex was found by the algorithm to be on the edge list of the upper vertex and had not been visited at that time, so that a recursive call was made. Dotted lines correspond to edges to vertices which had already been visited, so the if test in visit failed, and the edge was not "followed" with a recursive call. These comments apply to the first time each edge is encountered; the if test in visit also guards against following the edge the second time that it is encountered. For example, once we've gone from A to F (on encountering F in A's adjacency list), we don't want to go back from F to A (on encountering A in F's adjacency list). Similarly, dotted links are actually checked twice: even though we checked that A was already visited while at G (on encountering A in G's adjacency list), we'll check that G was already visited later on when we're back at A (on encountering G in A's adjacency list).

A crucial property of these depth-first search trees for undirected graphs is that the dotted links always go from a node to some ancestor in the tree (another node in the same tree, that is higher up on the path to the root). At any point during the execution of the algorithm, the vertices divide into three classes: those for which visit has finished, those for which visit has only partially finished, and those which haven't been seen at all.
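The search just described can be sketched in Python for readers not following the book's Pascal. This is a minimal sketch, assuming the graph is given as a dictionary of adjacency lists; the vertex names below are illustrative, not the book's example graph.

```python
# Recursive depth-first search over adjacency lists, following the text:
# val[k] records the order in which vertex k is first visited; an edge to
# an unvisited vertex triggers a recursive call (a solid tree edge), and
# an edge to an already-visited vertex is skipped (a dotted link).
def dfs(adj):
    val = {v: 0 for v in adj}
    order = []          # vertices in the order they are first visited
    now = 0

    def visit(k):
        nonlocal now
        now += 1
        val[k] = now
        order.append(k)
        for t in adj[k]:
            if val[t] == 0:     # not yet seen: "follow" this edge
                visit(t)

    for v in adj:
        if val[v] == 0:         # one nonrecursive call per component
            visit(v)
    return order

# Two components (A-B, A-C and D-E); visited in order A B C D E.
example = {"A": ["B", "C"], "B": ["A"], "C": ["A"], "D": ["E"], "E": ["D"]}
```

Each nonrecursive call to visit in the outer loop starts a new depth-first search tree, just as in the text.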
By definition of visit, we won't encounter an edge pointing to any vertex in the first class, and if we encounter an edge to a vertex in the third class, a recursive call will be made (so the edge will be solid in the depth-first search tree). The only vertices remaining are those in the second class, which are precisely the vertices on the path from the current vertex to the root in the same tree, and any edge to any of them will correspond to a dotted link in the depth-first search tree.

The running time of dfs is clearly proportional to V + E for any graph. We set each of the V val values (hence the V term), and we examine each edge twice (hence the E term).

The same method can be applied to graphs represented with adjacency matrices by using the following visit procedure:

    procedure visit(k: integer);
      var t: integer;
      begin
      now:=now+1; val[k]:=now;
      for t:=1 to V do
        if a[k, t] then
          if val[t]=0 then visit(t)
      end;

Traveling through an adjacency list translates to scanning through a row in the adjacency matrix, looking for true values (which correspond to edges). As before, any edge to a vertex which hasn't been seen before is "followed" via a recursive call. Now, the edges connected to each vertex are examined in a different order, so we get a different depth-first search forest. This underscores the point that the depth-first search forest is simply another representation of the graph, whose particular structure depends both on the search algorithm and the internal representation used. The running time of dfs when this visit procedure is used is proportional to V^2, since every bit in the adjacency matrix is checked.

Now, testing if a graph has a cycle is a trivial modification of the above program. A graph has a cycle if and only if a nonzero val entry is discovered in visit. That is, if we encounter an edge pointing to a vertex that we've already visited, then we have a cycle. Equivalently, all the dotted links in the depth-first search trees belong to cycles.

Similarly, depth-first search finds the connected components of a graph. Each nonrecursive call to visit corresponds to a different connected component. An easy way to print out the connected components is to have visit print out
the vertex being visited (say, by inserting write(name(k)) just before exiting), then print out some indication that a new connected component is to start just before the call to visit in dfs (say, by inserting two writeln statements). This technique would produce the following output when dfs is used on the adjacency list representation of our sample graph:

    G D E F C B A
    I H
    K M L J

Note that the adjacency matrix version of visit will compute the same connected components (of course), but that the vertices will be printed out in a different order.

Extensions to do more complicated processing on the connected components are straightforward. For example, by simply inserting inval[now]:=k after val[k]:=now we get the "inverse" of the val array, whose nowth entry is the index of the nowth vertex visited. (This is similar to the inverse heap that we studied at the end of Chapter 11, though it serves a quite different purpose.) Vertices in the same connected component are contiguous in this array, the index of each new connected component given by the value of now each time visit is called in dfs. These values could be stored, or used to mark delimiters in inval (for example, the first entry in each connected component could be made negative). The following table would be produced for our example if the adjacency list version of dfs were modified in this way:

     k   name(k)   val[k]   inval[k]
     1      A         1        -1
     2      B         7         6
     3      C         6         5
     4      D         5         7
     5      E         3         4
     6      F         2         3
     7      G         4         2
     8      H         8        -8
     9      I         9         9
    10      J        10       -10
    11      K        11        11
    12      L        12        12
    13      M        13        13

With such techniques, a graph can be divided up into its connected components for later processing by more sophisticated algorithms.

Mazes

This systematic way of examining every vertex and edge of a graph has a distinguished history: depth-first search was first stated formally hundreds of years ago as a method for traversing mazes. For example, at left in the diagram below is a popular maze, and at right is the graph constructed by putting a vertex at each point where there is more than one path to take, then connecting the vertices according to the paths.

This is significantly more complicated than early English garden mazes, which were constructed as paths through tall hedges. In these mazes, all walls were connected to the outer walls, so that gentlemen and ladies could stroll in and clever ones could find their way out by simply keeping their right hand on the wall (laboratory mice have reportedly learned this trick). When independent inside walls can occur, it is necessary to resort to a more sophisticated strategy to get around in a maze, which leads to depth-first search.

To use depth-first search to get from one place to another in a maze, we use visit, starting at the vertex on the graph corresponding to our starting point. Each time visit "follows" an edge via a recursive call, we walk along the corresponding path in the maze. The trick in getting around is that we must walk back along the path that we used to enter each vertex when visit finishes for that vertex. This puts us back at the vertex one step higher up in the depth-first search tree, ready to follow its next edge.

The maze graph given above is an interesting "medium-sized" graph which the reader might be amused to use as input for some of the algorithms in later chapters. To fully capture the correspondence with the maze, a weighted
version of the graph should be used, with weights on edges corresponding to distances (in the maze) between vertices.

Perspective

In the chapters that follow we'll consider a variety of graph algorithms largely aimed at determining connectivity properties of both undirected and directed graphs. These algorithms are fundamental ones for processing graphs, but are only an introduction to the subject of graph algorithms. Many interesting and useful algorithms have been developed which are beyond the scope of this book, and many interesting problems have been studied for which good algorithms have not yet been found.

Some very efficient algorithms have been developed which are much too complicated to present here. For example, it is possible to determine efficiently whether or not a graph can be drawn on the plane without any intersecting lines. This problem is called the planarity problem, and no efficient algorithm for solving it was known until 1974, when R. E. Tarjan developed an ingenious (but quite intricate) algorithm for solving the problem in linear time, using depth-first search.

Some graph problems which arise naturally and are easy to state seem to be quite difficult, and no good algorithms are known to solve them. For example, no efficient algorithm is known for finding the minimum-cost tour which visits each vertex in a weighted graph. This problem, called the traveling salesman problem, belongs to a large class of difficult problems that we'll discuss in more detail in Chapter 40. Most experts believe that no efficient algorithms exist for these problems.

Other graph problems may well have efficient algorithms, though none has been found. An example of this is the graph isomorphism problem: determine whether two graphs could be made identical by renaming vertices. Efficient algorithms are known for this problem for many special types of graphs, but the general problem remains open.
In short, there is a wide spectrum of problems and algorithms for dealing with graphs. We certainly can't expect to solve every problem which comes along, because even some problems which appear to be simple are still baffling the experts. But many problems which are relatively easy to solve do arise quite often, and the graph algorithms that we will study serve well in a great variety of applications.
Exercises

1. Which undirected graph representation is most appropriate for determining quickly whether a vertex is isolated (is connected to no other vertices)?

2. Suppose depth-first search is used on a binary search tree and the right edge taken before the left out of each node. In what order are the nodes visited?

3. How many bits of storage are required to represent the adjacency matrix for an undirected graph with V nodes and E edges, and how many are required for the adjacency list representation?

4. Draw a graph which cannot be written down on a piece of paper without two edges crossing.

5. Write a program to delete an edge from a graph represented with adjacency lists.

6. Write a version of adjlist that keeps the adjacency lists in sorted order of vertex index. Discuss the merits of this approach.

7. Draw the depth-first search forests that result for the example in the text when dfs scans the vertices in reverse order (from V down to 1), for both representations.

8. Exactly how many times is visit called in the depth-first search of an undirected graph, in terms of the number of vertices V, the number of edges E, and the number of connected components C?

9. Find the shortest path which connects all the vertices in the maze graph example, assuming each edge to be of length 1.

10. Write a program to generate a "random" graph of V vertices and E edges as follows: for each pair of integers i < j between 1 and V, include an edge from i to j if and only if randomint(V*(V-1) div 2) is less than E. Experiment to determine about how many connected components are created for V = E = 10, 100, and 1000.
30. Connectivity

The fundamental depth-first search procedure in the previous chapter finds the connected components of a given graph; in this section we'll examine related algorithms and problems concerning other graph connectivity properties.

As a first example of a non-trivial graph algorithm we'll look at a generalization of connectivity called biconnectivity. Here we are interested in knowing if there is more than one way to get from one vertex to another in the graph. A graph is biconnected if and only if there are at least two different paths connecting each pair of vertices. Thus even if one vertex and all the edges touching it are removed, the graph is still connected. If it is important that a graph be connected for some application, it might also be important that it stay connected. We'll look at a method for testing whether a graph is biconnected using depth-first search.

Depth-first search is certainly not the only way to traverse the nodes of a graph. Other strategies are appropriate for other problems. In particular, we'll look at breadth-first search, a method appropriate for finding the shortest path from a given vertex to any other vertex. This method turns out to differ from depth-first search only in the data structure used to save unfinished paths during the search. This leads to a generalized graph traversal program that encompasses not just depth-first and breadth-first search, but also classical algorithms for finding the minimum spanning tree and shortest paths in the graph, as we'll see in Chapter 31.

One particular version of the connectivity problem which arises frequently involves a dynamic situation where edges are added to the graph one by one, interspersed with queries as to whether or not two particular vertices belong to the same connected component. We'll look at an interesting family of algorithms for this problem.
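The claim that breadth-first search differs from depth-first search only in the data structure used to save unfinished paths can be illustrated with a small Python sketch. This is an iterative formulation, so its "depth-first" visiting order differs in detail from the recursive visit of the previous chapter; the function and parameter names are our own.

```python
from collections import deque

# Generalized graph traversal: the 'fringe' holds vertices whose edges
# have not yet been explored. Treating the fringe as a queue (FIFO)
# gives breadth-first search; treating it as a stack (LIFO) gives a
# form of depth-first search.
def traverse(adj, start, use_queue):
    fringe = deque([start])
    visited = {start}       # mark on insertion so no vertex enters twice
    order = []
    while fringe:
        k = fringe.popleft() if use_queue else fringe.pop()
        order.append(k)
        for t in adj[k]:
            if t not in visited:
                visited.add(t)
                fringe.append(t)
    return order
```

Only the single popleft/pop choice changes; everything else, including the running time, is the same for both strategies.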
The problem is sometimes called the "union-find" problem, a nomenclature which comes from the application of the algorithms to processing simple operations on sets of elements.

Biconnectivity

It is sometimes reasonable to design more than one route between points on a graph, so as to handle possible failures at the connection points (vertices). For example, we can fly from Providence to Princeton even if New York is snowed in by going through Philadelphia instead. Or the main communications lines in an integrated circuit might be biconnected, so that the rest of the circuit still can function if one component fails. Another application, which is not particularly realistic but which illustrates the concept, is to imagine a wartime situation where we can make it so that an enemy must bomb at least two stations in order to cut our rail lines.

An articulation point in a connected graph is a vertex which, if deleted, would break the graph into two or more pieces. A graph with no articulation points is said to be biconnected. In a biconnected graph, there are two distinct paths connecting each pair of vertices. If a graph is not biconnected, it divides into biconnected components, sets of nodes mutually accessible via two distinct paths. For example, consider the following undirected graph, which is connected but not biconnected. (This graph is obtained from the graph of the previous chapter by adding the edges GC, GH, JG, and LG. In our examples, we'll assume that these four edges are added in the order given at the end of the input, so that (for example) the adjacency lists are similar to those in the example of the previous chapter with eight new entries added to the lists to reflect the four new edges.)

The articulation points of this graph are A (because it connects B to the rest of the graph), H (because it connects I to the rest of the graph), J (because it connects K to the rest of the graph), and G (because the graph would fall into three pieces if G were deleted). There are six biconnected components: ACGDEF, GJLM, and the individual nodes B, H, I, and K.
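The definition can be checked directly, if slowly: delete each vertex in turn and test whether the rest of the graph stays connected. The brute-force Python sketch below is for illustration of the definition only; it runs one full search per vertex, whereas the depth-first method of the text finds all articulation points in a single search.

```python
# Is the graph (minus the vertex 'skip', if given) connected?
def connected(adj, skip=None):
    verts = [v for v in adj if v != skip]
    if not verts:
        return True
    seen = {verts[0]}
    stack = [verts[0]]
    while stack:
        k = stack.pop()
        for t in adj[k]:
            if t != skip and t not in seen:
                seen.add(t)
                stack.append(t)
    return len(seen) == len(verts)

# A vertex of a connected graph is an articulation point exactly when
# deleting it (and its edges) disconnects the remaining vertices.
def articulation_points(adj):
    return [v for v in adj if not connected(adj, skip=v)]
```

On the simple path A-B-C, for instance, only B is an articulation point, and a triangle has none.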
Determining the articulation points turns out to be a simple extension
of depth-first search. To see this, consider the depth-first search tree for this graph (adjacency list representation). Deleting node E will not disconnect the graph because G and D both have dotted links that point above E, giving alternate paths from them to F (E's father in the tree). On the other hand, deleting G will disconnect the graph because there are no such alternate paths from L or H to E (G's father).

A vertex x is not an articulation point if every son y has some node lower in the tree connected (via a dotted link) to a node higher in the tree than x, thus providing an alternate connection from x to y. This test doesn't quite work for the root of the depth-first search tree, since there are no nodes "higher in the tree." The root is an articulation point if it has two or more sons, since the only path connecting sons of the root goes through the root. These tests are easily incorporated into depth-first search by changing the node-visit procedure into a function which returns the highest point in the tree (lowest val value) seen during the search, as follows:
    function visit(k: integer): integer;
      var t: link; m, min: integer;
      begin
      now:=now+1; val[k]:=now; min:=now;
      t:=adj[k];
      while t<>z do
        begin
        if val[t^.v]=0 then
          begin
          m:=visit(t^.v);
          if m<min then min:=m;
          if m>=val[k] then write(name(k));
          end
        else if val[t^.v]<min then min:=val[t^.v];
        t:=t^.next
        end;
      visit:=min
      end;
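For readers following in another language, the same computation can be sketched in Python. This is a sketch only: it returns the set of articulation points instead of printing names as it goes, and it handles the root case (two or more sons) separately, as described above; the function name is our own.

```python
# Depth-first articulation points: visit returns the lowest val value
# reachable from k's subtree via dotted links. A non-root vertex k is an
# articulation point when some son's subtree cannot reach above k.
def dfs_articulation_points(adj, root):
    val = {v: 0 for v in adj}
    points = set()
    now = 0

    def visit(k):
        nonlocal now
        now += 1
        val[k] = now
        mn = val[k]             # lowest val seen in this subtree
        children = 0
        for t in adj[k]:
            if val[t] == 0:     # tree edge: recur on son t
                children += 1
                m = visit(t)
                if m < mn:
                    mn = m
                if m >= val[k] and k != root:
                    points.add(k)       # son t has no path above k
            elif val[t] < mn:   # dotted link to an ancestor
                mn = val[t]
        if k == root and children >= 2:
            points.add(k)       # root case: two or more sons
        return mn

    visit(root)
    return points
```

Like the Pascal version, this runs in a single depth-first search, so its time is proportional to V + E.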