
Understanding Optimal Design Methods (Các phương pháp thiết kế tối ưu): Part 2


Part 2 of the book "Optimal Design Methods (Các phương pháp thiết kế tối ưu)" comprises the four final chapters: Chapter 6 - Direct methods for solving constrained problems; Chapter 7 - Introduction to genetic algorithms; Chapter 8 - The method of approximation concepts; Chapter 9 - Introduction to a contour-plotting program.


Chapter 6. Direct Methods for Constrained Problems

6.1. LAGRANGE MULTIPLIER METHOD

6.1.1. Lagrange multiplier for one equality constraint

For the constrained problem: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$\ell(\vec{x}) = 0$   (6.1)

We assume that $\ell(\vec{x}) = 0$ can be used to solve for one variable in terms of the others as:

$x_1 = \phi(x_2, x_3, \ldots, x_n)$   (6.2)

Substituting equation (6.2) into equation (6.1), we eliminate $x_1$ from the cost function and obtain the unconstrained problem: find $x_2, x_3, \ldots, x_n$ such that

$f(\phi(x_2, \ldots, x_n), x_2, x_3, \ldots, x_n) \to \min$   (6.3)

The necessary condition for the optimum point $\vec{x}^*$ of equation (6.3) can be expressed as:

$\dfrac{df}{dx_i} = \dfrac{\partial f}{\partial x_i} + \dfrac{\partial f}{\partial x_1}\dfrac{\partial x_1}{\partial x_i} = 0, \quad i = 2, 3, \ldots, n$   (6.4)

Since $\phi(x_2, \ldots, x_n)$ may be an implicit function, we do not know $\partial x_1 / \partial x_i$, $i = 2, 3, \ldots, n$. However, we have $\ell(\phi, x_2, \ldots, x_n) = 0$, and

$\dfrac{d\ell}{dx_i} = \dfrac{\partial \ell}{\partial x_i} + \dfrac{\partial \ell}{\partial x_1}\dfrac{\partial x_1}{\partial x_i} = 0, \quad i = 2, 3, \ldots, n$

or

$\dfrac{\partial x_1}{\partial x_i} = -\dfrac{\partial \ell / \partial x_i}{\partial \ell / \partial x_1}$   (6.5)
Substituting equation (6.5) into equation (6.4), we have:

$\dfrac{\partial f}{\partial x_i} - \dfrac{\partial f}{\partial x_1}\dfrac{\partial \ell / \partial x_i}{\partial \ell / \partial x_1} = 0, \quad i = 2, 3, \ldots, n$   (6.6)

Define the Lagrange multiplier $\lambda$ as:

$\lambda = -\dfrac{\partial f / \partial x_1}{\partial \ell / \partial x_1}$   (6.7)

and obtain:

$\dfrac{\partial f}{\partial x_i} + \lambda \dfrac{\partial \ell}{\partial x_i} = 0, \quad i = 2, 3, \ldots, n$   (6.8)

Rearranging equation (6.7), we have:

$\dfrac{\partial f}{\partial x_1} + \lambda \dfrac{\partial \ell}{\partial x_1} = 0$   (6.9)

Combining equation (6.8) and equation (6.9), we have the necessary conditions for the optimum points of equation (6.1) as:

$\dfrac{\partial f}{\partial x_i} + \lambda \dfrac{\partial \ell}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$, and $\ell(\vec{x}) = 0$

If we define the Lagrange function $L(\vec{x}, \lambda)$ as:

$L(\vec{x}, \lambda) = f(\vec{x}) + \lambda \ell(\vec{x})$

then the necessary conditions of equation (6.1) can be expressed as:

$\nabla L(\vec{x}, \lambda) = 0$   (6.10)

or

$\nabla_{\vec{x}} L(\vec{x}, \lambda) = 0 \;\Rightarrow\; \dfrac{\partial f}{\partial x_i} + \lambda \dfrac{\partial \ell}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$

$\dfrac{\partial L}{\partial \lambda} = 0 \;\Rightarrow\; \ell(\vec{x}) = 0$

Equation (6.10) consists of n + 1 equations, and it can be used to solve for the n + 1 unknowns $\vec{x}^*$ and $\lambda^*$.
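To make the elimination route of equations (6.2)-(6.4) concrete, here is a small worked example (constructed for illustration; it is not from the original text). Take $f(\vec{x}) = x_1^2 + 2x_2^2$ with $\ell(\vec{x}) = x_1 - x_2 - 3 = 0$. Equation (6.2) gives $x_1 = \phi(x_2) = x_2 + 3$, so

$f(\phi(x_2), x_2) = (x_2 + 3)^2 + 2x_2^2, \qquad \dfrac{df}{dx_2} = 2(x_2 + 3) + 4x_2 = 6x_2 + 6 = 0$

hence $x_2^* = -1$, $x_1^* = 2$. Equation (6.7) gives $\lambda = -(\partial f / \partial x_1)/(\partial \ell / \partial x_1) = -2x_1^* = -4$, and equation (6.8) checks out: $\partial f / \partial x_2 + \lambda \, \partial \ell / \partial x_2 = 4x_2^* + (-4)(-1) = 0$.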
Physical meaning of the Lagrange multiplier for one equality constraint:

1. The gradients of the Lagrange function are zero at the candidate optimum points:

$\nabla L(\vec{x}^*, \lambda^*) = 0$, or $\nabla f(\vec{x}^*) + \lambda^* \nabla \ell(\vec{x}^*) = 0$, or $\nabla f(\vec{x}^*) = -\lambda^* \nabla \ell(\vec{x}^*)$

This means that, at the candidate optimum point, $\nabla f(\vec{x}^*)$ and $\nabla \ell(\vec{x}^*)$ lie along the same line, and $\lambda^*$ is the constant of proportionality between the lengths of $\nabla f(\vec{x}^*)$ and $\nabla \ell(\vec{x}^*)$.

2. If $\ell(\vec{x}) = 0$ is multiplied by -1, the optimum solution does not change. However, the sign of $\lambda^*$ is reversed, since

$\lambda^* = -\dfrac{\partial f / \partial x_1}{\partial \ell / \partial x_1}$

This shows that $\lambda^*$ for an equality constraint is free in sign. The sign is determined by the form of the equality constraint equation.

Example 01:

Find $\vec{x}$ such that $f(\vec{x}) = x_1^2 + x_2^2 \to \min$, subject to $\ell(\vec{x}) = x_1 + x_2 - 2 = 0$.

1. Graphical solution: $\vec{x}^* = (1, 1)$, $f^* = 2$.

Figure 6.1: Graphical solution of Example 01 of the Lagrange multiplier method for one equality constraint
2. Lagrange multiplier method:

$L(\vec{x}, \lambda) = f(\vec{x}) + \lambda \ell(\vec{x}) = x_1^2 + x_2^2 + \lambda (x_1 + x_2 - 2)$

Candidate optimum points: $\nabla_{\vec{x}} L = 0$ and $\partial L / \partial \lambda = 0$:

$\dfrac{\partial L}{\partial x_1} = 0 \Rightarrow 2x_1 + \lambda = 0$

$\dfrac{\partial L}{\partial x_2} = 0 \Rightarrow 2x_2 + \lambda = 0$

$\dfrac{\partial L}{\partial \lambda} = 0 \Rightarrow x_1 + x_2 - 2 = 0$

$\Rightarrow \vec{x}^* = (1, 1), \quad \lambda^* = -2$

3. Note that from equation (6.7): $\lambda^* = -\dfrac{\partial f / \partial x_1}{\partial \ell / \partial x_1}\Big|_{\vec{x}^*} = -\dfrac{2x_1^*}{1} = -2$

4. If we multiply $\ell(\vec{x}) = x_1 + x_2 - 2 = 0$ by -1, then the solution is $\vec{x}^* = (1, 1)$, $\lambda^* = +2$.
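The stationarity conditions of Example 01 are linear, so the candidate point can also be found numerically by solving the 3x3 system. The sketch below is only an illustration (it is not taken from the book's program files) and uses plain Gaussian elimination:

// Solve the stationarity system of Example 01:
//   2*x1 + lambda = 0
//   2*x2 + lambda = 0
//   x1 + x2 - 2   = 0
// A minimal sketch using Gaussian elimination with partial pivoting.
#include <cmath>
#include <cstdio>

int main() {
    double A[3][4] = {  // augmented matrix [A | b]; unknowns: x1, x2, lambda
        {2.0, 0.0, 1.0, 0.0},
        {0.0, 2.0, 1.0, 0.0},
        {1.0, 1.0, 0.0, 2.0}
    };
    for (int k = 0; k < 3; ++k) {            // forward elimination
        int p = k;                           // partial pivoting
        for (int i = k + 1; i < 3; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        for (int j = 0; j < 4; ++j) { double t = A[k][j]; A[k][j] = A[p][j]; A[p][j] = t; }
        for (int i = k + 1; i < 3; ++i) {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < 4; ++j) A[i][j] -= m * A[k][j];
        }
    }
    double x[3];
    for (int i = 2; i >= 0; --i) {           // back substitution
        x[i] = A[i][3];
        for (int j = i + 1; j < 3; ++j) x[i] -= A[i][j] * x[j];
        x[i] /= A[i][i];
    }
    std::printf("x1 = %g, x2 = %g, lambda = %g\n", x[0], x[1], x[2]);
    return 0;  // expected output: x1 = 1, x2 = 1, lambda = -2
}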
6.1.2. Lagrange multipliers for multiple equality constraints

A problem with multiple equality constraints is expressed as: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$\ell_j(\vec{x}) = 0, \quad j = 1, 2, \ldots, m; \quad m \le n$   (6.11)

Theorem: The necessary condition for the optimum solution of the problem in equation (6.11) is:

$\nabla L(\vec{x}, \vec{\lambda}) = 0$, or $\nabla_{\vec{x}} L(\vec{x}, \vec{\lambda}) = 0$ and $\nabla_{\vec{\lambda}} L(\vec{x}, \vec{\lambda}) = 0$   (6.12)

where $L(\vec{x}, \vec{\lambda}) = f(\vec{x}) + \sum_{j=1}^{m} \lambda_j \ell_j(\vec{x})$.

Proof: Let $d\vec{x} = (dx_1, dx_2, \ldots, dx_n)$ be any admissible variation about the optimum point $\vec{x}^*$. If $\vec{x}^*$ is a local optimum point, then:

$df = \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i} dx_i = \nabla f(\vec{x}^*) \cdot d\vec{x} = 0$   (6.13)

At the optimum point, $\ell_j(\vec{x}^*) = 0$, and $d\ell_j(\vec{x}^*) = 0$ for any admissible variation $d\vec{x}$ from the optimum point. So:

$d\ell_j(\vec{x}^*) = \nabla \ell_j(\vec{x}^*) \cdot d\vec{x} = 0$   (6.14)

Multiplying equation (6.14) by $\lambda_j$ and adding to equation (6.13), we have:

$\nabla f(\vec{x}^*) \cdot d\vec{x} + \sum_{j=1}^{m} \lambda_j \nabla \ell_j(\vec{x}^*) \cdot d\vec{x} = 0$

or

$\sum_{i=1}^{n} \left( \dfrac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \dfrac{\partial \ell_j}{\partial x_i} \right) dx_i = 0$   (6.15)

Only n - m of the variations are independent. Let $x_1, x_2, \ldots, x_m$ be dependent variables and $x_{m+1}, x_{m+2}, \ldots, x_n$ be independent variables. We can choose values of $\lambda_j$, j = 1, 2, ..., m, such that the coefficients of the first m variations $dx_i$ in equation (6.15) become zero, i.e.:

$\dfrac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \dfrac{\partial \ell_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, m$   (6.16)

or

$\dfrac{\partial L}{\partial x_i} = 0, \quad i = 1, 2, \ldots, m$   (6.17)

Because $dx_{m+1}, dx_{m+2}, \ldots, dx_n$ are independent, their coefficients must also be zero, i.e.:

$\dfrac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \dfrac{\partial \ell_j}{\partial x_i} = 0, \quad i = m+1, m+2, \ldots, n$

or

$\dfrac{\partial L}{\partial x_i} = 0, \quad i = m+1, m+2, \ldots, n$   (6.18)

In addition, we have $\ell_j(\vec{x}^*) = 0$, j = 1, 2, ..., m, or:

$\dfrac{\partial L}{\partial \lambda_j} = 0, \quad j = 1, 2, \ldots, m$   (6.19)

From equations (6.17) to (6.19), the necessary conditions for equation (6.11) are:

$\dfrac{\partial L}{\partial x_i} = 0, \; i = 1, 2, \ldots, n; \qquad \dfrac{\partial L}{\partial \lambda_j} = 0, \; j = 1, 2, \ldots, m$

or $\nabla L(\vec{x}^*, \vec{\lambda}^*) = 0$   (6.20)
6.1.3. Sufficient conditions for problems with multiple equality constraints

Theorem: For any candidate minimum point $(\vec{x}^*, \vec{\lambda}^*)$ of a problem with multiple equality constraints, if $d\vec{x}^T [Q(\vec{x}^*, \vec{\lambda}^*)] d\vec{x} > 0$ for all admissible variations $d\vec{x} \ne 0$, where $[Q] = \nabla^2_{\vec{x}} L$ is the Hessian of the Lagrange function, then $\vec{x}^*$ is a true local minimum.

Proof: Since the constraints are satisfied at $\vec{x}^*$, we have:

$L(\vec{x}^*, \vec{\lambda}^*) = f(\vec{x}^*) + \sum_{j=1}^{m} \lambda_j^* \ell_j(\vec{x}^*) = f(\vec{x}^*)$   (6.21)

For all sufficiently small admissible variations $d\vec{x}$ from $\vec{x}^*$ (which keep the constraints satisfied), we have:

$L(\vec{x}^* + d\vec{x}, \vec{\lambda}^*) = f(\vec{x}^* + d\vec{x}) + \sum_{j=1}^{m} \lambda_j^* \ell_j(\vec{x}^* + d\vec{x}) = f(\vec{x}^* + d\vec{x})$   (6.22)

Subtracting equation (6.21) from equation (6.22), we have:

$L(\vec{x}^* + d\vec{x}, \vec{\lambda}^*) - L(\vec{x}^*, \vec{\lambda}^*) = f(\vec{x}^* + d\vec{x}) - f(\vec{x}^*)$

Since $\vec{x}^*$ is a local minimum point, $f(\vec{x}^* + d\vec{x}) - f(\vec{x}^*) > 0$. So:

$L(\vec{x}^* + d\vec{x}, \vec{\lambda}^*) - L(\vec{x}^*, \vec{\lambda}^*) > 0$   (6.23)

The Taylor series expansion of $L(\vec{x}, \vec{\lambda}^*)$ about $\vec{x}^*$ gives:

$L(\vec{x}^* + d\vec{x}, \vec{\lambda}^*) = L(\vec{x}^*, \vec{\lambda}^*) + d\vec{x}^T \nabla_{\vec{x}} L(\vec{x}^*, \vec{\lambda}^*) + \tfrac{1}{2}\, d\vec{x}^T [Q(\vec{x}^*, \vec{\lambda}^*)] d\vec{x} + \cdots$

Since $\nabla_{\vec{x}} L(\vec{x}^*, \vec{\lambda}^*) = 0$, we have:

$L(\vec{x}^* + d\vec{x}, \vec{\lambda}^*) - L(\vec{x}^*, \vec{\lambda}^*) = \tfrac{1}{2}\, d\vec{x}^T [Q(\vec{x}^*, \vec{\lambda}^*)] d\vec{x}$   (6.24)

From equations (6.23) and (6.24), we have:

$d\vec{x}^T [Q(\vec{x}^*, \vec{\lambda}^*)] d\vec{x} > 0$

which is the sufficient condition for a local minimum point. It is also noted that if $d\vec{x}^T [Q(\vec{x}^*, \vec{\lambda}^*)] d\vec{x} < 0$, then $\vec{x}^*$ is a local maximum point.
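For Example 01 this check can be done by hand, but the small sketch below (illustrative only; the tangent-space direction is hard-coded for the single linear constraint) evaluates $d\vec{x}^T [Q] d\vec{x}$ along the constraint tangent:

// Sufficiency check for Example 01: Q = Hessian of L = [[2,0],[0,2]];
// admissible variations satisfy grad(l) . dx = dx1 + dx2 = 0,
// i.e. dx = t * (1, -1). Verify dx^T Q dx > 0 for t != 0.
#include <cstdio>

int main() {
    const double Q[2][2] = {{2.0, 0.0}, {0.0, 2.0}};  // Hessian of the Lagrangian
    const double d[2]    = {1.0, -1.0};               // basis of the tangent space of l(x) = 0
    double q = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            q += d[i] * Q[i][j] * d[j];
    std::printf("dx^T Q dx = %g  ->  %s\n", q,
                q > 0.0 ? "local minimum" : "not a minimum");
    return 0;  // prints 4 -> local minimum
}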
6.1.4. Interpretation of the Lagrange multipliers

Consider the modified problem of the form: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$\ell_j(\vec{x}) = b_j, \quad j = 1, 2, \ldots, m$

where $b_j$, j = 1, 2, ..., m, are small variations about zero. Adjusting the value of $b_j$ can be regarded as tightening or relaxing the j-th constraint. Clearly, the solution $\vec{x}^*$ of the perturbed problem depends on $\vec{b}$, i.e. $\vec{x}^* = \vec{x}^*(\vec{b})$. We also have:

$L(\vec{b}) = f(\vec{x}(\vec{b})) + \sum_{k=1}^{m} \lambda_k \left( \ell_k(\vec{x}(\vec{b})) - b_k \right)$

$\dfrac{\partial L}{\partial b_i} = \sum_{j=1}^{n} \dfrac{\partial f}{\partial x_j} \dfrac{\partial x_j}{\partial b_i} + \sum_{k=1}^{m} \lambda_k \left( \sum_{j=1}^{n} \dfrac{\partial \ell_k}{\partial x_j} \dfrac{\partial x_j}{\partial b_i} - \delta_{ik} \right) = \sum_{j=1}^{n} \left( \dfrac{\partial f}{\partial x_j} + \sum_{k=1}^{m} \lambda_k \dfrac{\partial \ell_k}{\partial x_j} \right) \dfrac{\partial x_j}{\partial b_i} - \lambda_i$   (6.25)

From the necessary conditions, at the optimum point $(\vec{x}^*, \vec{\lambda}^*)$ we have:

$\dfrac{\partial L}{\partial x_j} = 0 \Rightarrow \dfrac{\partial f}{\partial x_j} + \sum_{k=1}^{m} \lambda_k \dfrac{\partial \ell_k}{\partial x_j} = 0$, and $\dfrac{\partial L}{\partial \lambda_k} = 0 \Rightarrow \ell_k(\vec{x}(\vec{b})) - b_k = 0$   (6.26)

From equations (6.25) and (6.26), we have:

$\dfrac{\partial L}{\partial b_i}\Big|_{\vec{x}^*, \vec{\lambda}^*} = -\lambda_i^*$   (6.27)

Since, at the optimum point $(\vec{x}^*, \vec{\lambda}^*)$, $L(\vec{b}) = f(\vec{b})$:

$\dfrac{\partial L}{\partial b_i}\Big|_{\vec{x}^*, \vec{\lambda}^*} = \dfrac{\partial f}{\partial b_i}\Big|_{\vec{x}^*, \vec{\lambda}^*}$   (6.28)

From equations (6.27) and (6.28):

$\dfrac{\partial f}{\partial b_i}\Big|_{\vec{x}^*, \vec{\lambda}^*} = -\lambda_i^*$   (6.29)

So we can interpret $\lambda_i$ as the sensitivity of $f(\vec{x})$ with respect to $b_i$ at the optimum point. If we think of $f(\vec{x})$ as dollars of return and $b_i$ as the i-th resource, then $\lambda_i^*$ may be interpreted as dollars per unit of the i-th resource. This is why the Lagrange multipliers are sometimes called "shadow prices".

The result of equation (6.29) is based on the form of the equality constraints $\ell_j(\vec{x}) - b_j = 0$. If we change the form to $b_j - \ell_j(\vec{x}) = 0$, then the result becomes:

$\dfrac{\partial f}{\partial b_i}\Big|_{\vec{x}^*, \vec{\lambda}^*} = \lambda_i^*$   (6.30)

The sign of $\lambda_i^*$ may be positive or negative, depending on the form of the equality constraints.
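Example 01 illustrates this sensitivity result directly (a worked continuation added for illustration). Perturb the constraint to $\ell(\vec{x}) - b = x_1 + x_2 - 2 - b = 0$. By symmetry, $x_1^* = x_2^* = (2 + b)/2$, so

$f^*(b) = 2\left(\dfrac{2+b}{2}\right)^2 = \dfrac{(2+b)^2}{2}, \qquad \dfrac{\partial f^*}{\partial b}\Big|_{b=0} = 2 = -\lambda^*$

consistent with equation (6.29) and the value $\lambda^* = -2$ found earlier.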
6.2. PROBLEMS WITH INEQUALITY CONSTRAINTS

6.2.1. Kuhn-Tucker necessary conditions

Theorem: For the minimization problem with inequality constraints of the form: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$g_j(\vec{x}) \le 0; \quad j = 1, 2, \ldots, m$   (6.31)

the Kuhn-Tucker necessary conditions state that, at the optimum point:

$\nabla f(\vec{x}^*) + \sum_{j \in J} \lambda_j^* \nabla g_j(\vec{x}^*) = 0$

$g_j(\vec{x}^*) = 0, \quad j \in J$   (6.32)

$\lambda_j^* \ge 0, \quad j \in J$

where J is the set of active constraints.

Proof:

a. Proof of $\nabla f(\vec{x}^*) + \sum_{j \in J} \lambda_j^* \nabla g_j(\vec{x}^*) = 0$:

The problem of equation (6.31) can be converted to a problem with equality constraints by introducing slack variables $s_j$, j = 1, 2, ..., m: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$g_j(\vec{x}) + s_j^2 = 0; \quad j = 1, 2, \ldots, m$   (6.33)

The necessary condition of equation (6.33) is equivalent to finding the stationary point of the Lagrange function:

$L(\vec{x}, \vec{s}, \vec{\lambda}) = f(\vec{x}) + \sum_{j=1}^{m} \lambda_j \left( g_j(\vec{x}) + s_j^2 \right)$

$\nabla_{\vec{x}} L(\vec{x}, \vec{s}, \vec{\lambda}) = 0 \Rightarrow \nabla f(\vec{x}) + \sum_{j=1}^{m} \lambda_j \nabla g_j(\vec{x}) = 0$   (6.34)

$\dfrac{\partial L}{\partial s_j} = 0 \Rightarrow 2 \lambda_j s_j = 0, \quad j = 1, 2, \ldots, m$   (6.35)

$\dfrac{\partial L}{\partial \lambda_j} = 0 \Rightarrow g_j(\vec{x}) + s_j^2 = 0, \quad j = 1, 2, \ldots, m$   (6.36)

Equation (6.35) states that either $\lambda_j$ or $s_j$ must be zero at the optimum point.

If $s_j = 0$, then from equation (6.36) we have $g_j(\vec{x}^*) = 0$ at the optimum point; in other words, $g_j$ is active.

If $s_j \ne 0$, then $\lambda_j^* = 0$. From equation (6.36) we have $g_j(\vec{x}^*) = -s_j^2 < 0$; in other words, $g_j$ is inactive, and for an inactive constraint $\lambda_j^* = 0$.

Hence equation (6.34) can be rewritten as:

$\nabla f(\vec{x}^*) + \sum_{j \in J} \lambda_j^* \nabla g_j(\vec{x}^*) = 0$   (6.37)
b. Proof of $\lambda_j^* \ge 0$ for $j \in J$:

Let $d\vec{x}$ be any admissible small variation from the optimum point $\vec{x}^*$. In order to keep $\vec{x}^* + d\vec{x}$ inside the feasible region, we must have:

$f(\vec{x}) = f(\vec{x}^*) + (\vec{x} - \vec{x}^*) \cdot \nabla f(\vec{x}^*) + \cdots$

$df = d\vec{x} \cdot \nabla f(\vec{x}^*) \ge 0$ for any admissible variation $d\vec{x}$   (6.38)

$g_j(\vec{x}) = g_j(\vec{x}^*) + (\vec{x} - \vec{x}^*) \cdot \nabla g_j(\vec{x}^*) + \cdots$

$dg_j = d\vec{x} \cdot \nabla g_j(\vec{x}^*) \le 0$ for $j \in J$ and any admissible variation $d\vec{x}$   (6.39)

Taking the dot product of $d\vec{x}$ with equation (6.37) gives:

$d\vec{x} \cdot \nabla f(\vec{x}^*) + \sum_{j \in J} \lambda_j^* \, d\vec{x} \cdot \nabla g_j(\vec{x}^*) = 0$

or

$df = -\sum_{j \in J} \lambda_j^* \, dg_j$

From equations (6.38) and (6.39), it follows that $\lambda_j^* \ge 0$ for $j \in J$ and any admissible variation $d\vec{x}$.

Figure 6.2: Illustration of the Kuhn-Tucker necessary conditions
6.2.2. Computation of the Lagrange multipliers

The Kuhn-Tucker necessary conditions state that, at the optimum point $\vec{x}^*$:

$\nabla f(\vec{x}^*) + \sum_{j \in J} \lambda_j^* \nabla g_j(\vec{x}^*) = 0$, with $\lambda_j^* \ge 0$ for $j \in J$

Let:

$\{B\} = -\nabla f$, i.e. $b_i = -\partial f / \partial x_i$, $i = 1, 2, \ldots, n$

$[N] = [A_1 \; A_2 \; \cdots \; A_r]_{n \times r}$, where $A_j = \nabla g_j$, $j = 1, 2, \ldots, r$, are the gradients of the r active constraints.

The Kuhn-Tucker necessary conditions expressed in matrix form are:

$\sum_{j \in J} \lambda_j^* \nabla g_j(\vec{x}^*) = -\nabla f(\vec{x}^*) \;\Rightarrow\; [N]\{\lambda^*\} = \{B\}$   (6.40)

Equation (6.40) has n equations with r unknowns $\vec{\lambda}^*$. In general $r \le m$ and $r \le n$, and thus equation (6.40) is overdetermined. The solution of equation (6.40) may be: (i) a unique solution, (ii) no solution, or (iii) an infinite number of solutions.

It is not convenient to solve for $\vec{\lambda}^*$ directly from equation (6.40). However, it can be solved by the least-squares method. Define a residual vector $\{R\}$ as:

$\{R\} = [N]\{\lambda\} - \{B\}$

and define the squared norm of the residual vector as:

$L(\vec{\lambda}) = \{R\} \cdot \{R\} = ([N]\{\lambda\} - \{B\})^T ([N]\{\lambda\} - \{B\}) = \{\lambda\}^T [N]^T [N] \{\lambda\} - 2\{\lambda\}^T [N]^T \{B\} + \{B\}^T \{B\}$

$L(\vec{\lambda})$ is minimized by $\nabla_{\vec{\lambda}} L = 0$, i.e. $\partial L / \partial \lambda_j = 0$, $j = 1, 2, \ldots, r$:

$2[N]^T[N]\{\lambda\} - 2[N]^T\{B\} = 0$, or $[N]^T[N]\{\lambda\} = [N]^T\{B\}$

so $\{\lambda^*\} = ([N]^T[N])^{-1}[N]^T\{B\}$ can be uniquely solved for non-singular $[N]^T[N]$.

For the solution $\{\lambda^*\}$, if $\lambda_j^* \ge 0$, j = 1, 2, ..., r, and $L(\vec{\lambda}^*) = 0$ (i.e. $\{R(\vec{\lambda}^*)\} = 0$, $[N]\{\lambda^*\} = \{B\}$), then the Kuhn-Tucker necessary conditions are satisfied.

So, the procedure for testing the Kuhn-Tucker necessary conditions is:

1. Compute $\{\lambda^*\} = ([N]^T[N])^{-1}[N]^T\{B\}$ and $\{R\} = [N]\{\lambda^*\} - \{B\}$.

2. If $\lambda_j^* \ge 0$ and $\{R\} = 0$, then the Kuhn-Tucker necessary conditions are satisfied.
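A minimal sketch of this two-step test, assuming n = 2 variables and r = 2 active constraints with made-up gradient values (the 2x2 normal equations are inverted directly):

// Kuhn-Tucker test sketch for n = 2 variables and r = 2 active constraints:
// solve N^T N {lambda} = N^T {B}, then check lambda >= 0 and R ~ 0.
// Illustrative only: the gradients below are placeholders.
#include <cmath>
#include <cstdio>

int main() {
    double N[2][2] = {{1.0, 0.0},    // columns are grad g1, grad g2 (assumed)
                      {0.0, 1.0}};
    double B[2]    = {1.0, 2.0};     // B = -grad f at the candidate point (assumed)

    double NtN[2][2], NtB[2];        // normal equations N^T N lambda = N^T B
    for (int i = 0; i < 2; ++i) {
        NtB[i] = N[0][i] * B[0] + N[1][i] * B[1];
        for (int j = 0; j < 2; ++j)
            NtN[i][j] = N[0][i] * N[0][j] + N[1][i] * N[1][j];
    }
    double det = NtN[0][0] * NtN[1][1] - NtN[0][1] * NtN[1][0];
    double lam[2] = { ( NtN[1][1] * NtB[0] - NtN[0][1] * NtB[1]) / det,
                      (-NtN[1][0] * NtB[0] + NtN[0][0] * NtB[1]) / det };

    double R[2], rnorm = 0.0;        // residual R = N lambda - B
    for (int i = 0; i < 2; ++i) {
        R[i] = N[i][0] * lam[0] + N[i][1] * lam[1] - B[i];
        rnorm += R[i] * R[i];
    }
    bool kt = lam[0] >= 0.0 && lam[1] >= 0.0 && std::sqrt(rnorm) < 1e-10;
    std::printf("lambda = (%g, %g), |R| = %g, K-T %s\n",
                lam[0], lam[1], std::sqrt(rnorm), kt ? "satisfied" : "not satisfied");
    return 0;
}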
6.2.3. Projection vector and projection matrix

The projection of a vector $\vec{B}$ onto the intersection of all tangent planes of the active constraints can be expressed as:

$\vec{B} = \vec{B}_P + \vec{B}_\perp \;\Rightarrow\; \vec{B}_P = \vec{B} - \vec{B}_\perp$

Figure 6.3: Illustration of the projection vector and projection matrix

The normal component $\vec{B}_\perp$ can be expressed as a combination of the active constraint gradients, $\vec{B}_\perp = \sum_{j \in J} u_j \nabla g_j$, so:

$\vec{B}_P = \vec{B} - \sum_{j \in J} u_j \nabla g_j$, with $\vec{B}_P \perp \nabla g_j$ for $j \in J$ (where $\nabla g_j = A_j$)

$A_i \cdot \vec{B}_P = 0$, or $A_i \cdot \vec{B} - \sum_{j \in J} u_j A_i \cdot A_j = 0, \quad i = 1, 2, \ldots, r$

or $[N]^T\{B\} - [N]^T[N]\{u\} = 0$, or $[N]^T[N]\{u\} = [N]^T\{B\}$, or $\{u\} = ([N]^T[N])^{-1}[N]^T\{B\}$

$\vec{B}_P = \vec{B} - \sum_{j \in J} u_j \nabla g_j = \{B\} - [N]\{u\} = \{B\} - [N]([N]^T[N])^{-1}[N]^T\{B\} = \left( [I] - [N]([N]^T[N])^{-1}[N]^T \right)\{B\}$

or $\vec{B}_P = [P]\{B\}$, where

$[P] = [I] - [N]([N]^T[N])^{-1}[N]^T$: the projection matrix

For the Kuhn-Tucker necessary conditions: $[N]\{\lambda\} = \{B\}$ and $\{\lambda\} = ([N]^T[N])^{-1}[N]^T\{B\}$, so:

$\{B\} - [N]\{\lambda\} = 0$

$\{B\} - [N]([N]^T[N])^{-1}[N]^T\{B\} = 0$

$\left( [I] - [N]([N]^T[N])^{-1}[N]^T \right)\{B\} = 0$

or $[P]\{B\} = 0$, i.e. $\vec{B}_P = 0$

So the Kuhn-Tucker necessary conditions state that, at the optimum point, the projection of $-\nabla f(\vec{x}^*)$ onto the intersection of the tangent planes of the active constraints is a null vector.
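For a single active constraint the projection matrix reduces to $[P] = [I] - A A^T / (A^T A)$ with $A = \nabla g$. The following sketch (illustrative values only) builds $[P]$ and verifies that $[P]\{B\}$ is tangent to the constraint:

// Projection-matrix sketch for one active constraint in n = 2:
// P = I - A A^T / (A^T A), and B_P = P B.
#include <cstdio>

int main() {
    const int n = 2;
    double A[n] = {1.0, 1.0};   // grad g of the single active constraint (assumed)
    double B[n] = {2.0, 0.0};   // B = -grad f at the current point (assumed)

    double AtA = 0.0;
    for (int i = 0; i < n; ++i) AtA += A[i] * A[i];

    double P[n][n], BP[n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            P[i][j] = (i == j ? 1.0 : 0.0) - A[i] * A[j] / AtA;  // P = I - A A^T/(A^T A)
    for (int i = 0; i < n; ++i) {
        BP[i] = 0.0;
        for (int j = 0; j < n; ++j) BP[i] += P[i][j] * B[j];      // B_P = P B
    }
    std::printf("B_P = (%g, %g)\n", BP[0], BP[1]);   // prints (1, -1)
    // B_P is tangent to the constraint: A . B_P = 0. If B_P = 0, the
    // Kuhn-Tucker condition [P]{B} = 0 holds at this point.
    double dot = A[0] * BP[0] + A[1] * BP[1];
    std::printf("A . B_P = %g (should be 0)\n", dot);
    return 0;
}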
6.3. METHOD OF FEASIBLE DIRECTIONS

Figure 6.4: Illustration of the method of feasible directions

Basic concepts:

Feasible design: $g_j(\vec{x}) \le 0$.

Feasible direction: $\vec{S}$ points into the feasible region, i.e. $\vec{S} \cdot \nabla g_j(\vec{x}) \le 0$, $j \in J$.

Usable direction: $\vec{S}$ is a descent direction, i.e. $\vec{S} \cdot \nabla f(\vec{x}) \le 0$.

Usable-feasible direction: $\vec{S} \cdot \nabla g_j(\vec{x}) \le 0$ for $j \in J$ and $\vec{S} \cdot \nabla f(\vec{x}) \le 0$ (a small test of these conditions is sketched below).
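These conditions are just sign tests on dot products. A minimal sketch, with made-up gradients and direction:

// Test whether a direction S is usable-feasible at a point:
// usable:   S . grad f  <= 0
// feasible: S . grad gj <= 0 for every active constraint j.
#include <cstdio>

double dot(const double* a, const double* b, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

int main() {
    const int n = 2, nActive = 1;
    double gradF[n]          = {1.0, 1.0};     // grad f (assumed)
    double gradG[nActive][n] = {{0.0, 1.0}};   // gradients of active constraints (assumed)
    double S[n]              = {-1.0, -0.5};   // candidate direction

    bool usable = dot(S, gradF, n) <= 0.0;
    bool feasible = true;
    for (int j = 0; j < nActive; ++j)
        if (dot(S, gradG[j], n) > 0.0) feasible = false;

    std::printf("usable = %d, feasible = %d -> %s\n", usable, feasible,
                (usable && feasible) ? "usable-feasible direction"
                                     : "reject this direction");
    return 0;
}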
Methods of finding a usable-feasible search direction:

(1) If no active constraint exists, then $\vec{S} = -\nabla f(\vec{x})$.

(2) If some active constraints exist, then $\vec{S}$ must satisfy the conditions of a usable-feasible direction. It can be found by the following methods:

a. Random method: generate a random direction and test whether it is a usable-feasible direction.

b. Projection method: $\vec{B} = -\nabla f(\vec{x})$, $\vec{S} = \vec{B}_P = [P]\vec{B} = -[P]\nabla f(\vec{x})$.

The projection direction can also be found from the following problem: find $\vec{S}$ such that

$\cos\gamma = \dfrac{-\vec{S}^T \nabla f}{\|\vec{S}\| \, \|\nabla f\|} \to \max$

subject to: $\vec{S} \cdot \nabla g_j \le 0, \; j \in J$; $\quad -1 \le S_i \le 1, \; i = 1, 2, \ldots, n$

c. Method of feasible directions: the direction-finding problem in the method of feasible directions is expressed as: find $\vec{S}$ and $\beta$ such that $f(\vec{S}, \beta) = \beta \to \max$, subject to:

(i) $\vec{S} \cdot \nabla g_j + \theta_j \beta \le 0, \; j \in J$

(ii) $\vec{S} \cdot \nabla f + \beta \le 0$

(iii) $-1 \le S_i \le 1, \; i = 1, 2, \ldots, n$

where $\theta_j$ for $j \in J$ is a given positive number called the push-off factor.

Suppose the solution of this problem is $\vec{S}^*$ and $\beta^*$. If $\beta^* > 0$, then $\vec{S}^* \cdot \nabla g_j \le -\theta_j \beta^* < 0$ and $\vec{S}^* \cdot \nabla f \le -\beta^* < 0$, so $\vec{S}^*$ is a usable-feasible direction. If $\beta^* = 0$, then no usable-feasible sector exists.

Figure 6.5: Illustration of the direction-finding problem in the method of feasible directions (the $\beta^* = 0$ case)

Push-off factors $\theta_j$:

(1) $\theta_j = 0$ for linear constraints.

(2) $\theta_j = 1$ for nonlinear constraints.

With $\theta_j = 0$, $\vec{S}^*$ is the projection of $-\nabla f(\vec{x})$ onto the tangent plane of $g_j(\vec{x}) = 0$ at $\vec{x}$.

With $\theta_j > 0$, $\vec{S} \cdot \nabla g_j \le -\theta_j \beta$ and $\vec{S} \cdot \nabla f \le -\beta$, so the direction is pushed off the constraint boundary.
Procedure for the method of feasible directions:

(1) Find a usable-feasible direction $\vec{S}_q$.

(2) Perform a 1-D search along $\vec{S}_q$: $\vec{x}_{q+1} = \vec{x}_q + \alpha^* \vec{S}_q$. There are two possibilities at the point $\vec{x}_{q+1}$:

i. One or more active constraints are encountered.

ii. No active constraint exists.

(3) Repeat steps (1) and (2) until the convergence criteria are satisfied.

6.3.1. 1-D search in the method of feasible directions

There are two possibilities for the result of the 1-D search in the method of feasible directions:

1. One or more constraints are active.

2. No constraint is active.

Figure 6.6: Illustration of the 1-D search in the method of feasible directions

The two situations can be identified as follows:

1. Find $\alpha_{\max}$ such that at least one constraint becomes active in the feasible region. Let $\vec{x}_t = \vec{x}_q + \alpha \vec{S}_q$.

2. Compute $\dfrac{df}{d\alpha}$ at $\vec{x}_t$:

$\dfrac{df}{d\alpha} = \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i} S_{q,i} = \vec{S}_q \cdot \nabla f(\vec{x}_t)$

3. If $\dfrac{df}{d\alpha}\Big|_{\alpha_{\max}} > 0$, then $0 \le \alpha^* \le \alpha_{\max}$; if $\dfrac{df}{d\alpha}\Big|_{\alpha_{\max}} \le 0$, then $\alpha^* = \alpha_{\max}$.

Figure 6.7: Illustration of finding $\alpha_{\max}$ in the feasible region
Procedure to find $\vec{x}_t$ from $\vec{x}_q$:

1) Find an initial trial step length $\alpha_t$: $\vec{x}_t = \vec{x}_q + \alpha_t \vec{S}_q$.

Several procedures can be used to predict the initial step size $\alpha_t$, as follows (a small sketch is given after this list):

a) $\alpha_t$ is selected to reduce $f(\vec{x})$ by a given amount $\Delta f$. Approximating $f$ linearly along $\vec{S}_q$:

$f(\vec{x}_q + \alpha \vec{S}_q) \approx f(\alpha) = a + b\alpha$

$\alpha = 0 \Rightarrow f(0) = a = f(\vec{x}_q)$

$b = \dfrac{df}{d\alpha}\Big|_0 = \vec{S}_q \cdot \nabla f(\vec{x}_q)$

$f(\alpha) = f(\vec{x}_q) + \alpha \, \vec{S}_q \cdot \nabla f(\vec{x}_q)$

$\Delta f = f(\alpha) - f(\vec{x}_q) = \alpha \, \vec{S}_q \cdot \nabla f(\vec{x}_q) \;\Rightarrow\; \alpha_t = \dfrac{-\Delta f}{\vec{S}_q \cdot \nabla f(\vec{x}_q)}$

If a percentage $\bar{\Delta}$ is given: $\Delta f = \bar{\Delta} f(\vec{x}_q)$, with $\bar{\Delta} = 0.05 \sim 0.10$, then

$\alpha_t = \dfrac{-\bar{\Delta} f(\vec{x}_q)}{\vec{S}_q \cdot \nabla f(\vec{x}_q)}$

b) $\alpha_t$ is selected such that $\Delta \vec{x}$ produces a predetermined change $\varepsilon$:

$\vec{x}_t = \vec{x}_q + \alpha_t \vec{S}_q, \quad \Delta x_i = \alpha_t S_{q,i}$

$\varepsilon = \max_i |\alpha_t S_{q,i}|, \; i = 1, 2, \ldots, n \quad\Rightarrow\quad \alpha_t = \dfrac{\varepsilon}{\max_i |S_{q,i}|}$

If $\varepsilon$ is the percentage change of $\Delta x_i$, then

$\varepsilon = \max_i \left| \dfrac{\alpha_t S_{q,i}}{x_i} \right| \quad\Rightarrow\quad \alpha_t = \dfrac{\varepsilon}{\max_i |S_{q,i} / x_i|}, \quad i = 1, 2, \ldots, n$
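A minimal sketch of both predictions, with assumed values for $f(\vec{x}_q)$, $\nabla f(\vec{x}_q)$ and $\vec{S}_q$:

// Initial trial step size prediction:
//   method (a): alpha_t = -dbar*f(xq) / (Sq . grad f(xq))  (target percentage decrease)
//   method (b): alpha_t = eps / max_i |Sq_i|               (target design change)
#include <cmath>
#include <cstdio>

int main() {
    const int n = 2;
    double f_xq     = 10.0;              // f(xq), assumed
    double gradF[n] = {2.0, 1.0};        // grad f(xq), assumed
    double S[n]     = {-0.8, -0.6};      // usable-feasible direction, assumed

    double sdotg = 0.0;
    for (int i = 0; i < n; ++i) sdotg += S[i] * gradF[i];   // Sq . grad f < 0

    double dbar = 0.10;                                     // 10% target decrease
    double alpha_a = -dbar * f_xq / sdotg;                  // method (a)

    double eps = 0.5, smax = 0.0;                           // method (b)
    for (int i = 0; i < n; ++i) smax = std::fmax(smax, std::fabs(S[i]));
    double alpha_b = eps / smax;

    std::printf("alpha_t (target decrease) = %g\n", alpha_a);  // 0.4545...
    std::printf("alpha_t (target change)   = %g\n", alpha_b);  // 0.625
    return 0;
}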
2) Correcting the step size $\alpha_t$:

Figure 6.8: Illustration of correcting the step size $\alpha_t$

There are three possible situations at the design with $\alpha = \alpha_t$:

a) $\vec{x}_t = \vec{x}_q + \alpha_t \vec{S}_q$ is feasible and no active constraint is encountered.

b) $\vec{x}_t$ is feasible, and some constraints are active. A constraint tolerance $\varepsilon$ is given to identify whether a constraint is inactive, active, or violated:

Inactive constraint: $g_k(\vec{x}_t) < -\varepsilon$

Active constraint: $|g_k(\vec{x}_t)| \le \varepsilon$, i.e. $-\varepsilon \le g_k(\vec{x}_t) \le \varepsilon$

Violated constraint: $g_k(\vec{x}_t) > \varepsilon$

Usually, $\varepsilon$ is taken as a small number, on the order of $10^{-2}$ to $10^{-3}$.

c) $\vec{x}_t$ is in the infeasible region. A linear interpolation is required to locate $\alpha_I$ on the active constraint boundary.
Figure 6.9: Illustration of the linear interpolation required to locate $\alpha_I$ on the active constraint boundary

Approximate the violated constraint linearly along the search direction:

$g(\alpha) = a + b\alpha$

$g_k(\vec{x}_q) = g(\alpha = 0) = a$, so $a = g_k(\vec{x}_q)$, and $b = \dfrac{g_k(\vec{x}_t) - g_k(\vec{x}_q)}{\alpha_t}$

$g_k(\alpha_I) = 0 = a + b\alpha_I \;\Rightarrow\; \alpha_I = -\dfrac{a}{b} = \dfrac{-g_k(\vec{x}_q)\,\alpha_t}{g_k(\vec{x}_t) - g_k(\vec{x}_q)}$

If several constraints are violated at $\vec{x}_t$, then $\alpha_I$ should be found for each violated constraint $g_k(\vec{x})$, and the final $\alpha_I$ is the minimum value over $k \in J$:

$\alpha_I = \min \{\alpha_{I,k}\}, \; k \in J$

If all the constraints are scaled properly, then the most violated constraint can be selected to find $\alpha_I$. Usually, we scale the constraints to the range $-1 \le g_j(\vec{x}) \le 0$ for feasible $\vec{x}$.

There is a special case in which the linear interpolation will fail: when $\vec{x}_q$ already lies on the boundary, $g_k(\vec{x}_q) \approx 0$ while $g_k(\vec{x}_t) > 0$, so

$\alpha_I = \dfrac{-g_k(\vec{x}_q)\,\alpha_t}{g_k(\vec{x}_t) - g_k(\vec{x}_q)} \approx 0$

and the linear interpolation fails in predicting $\alpha_I$. In this situation, a quadratic interpolation should be used to find $\alpha_I$.
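The linear prediction is a one-line formula; a sketch with assumed constraint values:

// Locate alpha_I where a violated constraint crosses zero, by linear
// interpolation between g_k(xq) (at alpha = 0) and g_k(xt) (at alpha = alpha_t).
#include <cstdio>

// Returns alpha_I such that the linear model of g is zero; assumes g_q < 0 < g_t.
double alphaLinear(double g_q, double g_t, double alpha_t) {
    return -g_q * alpha_t / (g_t - g_q);
}

int main() {
    double alpha_t = 1.0;
    double g_q = -0.40;   // g_k(xq): feasible side (assumed)
    double g_t = +0.20;   // g_k(xt): violated (assumed)
    std::printf("alpha_I = %g\n", alphaLinear(g_q, g_t, alpha_t));
    // prints 0.666667: two thirds of the step puts the constraint on its boundary.
    // Note: if g_q ~ 0 (xq already on the boundary), alpha_I ~ 0 and this linear
    // prediction fails; use the quadratic interpolation given below instead.
    return 0;
}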
Quadratic interpolation to predict $\alpha_I$:

$g(\alpha) = a + b\alpha + c\alpha^2$

$\alpha = 0 \Rightarrow a = g(0) = g_k(\vec{x}_q)$

$\dfrac{dg_k(\alpha)}{d\alpha} = \vec{S}_q \cdot \nabla g_k(\alpha)$, and $\dfrac{dg(\alpha)}{d\alpha} = b + 2c\alpha$

$\alpha = 0 \Rightarrow b = \vec{S}_q \cdot \nabla g_k(\vec{x}_q)$

$\alpha = \alpha_t \Rightarrow g_k(\vec{x}_t) = a + b\alpha_t + c\alpha_t^2 \;\Rightarrow\; c = \dfrac{g_k(\vec{x}_t) - g_k(\vec{x}_q) - \alpha_t \, \vec{S}_q \cdot \nabla g_k(\vec{x}_q)}{\alpha_t^2}$

$g_k(\alpha_I) = 0 \;\Rightarrow\; \alpha_I = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2c}$ (taking the smallest positive root)
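A sketch of the quadratic prediction for the failure case above, with assumed values of $g_k(\vec{x}_q)$, $\vec{S}_q \cdot \nabla g_k(\vec{x}_q)$ and $g_k(\vec{x}_t)$:

// Quadratic prediction of alpha_I for the special case where xq lies on the
// boundary (g_k(xq) ~ 0):
//   a = g_k(xq), b = Sq . grad g_k(xq), c from g_k(xt) at alpha_t.
#include <cmath>
#include <cstdio>

int main() {
    double alpha_t = 1.0;
    double a = 0.0;      // g_k(xq): on the boundary (assumed)
    double b = -0.5;     // Sq . grad g_k(xq): heading into the feasible side (assumed)
    double g_t = 0.3;    // g_k(xt) > 0: constraint violated at alpha_t (assumed)

    double c = (g_t - a - alpha_t * b) / (alpha_t * alpha_t);
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0 || c == 0.0) {
        std::printf("no quadratic root; keep alpha_t\n");
        return 0;
    }
    // take the smallest positive root of a + b*alpha + c*alpha^2 = 0
    double r1 = (-b - std::sqrt(disc)) / (2.0 * c);
    double r2 = (-b + std::sqrt(disc)) / (2.0 * c);
    double alpha_I = (r1 > 0.0 && (r1 < r2 || r2 <= 0.0)) ? r1 : r2;
    std::printf("alpha_I = %g\n", alpha_I);
    return 0;  // with these values: c = 0.8, roots 0 and 0.625 -> alpha_I = 0.625
}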
6.3.2. The zigzagging phenomenon in the method of feasible directions

The zigzag problem in the method of feasible directions can be mitigated by:

(1) Using a large constraint tolerance $\varepsilon$ when identifying active constraints.

(2) If a constraint becomes active too many times in alternating succession, treating it as an active constraint when finding the search direction, whether or not it is truly active.

Figure 6.10: The zigzag in the method of feasible directions

6.4. C++ PROGRAMS FOR THE METHOD OF FEASIBLE DIRECTIONS

6.4.1. Direction finding in the method of feasible directions

For the constrained problem expressed as: find $\vec{x}$ such that $f(\vec{x}) \to \min$, subject to:

$g_j(\vec{x}) \le 0, \quad j = 1, 2, \ldots, n_c$   (6.41)

assume that $\vec{x}_q$ is a feasible design, and we are going to find a search direction $\vec{S}$ that will yield an improved design. $\vec{S}$ must satisfy the conditions of a usable and feasible direction:

$\vec{S} \cdot \nabla f \le 0; \qquad \vec{S} \cdot \nabla g_j \le 0, \; j \in J$   (6.42)

where $J = \{ j : g_j(\vec{x}_q) = 0, \; j = 1, 2, \ldots, n_c \}$ is the set of active constraints.

For any search direction $\{S\}$, define an artificial variable $\alpha$ as:

$\alpha = \max \left\{ \vec{S} \cdot \nabla f, \; \vec{S} \cdot \nabla g_j \text{ for } j \in J \right\}$   (6.43)

If $\alpha < 0$, then $\vec{S} \cdot \nabla f < 0$ and $\vec{S} \cdot \nabla g_j < 0$ for $j \in J$. This means that $\{S\}$ is a usable and feasible direction. The problem of finding the usable-feasible direction can then be expressed as: find $\vec{S}$ and $\alpha$ such that $f(\vec{S}, \alpha) = \alpha \to \min$, subject to:

$\vec{S} \cdot \nabla f \le \alpha$

$\vec{S} \cdot \nabla g_j \le \alpha$ for $j \in J$   (6.44)

$-1 \le S_i \le 1$, for $i = 1, 2, \ldots, n$

Substituting $\alpha$ by $-\beta$, the optimization problem can be expressed as: find $\vec{S}$ and $\beta$ such that $f(\vec{S}, \beta) = -\beta \to \min$, subject to:

$\vec{S} \cdot \nabla f + \beta \le 0$

$\vec{S} \cdot \nabla g_j + \beta \le 0$ for $j \in J$   (6.45)

$-1 \le S_i \le 1$, for $i = 1, 2, \ldots, n$

If the solution of this problem has $\beta^* > 0$, then $\vec{S}^*$ is a usable-feasible direction.

Zoutendijk introduced "push-off" factors $\theta_j$ for each active constraint to push the search direction $\vec{S}$ off the constraint boundary, and the direction-finding problem is expressed as: find $\vec{S}$ and $\beta$ such that $f(\vec{S}, \beta) = -\beta \to \min$, subject to:

$\vec{S} \cdot \nabla f + \beta \le 0$

$\vec{S} \cdot \nabla g_j + \theta_j \beta \le 0$ for $j \in J$   (6.46)

$-1 \le S_i \le 1$, for $i = 1, 2, \ldots, n$

In general, we use $\theta_j = 1.0$ for nonlinear constraints and $\theta_j = 0.0$ for linear constraints. In order to convert equation (6.46) to the standard form of linear programming, we let $y_i = S_i + 1$ for $i = 1, 2, \ldots, n$, and the linear problem is expressed as follows.
Find $\vec{y}$ and $\beta$ such that $f(\vec{y}, \beta) = -\beta \to \min$   (6.47-a)

subject to:

$\sum_{i=1}^{n} y_i \dfrac{\partial f}{\partial x_i} + \beta \le \sum_{i=1}^{n} \dfrac{\partial f}{\partial x_i}$   (6.47-b)

$\sum_{i=1}^{n} y_i \dfrac{\partial g_j}{\partial x_i} + \theta_j \beta \le \sum_{i=1}^{n} \dfrac{\partial g_j}{\partial x_i}$ for $j \in J$   (6.47-c)

$0 \le y_i \le 2$ for $i = 1, 2, \ldots, n$   (6.47-d)

Let $y_{n+1} = \beta$, add $n_J$ slack variables (one for each active constraint in equation (6.47-c)), one slack variable for equation (6.47-b), and finally n slack variables for the upper bounds in equation (6.47-d). The number of slack variables is therefore $n_J + 1 + n$. Then equation (6.47) can be converted to the standard form of linear programming as:

$\begin{bmatrix} [\nabla g_j^T]_{n_J \times n} & \{\theta_j\} & [I]_{n_J} & 0 & 0 \\ \nabla f^T & 1 & 0 & 1 & 0 \\ [I]_n & 0 & 0 & 0 & [I]_n \end{bmatrix} \{y\} = \begin{Bmatrix} \left\{ \sum_{i=1}^{n} \partial g_j / \partial x_i \right\}_{n_J} \\ \sum_{i=1}^{n} \partial f / \partial x_i \\ \{2\}_n \end{Bmatrix}$   (6.48)

where $\{y\}$ contains the design variables $y_1, \ldots, y_n$, the artificial variable $y_{n+1} = \beta$, and the slack variables.

The function fdir() in MFDUtil.CPP is developed to find the feasible direction for constrained problems. The flowchart for finding the usable-feasible direction is shown in figure 6.11.

Example FDIR01.CPP: For the following problem with the given initial point $\vec{x}_0 = \left( \sqrt{15}/2, \; 1 \right) \approx (1.9365, \; 1)$:
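The tableau of equation (6.48) is mechanical to assemble. The sketch below is not the book's fdir() from MFDUtil.CPP; it only builds the standard-form arrays for assumed gradients, which would then be handed to any simplex routine (with cost vector c having c[n] = -1 for $-\beta$ and zeros elsewhere, and $S_i = y_i - 1$ recovered afterwards):

// Assemble the standard-form LP arrays of equation (6.48) for the
// direction-finding problem (6.47). Sizes: n design variables, nJ active
// constraints; rows ordered as (6.47-c), (6.47-b), then the bounds (6.47-d).
#include <cstdio>
#include <vector>

int main() {
    const int n = 2, nJ = 1;
    double gradF[n]     = {1.0, 2.0};      // grad f (assumed values)
    double gradG[nJ][n] = {{-1.0, 0.5}};   // grads of active constraints (assumed)
    double theta[nJ]    = {1.0};           // push-off factors (nonlinear constraint)

    const int nRows = nJ + 1 + n;          // one row per constraint of (6.47)
    const int nCols = (n + 1) + nRows;     // y1..yn, beta, one slack per row
    std::vector<std::vector<double>> A(nRows, std::vector<double>(nCols, 0.0));
    std::vector<double> rhs(nRows, 0.0);

    for (int j = 0; j < nJ; ++j) {         // active-constraint rows (6.47-c)
        for (int i = 0; i < n; ++i) { A[j][i] = gradG[j][i]; rhs[j] += gradG[j][i]; }
        A[j][n] = theta[j];
        A[j][n + 1 + j] = 1.0;             // slack
    }
    int r = nJ;                            // objective-gradient row (6.47-b)
    for (int i = 0; i < n; ++i) { A[r][i] = gradF[i]; rhs[r] += gradF[i]; }
    A[r][n] = 1.0;
    A[r][n + 1 + r] = 1.0;
    for (int i = 0; i < n; ++i) {          // bound rows (6.47-d): y_i + slack = 2
        A[nJ + 1 + i][i] = 1.0;
        A[nJ + 1 + i][n + 1 + nJ + 1 + i] = 1.0;
        rhs[nJ + 1 + i] = 2.0;
    }
    for (int i = 0; i < nRows; ++i) {      // print the assembled tableau
        for (int j = 0; j < nCols; ++j) std::printf("%6.2f ", A[i][j]);
        std::printf("| %6.2f\n", rhs[i]);
    }
    return 0;
}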