Limiting Subgradients of the Marginal Function in Some Pathological Smooth Programming Problems

Thai Doan Chuong (a)

Abstract. In this paper we show that the results of Mordukhovich, Nam and Yen [6] on differential stability in parametric programming can be used to derive upper estimates for the limiting subgradients of the marginal function in some pathological smooth programming problems proposed by Gauvin and Dubeau [2].

(Received 30/3/2007; revised 23/5/2007.)

1. Introduction

Let ϕ : X × Y → R̄ be a function taking values in the extended real line R̄ := [−∞, ∞], and let G : X ⇒ Y be a set-valued mapping between Banach spaces. Consider the parametric programming problem

minimize ϕ(x, y) subject to y ∈ G(x).  (1.1)

The extended-real-valued function

µ(x) := inf {ϕ(x, y) | y ∈ G(x)}  (1.2)

is said to be the marginal function (or the value function) of (1.1). The solution map M(·) of the problem is defined by

M(x) := {y ∈ G(x) | µ(x) = ϕ(x, y)}.  (1.3)

For (1.1), we say that ϕ is the objective function and G is the constraint mapping.

Continuity and differentiability properties of µ in the case where X = Rn, Y = Rm, ϕ is a smooth function (i.e., a C¹-function) and G(x) is the set of all y satisfying the parametric inequality/equality system

gi(x, y) ≤ 0, i = 1, . . . , p;  hj(x, y) = 0, j = 1, . . . , q,  (1.4)

where gi : X × Y → R (i = 1, . . . , p) and hj : X × Y → R (j = 1, . . . , q) are smooth functions, were first studied by Gauvin and Tolle [3] and by Gauvin and Dubeau [2]. Their results and ideas have been extended and applied by many authors; see Mordukhovich, Nam and Yen [6], where the case in which ϕ is a nonsmooth function and G is an arbitrary set-valued map between Banach spaces is investigated, and the references therein.

We will show that the results of [6] on differential stability in parametric programming can be used to estimate the set of limiting subgradients (i.e., the limiting subdifferential) of the marginal function in six "pathological" smooth programming problems proposed by Gauvin and Dubeau [2]. Thus, general results on differentiability properties of the marginal function of (1.1) are very useful even for the classical finite-dimensional smooth setting of the problem. We also consider several illustrative examples for the results of [6]. Unlike the corresponding examples in that paper, all the problems considered herein are smooth.
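Before proceeding, note that definitions (1.2) and (1.3) can be explored numerically when X and Y are finite-dimensional. The following brute-force sketch is ours, not part of the paper; it assumes NumPy and uses the data of Example 3.6 below as a test case, approximating µ(x) by minimizing ϕ(x, ·) over a grid of feasible points.

```python
import numpy as np

# Brute-force approximation of the marginal function (1.2) for the data of
# Example 3.6 below: phi(x, y) = -y2 with the constraints y2 - y1**2 <= 0
# and y2 + y1**2 - x <= 0.  We grid the y-plane, keep the feasible points
# of G(x), and take the minimum of phi over them.

def marginal_value(x, n=801, box=2.0):
    y1, y2 = np.meshgrid(np.linspace(-box, box, n), np.linspace(-box, box, n))
    feasible = (y2 - y1**2 <= 0) & (y2 + y1**2 - x <= 0)
    return (-y2[feasible]).min() if feasible.any() else np.inf  # inf over empty G(x)

for x in (-0.5, 0.0, 0.5):
    print(x, marginal_value(x))  # expect mu(x) = -x for x <= 0 and -x/2 for x > 0
```

Such direct search is of course only a check in two variables; the results recalled below provide exact upper estimates instead.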
The emphasis in [1]-[3] was made on the Clarke subgradients of µ, while the main concern of [6] is the Fréchet and the limiting subgradients of µ. The reader is referred to [4, 5] for interesting comments on the development of the concepts of subgradients just mentioned. Note that, under very mild assumptions on X and ϕ, the convex hull of the limiting subdifferential of ϕ at a given point x̄ ∈ X coincides with the Clarke subdifferential of ϕ at that point. So the limiting subdifferential can be considered as the (nonconvex) core of the corresponding Clarke subdifferential, and upper estimates for the limiting subdifferential of marginal functions can lead to sharp upper estimates for the Clarke subdifferential.

2. Preliminaries

Let us recall some material on generalized differentiation, which is available in [4, 5]. All the spaces considered are Banach, unless otherwise stated.

Definition 2.1. Let ϕ : X → R̄ be an extended-real-valued function which is finite at x̄. Given any ε ≥ 0, we say that a vector x* from the topological dual space X* of X is an ε-subgradient of ϕ at x̄ if

lim inf_{x→x̄} [ϕ(x) − ϕ(x̄) − ⟨x*, x − x̄⟩] / ‖x − x̄‖ ≥ −ε.  (2.1)

Denote by ∂̂ε ϕ(x̄) the set of the ε-subgradients of ϕ at x̄. Clearly, ∂̂0 ϕ(x̄) ⊂ ∂̂ε ϕ(x̄) for every ε ≥ 0. The set ∂̂ϕ(x̄) := ∂̂0 ϕ(x̄) is called the Fréchet subdifferential of ϕ at x̄.

Definition 2.2. For a set-valued mapping F : X ⇒ X* between a Banach space X and its dual X*, the sequential Painlevé-Kuratowski upper limit of F(x) as x → x̄ is defined by

Lim sup_{x→x̄} F(x) := {x* ∈ X* | there exist sequences xk → x̄ and x*k → x* (weak*) with x*k ∈ F(xk) for all k = 1, 2, . . .},

where w* denotes the weak* topology in X*.

Definition 2.3. The limiting subdifferential (or the Mordukhovich/basic subdifferential) of ϕ at x̄ is defined by setting

∂ϕ(x̄) := Lim sup_{x →ϕ x̄, ε↓0} ∂̂ε ϕ(x).  (2.2)

The singular subdifferential of ϕ at x̄ is given by

∂∞ϕ(x̄) := Lim sup_{x →ϕ x̄, ε↓0, λ↓0} λ ∂̂ε ϕ(x).  (2.3)

(Here x →ϕ x̄ means that x → x̄ with ϕ(x) → ϕ(x̄).)
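Definition 2.1 lends itself to a crude numerical sanity check. The sketch below is ours, not from the paper; it samples the difference quotient in (2.1) with ε = 0 near x̄, so it can convincingly certify failure but confirms membership only heuristically. The functions phi and mu are sample data; mu is the marginal function computed in Example 3.6 below.

```python
import numpy as np

# Crude numerical test of the Frechet subgradient inequality (2.1) with
# eps = 0: x* belongs to the Frechet subdifferential of phi at xbar iff
#   liminf_{x -> xbar} [phi(x) - phi(xbar) - x*(x - xbar)] / |x - xbar| >= 0.
# Sampling a fixed neighbourhood only approximates the liminf.

rng = np.random.default_rng(0)

def looks_like_frechet_subgradient(phi, xbar, xstar, radius=1e-3, n=100_000):
    h = rng.uniform(-radius, radius, n)
    h = h[h != 0.0]
    quotients = (phi(xbar + h) - phi(xbar) - xstar * h) / np.abs(h)
    return quotients.min() >= -1e-9

phi = lambda x: -np.abs(x)                       # hat-shaped sample function
mu = lambda x: np.where(x <= 0.0, -x, -x / 2.0)  # marginal function of Example 3.6

print(looks_like_frechet_subgradient(phi, 0.0, -1.0))  # False: subdifferential empty
print(looks_like_frechet_subgradient(mu, 0.0, -0.75))  # True:  -0.75 in [-1, -1/2]
print(looks_like_frechet_subgradient(mu, 0.0, -0.25))  # False: -0.25 not in [-1, -1/2]
```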
Remark 2.4 (see [4]). If X is an Asplund space (i.e., a Banach space whose separable subspaces have separable duals) and if ϕ is lower semicontinuous around x̄, then we can equivalently put ε = 0 in (2.2). Moreover, we have ∂ϕ(x̄) ≠ ∅ for every locally Lipschitzian function.

Definition 2.5. Let X be a Banach space and let f : X → R be a function Lipschitzian around x̄. The Clarke subdifferential of f at x̄ is the set

∂CL f(x̄) := {x* ∈ X* | ⟨x*, v⟩ ≤ lim sup_{x'→x̄, t↓0} [f(x' + tv) − f(x')] / t for all v ∈ X}.  (2.4)

Remark 2.6 (see [4, Theorem 3.57]). For the Clarke subdifferential in Asplund spaces, we have

∂CL f(x̄) = cl* co [∂f(x̄) + ∂∞f(x̄)],  (2.5)

where "co" denotes the convex hull and "cl*" stands for the closure in the weak* topology of X*.

Remark 2.7 (see [4]). If ϕ : Rn → R is strictly differentiable at x̄, then

∂CL ϕ(x̄) = ∂ϕ(x̄) = {∇ϕ(x̄)}.  (2.6)

The domain and the graph of a map F : X ⇒ Y are defined, respectively, by setting

dom F := {x ∈ X | F(x) ≠ ∅},  gph F := {(x, y) ∈ X × Y | y ∈ F(x)}.
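To illustrate (2.5), take f(x) = −|x| on X = R (this worked example is ours, not from the paper). A direct check with (2.1) gives ∂̂f(0) = ∅, while ∂̂f(x) = {−1} for x > 0 and ∂̂f(x) = {1} for x < 0, so (2.2) yields ∂f(0) = {−1, 1}; since f is Lipschitzian, ∂∞f(0) = {0}. Hence (2.5) gives ∂CL f(0) = co [{−1, 1} + {0}] = [−1, 1], and the limiting subdifferential {−1, 1} is indeed a nonconvex core of the Clarke subdifferential.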
3. Subgradients of the value function in smooth programming problems

Consider (1.1) in the special case where the objective function ϕ is smooth and the constraint set is given by

G(x) := {y ∈ Y | ϕi(x, y) ≤ 0, i = 1, . . . , m;  ϕi(x, y) = 0, i = m + 1, . . . , m + r},  (3.1)

with ϕi : X × Y → R (i = 1, . . . , m + r) being some given smooth functions. Such problems are called smooth programming problems.

Definition 3.1. The classical Lagrangian is defined by setting

L(x, y, λ) = ϕ(x, y) + λ1 ϕ1(x, y) + · · · + λm+r ϕm+r(x, y),  (3.2)

where the scalars λ1, . . . , λm+r (and also the vector λ := (λ1, . . . , λm+r) ∈ Rm+r) are called the Lagrange multipliers.

Given a point (x̄, ȳ) ∈ gph M in the graph of the solution map M(·), we consider the set of Lagrange multipliers

Λ(x̄, ȳ) := {λ ∈ Rm+r | ∇_y L(x̄, ȳ, λ) := ∇_y ϕ(x̄, ȳ) + Σ_{i=1}^{m+r} λi ∇_y ϕi(x̄, ȳ) = 0; λi ≥ 0 and λi ϕi(x̄, ȳ) = 0 for i = 1, . . . , m}.  (3.3)

Definition 3.2. We say that the Mangasarian-Fromovitz constraint qualification condition holds at (x̄, ȳ) if

(i) the gradients ∇ϕm+1(x̄, ȳ), . . . , ∇ϕm+r(x̄, ȳ) are linearly independent;

(ii) there is w ∈ X × Y such that ⟨∇ϕi(x̄, ȳ), w⟩ = 0 for i = m + 1, . . . , m + r, and ⟨∇ϕi(x̄, ȳ), w⟩ < 0 whenever i = 1, . . . , m with ϕi(x̄, ȳ) = 0.  (3.4)

Definition 3.3. We say that the solution map M : dom G ⇒ Y admits a local upper Lipschitzian selection at (x̄, ȳ) if there exists a single-valued mapping h : dom G → Y which satisfies h(x̄) = ȳ and for which there are constants ℓ > 0, δ > 0 such that h(x) ∈ M(x) and ‖h(x) − h(x̄)‖ ≤ ℓ ‖x − x̄‖ for all x ∈ dom G ∩ Bδ(x̄). Here Bδ(x̄) := {x ∈ X | ‖x − x̄‖ < δ}.

The next statement follows from [6, Theorem 4.1].

Theorem 3.4 (Fréchet subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let µ(·) be defined by (1.2). Take x̄ ∈ dom M and ȳ ∈ M(x̄), and assume that the gradients

∇ϕ1(x̄, ȳ), . . . , ∇ϕm+r(x̄, ȳ)  (3.5)

are linearly independent. Then we have the inclusion

∂̂µ(x̄) ⊂ ∪_{λ∈Λ(x̄,ȳ)} {∇_x ϕ(x̄, ȳ) + Σ_{i=1}^{m+r} λi ∇_x ϕi(x̄, ȳ)}.  (3.6)

Furthermore, (3.6) reduces to the equality

∂̂µ(x̄) = ∪_{λ∈Λ(x̄,ȳ)} {∇_x ϕ(x̄, ȳ) + Σ_{i=1}^{m+r} λi ∇_x ϕi(x̄, ȳ)}  (3.7)

if the solution map M : dom G ⇒ Y admits a local upper Lipschitzian selection at (x̄, ȳ).

From [6, Corollary 4.3] we obtain the following result.

Corollary 3.5. Under the assumptions imposed in the first part of Theorem 3.4, suppose that the spaces X and Y are Asplund, and that the qualification condition (3.5) is replaced by the Mangasarian-Fromovitz constraint qualification (3.4). Then we have inclusion (3.6), which reduces to the equality (3.7) provided that the solution map M : dom G ⇒ Y admits a local upper Lipschitzian selection at (x̄, ȳ).
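For a one-dimensional parameter x and purely inequality-constrained problems, the right-hand side of (3.6) is the image of the polyhedron Λ(x̄, ȳ) from (3.3) under a linear map, so its endpoints can be found by linear programming. A minimal sketch (ours; it assumes SciPy and takes the gradient data of Example 3.6 below at x̄ = 0, ȳ = (0, 0), where both constraints are active):

```python
import numpy as np
from scipy.optimize import linprog

# Endpoints of the right-hand side of (3.6) over Lambda(xbar, ybar) =
# {lam >= 0 | grad_y phi + sum_i lam_i grad_y phi_i = 0}; both inequality
# constraints are active at (xbar, ybar), so complementarity is automatic.
# Data of Example 3.6 below; gradient order is (y1, y2).

grad_y_phi = np.array([0.0, -1.0])               # grad_y of phi  = -y2
grad_y_con = np.array([[0.0, 1.0], [0.0, 1.0]])  # grad_y of phi1, phi2
grad_x_phi = 0.0                                 # grad_x of phi
grad_x_con = np.array([0.0, -1.0])               # grad_x of phi1, phi2

A_eq, b_eq = grad_y_con.T, -grad_y_phi           # multiplier equation of (3.3)
lo = linprog(grad_x_con, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
hi = linprog(-grad_x_con, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
print(grad_x_phi + lo.fun, grad_x_phi - hi.fun)  # -1.0 0.0, i.e. RHS = [-1, 0]
```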
Let us consider some examples of smooth programming problems illustrating the results obtained in Theorem 3.4 and Corollary 3.5 and the assumptions made therein. We start with examples showing that the upper Lipschitzian assumption of Theorem 3.4 is essential but not necessary to ensure the equality in the Fréchet subgradient inclusion (3.6). For convenience, denote by "RHS" and "LHS" the expressions standing on the right-hand side and left-hand side of inclusion (3.6), respectively.

Example 3.6 (cf. [2, Example 3.4]). Let X = R, Y = R², x̄ = 0 and ȳ = (0, 0). Consider the marginal function µ(·) in (1.2) with ϕ(x, y) = −y2, y = (y1, y2) ∈ G(x), where

G(x) := {y = (y1, y2) ∈ R² | ϕ1(x, y) = y2 − y1² ≤ 0, ϕ2(x, y) = y2 + y1² − x ≤ 0}.

Then we have

µ(x) = −x if x ≤ 0, and µ(x) = −x/2 otherwise;

M(x) = {y = (y1, y2) ∈ G(x) | y2 = x} if x ≤ 0, and M(x) = {y ∈ G(x) | y2 = x/2} otherwise;

Λ(x̄, ȳ) = {(t, 1 − t) | 0 ≤ t ≤ 1}.

Furthermore, the gradients ∇ϕ1(x̄, ȳ) = (0, 0, 1) and ∇ϕ2(x̄, ȳ) = (−1, 0, 1) are linearly independent. Hence RHS = [−1, 0]. On the other hand, a direct computation based on (2.1) gives LHS = [−1, −1/2], i.e., inclusion (3.6) is strict. Observe that the solution map M(·) as above does not admit any upper Lipschitzian selection at (x̄, ȳ). This example shows that the latter assumption is essential for the validity of the equality in (3.6) in Theorem 3.4.

Example 3.7. Let X = Y = R and x̄ = ȳ = 0. Consider the marginal function µ(·) in (1.2) with ϕ(x, y) = (x − y²)², G(x) = {y ∈ R | ϕ1(x, y) = −(1 + y)² ≤ 0}. One can easily deduce from (1.2) and (1.3) that

µ(x) = x² if x ≤ 0, and µ(x) = 0 otherwise;

M(x) = {0} if x ≤ 0, and M(x) = {−√x, √x} otherwise;

Λ(x̄, ȳ) = {0}.

Furthermore, ∇ϕ1(x̄, ȳ) = (0, −2) ≠ (0, 0). Hence RHS = {0}. Besides, LHS = ∂̂µ(0) = {0}. Thus (3.6) holds as an equality although the solution map M(·) does not admit any upper Lipschitzian selection at (x̄, ȳ). We have seen that the upper Lipschitzian assumption is sufficient but not necessary for the equality assertion of Theorem 3.4.
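The closed forms above are easy to cross-check numerically. A small sketch (ours, not from the paper; it assumes NumPy) confirms the formula for µ in Example 3.7 by direct minimization: the constraint −(1 + y)² ≤ 0 holds for every y, so G(x) = R.

```python
import numpy as np

# Brute-force check of mu in Example 3.7: mu(x) = x**2 for x <= 0 and
# mu(x) = 0 for x > 0.  Since G(x) = R, we simply minimize (x - y**2)**2
# over a fine grid of y.

ys = np.linspace(-2.0, 2.0, 2_000_001)
for x in (-0.5, -0.1, 0.0, 0.1, 0.5):
    mu_grid = ((x - ys**2) ** 2).min()
    mu_formula = x**2 if x <= 0 else 0.0
    print(f"x = {x:5.2f}   grid: {mu_grid:.6f}   formula: {mu_formula:.6f}")
```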
The next example shows that (3.4) is weaker than (3.5).

Example 3.8. Let X = R², Y = R², x̄ = (0, 0) and ȳ = (0, 0). Consider the marginal function µ(·) in (1.2) with ϕ(x, y) = −y2, y = (y1, y2) ∈ G(x),

G(x) := {y = (y1, y2) ∈ R² | ϕ1(x, y) = y2 + y1⁴ x1 + g(y1) ≤ 0, ϕ2(x, y) = y2 − y1⁴ − x2 ≤ 0, ϕ3(x, y) = y1² − 5 ≤ 0, ϕ4(x, y) = −y2 − 1 ≤ 0},

where g(y1) = 0 if y1 = 0 and g(y1) = y1⁴ sin⁴(2π/y1) otherwise. Then the gradients

∇ϕ1(x̄, ȳ) = (0, 0, 0, 1), ∇ϕ2(x̄, ȳ) = (0, −1, 0, 1), ∇ϕ3(x̄, ȳ) = (0, 0, 0, 0), ∇ϕ4(x̄, ȳ) = (0, 0, 0, −1)

are not linearly independent, i.e., the qualification condition (3.5) of Theorem 3.4 is violated, but (3.4) is satisfied at (x̄, ȳ), i.e., the results of Corollary 3.5 are applicable to this problem. It is easy to find that Λ(x̄, ȳ) = {(t, 1 − t, 0, 0) | 0 ≤ t ≤ 1}. Thus we have an upper estimate for the Fréchet subdifferential of the value function:

∂̂µ(x̄) ⊂ {0} × [−1, 0].

The next theorem follows from [6, Corollary 5.4].

Theorem 3.9 (limiting subgradients of value functions in smooth nonlinear programs in Asplund spaces). Let M(·) be the solution mapping from (1.3) with the constraint mapping G(·) defined by (3.1), where both spaces X and Y are Asplund. Suppose that the Mangasarian-Fromovitz constraint qualification (3.4) is satisfied. Then one has the inclusions

∂µ(x̄) ⊂ ∪_{λ∈Λ(x̄,ȳ)} {∇_x ϕ(x̄, ȳ) + Σ_{i=1}^{m+r} λi ∇_x ϕi(x̄, ȳ)},  (3.8)

∂∞µ(x̄) ⊂ ∪_{λ∈Λ∞(x̄,ȳ)} {Σ_{i=1}^{m+r} λi ∇_x ϕi(x̄, ȳ)},  (3.9)

where the set of multipliers Λ(x̄, ȳ) is defined in (3.3) and where

Λ∞(x̄, ȳ) := {λ ∈ Rm+r | Σ_{i=1}^{m+r} λi ∇_y ϕi(x̄, ȳ) = 0; λi ≥ 0 and λi ϕi(x̄, ȳ) = 0 for i = 1, . . . , m}.

Moreover, (3.8) holds as an equality if M(·) admits a local upper Lipschitzian selection at (x̄, ȳ).
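Condition (3.4) is itself a linear feasibility problem and can be verified mechanically. A sketch (ours; it assumes SciPy) checks it for the data of Example 3.8 at x̄ = (0, 0), ȳ = (0, 0): there are no equality constraints, only ϕ1 and ϕ2 are active, and by homogeneity one may look for a direction w with ⟨∇ϕi(x̄, ȳ), w⟩ ≤ −1 inside a bounded box.

```python
import numpy as np
from scipy.optimize import linprog

# MFCQ check (3.4) for Example 3.8 at xbar = (0,0), ybar = (0,0).  The
# active inequality gradients are grad phi1 = (0,0,0,1) and
# grad phi2 = (0,-1,0,1), in the order (x1, x2, y1, y2).  We search for w
# with <grad phi_i, w> <= -1, a pure feasibility linear program.

active = np.array([[0.0, 0.0, 0.0, 1.0],    # grad phi1 (active)
                   [0.0, -1.0, 0.0, 1.0]])  # grad phi2 (active)

res = linprog(c=np.zeros(4), A_ub=active, b_ub=-np.ones(2),
              bounds=[(-10, 10)] * 4)       # box keeps the LP bounded
print(res.status == 0)                      # True: (3.4) holds
print(res.x)                                # a valid direction; w4 = -1 works
```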
Similarly to Theorem 3.4, the upper Lipschitzian assumption of Theorem 3.9 is sufficient but not necessary to ensure the equality in the inclusion (3.8). Observe that ∂∞µ(x̄) = {0} due to (3.9) if the functions ϕi satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to y, i.e., when the full gradients of ϕi in (3.4) are replaced by ∇_y ϕi(x̄, ȳ).

By the representation (2.5), the results obtained in Theorem 3.9 immediately imply an upper estimate for the Clarke subdifferential of the value function in smooth programming, which extends the well-known results of Gauvin and Dubeau [1, Theorem 5.3] established in finite dimensions.

4. Application to Gauvin-Dubeau's examples

Let us apply the results of Theorems 3.4, 3.9 and Corollary 3.5 to compute or estimate the Fréchet, the limiting, and the Clarke subdifferentials of the value function in the "pathological" examples from Gauvin and Dubeau [2].

Example 4.1 (see [2, Example 2.1]). Let X = Y = R. Consider the problem

minimize ϕ(x, y) = −y subject to y ∈ G(x), G(x) = {y ∈ R | ϕ1(x, y) = g(y) − x ≤ 0},

where g(y) = −(y + 1/2)² + 5/4 if y ≤ 0, and g(y) = e^(−y) otherwise. We have µ(x) = inf {ϕ(x, y) = −y | y ∈ G(x)}. One can find that

G(x) = R if x ≥ 5/4;
G(x) = (−∞, −1/2 − √(5/4 − x)] ∪ [−1/2 + √(5/4 − x), +∞) if 1 ≤ x < 5/4;
G(x) = (−∞, −1/2 − √(5/4 − x)] ∪ [−ln x, +∞) if 0 < x < 1;
G(x) = (−∞, −1/2 − √(5/4 − x)] if x ≤ 0;

µ(x) = −∞ if x > 0, and µ(x) = 1/2 + √(5/4 − x) otherwise;

M(x) = {y ∈ R | y = −1/2 − √(5/4 − x)} if x ≤ 0, and M(x) = ∅ otherwise.

Let x̄ = 0 and ȳ = −1/2 − √5/2. Note that ∇ϕ1(x̄, ȳ) = (−1, √5) ≠ (0, 0), and the solution map M(·) does not admit any upper Lipschitzian selection at (0, −1/2 − √5/2). From (3.3) it follows that Λ(x̄, ȳ) = {1/√5}. Hence, applying the results of Theorem 3.4 we obtain ∂̂µ(x̄) ⊂ {−1/√5}. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get ∂µ(x̄) ⊂ {−1/√5}.

Remark 1. A direct computation based on (2.1) and (2.2) gives ∂̂µ(x̄) = ∂µ(x̄) = ∅.
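The emptiness in Remark 1 reflects the fact that µ jumps to −∞ on every right-neighbourhood of x̄ = 0. A sampling sketch (ours, not from the paper; it assumes NumPy and truncates the search to a large interval) makes this visible.

```python
import numpy as np

# In Example 4.1, G(x) contains [-ln x, +inf) for 0 < x < 1, so
# phi(x, y) = -y is unbounded below and mu(x) = -inf for x > 0, while
# mu(x) = 1/2 + sqrt(5/4 - x) for x <= 0.  We evaluate mu on a truncated
# grid; the value at x = 0.01 only grows more negative as ymax increases.

def g(y):
    return np.where(y <= 0.0,
                    -(y + 0.5) ** 2 + 1.25,
                    np.exp(-np.clip(y, 0.0, None)))  # clip avoids overflow warnings

def mu_truncated(x, ymax=1e3, n=2_000_001):
    ys = np.linspace(-ymax, ymax, n)
    feasible = ys[g(ys) - x <= 0.0]
    return -feasible.max() if feasible.size else np.inf

for x in (-0.5, 0.0, 0.01):
    print(x, mu_truncated(x))  # ~1.823, ~1.618 = 1/2 + sqrt(5)/2, ~-1000 (-> -inf)
```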
Example 4.2 (see [2, Example 2.2]). Let X = R, Y = R². Consider the problem

minimize ϕ(x, y) = −y2 subject to y = (y1, y2) ∈ G(x),

G(x) = {y = (y1, y2) ∈ R² | ϕ1(x, y) = (y1² + (y2 − 1)² − 1/4)(y1² + (y2 + 1)² − 1) ≤ 0, ϕ2(x, y) = y1 − x = 0, x ≥ 0}.

We have

µ(x) = −1 − √(1/4 − x²) if 0 ≤ x ≤ 1/2; µ(x) = 1 − √(1 − x²) if 1/2 < x ≤ 1; µ(x) = +∞ if x > 1;

M(x) = {y = (y1, y2) ∈ G(x) | y2 = 1 + √(1/4 − x²)} if 0 ≤ x ≤ 1/2; M(x) = {y ∈ G(x) | y2 = −1 + √(1 − x²)} if 1/2 < x ≤ 1; M(x) = ∅ if x > 1.

For x̄ := 1/2 and ȳ := (1/2, 1) ∈ M(x̄), we see that

∇ϕ1(x̄, ȳ) = (0, 13/4, 0), ∇ϕ2(x̄, ȳ) = (−1, 1, 0)

are linearly independent, and Λ(1/2, (1/2, 1)) = ∅. Hence, by Theorem 3.4 we obtain ∂̂µ(x̄) = ∅. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get ∂µ(x̄) = ∅. Taking into account the fact that ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we obtain ∂CL µ(x̄) = ∅.

For x̄ := 1 and ȳ := (1, −1) ∈ M(x̄), we obtain ∂̂µ(x̄) = ∂µ(x̄) = ∂CL µ(x̄) = ∅.

Example 4.3 (see [2, Example 3.1]). Taking X = Y = R, we consider the problem

minimize ϕ(x, y) = −y subject to y ∈ G(x), G(x) = {y ∈ R | ϕ1(x, y) = y³ − x ≤ 0}.

We have

G(x) = {y ∈ R | y ∈ (−∞, ∛x]}; µ(x) = −∛x; M(x) = {y ∈ R | y = ∛x}.
For x̄ := 0 and ȳ := 0, we see that ∇ϕ1(x̄, ȳ) = (−1, 0) and Λ(0, 0) = ∅. Hence, applying Theorem 3.4 we obtain ∂̂µ(x̄) = ∅. Similarly, using Theorem 3.9 instead of Theorem 3.4, we get ∂µ(x̄) = ∅. Taking into account the fact that ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we obtain ∂CL µ(x̄) = ∅.

Example 4.4 (see [2, Example 3.2]). Taking X = R, Y = R², we consider the problem

minimize ϕ(x, y) = −y2 subject to y = (y1, y2) ∈ G(x),

G(x) = {y = (y1, y2) ∈ R² | ϕ1(x, y) = y1² − 100 ≤ 0, ϕ2(x, y) = g(y1) − y2 = 0, ϕ3(x, y) = (y1⁸ − x) y2 = 0},

where g(y1) = 0 if y1 = 0 and g(y1) = y1⁴ cos(2π/y1) otherwise. One can find that

G(x) = {(y1, 0) ∈ R² | y1 = 0 or y1 = 4/(2k + 1), k = 0, ±1, ±2, . . .} if x ≤ 0, and G(x) = G(0) ∪ {(±⁸√x, g(⁸√x)) | ⁴√x ≤ 100} if x > 0;

µ(x) = 0 if x ≤ 0, and µ(x) = min {0, −g(⁸√x)} otherwise;

M(x) = {(y1, 0) ∈ R² | y1 = 0 or y1 = 4/(2k + 1), k = 0, ±1, ±2, . . .} if x ≤ 0, and M(x) = {(y1, y2) ∈ G(x) | y2 = max {0, g(⁸√x)}} otherwise.

For x̄ := 0 and ȳ := (4/(2k + 1), 0), k = 0, ±1, ±2, . . ., with k = 2n, n = 0, ±1, ±2, . . ., note that

∇ϕ1(x̄, ȳ) = (0, 8/(4n + 1), 0), ∇ϕ2(x̄, ȳ) = (0, 32π/(4n + 1)², −1), ∇ϕ3(x̄, ȳ) = (0, 0, (4/(4n + 1))⁸)

are linearly independent, and the solution map M(·) does not admit any upper Lipschitzian selection at (0, (4/(4n + 1), 0)). From (3.3) it follows that

Λ(0, (4/(4n + 1), 0)) = {(0, 0, ((4n + 1)/4)⁸)}.

Hence, applying the results of Theorem 3.4 we obtain ∂̂µ(x̄) ⊂ {0}. Similarly, with k = 2n + 1, n = 0, ±1, ±2, . . ., we obtain ∂̂µ(x̄) ⊂ {0}. Using (3.8) and (3.9) in Theorem 3.9 and computing similarly as above, we get ∂µ(x̄) ⊂ {0} and ∂∞µ(x̄) ⊂ {0}.
Taking into account the fact that ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we obtain ∂CL µ(x̄) ⊂ {0}.

Remark 2. Applying (2.1) and (2.2) again, we see that 0 belongs neither to ∂̂µ(x̄) nor to ∂µ(x̄), i.e., ∂̂µ(x̄) = ∂µ(x̄) = ∅. This implies ∂CL µ(x̄) = ∅. Therefore, the inclusions (3.6) and (3.8) are strict. The inclusion (3.9) holds as an equality (i.e., ∂∞µ(x̄) = {0}) because the functions ϕi, i = 1, 2, 3, satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to y.

Example 4.5 (see [2, Example 3.3]). Let X = R², Y = R². Consider the problem

minimize ϕ(x, y) = −y2 subject to y = (y1, y2) ∈ G(x),

where

G(x) = {y = (y1, y2) ∈ R² | ϕ1(x, y) = y2 + y1⁴ x1 + g(y1) ≤ 0, ϕ2(x, y) = y2 − y1⁴ − x2 ≤ 0, ϕ3(x, y) = y1² − 5 ≤ 0, ϕ4(x, y) = −y2 − 1 ≤ 0},

with g(y1) = 0 if y1 = 0 and g(y1) = y1⁴ sin⁴(2π/y1) otherwise (this is the problem of Example 3.8 above). For x̄ := (0, 0), the optimal solution set is

M(x̄) = {(0, 0)} ∪ {(2/k, 0) | k = ±1, ±2, . . .}.

For ȳ := (2/k, 0), k = ±1, ±2, . . ., we see that

∇ϕ1(x̄, ȳ) = (16/k⁴, 0, 0, 1), ∇ϕ2(x̄, ȳ) = (0, −1, −32/k³, 1), ∇ϕ3(x̄, ȳ) = (0, 0, 4/k, 0), ∇ϕ4(x̄, ȳ) = (0, 0, 0, −1)

are linearly independent. Besides, Λ(x̄, (2/k, 0)) = {(1, 0, 0, 0)}. It follows from Theorem 3.4 that

∂̂µ(x̄) ⊂ {(16/k⁴, 0)}.

Using (3.8) and (3.9) and performing a similar computation as above, we get

∂µ(x̄) ⊂ {(16/k⁴, 0)}, ∂∞µ(x̄) ⊂ {(0, 0)}.

Since ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we have

∂CL µ(x̄) ⊂ {(16/k⁴, 0)}.

For ȳ := (0, 0), we see that

∇ϕ1(x̄, ȳ) = (0, 0, 0, 1), ∇ϕ2(x̄, ȳ) = (0, −1, 0, 1), ∇ϕ3(x̄, ȳ) = (0, 0, 0, 0), ∇ϕ4(x̄, ȳ) = (0, 0, 0, −1).
So (3.4) is satisfied at (x̄, ȳ), i.e., the results of Corollary 3.5 are applicable to this problem. It is easy to find that Λ(x̄, (0, 0)) = {(t, 1 − t, 0, 0) | 0 ≤ t ≤ 1}. Hence, by Corollary 3.5 we get

∂̂µ(x̄) ⊂ {0} × [−1, 0].

Using (3.8) and (3.9) we have

∂µ(x̄) ⊂ {0} × [−1, 0], ∂∞µ(x̄) ⊂ {(0, 0)}.

Since ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we get

∂CL µ(x̄) ⊂ {0} × [−1, 0].

Remark 3. The inclusion (3.9) holds as an equality (i.e., ∂∞µ(x̄) = {(0, 0)}) because the functions ϕi, i = 1, 2, 3, 4, satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to y.

Example 4.6 (see [2, Example 3.4]). Taking X = R, Y = R², we consider the problem

minimize ϕ(x, y) = −y2 subject to y = (y1, y2) ∈ G(x),

G(x) = {y = (y1, y2) ∈ R² | ϕ1(x, y) = y2 − y1² ≤ 0, ϕ2(x, y) = y2 + y1² − x ≤ 0}.

It holds that

µ(x) = −x if x ≤ 0, and µ(x) = −x/2 otherwise;

M(x) = {y = (y1, y2) ∈ G(x) | y2 = x} if x ≤ 0, and M(x) = {y ∈ G(x) | y2 = x/2} otherwise.

For x̄ := 0 and ȳ := (0, 0), we see that ∇ϕ1(x̄, ȳ) = (0, 0, 1) and ∇ϕ2(x̄, ȳ) = (−1, 0, 1) are linearly independent, and the solution map M(·) does not admit any upper Lipschitzian selection at (x̄, ȳ). From (3.3) it follows that Λ(0, (0, 0)) = {(t, 1 − t) | 0 ≤ t ≤ 1}. Hence, by Theorem 3.4 we get

∂̂µ(x̄) ⊂ [−1, 0].

Using (3.8) and (3.9) we obtain

∂µ(x̄) ⊂ [−1, 0], ∂∞µ(x̄) ⊂ {0}.

Since ∂CL µ(x̄) = co [∂µ(x̄) + ∂∞µ(x̄)], we get ∂CL µ(x̄) ⊂ [−1, 0].

Remark 4. In this example, the inclusions (3.6) and (3.8) are strict. A direct computation based on (2.1) and (2.2) gives ∂̂µ(x̄) = ∂µ(x̄) = [−1, −1/2]. The inclusion (3.9) holds as an equality (i.e., ∂∞µ(x̄) = {0}) because ϕ1 and ϕ2 satisfy the (partial) Mangasarian-Fromovitz constraint qualification with respect to y.
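Remark 4 can be cross-checked numerically. The sketch below (ours, heuristic, not from the paper; it assumes NumPy) estimates the Clarke directional derivative of µ at 0 by sampling the difference quotients in (2.4); since ∂CL µ(0) = [−µ°(0; −1), µ°(0; 1)] in one dimension, this recovers [−1, −1/2], strictly inside the estimate [−1, 0] above.

```python
import numpy as np

# Sampling estimate of the Clarke directional derivative from (2.4),
#   mu°(0; v) = limsup_{x' -> 0, t -> 0+} [mu(x' + t v) - mu(x')] / t,
# for the marginal function of Example 4.6: mu(x) = -x (x <= 0), -x/2 (x > 0).
# The max over random samples only approximates the limsup.

rng = np.random.default_rng(1)

def mu(x):
    return np.where(x <= 0.0, -x, -x / 2.0)

def clarke_dd(f, xbar, v, radius=1e-4, n=200_000):
    xs = xbar + rng.uniform(-radius, radius, n)
    ts = rng.uniform(1e-9, radius, n)
    return ((f(xs + ts * v) - f(xs)) / ts).max()

print(-clarke_dd(mu, 0.0, -1.0), clarke_dd(mu, 0.0, 1.0))  # ~ -1.0  -0.5
```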
Acknowledgement. I would like to thank Prof. N. D. Yen and Dr. N. Q. Huy for helpful discussions on the topic and useful remarks.

References

[1] J. Gauvin, F. Dubeau, Differential properties of the marginal function in mathematical programming, Math. Program. Study, 19 (1982), 101-119.

[2] J. Gauvin, F. Dubeau, Some examples and counterexamples for the stability analysis of nonlinear programming problems, Math. Program. Study, 21 (1984), 69-78.

[3] J. Gauvin, J. W. Tolle, Differential stability in nonlinear programming, SIAM J. Control and Optim., 15 (1977), 294-311.

[4] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, I: Basic Theory, Springer, Berlin, 2006.

[5] B. S. Mordukhovich, Variational Analysis and Generalized Differentiation, II: Applications, Springer, Berlin, 2006.

[6] B. S. Mordukhovich, N. M. Nam, N. D. Yen, Subgradients of marginal functions in parametric mathematical programming, to appear in Mathematical Programming (2007).

Summary

Limiting subdifferentials of the optimal value function in some "pathological" smooth programming problems

In this paper we show that the results of Mordukhovich, Nam and Yen [6] on differential stability in parametric programming can be used to derive upper estimates for the limiting subdifferential of the optimal value function in some "pathological" smooth programming problems proposed by Gauvin and Dubeau [2].

(a) Department of Mathematics, Dong Thap Pedagogical University