HPU2. Nat. Sci. Tech. Vol. 04, issue 01 (2025), 20-30.
HPU2 Journal of Sciences: Natural Sciences and Technology
Journal homepage: https://sj.hpu2.edu.vn
https://doi.org/10.56764/hpu2.jos.2024.4.1.20-30
Article type: Research article
Received date: 22-11-2024; Revised date: 25-12-2024; Accepted date: 25-3-2025
This article is licensed under CC BY-NC 4.0
A Lagrange function approach to study second-order optimality
conditions for infinite-dimensional optimization problems
Duc-Tam Luongᡃ, Thu-Loan Vu Thiᡇ, Thi-Ngoan Dangᢜ*
ᡃ Yen Bai College, Yen Bai, Vietnam
ᡇ Thai Nguyen University of Agriculture and Forestry, Thai Nguyen, Vietnam
ᢜ Phenikaa University, Hanoi, Vietnam
* Corresponding author, E-mail: ngoan.dangthi@phenikaa-uni.edu.vn
Abstract
In this paper, we focus on second-order optimality conditions for infinite-dimensional optimization problems constrained by generalized polyhedral convex sets. Our aim, inspired by the findings of other authors, is to further explore the role of the generalized polyhedral convexity property. To this end, we employ the FrΓ©chet second-order subdifferential, a tool from variational analysis, to establish optimality conditions. Furthermore, by applying this concept to the Lagrange function associated with the problem, we derive refined optimality conditions that improve upon existing results. The special properties of generalized polyhedral convex sets play a crucial role in enabling these improvements.
Keywords: Constrained optimization problem, Second-order necessary condition, Second-order
sufficient condition, FrΓ©chet second-order subdifferential, Generalized polyhedral convex set
1. Introduction
First- and second-order optimality conditions are fundamental and intriguing topics in both finite-
dimensional and infinite-dimensional mathematical programming. Due to their critical role in both
theoretical developments and practical applications, these conditions have attracted significant research
interest [1]–[8]. Many researchers have sought to extend these conditions to more general settings, as
seen in [9]–[13] and the references therein. It is recognized that first-order and second-order optimality
conditions are essential tools for solving optimization problems. In addition, the theory of optimality
conditions, especially second-order conditions, is closely linked with sensitivity analysis, see, e.g., [3]
and [8]. In fact, various results concerning optimality conditions were obtained as by-products of research on sensitivity analysis. Moreover, second-order analysis is crucial in studying the convergence
properties of algorithms for solving optimization problems.
Bonnans and Shapiro [3] first introduced the concept of generalized polyhedral convex sets. In a
topological vector space X, a set Ξ© is considered a generalized polyhedral convex set if it can be
expressed as the intersection of a finite collection of closed half-spaces and a closed affine subspace.
When the affine subspace encompasses the entire space, the set is specifically termed a polyhedral
convex set. While these two concepts are identical in finite-dimensional spaces, they exhibit distinct
characteristics in infinite-dimensional spaces, where generalized polyhedral convex sets do not reduce
to the standard polyhedral convex sets. Several applications of generalized polyhedral convex sets in
Banach space settings can be found in the works by Ban et al. [14] and [15]. The theories of generalized linear programming and generalized quadratic programming developed in [3], [16], [17] are mainly based on this concept. The optimization problems discussed in this paper involving generalized polyhedral convex
constraint sets are important in optimization theory (see, for example, [18], where full stability of the
local minimizers of such problems was characterized). For further information on the properties of
generalized polyhedral convex sets and generalized polyhedral convex functions (the functions whose
epigraphs are generalized polyhedral convex sets), we refer the interested reader to [19] and [20].
The main goal of this paper is to investigate second-order optimality conditions for infinite-dimensional optimization problems in which the constraint set is generalized polyhedral convex. Our aim of further exploring the role of the generalized polyhedral convexity property is inspired by the findings presented in [1]. In addition, Lagrange multipliers have been widely used to establish optimality conditions for constrained problems, so it is interesting to examine how their application and significance can be understood from various perspectives.
The paper is organized as follows: Section 2 provides the foundational groundwork by introducing
essential definitions and auxiliary results. Section 3 delves into the core results of the paper, and the
concluding section summarizes the key findings.
2. Preliminaries
Let X and Y be Banach spaces over the field of real numbers, with duals denoted by X* and Y*, respectively. Let A be a nonempty subset of X. The set A is said to be a cone if Ξ±A βŠ‚ A for every Ξ± > 0. We write conv A for the convex hull of A and, following the notation of [21], cone A for the smallest convex cone containing A; thus cone A = {tx ∣ t > 0, x ∈ conv A}. Let β„• denote the set of positive integers. Given a linear operator T between Banach spaces, ker T and rge T denote the kernel and the range of T, respectively.
Firstly, we recall the concept of the contingent cone, on which our second-order optimality conditions are based.
Definition 2.1. (See [8]) Let A βŠ‚ X and x ∈ A. A direction v is called tangent to A at x if there exist a sequence of points x_k ∈ A and a sequence of scalars t_k β‰₯ 0, k ∈ β„•, such that t_k β†’ 0 and v = lim_{kβ†’βˆž} t_k^{-1}(x_k βˆ’ x).
The set of all tangent directions to 𝐴 at π‘₯, denoted by 𝑇(𝐴,π‘₯), is called the contingent cone (or the
Bouligand-Severi tangent cone, see [22]) to 𝐴 at π‘₯.
Remark 2.1. From the definition, it is not hard to show that a vector v ∈ T(A, x) if and only if we can find a sequence {t_k} of positive scalars and a sequence of vectors {v_k} with t_k β†’ 0 and v_k β†’ v as k β†’ ∞ such that x_k := x + t_k v_k belongs to A for all k ∈ β„•.
Secondly, we recall the concept of the generalized polyhedral convex set, which is the main object of study in this paper.
Definition 2.2. (See [3] and [21]) A subset A βŠ‚ X is called a generalized polyhedral convex set if there exist u_i* ∈ X*, real numbers Ξ²_i, i = 1, 2, …, p, and a closed affine subspace L βŠ† X, such that
A = {u ∈ X ∣ u ∈ L, ⟨u_i*, u⟩ ≀ Ξ²_i, i = 1, 2, …, p}.   (1)
In the case 𝐿=𝑋, one says that 𝐴 is a polyhedral convex set.
Remark 2.2. (i) Every generalized polyhedral convex set is closed.
(ii) If 𝑋 is finite-dimensional, it has been shown in [21] that a subset π΄βŠ‚π‘‹ is generalized
polyhedral convex if and only if it is polyhedral convex.
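For illustration (this example is not taken from the paper), let X = β„“Β², let L = {u ∈ β„“Β² ∣ u_{2k} = 0 for all k ∈ β„•}, which is a closed subspace of infinite codimension, and consider the single inequality ⟨e_1, u⟩ ≀ 1, where e_1 = (1, 0, 0, …). The set A = {u ∈ L ∣ u_1 ≀ 1} is generalized polyhedral convex but not polyhedral convex: a nonempty intersection of finitely many closed half-spaces of β„“Β² always contains a translate of a closed subspace of finite codimension, whereas A is contained in the infinite-codimensional subspace L.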
Let the generalized polyhedral convex set A be given by (1). By [3], there exist a continuous surjective linear mapping T from X onto a Banach space Y and a vector v ∈ Y such that L = {u ∈ X ∣ Tu = v}. So,
A = {u ∈ X ∣ Tu = v, ⟨u_i*, u⟩ ≀ Ξ²_i, i = 1, 2, …, p}.   (2)
Set I = {1, 2, …, p} and I(u) := {i ∈ I ∣ ⟨u_i*, u⟩ = Ξ²_i} for any u ∈ A. From now on, our work will focus on the generalized polyhedral convex set A which has the form (2).
Given a point π‘’β€Ύβˆˆπ΄, the following proposition gives the formula for computing the tangent cone
to the generalized polyhedral convex set 𝐴 at 𝑒‾.
Proposition 2.1. (See, e.g., [1], [14]) Let X be a Banach space and let A be a generalized polyhedral convex set in X of the form (2). Given ū ∈ A, the tangent cone to A at ū is
T(A, ū) = {v ∈ X ∣ Tv = 0, ⟨u_i*, v⟩ ≀ 0, i ∈ I(ū)}.
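As a simple illustration of this formula (not taken from the paper), let X = ℝ², Y = ℝ, Tx = x_1 + x_2, v = 1, p = 1, u_1* = (1, 0) and Ξ²_1 = 0, so that A = {x ∈ ℝ² ∣ x_1 + x_2 = 1, x_1 ≀ 0} is the ray emanating from ū = (0, 1) in the direction (βˆ’1, 1). Here I(ū) = {1}, and the proposition gives T(A, ū) = {w ∈ ℝ² ∣ w_1 + w_2 = 0, w_1 ≀ 0} = {t(βˆ’1, 1) ∣ t β‰₯ 0}, which agrees with the set of tangent directions obtained directly from Definition 2.1.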
Lastly, we recall the FrΓ©chet second-order subdifferential concept and some related constructions
from the book [22].
Definition 2.3. (See [22]) Let A be a nonempty subset of X. For any ū ∈ A, the FrΓ©chet normal cone of A at ū is defined by
N(A, ū) := {u* ∈ X* ∣ lim sup_{u →_A ū} ⟨u*, u βˆ’ ū⟩ / β€–u βˆ’ ūβ€– ≀ 0},
where u →_A ū means that u β†’ ū and u ∈ A. If ū βˆ‰ A, one says that the set N(A, ū) is the empty set.
If A is a convex set in X, then by [22, Proposition 1.5] one has
N(A, ū) = {u* ∈ X* ∣ ⟨u*, u βˆ’ ū⟩ ≀ 0, βˆ€u ∈ A},
that is, the FrΓ©chet normal cone of A at ū coincides with the normal cone in the sense of convex analysis.
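Continuing the illustration above (again not from the paper), for the ray A = {x ∈ ℝ² ∣ x_1 + x_2 = 1, x_1 ≀ 0} and ū = (0, 1), every u ∈ A has the form ū + t(βˆ’1, 1) with t β‰₯ 0, so the convex-analysis formula gives N(A, ū) = {u* ∈ ℝ² ∣ βˆ’u_1* + u_2* ≀ 0} = {u* ∣ u_2* ≀ u_1*}.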
Let 𝐺:π‘‹β‡‰π‘Œ be a set-valued map with the graph
gph𝐺={(𝑒,𝑣)βˆˆπ‘‹Γ—π‘Œβˆ£π‘£βˆˆπΊ(𝑒)}.
The product space π‘‹Γ—π‘Œ is equipped with the norm β€–(𝑒,𝑣)β€–:=‖𝑒‖+‖𝑣‖.
Definition 2.4. (See [22]) Given (u, v) ∈ gph G, the FrΓ©chet coderivative of G at (u, v) is the multifunction D*G(u, v): Y* ⇉ X* given by
D*G(u, v)(v*) := {u* ∈ X* ∣ (u*, βˆ’v*) ∈ N(gph G, (u, v))},  βˆ€v* ∈ Y*.
If (u, v) βˆ‰ gph G, then one puts D*G(u, v)(v*) = βˆ… for any v* ∈ Y*.
When G = g: X β†’ Y is a single-valued map, we omit v = g(u) in the above coderivative notation, i.e., we write D*g(u)(v*) instead of D*g(u, g(u))(v*). If g: X β†’ Y is FrΓ©chet differentiable at u, then by [22] one has D*g(u)(v*) = {βˆ‡g(u)*v*} for every v* ∈ Y*, where βˆ‡g(u)* is the adjoint operator of βˆ‡g(u). This formula shows that the coderivative under consideration is an appropriate extension of the adjoint derivative operator from single-valued maps to set-valued maps.
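For instance (an illustration not taken from the paper), if g: ℝ² β†’ ℝ is given by g(u) = u_1 u_2, then βˆ‡g(u) = (u_2, u_1) and, for every v* ∈ ℝ, D*g(u)(v*) = {βˆ‡g(u)*v*} = {(v* u_2, v* u_1)}.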
Consider a function h: X β†’ ℝ‾, where ℝ‾ = [βˆ’βˆž, +∞] is the extended real line. The epigraph of h is given by epi h = {(u, t) ∈ X Γ— ℝ ∣ t β‰₯ h(u)}.
Definition 2.5. (See, e.g., [22]) Let h: X β†’ ℝ‾ be finite at a point u. The FrΓ©chet subdifferential of h at u is given by βˆ‚h(u) := {u* ∈ X* ∣ (u*, βˆ’1) ∈ N(epi h, (u, h(u)))}.
If |β„Ž(𝑒)|=∞ then we put πœ•β„Ž(𝑒)=βˆ….
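As a standard one-dimensional illustration (not taken from the paper), for h(u) = |u| on X = ℝ one has βˆ‚h(0) = [βˆ’1, 1], while βˆ‚h(u) = {1} for u > 0 and βˆ‚h(u) = {βˆ’1} for u < 0.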
Throughout the paper, h ∈ π’žΒΉ means that h is FrΓ©chet differentiable and its gradient mapping βˆ‡h is continuous; similarly, h ∈ π’žΒ² means that h is twice continuously differentiable. If h is a π’žΒΉ function, then for any u with |h(u)| < ∞, the FrΓ©chet subdifferential reduces to the singleton {βˆ‡h(u)} (see [22]).
One can use the notion of coderivative to define the second-order subdifferential of extended-real-
valued functions.
Definition 2.6. (See [22]) Let h: X β†’ ℝ‾ be a function with a finite value at u. For any v ∈ βˆ‚h(u), the mapping βˆ‚Β²h(u, v): X** ⇉ X* with the values
βˆ‚Β²h(u, v)(v*) := (D*βˆ‚h)(u, v)(v*) = {u* ∈ X* ∣ (u*, βˆ’v*) ∈ N(gph βˆ‚h, (u, v))}
is called the FrΓ©chet second-order subdifferential of h at u relative to v.
The symbol v in the notation βˆ‚Β²h(u, v)(v*) will be omitted if βˆ‚h(u) is a singleton. Moreover, if the gradient mapping βˆ‡h: X β†’ X* is FrΓ©chet differentiable at u, then D*(βˆ‡h)(u)(v*) = {βˆ‡Β²h(u)*v*} for every v* ∈ X**, as was mentioned after the definition of the coderivative. In addition, if h ∈ π’žΒ² around u, i.e., h is twice continuously differentiable in an open neighborhood of u, then from this fact and Definition 2.6 one has
βˆ‚Β²h(u)(v*) = {βˆ‡Β²h(u)*v*}  for all v* ∈ X**,
where βˆ‡Β²h(u)* is the adjoint operator of the second-order derivative βˆ‡Β²h(u) of h at u. In particular, when X is finite-dimensional, βˆ‡Β²h(u) reduces to the classical Hessian matrix, for which βˆ‡Β²h(u)* = βˆ‡Β²h(u).
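To make the finite-dimensional case concrete, the following minimal Python sketch (not part of the paper; the quadratic function h and all identifiers are illustrative assumptions) checks the identity βˆ‚Β²h(u)(v) = {βˆ‡Β²h(u)v} for h(u) = Β½u^⊀Qu + c^⊀u by comparing the exact Hessian-vector product Qv with a finite-difference approximation of the directional derivative of the gradient mapping βˆ‡h.

```python
# Minimal sketch (illustrative, not from the paper): in R^n with h of class C^2,
# the Frechet second-order subdifferential reduces to the Hessian action,
# i.e. d^2 h(u)(v) = {Hess h(u) @ v}.  We verify this for a quadratic h.
import numpy as np

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric, so Q* = Q (self-adjoint Hessian)
c = np.array([1.0, -2.0])

def grad_h(u):
    """Gradient of h(u) = 0.5 * u @ Q @ u + c @ u."""
    return Q @ u + c

def hess_vec(u, v, eps=1e-6):
    """Finite-difference directional derivative of the gradient mapping."""
    return (grad_h(u + eps * v) - grad_h(u)) / eps

u = np.array([0.5, -1.0])
v = np.array([1.0, 2.0])
print(Q @ v)            # exact Hessian-vector product: [6. 7.]
print(hess_vec(u, v))   # finite-difference value, equal up to rounding
```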
3. Main results
In this paper, we consider the optimization problem
min {f(x) ∣ x ∈ Ξ©},   (P)
where f: X β†’ ℝ is a π’žΒΉ function and Ξ© βŠ‚ X is a generalized polyhedral convex set.
The Lagrange function corresponding to problem (P) is
β„’(π‘₯,π‘£βˆ—,πœ†)=𝑓(π‘₯)+βŸ¨π‘£βˆ—,𝑇π‘₯βˆ’π‘¦βŸ©+οβ€Š

ο‡œο…€ο„΅ πœ†ο‡œ(βŸ¨π‘’ο‡œβˆ—,π‘₯βŸ©βˆ’π›½ο‡œ)
where πœ†ο„΅,πœ†ο„Ά,…,πœ†ο‡£ are real numbers and π‘£βˆ—βˆˆπ‘Œβˆ—.
The first-order necessary optimality conditions have been previously studied. As the statement in
the general setting is given in [23] without proof, we will present the proof for our case in detail for the
reader’s convenience.
Theorem 3.1. Let Ξ© be a generalized polyhedral convex set given by (2) and let xΜ„ be a local solution of (P). Then there exist multipliers λ̄ = (λ̄_1, λ̄_2, …, λ̄_p) ∈ ℝ^p, λ̄_i β‰₯ 0, and vΜ„* ∈ Y* such that
β„’_x(xΜ„, vΜ„*, λ̄) = βˆ‡f(xΜ„) + T*vΜ„* + βˆ‘_{i=1}^{p} λ̄_i u_i* = 0,  λ̄_i[⟨u_i*, x̄⟩ βˆ’ Ξ²_i] = 0,  i ∈ I,   (3)
where β„’_x denotes the partial derivative of β„’ with respect to the variable x.
Proof. Let π‘₯β€Ύ be a local solution of (P) and Ξ© be given by (2). Noting that Ξ© is convex, by Proposition
5.1 in [24], we have
βˆ’βˆ‡f(xΜ„) ∈ N(Ξ©, xΜ„).   (4)
Since Ξ© is generalized polyhedral, it follows that its normal cone is generalized polyhedral as well.
Thanks to [21], one gets
N(Ξ©, xΜ„) = cone{u_i* ∣ i ∈ I(xΜ„)} + (ker T)βŠ₯   (5)
with (ker T)βŠ₯ being the annihilator of the linear subspace ker T, i.e.,
(ker T)βŠ₯ = {x* ∈ X* ∣ ⟨x*, v⟩ = 0, βˆ€v ∈ ker T}.
Combining (4) with (5) implies that there exist multipliers λ̄_i β‰₯ 0, i ∈ I(xΜ„), satisfying
βˆ‡f(xΜ„) + βˆ‘_{i ∈ I(xΜ„)} λ̄_i u_i* ∈ βˆ’(ker T)βŠ₯.   (6)
Moreover, since T is surjective, by invoking the lemma on the annihilator [23] (see also [25]), one has (ker T)βŠ₯ = rge T*. Hence, x* ∈ βˆ’(ker T)βŠ₯ if and only if we can find vΜ„* ∈ Y* satisfying x* = βˆ’T*vΜ„*. Thus (6) yields the existence of a vector vΜ„* ∈ Y* such that
βˆ‡f(xΜ„) + T*vΜ„* + βˆ‘_{i ∈ I(xΜ„)} λ̄_i u_i* = 0.
Consequently, by choosing λ̄_i = 0 for all i ∈ I βˆ– I(xΜ„), we obtain multipliers λ̄_i β‰₯ 0, i = 1, 2, …, p, and vΜ„* ∈ Y* such that (3) holds for every i ∈ I. ∎
Remark 3.1. The multipliers λ̄ and the functional vΜ„* in Theorem 3.1 are referred to as the Lagrange multipliers. If xΜ„ is a feasible point of (P) and there exists (v*, Ξ») ∈ Y* Γ— ℝ^p satisfying (3), then xΜ„ is called a stationary point of (P). The set of Lagrange multipliers of (P) at xΜ„ ∈ Ξ© is denoted by Ξ›(xΜ„).
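To see Theorem 3.1 at work on a simple instance (this example is not taken from the paper), let Ξ© be the ray A = {x ∈ ℝ² ∣ x_1 + x_2 = 1, x_1 ≀ 0} from the illustration after Proposition 2.1, so that X = ℝ², Y = ℝ, Tx = x_1 + x_2, v = 1, u_1* = (1, 0), Ξ²_1 = 0, and let f(x) = Β½β€–x βˆ’ (1, 0)β€–Β². The (global) solution is xΜ„ = (0, 1), with βˆ‡f(xΜ„) = (βˆ’1, 1) and I(xΜ„) = {1}. Condition (3) becomes (βˆ’1, 1) + vΜ„*(1, 1) + λ̄_1(1, 0) = 0 together with λ̄_1[⟨u_1*, x̄⟩ βˆ’ Ξ²_1] = 0; it is satisfied by vΜ„* = βˆ’1 and λ̄_1 = 2 β‰₯ 0, and the complementarity condition holds since ⟨u_1*, x̄⟩ = 0 = Ξ²_1. Hence Ξ›(xΜ„) = {(βˆ’1, 2)}.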