Iterative methods for constrained convex minimization problem in Hilbert spaces
Fixed Point Theory and Applications volume 2013, Article number: 105 (2013)
Abstract
In this paper, based on Yamada’s hybrid steepest descent method, a general iterative method is proposed for solving the constrained convex minimization problem. It is proved that the sequences generated by the proposed implicit and explicit schemes converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality.
MSC: 58E35, 47H09, 65J15.
1 Introduction
Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. Let C be a nonempty, closed and convex subset of H. We need some nonlinear operators, which are introduced below.
Let $T, F : H \to H$ be nonlinear operators.
- T is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in H$.
- T is Lipschitz continuous if there exists a constant $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in H$.
- F is monotone if $\langle x - y, Fx - Fy\rangle \ge 0$ for all $x, y \in H$.
- Given a number $\eta > 0$, F is $\eta$-strongly monotone if $\langle x - y, Fx - Fy\rangle \ge \eta\|x - y\|^{2}$ for all $x, y \in H$.
- Given a number $\nu > 0$, F is $\nu$-inverse strongly monotone ($\nu$-ism) if $\langle x - y, Fx - Fy\rangle \ge \nu\|Fx - Fy\|^{2}$ for all $x, y \in H$.
It is known that inverse strongly monotone operators have been studied widely (see [1–3]) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).
- T is said to be an averaged mapping if $T = (1 - \alpha)I + \alpha S$, where $\alpha$ is a number in $(0,1)$ and $S : H \to H$ is nonexpansive. In particular, projections are $\frac{1}{2}$-averaged mappings.
Averaged mappings have been investigated extensively; see [6–10].
Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x), \qquad (1.1)$$
where $f : C \to \mathbb{R}$ is a real-valued convex function. Assume that the minimization problem (1.1) is consistent, and let S denote its solution set. It is known that the gradient-projection algorithm is one of the most powerful methods for solving the minimization problem (1.1) (see [11–18]). However, (1.1) may have more than one solution, so regularization is needed; the idea of regularization can be used to design an iterative algorithm that finds the minimum-norm solution of (1.1).
We consider the regularized minimization problem:
$$\min_{x \in C} f_{\alpha}(x) := f(x) + \frac{\alpha}{2}\|x\|^{2}. \qquad (1.2)$$
Here, $\alpha > 0$ is the regularization parameter, and f is a convex function with an L-Lipschitz continuous gradient $\nabla f$. Let $x^{\dagger}$ denote the minimum-norm solution of (1.1); namely, $x^{\dagger} \in S$ satisfies the property
$$\|x^{\dagger}\| = \min\{\|x\| : x \in S\}.$$
$x^{\dagger}$ can be obtained in two steps. First, observing that the gradient $\nabla f_{\alpha} = \nabla f + \alpha I$ is $(L+\alpha)$-Lipschitzian and $\alpha$-strongly monotone, the mapping $P_{C}(I - \gamma\nabla f_{\alpha})$ is a contraction with coefficient $1 - \frac{1}{2}\gamma\alpha$, where $0 < \gamma \le \alpha/(L+\alpha)^{2}$. So the regularized problem (1.2) has a unique solution, which is denoted by $x_{\alpha}$ and which can be obtained via the Banach contraction principle. Second, letting $\alpha \to 0$ yields $x_{\alpha} \to x^{\dagger}$ in norm. The following result shows that, for suitable choices of $\gamma$ and $\alpha$, the minimum-norm solution $x^{\dagger}$ can be obtained by a single step.
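To make the two-step approach concrete, here is a minimal numerical sketch (ours, not the paper’s): for each fixed $\alpha$ we run the Banach iteration for the contraction $P_{C}(I - \gamma\nabla f_{\alpha})$, then let $\alpha$ shrink. The toy problem, the data, and all names below are illustrative assumptions.

```python
import numpy as np

# Assumed toy instance: f(x) = 0.5*||Ax - b||^2 over the box C = [-1, 1]^n,
# so grad f(x) = A^T (Ax - b) is L-Lipschitz with L = ||A||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2

grad_f = lambda x: A.T @ (A @ x - b)
proj_C = lambda x: np.clip(x, -1.0, 1.0)       # metric projection onto C

def solve_regularized(alpha, iters=50000):
    """Banach iteration for the contraction P_C(I - gamma*(grad f + alpha*I))."""
    gamma = alpha / (L + alpha) ** 2           # 0 < gamma <= alpha/(L+alpha)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = proj_C(x - gamma * (grad_f(x) + alpha * x))
    return x

# Step two: let alpha -> 0, so x_alpha -> minimum-norm solution in norm.  Note
# that the contraction modulus 1 - gamma*alpha/2 tends to 1 as alpha -> 0, so
# the inner loop slows down; this motivates the single-step scheme below.
for alpha in (1.0, 0.1, 0.01):
    print(alpha, solve_regularized(alpha))
```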
Theorem 1.1 [9]
Assume that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\nabla f$ is L-Lipschitz continuous. Let $\{x_{n}\}$ be generated by the following iterative algorithm:
$$x_{n+1} = P_{C}\big(x_{n} - \gamma_{n}(\nabla f(x_{n}) + \alpha_{n} x_{n})\big). \qquad (1.3)$$
Let $\{\alpha_{n}\}$ and $\{\gamma_{n}\}$ satisfy the following conditions:
- (i) $0 < \gamma_{n} \le \alpha_{n}/(L+\alpha_{n})^{2}$ for all n;
- (ii) $\alpha_{n} \to 0$ (and $\gamma_{n} \to 0$) as $n \to \infty$;
- (iii) $\sum_{n=1}^{\infty} \alpha_{n}\gamma_{n} = \infty$;
- (iv) $\dfrac{|\gamma_{n} - \gamma_{n-1}| + |\alpha_{n}\gamma_{n} - \alpha_{n-1}\gamma_{n-1}|}{(\alpha_{n}\gamma_{n})^{2}} \to 0$ as $n \to \infty$.
Then $x_{n} \to x^{\dagger}$ as $n \to \infty$, where $x^{\dagger}$ is the minimum-norm solution of (1.1).
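The single-step scheme can be sketched numerically as follows (the toy data are the same illustrative assumptions as above; the particular choices $\alpha_{n} = (n+1)^{-1/3}$ and $\gamma_{n} = \alpha_{n}/(L+\alpha_{n})^{2}$ are ours and are one way to satisfy (i)-(iv)).

```python
import numpy as np

# Hedged sketch of scheme (1.3):
#   x_{n+1} = P_C( x_n - gamma_n * (grad f(x_n) + alpha_n * x_n) ).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
proj_C = lambda x: np.clip(x, -1.0, 1.0)

x = np.zeros(5)
for n in range(1, 200001):
    alpha = (n + 1) ** (-1.0 / 3.0)    # alpha_n -> 0, slowly enough for (iii), (iv)
    gamma = alpha / (L + alpha) ** 2   # condition (i)
    x = proj_C(x - gamma * (grad_f(x) + alpha * x))
print("approximate minimum-norm solution:", x)
```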
In the assumptions of Theorem 1.1, the step-size sequence $\{\gamma_{n}\}$ is forced to tend to zero. If instead it is kept constant, only weak convergence is obtained, as follows.
Theorem 1.2 [19]
Assume that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\nabla f$ is L-Lipschitz continuous. Let $\{x_{n}\}$ be generated by the following iterative algorithm:
$$x_{n+1} = P_{C}\big(x_{n} - \gamma\nabla f(x_{n})\big). \qquad (1.4)$$
Assume that $0 < \gamma < \frac{2}{L}$. Then $\{x_{n}\}$ converges weakly to a solution of the minimization problem (1.1).
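For comparison, a minimal sketch of (1.4) with a constant step (same assumed toy data as above). In finite dimensions weak convergence is convergence, but nothing in (1.4) selects the minimum-norm solution.

```python
import numpy as np

# Hedged sketch of the gradient-projection scheme (1.4):
#   x_{n+1} = P_C( x_n - gamma * grad f(x_n) ),  0 < gamma < 2/L.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2
proj_C = lambda x: np.clip(x, -1.0, 1.0)

x = np.zeros(5)
gamma = 1.0 / L                        # any fixed value in (0, 2/L)
for _ in range(20000):
    x = proj_C(x - gamma * (A.T @ (A @ x - b)))
print("a (not necessarily minimum-norm) minimizer:", x)
```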
In 2001, Yamada [10] introduced the following hybrid steepest descent method:
$$x_{n+1} = Tx_{n} - \mu\lambda_{n}F(Tx_{n}), \qquad (1.5)$$
where F is k-Lipschitzian and $\eta$-strongly monotone, $0 < \mu < 2\eta/k^{2}$, and T is a nonexpansive mapping. It is proved that, under suitable conditions on $\{\lambda_{n}\}$, the sequence $\{x_{n}\}$ generated by (1.5) converges strongly to the unique point $x^{*} \in \operatorname{Fix}(T)$ which solves the variational inequality:
$$\langle Fx^{*}, x - x^{*}\rangle \ge 0, \quad \forall x \in \operatorname{Fix}(T).$$
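A minimal sketch of (1.5) on an assumed toy instance (ours): T is the projection onto the closed unit ball, so $\operatorname{Fix}(T)$ is the ball, and $F(x) = x - a$, which is 1-Lipschitz and 1-strongly monotone; the iterates should then approach $P_{\operatorname{Fix}(T)}(a)$.

```python
import numpy as np

# Hedged sketch of Yamada's hybrid steepest descent (1.5):
#   x_{n+1} = T(x_n) - mu * lambda_n * F(T(x_n)).
a = np.array([3.0, 4.0])
T = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
F = lambda x: x - a                            # k = eta = 1
mu = 1.0                                       # 0 < mu < 2*eta/k**2 = 2

x = np.zeros(2)
for n in range(1, 20000):
    lam = 1.0 / n                              # lambda_n -> 0, sum lambda_n = infinity
    y = T(x)
    x = y - mu * lam * F(y)
print(x, "expected P_{Fix T}(a) =", a / np.linalg.norm(a))
```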
In this paper, we introduce a modification of algorithm (1.4) based on Yamada’s method. It is proved that the sequence generated by the proposed algorithm converges strongly to a minimizer of (1.1) which is also a solution of a certain variational inequality.
2 Preliminaries
In this section, we introduce some useful properties and lemmas, which will be used in the proofs of the main results in the next section.
Proposition 2.1 Let the operators $S, T, V : H \to H$ be given:
- (i) If $T = (1-\alpha)S + \alpha V$ for some $\alpha \in (0,1)$, and if S is averaged and V is nonexpansive, then T is averaged.
- (ii) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_{i}\}_{i=1}^{N}$ is averaged, then so is the composite $T_{1}\cdots T_{N}$. In particular, if $T_{1}$ is $\alpha_{1}$-averaged and $T_{2}$ is $\alpha_{2}$-averaged, where $\alpha_{1}, \alpha_{2} \in (0,1)$, then the composite $T_{1}T_{2}$ is $\alpha$-averaged, where $\alpha = \alpha_{1} + \alpha_{2} - \alpha_{1}\alpha_{2}$.
- (iii) If the mappings $\{T_{i}\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^{N}\operatorname{Fix}(T_{i}) = \operatorname{Fix}(T_{1}\cdots T_{N}).$$
Here, the notation $\operatorname{Fix}(T)$ denotes the set of fixed points of the mapping T; that is, $\operatorname{Fix}(T) = \{x \in H : Tx = x\}$.
Proposition 2.2 Let $T : H \to H$ be given. We have:
- (i) T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism;
- (ii) if T is $\nu$-ism, then, for $\gamma > 0$, $\gamma T$ is $\frac{\nu}{\gamma}$-ism;
- (iii) T is averaged if and only if the complement $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$; indeed, for $\alpha \in (0,1)$, T is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
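As a quick check of (ii) (the computation is ours, not quoted from the paper): if T is $\nu$-ism and $\gamma > 0$, then
$$\langle x - y, \gamma Tx - \gamma Ty\rangle = \gamma\langle x - y, Tx - Ty\rangle \ge \gamma\nu\|Tx - Ty\|^{2} = \frac{\nu}{\gamma}\|\gamma Tx - \gamma Ty\|^{2},$$
so $\gamma T$ is $\frac{\nu}{\gamma}$-ism.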
The so-called demiclosed principle for nonexpansive mappings will often be used.
Lemma 2.3 (Demiclosed Principle [21])
Let C be a closed and convex subset of a Hilbert space H and let $T : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \ne \emptyset$. If $\{x_{n}\}$ is a sequence in C weakly converging to x and if $\{(I - T)x_{n}\}$ converges strongly to y, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.
Recall that the metric (nearest point) projection $P_{C}$ from a real Hilbert space H to a closed convex subset C of H is defined as follows: given $x \in H$, $P_{C}x$ is the unique point in C with the property
$$\|x - P_{C}x\| = \min_{y \in C}\|x - y\|.$$
$P_{C}$ is characterized as follows.
Lemma 2.4 Let C be a closed and convex subset of a real Hilbert space H. Given $x \in H$ and $z \in C$, then $z = P_{C}x$ if and only if there holds the inequality
$$\langle x - z, y - z\rangle \le 0, \quad \forall y \in C.$$
Lemma 2.5 Assume that $\{a_{n}\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_{n})a_{n} + \gamma_{n}\delta_{n} + \beta_{n}, \quad n \ge 0,$$
where $\{\gamma_{n}\}$ and $\{\beta_{n}\}$ are sequences in $(0,1)$ and $\{\delta_{n}\}$ is a sequence in ℝ such that
- (i) $\sum_{n=1}^{\infty}\gamma_{n} = \infty$;
- (ii) either $\limsup_{n\to\infty}\delta_{n} \le 0$ or $\sum_{n=1}^{\infty}\gamma_{n}|\delta_{n}| < \infty$;
- (iii) $\sum_{n=1}^{\infty}\beta_{n} < \infty$.
Then $\lim_{n\to\infty}a_{n} = 0$.
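A quick numerical illustration of Lemma 2.5 (ours), taking equality in the recursion with one admissible choice of the three sequences:

```python
import math

# a_{n+1} = (1 - g_n)*a_n + g_n*d_n + b_n with
#   g_n = 1/(n+1)      -> sum g_n = infinity    (i)
#   d_n = 1/sqrt(n+1)  -> limsup d_n <= 0       (ii)
#   b_n = 1/(n+1)**2   -> sum b_n < infinity    (iii)
a = 1.0
for n in range(1, 1000001):
    g = 1.0 / (n + 1)
    d = 1.0 / math.sqrt(n + 1)
    b = 1.0 / (n + 1) ** 2
    a = (1.0 - g) * a + g * d + b
print(a)   # decays to 0, roughly at the pace of d_n
```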
We adopt the following notation:
- $x_{n} \to x$ means that $\{x_{n}\}$ converges strongly to x;
- $x_{n} \rightharpoonup x$ means that $\{x_{n}\}$ converges weakly to x.
3 Main results
Recall that, throughout this paper, we use S to denote the solution set of the constrained convex minimization problem (1.1).
Let H be a real Hilbert space and let C be a nonempty, closed and convex subset of H. Let F be a k-Lipschitzian and $\eta$-strongly monotone operator with constants $k, \eta > 0$ such that $0 < \mu < 2\eta/k^{2}$. Suppose that $\nabla f$ is L-Lipschitz continuous. We now consider a mapping on C defined by:
where , and is nonexpansive. Let and satisfy the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
is continuous with respect to s and .
It is easy to see that the mapping defined by (3.1) is a contraction. Indeed, for each $x, y \in C$, we have
where . Hence, the mapping has a unique fixed point in C, denoted by $x_{s}$, which uniquely solves the fixed-point equation
The following proposition summarizes the properties of the net $\{x_{s}\}$.
Proposition 3.1 Let $x_{s}$ be defined by (3.1). Then the following properties of the net $\{x_{s}\}$ hold:
- (a) $\{x_{s}\}$ is bounded for ;
- (b) ;
- (c) $x_{s}$ defines a continuous curve from into C.
Proof It is well known that $x^{*} \in C$ solves the minimization problem (1.1) if and only if $x^{*}$ solves the fixed-point equation
$$x^{*} = P_{C}(I - \lambda\nabla f)x^{*},$$
where $\lambda > 0$ is any fixed constant. It is clear that $P_{C}(I - \lambda\nabla f)x^{*} = x^{*}$ for every $x^{*} \in S$, i.e., $S = \operatorname{Fix}(P_{C}(I - \lambda\nabla f))$.
(a) Take a fixed element of S; we obtain that

It follows that
For , note that
and
where and .
Then we get
Since and , there exists a real positive number such that
It follows from (3.2) and (3.3) that
Since , there exists a real positive number such that , and

Hence, $\{x_{s}\}$ is bounded.
(b) Note that the boundedness of $\{x_{s}\}$ implies that is also bounded. Hence, by the definition of , we have

(c) For , there exists
and
where and .
So for , we get

for some appropriate constant such that
Now take and calculate

It follows that
Since $\{x_{s}\}$ is bounded and is continuous with respect to s, $x_{s}$ defines a continuous curve from into C. □
The following theorem shows that the net $\{x_{s}\}$ converges strongly, as $s \to 0$, to a minimizer of (1.1) which solves a certain variational inequality.
Theorem 3.2 Let H be a real Hilbert space and let C be a nonempty, closed and convex subset of H. Let F be a k-Lipschitzian and $\eta$-strongly monotone operator with constants $k, \eta > 0$ such that $0 < \mu < 2\eta/k^{2}$. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\nabla f$ is Lipschitz continuous with constant L. Let $x_{s}$ be defined by (3.1), where the parameter and is nonexpansive. Let and satisfy the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
is continuous with respect to s and .
Then the net $\{x_{s}\}$ converges strongly, as $s \to 0$, to a minimizer q of (1.1), which solves the variational inequality
$$\langle Fq, x - q\rangle \ge 0, \quad \forall x \in S. \qquad (3.4)$$
Equivalently, we have $q = P_{S}(I - \mu F)q$.
Proof It is easy to see the uniqueness of a solution of the variational inequality (3.4). Indeed, suppose that both $q \in S$ and $\hat{q} \in S$ are solutions of (3.4); then
$$\langle Fq, \hat{q} - q\rangle \ge 0 \qquad (3.5)$$
and
$$\langle F\hat{q}, q - \hat{q}\rangle \ge 0. \qquad (3.6)$$
Adding up (3.5) and (3.6) yields
$$\langle Fq - F\hat{q}, q - \hat{q}\rangle \le 0.$$
The strong monotonicity of F implies that $q = \hat{q}$, and the uniqueness is proved. Below we use q to denote the unique solution of the variational inequality (3.4).
Let us show that $x_{s} \to q$ as $s \to 0$. Set
Then we have . For any given , we get

Since $P_{C}$ is the metric projection from H onto C, we have
Note that and , so we get , i.e., .
It follows from (3.7) that

By (3.3), we obtain that

Since $\{x_{s}\}$ is bounded, there is a sequence $\{s_{n}\}$ in $(0,1)$ with $s_{n} \to 0$ such that $x_{s_{n}} \rightharpoonup \bar{x}$ for some $\bar{x} \in C$.
By Proposition 3.1(b) and (3.3), we have

So, by Lemma 2.3, we get $\bar{x} \in S$.
Since $\bar{x} \in S$, we obtain from (3.8) that $x_{s_{n}} \to \bar{x}$ strongly.
Next, we show that $\bar{x}$ solves the variational inequality (3.4). Observe that
Hence, we conclude that
Since is nonexpansive, is monotone. Note that, for any given , and .
By (3.3), it follows that

Since , by Proposition 3.1(b), we obtain from (3.9) that
So $\bar{x}$ is a solution of the variational inequality (3.4). By uniqueness, $\bar{x} = q$. Therefore, $x_{s} \to q$ as $s \to 0$.
The variational inequality (3.4) can be rewritten as
$$\langle (I - \mu F)q - q, x - q\rangle \le 0, \quad \forall x \in S.$$
So, in terms of Lemma 2.4, it is equivalent to the following fixed-point equation:
$$q = P_{S}(I - \mu F)q. \qquad \square$$
Next, we study the following iterative method. For an arbitrary initial guess $x_{0} \in C$, we propose the following explicit scheme, which generates a sequence $\{x_{n}\}$ in an explicit way:
where the parameters . Let and satisfy the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
.
It is proved below that the sequence $\{x_{n}\}$ converges strongly to a minimizer of (1.1), which also solves the variational inequality (3.4).
Theorem 3.3 Let H be a real Hilbert space and let C be a nonempty, closed and convex subset of H. Let F be a k-Lipschitzian and $\eta$-strongly monotone operator with constants $k, \eta > 0$ such that $0 < \mu < 2\eta/k^{2}$. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\nabla f$ is Lipschitz continuous with constant L. Let $\{x_{n}\}$ be generated by algorithm (3.10) and the parameters . Let , and satisfy the following conditions:
-
(C1)
and ;
-
(C2)
for all n;
-
(C3)
and ;
-
(C4)
;
-
(C5)
and .
Then the sequence $\{x_{n}\}$ generated by the explicit scheme (3.10) converges strongly to a minimizer of (1.1), which is also a solution of the variational inequality (3.4).
Proof It is well known that:
- (a) $x^{*} \in C$ solves the minimization problem (1.1) if and only if $x^{*}$ solves the fixed-point equation
$$x^{*} = P_{C}(I - \lambda\nabla f)x^{*},$$
where $\lambda > 0$ is any fixed constant; it is clear that $S = \operatorname{Fix}(P_{C}(I - \lambda\nabla f))$;
- (b) the gradient $\nabla f$ is $\frac{1}{L}$-ism;
- (c) $P_{C}(I - \lambda\nabla f)$ is averaged for $\lambda \in (0, \frac{2}{L})$; in particular, the following relation holds:
$$P_{C}(I - \lambda\nabla f) = (1 - \theta)I + \theta T, \quad \theta = \frac{2 + \lambda L}{4},$$
where T is nonexpansive.
We observe that $\{x_{n}\}$ is bounded. Indeed, take a fixed element of S; we get

It follows that
Note that, by using the same argument as in the proof of (3.3), there exists a real positive number such that
Since , there exists a real positive number such that and by (3.11) we get

It follows by induction that
Consequently, $\{x_{n}\}$ is bounded. This implies that is also bounded.
We claim that
Indeed, since
we obtain that
By using the same argument as in the proof of Proposition 3.1(c), we obtain that

for some appropriate constant such that
Thus, we get

for some appropriate constant such that
Consequently, we get
By Lemma 2.5, we obtain $\lim_{n\to\infty}\|x_{n+1} - x_{n}\| = 0$.
Next, we show that
Indeed, it follows from (3.13) that

Now we show that
where q is the unique solution of the variational inequality (3.4).
Indeed, take a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that
Without loss of generality, we may assume that $x_{n_{k}} \rightharpoonup \tilde{x}$.
We observe that
It follows from (3.11) that
By (3.14), we get .
In terms of Lemma 2.3, we get $\tilde{x} \in S$.
Consequently, from (3.16) and the variational inequality (3.4), it follows that
Finally, we show that $x_{n} \to q$ as $n \to \infty$.
As a matter of fact, set
Then, .
In terms of Lemma 2.4 and (3.11), we obtain

It follows that

since $\{x_{n}\}$ is bounded, we can take a constant $M > 0$ such that
It then follows that
where .
By (3.15) and , we get . Now, applying Lemma 2.5 to (3.17), we conclude that $x_{n} \to q$ as $n \to \infty$. □
4 Application
In this section, we give an application of Theorem 3.3 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving [22]. Since its inception in 1994, the SFP has received much attention (see [7, 23, 24]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.
The SFP can mathematically be formulated as the problem of finding a point $x^{*}$ with the property
$$x^{*} \in C \quad\text{and}\quad Ax^{*} \in Q, \qquad (4.1)$$
where C and Q are nonempty, closed and convex subsets of Hilbert spaces $H_{1}$ and $H_{2}$, respectively, and $A : H_{1} \to H_{2}$ is a bounded linear operator.
It is clear that $x^{*}$ is a solution to the split feasibility problem (4.1) if and only if $x^{*} \in C$ and $Ax^{*} - P_{Q}Ax^{*} = 0$. We define the proximity function f by
$$f(x) = \frac{1}{2}\|Ax - P_{Q}Ax\|^{2}$$
and consider the constrained convex minimization problem
$$\min_{x \in C} f(x) = \min_{x \in C}\frac{1}{2}\|Ax - P_{Q}Ax\|^{2}. \qquad (4.2)$$
Then $x^{*}$ solves the split feasibility problem (4.1) if and only if $x^{*}$ solves the minimization problem (4.2) with the minimum value equal to 0. Byrne [7] introduced the so-called CQ algorithm to solve the SFP:
$$x_{n+1} = P_{C}\big(x_{n} - \gamma A^{*}(I - P_{Q})Ax_{n}\big), \qquad (4.3)$$
where $0 < \gamma < 2/\|A\|^{2}$. He obtained that the sequence $\{x_{n}\}$ generated by (4.3) converges weakly to a solution of the SFP.
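A minimal numerical sketch of the CQ algorithm (4.3) on an assumed toy instance (the choices of C, Q, and A below are ours):

```python
import numpy as np

# Hedged sketch of Byrne's CQ algorithm (4.3):
#   x_{n+1} = P_C( x_n - gamma * A^T (I - P_Q) A x_n ),  0 < gamma < 2/||A||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
proj_C = lambda x: np.clip(x, -1.0, 1.0)            # C = box [-1, 1]^4
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))  # Q = closed unit ball
gamma = 1.0 / np.linalg.norm(A, 2) ** 2             # inside (0, 2/||A||^2)

x = rng.standard_normal(4)
for _ in range(5000):
    Ax = A @ x
    x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
# If the SFP is consistent the residual tends to 0; otherwise the iterates
# approach a minimizer of the proximity function f.
print("residual ||(I - P_Q)Ax|| =", np.linalg.norm(A @ x - proj_Q(A @ x)))
```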
In order to obtain an iterative sequence that converges strongly to a solution of the SFP, we propose the following algorithm:
where the parameters and satisfy the following conditions:
-
(C1)
and ;
-
(C2)
for all n,
where F is a k-Lipschitzian and $\eta$-strongly monotone operator with constants $k, \eta > 0$ such that $0 < \mu < 2\eta/k^{2}$. We can show that the sequence $\{x_{n}\}$ generated by (4.4) converges strongly to a solution of the SFP (4.1) if the sequence and the sequence of parameters satisfy appropriate conditions.
Applying Theorem 3.3, we obtain the following result.
Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence $\{x_{n}\}$ be generated by (4.4), where the sequences and satisfy the conditions (C3)-(C5). Then $\{x_{n}\}$ converges strongly to a solution of the split feasibility problem (4.1).
Proof By the definition of the proximity function f, we have
$$\nabla f(x) = A^{*}(I - P_{Q})Ax,$$
and $\nabla f$ is Lipschitz continuous, i.e.,
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|,$$
where $L = \|A\|^{2}$.
Set , consequently
Then the iterative scheme (4.4) is equivalent to
where the parameters satisfy the following conditions:
-
(C1)
and ;
-
(C2)
for all n.
The conclusion now follows immediately from Theorem 3.3. □
References
Brezis H: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.
Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815
Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051
Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965
Han D, Lo HK: Solving non-additive traffic assignment problems, a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5
Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. 10.1080/02331930412331327157
Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.
Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.
Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. 10.1007/BF02592073
Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.
Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.
Jung JS: A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/284363
Jung JS: A general composite iterative method for generalized mixed equilibrium problems, variational inequality problems and optimization problems. J. Inequal. Appl. 2011. doi:10.1186/1029-242X-2011-51
Jitpeera T, Kumam P: A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl. 2012, 2012: Article ID 111. doi:10.1186/1687-1812-2012-111
Witthayarat U, Jitpeera T, Kumam P: A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012., 2012: Article ID 206345
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018
Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018
Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. 10.1016/j.na.2003.11.004
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
López G, Martin-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28: Article ID 085004
Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. 10.1016/j.amc.2012.08.005
Acknowledgements
The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Tian, M., Huang, LH. Iterative methods for constrained convex minimization problem in Hilbert spaces. Fixed Point Theory Appl 2013, 105 (2013). https://doi.org/10.1186/1687-1812-2013-105
Keywords
- iterative algorithm
- constrained convex minimization
- nonexpansive mapping
- fixed point
- variational inequality