Approximating common fixed points of averaged self-mappings with applications to the split feasibility problem and maximal monotone operators in Hilbert spaces
Fixed Point Theory and Applications volume 2013, Article number: 190 (2013)
Abstract
In this paper, a modified proximal point algorithm for finding common fixed points of averaged self-mappings in Hilbert spaces is introduced and a strong convergence theorem associated with it is proved. As a consequence, we apply it to study the split feasibility problem, the zero point problem of maximal monotone operators, the minimization problem and the equilibrium problem, and to show that the unique minimum norm solution can be obtained through our algorithm for each of the aforementioned problems. Our results generalize and unify many results that occur in the literature.
MSC:47H10, 47J25, 68W25.
1 Introduction
Throughout this paper, ℋ denotes a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥, I the identity mapping on ℋ, ℕ the set of all natural numbers and ℝ the set of all real numbers. For a self-mapping T on ℋ, Fix(T) denotes the set of all fixed points of T.
Let C and Q be nonempty closed convex subsets of two Hilbert spaces ℋ₁ and ℋ₂ respectively, and let A : ℋ₁ → ℋ₂ be a bounded linear mapping. The split feasibility problem (SFP) is the problem of finding a point x with the property:

x ∈ C and Ax ∈ Q. (1)
The SFP was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and medical image reconstruction. Recently, it has been found that the SFP can also be used to model the intensity-modulated radiation therapy. For details, the readers are referred to Xu [2] and the references therein.
Assume that the SFP has a solution. There are many iterative methods designed to approximate its solutions. The most popular is the CQ algorithm introduced by Byrne [3, 4]: it starts with any x₀ ∈ ℋ₁ and generates a sequence {x_n} through the iteration

x_{n+1} = P_C(x_n − γA*(I − P_Q)Ax_n), (2)

where 0 < γ < 2/∥A∥², A* is the adjoint of A, and P_C, P_Q are the metric projections onto C and Q respectively.
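As a concrete illustration, the CQ iteration can be sketched numerically. The sets, the matrix A and the step size below are our own toy choices, not data from the paper: C = [0,1]², Q = [2,3]², A = 2I, so the unique solution of the SFP is x = (1,1).

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box [lo, hi]^d is componentwise clipping.
    return np.clip(x, lo, hi)

# Toy data: C = [0,1]^2, Q = [2,3]^2, A = 2*I; the SFP solution is (1,1).
A = 2.0 * np.eye(2)
gamma = 0.4  # step size, chosen so that 0 < gamma < 2/||A||^2 = 0.5

x = np.array([0.3, 0.7])
for _ in range(200):
    Ax = A @ x
    # CQ step: x <- P_C( x - gamma * A^T (I - P_Q) A x )
    x = proj_box(x - gamma * (A.T @ (Ax - proj_box(Ax, 2.0, 3.0))), 0.0, 1.0)
```

With these toy data the iterate lands on the solution after the first step and then stays fixed, since Ax ∈ Q makes the residual vanish.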
The sequence generated by the CQ algorithm (2) converges weakly to a solution of SFP (1), cf. [2–4]. Under the assumption that SFP (1) has a solution, it is known that a point x solves SFP (1) if and only if x is a fixed point of the operator

P_C(I − γA*(I − P_Q)A), (3)
cf. [2], where Xu also proposed a regularized method and proved that the sequence it generates converges strongly to the minimum norm solution of SFP (1) provided the parameters involved verify suitable conditions. This regularized method was further investigated by Yao, Jigang and Liou [5], and Yao, Liou and Shahzad [6].
Motivated by the above works, it is desirable to devise an algorithm for approximating a point x so that

x ∈ C, Ax ∈ Q and Bx ∈ Q, (5)

where A, B are two bounded linear mappings from ℋ₁ to ℋ₂.
On the other hand, it has been an interesting topic to find zero points of maximal monotone operators. A set-valued map A : ℋ → 2^ℋ with the domain D(A) is called monotone if

⟨x − y, u − v⟩ ≥ 0

for all x, y ∈ D(A) and for any u ∈ Ax, v ∈ Ay, where D(A) is defined to be

D(A) = {x ∈ ℋ : Ax ≠ ∅}.

A is said to be maximal monotone if its graph is not properly contained in the graph of any other monotone operator. For a positive real number α, we denote by J_α the resolvent of a monotone operator A, that is, J_α x = (I + αA)⁻¹x for any x ∈ ℋ. A point v ∈ ℋ is called a zero point of a maximal monotone operator A if 0 ∈ Av. In the sequel, we shall denote the set of all zero points of A by A⁻¹0, which is equal to Fix(J_α) for any α > 0. A well-known method to solve this problem is the proximal point algorithm, which starts with any initial point x₀ ∈ ℋ and then generates the sequence {x_n} in ℋ by

x_{n+1} = J_{α_n} x_n,
where {α_n} is a sequence of positive real numbers. This algorithm was first introduced by Martinet [7] and then generally studied by Rockafellar [8], who devised the iterative sequence by

x_{n+1} = J_{α_n}(x_n + e_n), (6)

where {e_n} is an error sequence in ℋ. Rockafellar showed that the sequence generated by (6) converges weakly to an element of A⁻¹0 provided that Σ_n ∥e_n∥ < ∞ and lim inf_{n→∞} α_n > 0. In 1991, Güler [9] established an example showing that the sequence generated by (6) converges weakly but not strongly. Since then, many authors have conducted research on modifying the scheme (6) so that strong convergence is guaranteed, cf. [10–19] and the references therein. Recently, Wang and Cui [16] considered the following algorithm:
where the coefficient sequences are taken in (0, 1) and {e_n} is an error sequence in ℋ. They showed that the sequence generated by (7) converges strongly to a zero point of A provided two suitable conditions (i) and (ii) on these sequences are verified.
This theorem generalizes and unifies many results that occur in the literature, cf. [10–12, 18, 20].
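For intuition, the basic proximal point iteration x_{n+1} = J_{α_n} x_n can be run on a toy operator of our own choosing: A x = x on the real line, whose resolvent (I + αA)⁻¹ is x ↦ x/(1 + α) and whose unique zero is 0.

```python
def resolvent(x, alpha):
    # Resolvent (I + alpha*A)^(-1) of the monotone operator A x = x on R.
    return x / (1.0 + alpha)

# Proximal point iteration x_{n+1} = J_{alpha_n} x_n with alpha_n = 1.
x = 5.0
for _ in range(50):
    x = resolvent(x, 1.0)
```

Each step halves the iterate, so x_n converges to 0, the unique zero of A.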
For another maximal monotone operator B, we would like to seek appropriate conditions on the coefficient sequences so that the sequence generated by
can converge strongly to a common zero of A and B.
We find that both of problems (5) and (8) can be solved simultaneously in a more general setting. As a matter of fact, any resolvent J_α is firmly nonexpansive, and any firmly nonexpansive mapping is (1/2)-averaged, cf. [21], which is a special case of λ-averaged mappings (for the definition of λ-averaged mappings, we refer readers to Section 2). Also, as shown in the proof of Theorem 3.6 of [2], for any γ with 0 < γ < 2/∥A∥², the operator (3) is (2 + γ∥A∥²)/4-averaged. It is quite natural to ask whether the sequence generated by
can converge strongly to a point of provided the coefficient sequences , , and are imposed on appropriate conditions, where for any , each is -averaged by , and each is -averaged by . We shall show in Section 3 that the sequence generated by (9) converges strongly to a point of provided and , and the coefficient sequences , , and verify the conditions:
(i) and are convergent sequences in with limit respectively;

(ii) there are two nonnegative real-valued functions and on ℕ with

(iii) , , and are sequences in with and , ;

(iv) , , ;

(v) , .
Based on this main result, we shall deduce many corollaries for averaged mappings in Section 3. Section 4 is devoted to applications. We apply our results in Section 3 to study the split feasibility problem, the zero point problem of maximal monotone operators, the minimization problem and the equilibrium problem, and to show that the unique minimum norm solution can be obtained through our algorithm for each of the aforementioned problems.
2 Preliminaries
In order to facilitate our investigation in Section 3, we recall some basic facts. Let C be a nonempty closed convex subset of ℋ. A mapping T : C → C is said to be

(i) nonexpansive if ∥Tx − Ty∥ ≤ ∥x − y∥ for all x, y ∈ C;

(ii) firmly nonexpansive if ∥Tx − Ty∥² ≤ ⟨x − y, Tx − Ty⟩ for all x, y ∈ C;

(iii) λ-averaged by K if T = (1 − λ)I + λK for some λ ∈ (0, 1) and some nonexpansive mapping K.
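A quick numerical check of the definition, with a nonexpansive map of our own choosing (K x = −x, so Fix(K) = {0}): the λ-averaged map T = (1 − λ)I + λK should again be nonexpansive and share the fixed point.

```python
import numpy as np

lam = 0.25
K = lambda x: -x                           # nonexpansive, Fix(K) = {0}
T = lambda x: (1 - lam) * x + lam * K(x)   # lam-averaged by K

rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 3))
nonexpansive = all(
    np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
    for x in pts
    for y in pts
)
fixes_origin = np.allclose(T(np.zeros(3)), np.zeros(3))
```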
If T : C → C is nonexpansive, then the fixed point set Fix(T) of T is closed and convex, cf. [21]. If T is λ-averaged by K, then T is nonexpansive with Fix(T) = Fix(K).
The metric projection P_C from ℋ onto C is the mapping that assigns each x ∈ ℋ the unique point P_C x in C with the property

∥x − P_C x∥ = min{∥x − y∥ : y ∈ C}.

It is known that P_C is nonexpansive and characterized by the inequality: for any x ∈ ℋ and y ∈ C,

⟨x − P_C x, y − P_C x⟩ ≤ 0.
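The characterizing inequality can be verified numerically for the closed unit ball (our own choice of C), whose projection has the closed form P_C x = x/max(1, ∥x∥).

```python
import numpy as np

def proj_ball(x):
    # Metric projection onto the closed unit ball of R^d.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([3.0, 4.0])      # lies outside the ball
p = proj_ball(x)              # equals (0.6, 0.8)

# Check <x - P_C x, y - P_C x> <= 0 for sample points y in C.
rng = np.random.default_rng(1)
samples = rng.normal(size=(20, 2))
in_ball = [v / max(1.0, np.linalg.norm(v)) for v in samples]
characterized = all(np.dot(x - p, y - p) <= 1e-12 for y in in_ball)
```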
For α > 0, the resolvent J_α = (I + αA)⁻¹ of a maximal monotone operator A on ℋ has the following properties.
Lemma 2.1 Let A be a maximal monotone operator on ℋ. Then

(a) J_α is single-valued and firmly nonexpansive for every α > 0;

(b) D(J_α) = ℋ and Fix(J_α) = A⁻¹0 for every α > 0;

(c) (The resolvent identity) for α, β > 0, the following identity holds:

J_α x = J_β((β/α)x + (1 − β/α)J_α x), x ∈ ℋ.
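The resolvent identity can be checked numerically on a toy operator of our own choosing, A x = x on ℝ, whose resolvent is J_t x = x/(1 + t).

```python
def J(t, x):
    # Resolvent (I + t*A)^(-1) for the operator A x = x on the real line.
    return x / (1.0 + t)

alpha, beta, x = 2.0, 0.5, 7.0
lhs = J(alpha, x)
rhs = J(beta, (beta / alpha) * x + (1.0 - beta / alpha) * J(alpha, x))
# Both sides equal x / (1 + alpha) = 7/3 here.
```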
We still need some lemmas that will be quoted in the sequel.
Lemma 2.2 Let x, y, z ∈ ℋ. Then

(a) ∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩;

(b) for any t ∈ [0, 1],

∥tx + (1 − t)y∥² = t∥x∥² + (1 − t)∥y∥² − t(1 − t)∥x − y∥²;

(c) for a, b, c ∈ [0, 1] with a + b + c = 1,

∥ax + by + cz∥² = a∥x∥² + b∥y∥² + c∥z∥² − ab∥x − y∥² − bc∥y − z∥² − ca∥z − x∥².
Lemma 2.3 (Demiclosedness principle [21])
Let T be a nonexpansive self-mapping on a nonempty closed convex subset C of ℋ, and suppose that {x_n} is a sequence in C such that {x_n} converges weakly to some v ∈ C and lim_{n→∞} ∥x_n − Tx_n∥ = 0. Then v ∈ Fix(T).
Lemma 2.4 [18]
Let {s_n} be a sequence of nonnegative real numbers satisfying

s_{n+1} ≤ (1 − γ_n)s_n + γ_n δ_n + ε_n, n ∈ ℕ,

where {γ_n}, {δ_n} and {ε_n} verify the conditions:

(i) {γ_n} ⊂ [0, 1], Σ_n γ_n = ∞;

(ii) lim sup_{n→∞} δ_n ≤ 0;

(iii) ε_n ≥ 0 for all n ∈ ℕ and Σ_n ε_n < ∞.

Then lim_{n→∞} s_n = 0.
Lemma 2.5 [22]
Let {a_n} be a sequence in ℝ that does not decrease at infinity in the sense that there exists a subsequence {a_{n_j}} such that

a_{n_j} < a_{n_j + 1} for all j ∈ ℕ.

For all n large enough, define τ(n) = max{k ≤ n : a_k < a_{k+1}}. Then τ(n) → ∞ as n → ∞ and

max{a_{τ(n)}, a_n} ≤ a_{τ(n)+1}.
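The index map τ(n) = max{k ≤ n : a_k < a_{k+1}} from Lemma 2.5 is easy to compute for a sample sequence (indices start at 0 here; the sequence is our own example).

```python
def tau(a, n):
    # tau(n) = max{ k <= n : a_k < a_{k+1} }, defined when such a k exists.
    return max(k for k in range(n + 1) if a[k] < a[k + 1])

a = [3, 1, 4, 1, 5, 9, 2, 6]      # increases at k = 1, 3, 4, 6
t5 = tau(a, 5)                    # -> 4
bound_holds = max(a[t5], a[5]) <= a[t5 + 1]
```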
3 Strong convergence theorems
To establish a strong convergence theorem for averaged mappings , , , on ℋ associated with algorithm (9), we at first need a lemma.
Lemma 3.1 If is a λ-averaged self-mapping by K on a nonempty closed convex subset C of ℋ and , then for any , one has
Proof Let x be any point in C. Then, using and the nonexpansiveness of K, we have from Lemma 2.2(b) that
□
Theorem 3.2 For any , suppose that and are averaged self-mappings on a nonempty closed convex subset C of ℋ with , satisfying that
and there are two nonnegative real-valued functions and on ℕ with
Suppose further that , , and are sequences in with and for all , and that and are two bounded sequences in C. For an arbitrary norm convergent sequence in C with limit u, start with an arbitrary and define two sequences and by
Then both of and converge strongly to provided the following conditions are satisfied:
Moreover, when every is the identity mapping I, the result still holds without the condition .
Proof Put . Firstly, we show that converges strongly to p. It comes from the nonexpansiveness of and that
from which it follows that is a bounded sequence. Taking into account of Lemma 2.2 and using Lemma 3.1, we get
We now carry on with the proof by considering the following two cases: (I) is eventually decreasing, and (II) is not eventually decreasing.
Case I: Suppose that is eventually decreasing, that is, there is such that is decreasing. In this case, exists in ℝ. By condition (ii), we may assume that there are two such that and for all . Then from inequality (11) we have
and noting via condition (i) that
we conclude that
which implies that
Then from condition (3.2) we deduce for all that
Since is bounded, it has a subsequence such that converges weakly to some and
where the last inequality follows from (13) since by Lemma 2.3. Choose so that . From (11) we have
Accordingly, because of (14) and condition (i), we can apply Lemma 2.4 to inequality (15) with , , and to conclude that
Case II: Suppose that is not eventually decreasing. In this case, by Lemma 2.5, there exists a nondecreasing sequence in ℕ such that and
Then it follows from (11) and (16) that
Therefore,
and then proceeding just as in the proof in Case I, we obtain
which in conjunction with condition (3.2) shows for all that
and then it follows that
From (17) we have
and thus
Letting and using (19) and condition (i), we obtain
Also, since
which together with (18) implies , and so
by virtue of (20). Consequently, we conclude via (16) and (21). In addition, note that the condition is used to establish and in (12) and (18) respectively. However, both limits hold trivially without this condition provided every is the identity mapping I.
Next, we show that converges strongly to p too. Applying Lemma 2.4 to the following inequality
for all , we see that , and hence follows. This completes the proof. □
The following lemma is easily proved and so its proof is omitted.
Lemma 3.3 For any , suppose that and are averaged self-mappings on a nonempty closed convex subset C of ℋ such that condition (3.1) holds. Then and satisfy condition (3.2) if and only if and satisfy condition (3.2).
If the sequence (resp. ) of averaged mappings consists of a single mapping S (resp. T), then and obviously verify conditions (3.1) and (3.2), and hence from Lemma 3.3 we have the following corollary.
Corollary 3.4 Suppose S and T are two averaged self-mappings on a nonempty closed convex subset C of ℋ with , and suppose that , , and are sequences in with and for all , and and are two bounded sequences in C. For an arbitrary norm convergent sequence in C with limit u, start with an arbitrary and define two sequences and by
Then both of and converge strongly to provided the following conditions are satisfied:
Moreover, when S is the identity mapping I, the result still holds without the condition .
Theorem 3.5 For any , suppose and are firmly nonexpansive self-mappings on a nonempty closed convex subset C of ℋ with , satisfying condition (3.2). Suppose further that , , and are sequences in with and for all , and and are two bounded sequences in C. For an arbitrary norm convergent sequence in C with limit u, start with an arbitrary and define two sequences and by
Then both of and converge strongly to provided the following conditions are satisfied:
Moreover, when every is the identity mapping I, the result still holds without the condition .
Proof Since any firmly nonexpansive mapping is -averaged, condition (3.1) holds, and hence by Lemma 3.3 we see that all the requirements of Theorem 3.2 are verified. Therefore, the desired conclusion follows. □
If and for all in Theorem 3.2, then we have the following corollary.
Corollary 3.6 Suppose, for all , that is an averaged self-mapping on a nonempty closed convex subset C of ℋ with , and , and assume that condition (3.2) holds for and . Suppose further that , and are sequences in with and for all . Let be a sequence in . For an arbitrary fixed , start with an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Corollary 3.7 Suppose, for all , that is an averaged self-mapping on ℋ with , and , and assume that condition (3.2) holds for and . Suppose further that , and are sequences in with and for all , and that is a bounded sequence in ℋ. Let be a sequence in . For an arbitrary fixed , start with an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Proof Put . Let and define a sequence iteratively by
We have by Corollary 3.6. Since
the limit follows by applying Lemma 2.4 to (22), and thus,
□
4 Applications
In this section, we shall apply some of the strong convergence theorems in Section 3 to approximate a solution of the split feasibility problem, a common zero of maximal monotone operators, a minimizer of a proper lower semicontinuous convex function, and to study the related equilibrium problem.
Xu [2] transformed SFP (1) to the fixed point problem of the operator (3):
He proved Lemma 4.1 below.
Lemma 4.1 [2]
A point solves SFP (1) if and only if is a fixed point of the operator (3): .
Moreover, in the proof of Theorem 3.6 of [2], Xu showed the following lemma.
Lemma 4.2 [2]
For any with , the operator (3): is -averaged.
Invoking Lemmas 4.1 and 4.2, we obtain the theorem below from Corollary 3.4 by putting and for all .
Theorem 4.3 Let C and Q be nonempty closed convex subsets of two Hilbert spaces and respectively, and let be a bounded linear mapping. Put , where γ satisfies . Suppose that the solution set Ω of SFP (1) is nonempty, and suppose further that , , and are sequences in with and for all , and that is a bounded sequence in C. For an arbitrary fixed , start with an arbitrary and define the sequence by
Then converges strongly to provided the following conditions are satisfied:
When the point u in the above theorem is taken to be 0, we see that the limit point v of the sequence is the unique minimum norm solution of SFP (1), that is, ∥v∥ = min{∥x∥ : x ∈ Ω}.
Here, readers may compare the above theorem with Theorem 3.6 of [2], which says, for and sequence in satisfying
that the sequence generated by
converges weakly to a solution of SFP (1) provided the solution set of SFP (1) is nonempty. It is also interesting to compare Theorem 4.3 with Theorem 5.5 of [2] and Theorem 3.1 of [5]. Our method is different from those in [2] and [5] even in the case of , because our algorithm contains an error term and uses the operator directly without any regularization.
Theorem 4.4 Let C and Q be nonempty closed convex subsets of two Hilbert spaces and respectively, and let A, B be bounded linear mappings from to . Put and , where γ satisfies . Suppose the solution set Ω of SFP (5) is nonempty, and suppose further that , , and are sequences in with , and for all , and that is a bounded sequence in C. For an arbitrary fixed , start with an arbitrary and define the sequence by
Then converges strongly to provided the following conditions are satisfied:
Proof It is clear that this theorem follows from Lemmas 4.1 and 4.2 and Corollary 3.4. □
Replacing and in Theorem 3.2 with the resolvents and of two maximal monotone operators B and A respectively, we have Theorem 4.5 below.
Theorem 4.5 Suppose that B and A are two maximal monotone operators on ℋ with , and suppose that , , and are sequences in with and for all . Let and be sequences in , and let and be two bounded sequences in ℋ. For an arbitrary norm convergent sequence in ℋ with limit u, start with an arbitrary and define two sequences and by
Then both of the sequences and converge strongly to provided the following conditions are satisfied:
Proof Since all the requirements of Theorem 3.2 are satisfied except conditions (3.1) and (3.2), we have to check these two conditions. For any , let and . By Lemma 2.1(b), we have and for all . Moreover, since all and are firmly nonexpansive, all of them are -averaged, so condition (3.1) is satisfied with for all . According to Lemma 3.3, it remains to prove that condition (3.2) holds for and . Since condition (iii) holds, we may assume that there is such that and for all . Let and . Then, by virtue of the resolvent identity and the nonexpansiveness of , one has for all that
and thus
The same argument shows for all that
Therefore, condition (3.2) is true for and . □
Putting in Corollary 3.6 (resp. Corollary 3.7) and noting that and verifies condition (3.2) due to , we obtain the following two corollaries.
Corollary 4.6 Suppose that A is a maximal monotone operator on ℋ with , and suppose that , and are sequences in with and for all . Let be a sequence in . For an arbitrary fixed , choose an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Corollary 4.7 [16]
Suppose that A is a maximal monotone operator on ℋ with , and suppose that , and are sequences in with and for all . Let be a sequence in , and let be a bounded sequence in ℋ. For an arbitrary fixed , choose an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Let f : ℋ → (−∞, ∞] be a proper lower semicontinuous convex function. The set of minimizers of f is defined to be

argmin f = {z ∈ ℋ : f(z) = min_{x∈ℋ} f(x)},

and the subdifferential ∂f of f is defined as

∂f(x) = {z ∈ ℋ : f(x) + ⟨y − x, z⟩ ≤ f(y) for all y ∈ ℋ}

for all x ∈ ℋ. As shown in Rockafellar [23], ∂f is a maximal monotone operator. Moreover, one has

0 ∈ ∂f(v) if and only if f(v) = min_{x∈ℋ} f(x),

that is, (∂f)⁻¹0 = argmin f. Hence Fix(J_α) = argmin f for any α > 0, where J_α is the resolvent of ∂f, and then invoking Corollary 4.6, we obtain the following theorem.
Theorem 4.8 Let be a proper lower semicontinuous convex function, and suppose that , and are sequences in with and for all . Let be a sequence in , and put . For an arbitrary fixed , choose an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
For any α > 0, define f_α : ℋ → ℝ by

f_α(x) = min_{y∈ℋ} {f(y) + (1/(2α))∥y − x∥²}

for all x ∈ ℋ. Then we have, for any x ∈ ℋ,

J_α x = argmin_{y∈ℋ} {f(y) + (1/(2α))∥y − x∥²},

cf. [24]. Hence each resolvent step amounts to a proximal minimization step, and thus the iterative scheme (23) can be replaced with
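As a sketch of this correspondence, take the toy function f(x) = x² on ℝ (our own choice): the resolvent of ∂f at parameter α is exactly the proximal step, prox_{αf}(x) = x/(1 + 2α), and iterating it drives the iterate to the minimizer 0.

```python
def prox(x, alpha):
    # argmin_y { y**2 + (1/(2*alpha)) * (y - x)**2 } = x / (1 + 2*alpha),
    # i.e. the resolvent of the subdifferential of f(y) = y**2 at x.
    return x / (1.0 + 2.0 * alpha)

x = 4.0
for _ in range(60):
    x = prox(x, 0.5)   # with alpha = 0.5 each step halves the iterate
```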
Let f : C × C → ℝ. An equilibrium problem is the problem of finding z ∈ C such that

f(z, y) ≥ 0 for all y ∈ C,

whose solution set is denoted by EP(f). For solving an equilibrium problem, we assume that the function f satisfies the following conditions:
(A1) f(x, x) = 0 for all x ∈ C;

(A2) f is monotone, that is, f(x, y) + f(y, x) ≤ 0 for all x, y ∈ C;

(A3) for all x, y, z ∈ C, lim sup_{t↓0} f(tz + (1 − t)x, y) ≤ f(x, y);

(A4) for all x ∈ C, y ↦ f(x, y) is convex and lower semicontinuous.
The following lemma appears implicitly in Blum and Oettli [25] and is proved in detail by Aoyama et al. [26], while Lemma 4.10 is Lemma 2.12 of Combettes and Hirstoaga [27].

Lemma 4.9 [25, 26] Let f : C × C → ℝ be a function satisfying conditions (A1)-(A4), and let x ∈ ℋ and r > 0. Then there exists a unique z ∈ C such that

f(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C.
Lemma 4.10 [27]
Let f : C × C → ℝ be a function satisfying conditions (A1)-(A4). For r > 0, define T_r : ℋ → C by

T_r x = {z ∈ C : f(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ C}

for all x ∈ ℋ. Then the following hold:

(a) T_r is single-valued;

(b) T_r is firmly nonexpansive;

(c) Fix(T_r) = EP(f);

(d) EP(f) is closed and convex.
We call T_r the resolvent of f for r > 0. Using Lemmas 4.9 and 4.10, Takahashi et al. [15] established the lemma below.
Lemma 4.11 [15]
Let f : C × C → ℝ be a function satisfying conditions (A1)-(A4) and define a set-valued mapping A_f of ℋ into itself by

A_f x = {z ∈ ℋ : f(x, y) ≥ ⟨y − x, z⟩ for all y ∈ C} if x ∈ C, and A_f x = ∅ if x ∉ C.

Then the following hold:

(a) A_f is a maximal monotone operator with D(A_f) ⊂ C;

(b) A_f⁻¹0 = EP(f);

(c) T_r x = (I + rA_f)⁻¹x for all x ∈ ℋ and r > 0.
Theorem 4.12 Let C be a nonempty closed convex subset of ℋ and let , , be functions satisfying conditions (A1)-(A4) with . Suppose that , , and are sequences in with and for all . Let and be sequences in , and let be a bounded sequence in ℋ. For an arbitrary fixed , choose an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Proof The set-valued mappings associated with , , defined in Lemma 4.11 are maximal monotone operators with , and it follows from Lemmas 4.10 and 4.11 that and for any . Putting and in Theorem 4.5, the desired conclusion follows. □
Here, it is worth mentioning, just as the SFP, that the unique minimum norm solution can be obtained through our algorithm for each of the minimization problem and the equilibrium problem by taking in Theorems 4.8 and 4.12.
For a nonempty closed convex subset C of ℋ, its indicator function i_C defined by

i_C(x) = 0 if x ∈ C and i_C(x) = ∞ if x ∉ C

is a proper lower semicontinuous convex function and its subdifferential ∂i_C defined by

∂i_C(x) = {z ∈ ℋ : ⟨y − x, z⟩ ≤ 0 for all y ∈ C} for x ∈ C

is a maximal monotone operator, cf. Rockafellar [23]. As shown in Lin and Takahashi [28], the resolvent J_α of ∂i_C for α > 0 is the same as the metric projection P_C, and (∂i_C)⁻¹0 = C.
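Since the resolvent of ∂i_C is P_C, resolvent iterations for indicator functions reduce to projection methods. A minimal sketch, with sets of our own choosing and as a simplified special case rather than the exact scheme of Theorem 4.13, is von Neumann's alternating projections:

```python
import numpy as np

def proj_C1(x):
    # Metric projection onto C1 = [0, 2] x R.
    return np.array([np.clip(x[0], 0.0, 2.0), x[1]])

def proj_C2(x):
    # Metric projection onto C2 = R x [1, 3].
    return np.array([x[0], np.clip(x[1], 1.0, 3.0)])

x = np.array([5.0, -4.0])
for _ in range(10):
    x = proj_C2(proj_C1(x))   # alternate the two resolvents/projections
```

Here the iterates reach (2, 1), a point of C1 ∩ C2.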
Theorem 4.13 Let , , be two nonempty closed convex subsets of ℋ with . Suppose that , , and are sequences in with and for all . Let be a bounded sequence in ℋ. For an arbitrary fixed , choose an arbitrary and define
Then the sequence converges strongly to provided the following conditions are satisfied:
Proof Putting and in Theorem 4.5, the desired conclusion follows. □
References
Censor Y, Elfving T: A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018 10.1088/0266-5611/26/10/105018
Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Yao Y, Jigang W, Liou YC: Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012. 10.1155/2012/140679
Yao Y, Liou YC, Shahzad N: A strongly convergent method for the split feasibility problem. Abstr. Appl. Anal. 2012. 10.1155/2012/125046
Martinet B: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 1970, 4: 154–158.
Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
Güler O: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022
Boikanyo OA, Moroşanu G: Inexact Halpern-type proximal point algorithm. J. Glob. Optim. 2011, 51: 11–26. 10.1007/s10898-010-9616-7
Boikanyo OA, Moroşanu G: Four parameter proximal point algorithms. Nonlinear Anal. 2011, 74: 544–555. 10.1016/j.na.2010.09.008
Boikanyo OA, Moroşanu G: A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4: 635–641. 10.1007/s11590-010-0176-z
Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493
Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
Takahashi S, Takahashi W, Toyoda M: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147: 27–41. 10.1007/s10957-010-9713-2
Wang F, Cui H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54: 485–491. 10.1007/s10898-011-9772-4
Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66(2):240–256.
Yao Y, Noor MA: On convergence criteria of generalized proximal point algorithm. J. Comput. Appl. Math. 2008, 217: 46–55. 10.1016/j.cam.2007.06.013
Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z
Rockafellar RT: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 1970, 33: 209–216. 10.2140/pjm.1970.33.209
Takahashi W: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama; 2009.
Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
Aoyama K, Kimura T, Takahashi W: Maximal monotone operators and maximal monotone functions for equilibrium problems. J. Convex Anal. 2008, 15: 395–409.
Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.
Lin LJ, Takahashi W: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16: 429–453. 10.1007/s11117-012-0161-0
Acknowledgements
The work was supported by the National Science Council of Taiwan with contract No. NSC101-2221-E-020-031.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors contributed equally to this work. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Huang, YY., Hong, CC. Approximating common fixed points of averaged self-mappings with applications to the split feasibility problem and maximal monotone operators in Hilbert spaces. Fixed Point Theory Appl 2013, 190 (2013). https://doi.org/10.1186/1687-1812-2013-190