Strong convergence of relaxed hybrid steepest-descent methods for triple hierarchical constrained optimization
Fixed Point Theory and Applications volume 2012, Article number: 29 (2012)
Abstract
Up to now, a large number of practical problems, such as signal processing and network resource allocation, have been formulated as monotone variational inequalities over the fixed point set of a nonexpansive mapping, and iterative algorithms for solving these problems have been proposed. The purpose of this article is to investigate a monotone variational inequality with a variational inequality constraint over the fixed point set of one or finitely many nonexpansive mappings, which is called triple-hierarchical constrained optimization. Two relaxed hybrid steepest-descent algorithms for solving this triple-hierarchical constrained optimization problem are proposed, and their strong convergence is proven. Applications of these results to the constrained generalized pseudoinverse are included.
AMS Subject Classifications: 49J40; 65K05; 47H09.
1 Introduction
Let H be a real Hilbert space with inner product 〈·, ·〉 and norm ∥ · ∥, let C be a nonempty closed convex subset of H, and let R be the set of all real numbers. For a given nonlinear operator A : H → H, the classical variational inequality problem is to find a point x* ∈ C such that
〈Ax*, x − x*〉 ≥ 0 for all x ∈ C.    (1.1)
The set of solutions of problem (1.1) is denoted by VI(C, A). Variational inequalities were initially studied by Stampacchia [1] and have since been widely studied, since they arise in disciplines as diverse as partial differential equations, optimal control, optimization, mathematical programming, mechanics, and finance. On the other hand, a number of mathematical programs and iterative algorithms have been developed to solve complex real-world problems. In particular, monotone variational inequalities with a fixed point constraint [2–4] include such practical problems as signal recovery [3], beamforming [5], and power control [6], and many iterative algorithms for solving them have been presented.
The constraint set has been defined in [3, 5] as the intersection of finitely many closed convex subsets, C_0 and C_i (i = 1, 2, ..., m), of a real Hilbert space, and is represented as the fixed point set of the direct product mapping composed of the metric projections onto the C_i's. The case in which the intersection of the C_i's is empty has been considered in [2, 6]. When C_0 is the absolute set, whose condition must always be satisfied, the constraint set is defined as the subset of C_0 whose elements are closest to the C_i's (i = 1, 2, ..., m) in terms of the norm. This set is represented as the fixed point set of the mapping composed of the metric projections onto the C_i's [[2], Proposition 4.2]. Iterative algorithms have been presented in [2–4] for the convex optimization problem with a fixed point constraint, along with proofs that these algorithms converge strongly to the unique solution of problems with a strongly monotone operator. The strong monotonicity condition guarantees the uniqueness of the solution. A hierarchical fixed point problem, equivalent to the variational inequality for a monotone operator over the fixed point set, has been discussed in [7, 8], along with iterative algorithms for solving it. The solution of the problem considered in [7, 8] is not always unique, so there may be many solutions. In that case, a solution that makes practical systems and networks more stable and reliable must be found from among the candidate solutions. Hence, it is reasonable to identify the unique minimizer of an appropriate objective function over the hierarchical fixed point constraint. Very recently, related iterative methods, together with their convergence analysis, for solving hierarchical fixed point problems, hierarchical optimization problems, and hierarchical variational inequality problems have appeared in [9–16].
Let T : H → H be a self-mapping on H. We denote by Fix(T) the set of fixed points of T. A mapping T : H → H is called L-Lipschitz continuous if there exists a constant L ≥ 0 such that
∥Tx − Ty∥ ≤ L∥x − y∥ for all x, y ∈ H.
In particular, if L ∈ [0,1), T is called a contraction; if L = 1, T is called a nonexpansive mapping. A mapping A : H → H is called α-inverse strongly monotone if there exists α > 0 such that
〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥² for all x, y ∈ H.
Obviously, every inverse strongly monotone mapping is a monotone and Lipschitz continuous mapping; see, e.g., [17].
In 2001, Yamada [2] introduced a hybrid steepest-descent method for finding an element of VI(C, F). His idea is stated as follows. Assume that C is the fixed point set of a nonexpansive mapping T : H → H; that is,
C = Fix(T) := {x ∈ H : Tx = x}.
Suppose that F is η-strongly monotone and κ-Lipschitz continuous with constants η, κ > 0. Take a fixed number μ ∈ (0, 2η/κ²) and a sequence {λ_n} ⊂ (0,1) satisfying the conditions below:
(L1) limn→∞λ n = 0;
(L2) ∑_{n=1}^∞ λ_n = ∞;
(L3) lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1}² = 0.
Starting with an arbitrary initial guess x_0 ∈ H, one can generate a sequence {u_n} by the following algorithm:
u_{n+1} := Tu_n − λ_{n+1}μF(Tu_n), n ≥ 0.    (1.4)
Then, Yamada [2] proved that {u_n} converges strongly to the unique element of VI(C, F). In the case where C is expressed as the intersection of the fixed-point sets of N nonexpansive mappings T_i : H → H with N ≥ 1 an integer, Yamada [2] proposed another algorithm,
u_{n+1} := T_{[n+1]}u_n − λ_{n+1}μF(T_{[n+1]}u_n), n ≥ 0,    (1.5)
where T_{[k]} := T_{k mod N} for integer k ≥ 1, with the mod function taking values in the set {1, 2, ..., N} (i.e., if k = jN + q for some integers j ≥ 0 and 0 ≤ q < N, then T_{[k]} = T_N if q = 0 and T_{[k]} = T_q if 1 ≤ q < N), where μ ∈ (0, 2η/κ²), and where the sequence {λ_n} of parameters satisfies conditions (L1), (L2), and (L4),
(L4) ∑_{n=1}^∞ |λ_n − λ_{n+N}| is convergent.
Under these conditions, Yamada [2] proved the strong convergence of {u n } to the unique element of VI(C,F).
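To make the iteration concrete, here is a minimal numerical sketch of algorithm (1.4) in a finite-dimensional setting. All choices below (H = R³, T the metric projection onto the closed unit ball, F(x) = x − b, μ = 1, λ_n = 1/(n+1)) are our own illustrations and are not taken from [2] or [18]; they simply satisfy the standing assumptions (T nonexpansive, F 1-strongly monotone and 1-Lipschitz, μ ∈ (0, 2η/κ²)).

```python
import numpy as np

def T(x):
    """Metric projection onto the closed unit ball: a nonexpansive mapping."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def F(x, b):
    """F(x) = x - b is eta = 1 strongly monotone and kappa = 1 Lipschitz."""
    return x - b

def hybrid_steepest_descent(x0, b, mu=1.0, n_iter=500):
    """Yamada-type iteration u_{n+1} = T(u_n) - lambda_{n+1} * mu * F(T(u_n))."""
    u = x0.astype(float)
    for n in range(n_iter):
        lam = 1.0 / (n + 2)   # lambda_{n+1} = 1/(n+2): satisfies (L1), (L2), (L3)'
        Tu = T(u)
        u = Tu - lam * mu * F(Tu, b)
    return u

# The iterates approach the unique element of VI(C, F) with C = unit ball and
# F(x) = x - b, which in this toy case is simply the projection of b onto C.
print(hybrid_steepest_descent(np.zeros(3), b=np.array([2.0, 0.0, 0.0])))  # ~ [1, 0, 0]
```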
In 2003, Xu and Kim [18] continued the convergence study of the hybrid steepest-descent algorithms (1.4) and (1.5). The major contribution is that the strong convergence of the algorithms (1.4) and (1.5) holds with the condition (L3) replaced by the condition
(L3)' limn→∞λ n /λn+1= 1, or equivalently, limn→∞(λ n - λn+1)/λn+1= 0, and with condition (L4) replaced by the condition
(L4)' limn→∞λ n /λn+N= 1, or equivalently, limn→∞(λ n - λn+N)/λn+N= 0.
Theorem XK1 (see [[18], Theorem 3.1]). Assume that 0 < μ < 2η/κ². Assume also that the control conditions (L1), (L2), and (L3)' hold for {λ_n}. Then, the sequence {u_n} generated by algorithm (1.4) converges strongly to the unique element u* of VI(C, F).
Theorem XK2 (see [[18], Theorem 3.2]). Let μ ∈ (0, 2η/κ²) and let conditions (L1), (L2), and (L4)' be satisfied. Assume in addition that
Fix(T_NT_{N−1} ··· T_1) = Fix(T_1T_N ··· T_3T_2) = ··· = Fix(T_{N−1}T_{N−2} ··· T_1T_N).
Then, the sequence {u n } generated by algorithm (1.5) converges in norm to the unique element u* of VI(C,F).
Recall the variational inequality for a monotone operator A_1 : H → H over the fixed point set of a nonexpansive mapping T : H → H: find x* ∈ Fix(T) such that
〈A_1x*, x − x*〉 ≥ 0 for all x ∈ Fix(T),
where Fix(T) ≠ ∅; its solution set is denoted by VI(Fix(T), A_1). Very recently Iiduka [19] introduced the following monotone variational inequality with a variational inequality constraint over the fixed point set of a nonexpansive mapping:
Problem I (see [[19], Problem 3.1]). Assume that
(i) T : H → H is a nonexpansive mapping with Fix(T) ≠ ∅;
(ii) A_1 : H → H is α-inverse strongly monotone;
(iii) A_2 : H → H is β-strongly monotone and L-Lipschitz continuous, that is, there are constants β, L > 0 such that
〈A_2x − A_2y, x − y〉 ≥ β∥x − y∥² and ∥A_2x − A_2y∥ ≤ L∥x − y∥
for all x, y ∈ H;
(iv) VI(Fix(T), A_1) ≠ ∅.
Then the objective is to find x* ∈ VI(VI(Fix(T), A_1), A_2); that is, to find x* ∈ VI(Fix(T), A_1) such that
〈A_2x*, v − x*〉 ≥ 0 for all v ∈ VI(Fix(T), A_1).
Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, and hierarchical fixed point problems, it is referred to as a triple-hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and proposed an iterative algorithm for finding solutions of such a problem.
Algorithm I (see [[19], Algorithm 4.1]). Let T : H → H and A i : H → H (i = 1, 2) satisfy Assumptions (i)-(iv) in Problem I. The following steps are presented for solving Problem I.
Step 0. Take {α_n}_{n=0}^∞ ⊂ (0,1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], and μ > 0, choose x_0 ∈ H arbitrarily, and let n := 0.
Step 1. Given x_n ∈ H, compute x_{n+1} ∈ H as
y_n := T(x_n − λ_nA_1x_n),
x_{n+1} := y_n − μα_nA_2y_n.
Update n := n + 1 and go to Step 1.
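For illustration only, the following small sketch runs Algorithm I on toy data of our own choosing (not from [19]): H = R², T the projection onto the closed unit ball, A_1(x) = x − a (1-inverse strongly monotone), A_2(x) = x (1-strongly monotone and 1-Lipschitz), μ = 1, and α_n = λ_n = 1/(n+1). With these choices VI(Fix(T), A_1) = {P_C(a)}, so the unique solution of the corresponding Problem I is P_C(a).

```python
import numpy as np

def proj_ball(x):
    """Projection onto the closed unit ball; Fix(proj_ball) is the ball itself."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

a = np.array([3.0, 0.0])
A1 = lambda x: x - a          # 1-inverse strongly monotone, so lambda_n in (0, 2]
A2 = lambda x: x              # beta = L = 1, so mu must lie in (0, 2)
mu = 1.0

x = np.zeros(2)
for n in range(2000):
    alpha_n = 1.0 / (n + 1)
    lam_n = 1.0 / (n + 1)              # lambda_n <= alpha_n (condition (v))
    y = proj_ball(x - lam_n * A1(x))   # y_n = T(x_n - lambda_n A1 x_n)
    x = y - mu * alpha_n * A2(y)       # x_{n+1} = y_n - mu alpha_n A2 y_n

print(x)  # approaches P_C(a) = [1, 0]
```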
The convergence analysis of the proposed algorithm was also studied in [19]. The following strong convergence theorem is established for Algorithm I.
Theorem I (see [[19], Theorem 4.1]). Assume that the sequence {x_n} in Algorithm I is bounded. If μ ∈ (0, 2β/L²) is used and if {α_n}_{n=0}^∞ ⊂ (0,1] and {λ_n}_{n=0}^∞ ⊂ (0, 2α] satisfying (i) lim_{n→∞} α_n = 0, (ii) ∑_{n=0}^∞ α_n = ∞, (iii) ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞, (iv) ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞, and (v) λ_n ≤ α_n ∀n ≥ 0 are used, then the sequence {x_n} generated by Algorithm I satisfies the following properties.
(a) {y_n} is bounded;
(b) lim_{n→∞} ∥x_n − y_n∥ = 0 and lim_{n→∞} ∥x_n − Tx_n∥ = 0 hold;
(c) if ∥x_n − y_n∥ = o(λ_n), then {x_n} converges strongly to the unique solution of Problem I.
Motivated and inspired by the above research work, we continue the convergence study of Iiduka's relaxed hybrid steepest-descent Algorithm I. It is proven that, even without the boundedness assumption on {x_n}, the sequence {x_n} generated by Algorithm I converges strongly to the unique solution of Problem I.
On the other hand, we introduce the following monotone variational inequality with the variational inequality constraint over the intersection of the fixed point sets of N nonexpansive mappings T_i : H → H, with N ≥ 1 an integer.
Problem II. Assume that
(i) each T_i : H → H is a nonexpansive mapping with ⋂_{i=1}^N Fix(T_i) ≠ ∅;
(ii) A_1 : H → H is α-inverse strongly monotone;
(iii) A_2 : H → H is β-strongly monotone and L-Lipschitz continuous;
(iv) VI(⋂_{i=1}^N Fix(T_i), A_1) ≠ ∅.
Then the objective is to find x* ∈ VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2); that is, to find x* ∈ VI(⋂_{i=1}^N Fix(T_i), A_1) such that
〈A_2x*, v − x*〉 ≥ 0 for all v ∈ VI(⋂_{i=1}^N Fix(T_i), A_1).
Another algorithm is proposed for Problem II.
Algorithm II. Let T i : H → H (i = 1,2,..., N) and A i : H → H (i = 1,2) satisfy Assumptions (i)-(iv) in Problem II. The following steps are presented for solving Problem II.
Step 0. Take {α_n}_{n=0}^∞ ⊂ (0,1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], and μ > 0, choose x_0 ∈ H arbitrarily, and let n := 0.
Step 1. Given x_n ∈ H, compute x_{n+1} ∈ H as
y_n := T_{[n+1]}(x_n − λ_nA_1x_n),
x_{n+1} := y_n − μα_nA_2y_n,
where T_{[k]} := T_{k mod N} for all integers k ≥ 1.
Update n := n + 1 and go to Step 1.
In this article, suppose first that the following conditions hold:
(A1) limn→∞α n = 0;
(A2) ∑_{n=0}^∞ α_n = ∞;
(A3) lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞;
(A4) lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;
(A5) λ n ≤ α n for all n ≥ 0.
It is proven that under Conditions (A1)-(A5), the sequence {x_n} generated by Algorithm I converges strongly to the unique solution of Problem I.
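For concreteness, one admissible parameter choice satisfying (A1)-(A5) is α_n = λ_n = 1/(n+1) for n ≥ 0 (this is our own example, not one prescribed by the paper; if 2α < 1, rescale λ_n so that it stays in the range (0, 2α] required by the algorithm):

```latex
\begin{align*}
  \lim_{n\to\infty}\alpha_n &= \lim_{n\to\infty}\frac{1}{n+1} = 0, &
  \sum_{n=0}^{\infty}\alpha_n &= \sum_{n=0}^{\infty}\frac{1}{n+1} = \infty,\\
  \frac{\alpha_n-\alpha_{n+1}}{\alpha_{n+1}}
    &= \frac{\frac{1}{n+1}-\frac{1}{n+2}}{\frac{1}{n+2}}
     = \frac{1}{n+1}\;\longrightarrow\; 0, &
  \lambda_n &\le \alpha_n \quad\text{for all } n\ge 0.
\end{align*}
```

The same ratio computation with λ_n in place of α_n verifies (A4), so all of (A1)-(A5) hold.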
Second, assume that the following conditions hold:
(B1) lim_{n→∞} α_n = 0;
(B2) ∑_{n=0}^∞ α_n = ∞;
(B3) lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;
(B4) lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;
(B5) λ n ≤ α n for all n ≥ 0.
It is proven that under Conditions (B1)-(B5), the sequence {x_n} generated by Algorithm II converges strongly to the unique solution of Problem II. It is worth pointing out that in our results no boundedness assumption is imposed on the sequences {x_n} and {y_n} generated by Algorithm I or II.
In addition, if N = 1, then Algorithm II reduces to the above Algorithm I. Hence, Algorithm II is more general and more flexible than Algorithm I. Obviously, our problem of finding the unique element of VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2) is more general and more subtle than the problem of finding the unique element of VI(VI(Fix(T), A_1), A_2). Beyond question, our results modify, supplement, extend, and develop the above Theorem I.
The rest of the article is organized as follows. After some preliminaries in Section 2, we introduce two relaxed hybrid steepest-descent algorithms for solving Problems I and II in Section 3, respectively. Strong convergence for them is proven. Applications of these results to constrained generalized pseudoinverse are given in the last section, Section 4.
2 Preliminaries
Let H be a real Hilbert space with an inner product 〈·,·〉 and its induced norm ∥ · ∥. Throughout this article, we write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x, and x_n → x to indicate that {x_n} converges strongly to x. A function f : H → R is said to be convex iff, for any x, y ∈ H and for any λ ∈ [0,1], f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y). It is said to be strongly convex iff there exists α > 0 such that, for all x, y ∈ H and for all λ ∈ [0,1],
f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) − (α/2)λ(1 − λ)∥x − y∥².
A : H → H is referred to as a strongly monotone operator with α > 0 [[20], Definition 25.2(iii)] iff 〈Ax − Ay, x − y〉 ≥ α∥x − y∥² for all x, y ∈ H. It is said to be inverse-strongly monotone with α > 0 (α-inverse-strongly monotone) [[17], Definition, p. 200] (see [[21], Definition 2.3.9(e)] for the definition of this operator, called a co-coercive operator, on finite dimensional spaces) iff 〈Ax − Ay, x − y〉 ≥ α∥Ax − Ay∥² for all x, y ∈ H.
A : H → H is said to be hemicontinuous [[22], p. 204], [[20], Definition 27.14] iff, for any x,y ∈ H, the mapping g : [0,1] → H, defined by g(t) := A(tx + (1 - t)y) (t ∈ [0,1]), is continuous, where H has a weak topology. A : H → H is referred to as a Lipschitz continuous (L-Lipschitz continuous) operator [[23], Sect. 1.1], [[20], Definition 27.14] iff L > 0 exists such that ∥Ax - Ay∥ ≤ L∥x - y∥ for all x,y ∈ H. The fixed point set of the mapping A : H → H is denoted by Fix(A) := {x ∈ H : Ax = x}.
Let f : H → R be a Fréchet differentiable function. It is known that f is convex (resp. strongly convex) iff ∇f : H → H is monotone (resp. strongly monotone) [[20], Proposition 25.10], [[24], Sect. IV, Theorem 4.1.4]. If f : H → R is convex and if ∇f : H → H is 1/L-Lipschitz continuous, then ∇f is L-inverse-strongly monotone [[25], Theorem 5].
The metric projection onto the nonempty, closed and convex set C (⊂ H), denoted by P C , is defined by, for all x ∈ H, P C x ∈ C and ∥x - P C x∥ = infy∈C∥x - y∥.
The variational inequality [1, 26] for a monotone operator A : H → H over a nonempty, closed, and convex set C (⊂ H) is to find a point in
VI(C, A) := {x* ∈ C : 〈Ax*, y − x*〉 ≥ 0 for all y ∈ C}.
Some properties of the solution set of the monotone variational inequality are as follows:
Proposition 2.1. Let C (⊂ H) be nonempty, closed and convex, A : H → H be monotone and hemicontinuous, and f : H → R be convex and Fréchet differentiable. Then,
(i) [[22], Lemma 7.1.7] VI(C, A) = {x* ∈ C : 〈Ay, y − x*〉 ≥ 0, ∀y ∈ C}.
(ii) [[20], Theorem 25.C] VI(C, A) ≠ ∅ when C is bounded.
(iii) [[27], Lemma 2.24] VI(C, A) = Fix(P_C(I − λA)) for all λ > 0, where I stands for the identity mapping on H.
(iv) [[27], Theorem 2.31] VI(C, A) consists of one point if A is strongly monotone and Lipschitz continuous.
(v) [[26], Chap. II, Proposition 2.1 (2.1) and (2.2)] VI(C, ∇f) = Argmin_{x∈C} f(x) := {x* ∈ C : f(x*) = min_{x∈C} f(x)}.
On the other hand, the mapping T : H → H is referred to as a nonexpansive mapping [22, 23, 28–30] iff, ∥Tx - Ty∥ ≤ ∥x - y∥ for all x,y ∈ H. The metric projection P C onto a given nonempty, closed, and convex set C (⊂ H), satisfies the nonexpansivity with Fix(P C ) = C [[22], Theorem 3.1.4(i)], [[29], p. 371], [[30], Theorem 2.4-3]. The fixed point set of a nonexpansive mapping has the following properties:
Proposition 2.2. Let C (⊂ H) be nonempty, closed, and convex, and T : C → C be nonexpansive. Then,
(i) [[23], Proposition 5.3] Fix(T) is closed and convex;
(ii) [[23], Theorem 5.1] Fix(T) ≠ ∅ when C is bounded.
The following proposition provides an example of a nonexpansive mapping in which the fixed point set is equal to the solution set of the monotone variational inequality.
Proposition 2.3 (see [[19], Proposition 2.3]). Let C (⊂ H) be nonempty, closed, and convex, and A : H → H be α-inverse-strongly monotone. Then, for any given λ ∈ [0, 2α], S_λ : H → H defined by
S_λx := P_C(x − λAx) for all x ∈ H
satisfies the nonexpansivity and Fix(S λ ) = VI(C,A).
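The following toy computation (our own illustration, not from the paper) checks Proposition 2.3 numerically with C the closed unit ball in R², A(x) = x − a (1-inverse strongly monotone), and λ = 1/2 ∈ (0, 2α]; here VI(C, A) = {P_C(a)}, and S_λ = P_C(I − λA) has exactly this point as its fixed point.

```python
import numpy as np

def P_C(x):
    """Metric projection onto the closed unit ball C."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

a = np.array([2.0, 1.0])
A = lambda x: x - a                  # alpha = 1 inverse strongly monotone
lam = 0.5                            # lambda in (0, 2*alpha]
S = lambda x: P_C(x - lam * A(x))    # S_lambda = P_C(I - lambda A), nonexpansive

x = np.array([5.0, -3.0])
for _ in range(200):                 # Picard iteration; here S is even a contraction
    x = S(x)

print(x, P_C(a))                        # both approximately a/||a||
print(np.allclose(S(P_C(a)), P_C(a)))   # P_C(a) is a fixed point of S -> True
```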
The following proposition is needed to prove the main theorems in this article.
Proposition 2.4 (see [[2], Lemma 3.1]). Let A : H → H be β-strongly monotone and L-Lipschitz continuous, let T : H → H be a nonexpansive mapping and let μ ∈ (0, 2β/L²). For λ ∈ [0,1], define T^λ : H → H by T^λx := Tx − λμATx for all x ∈ H. Then, for all x, y ∈ H,
∥T^λx − T^λy∥ ≤ (1 − λτ)∥x − y∥
holds, where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1].
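As a quick sanity check of this estimate (our own computation, using the expression for τ as stated above): take A = I, so that β = L = 1, and let μ ∈ (0, 1]. Then

```latex
\[
  \tau = 1-\sqrt{1-\mu(2\beta-\mu L^{2})} = 1-\sqrt{(1-\mu)^{2}} = \mu,
  \qquad
  \|T^{\lambda}x-T^{\lambda}y\| = (1-\lambda\mu)\,\|Tx-Ty\| \le (1-\lambda\tau)\,\|x-y\|,
\]
```

which is exactly the asserted contraction property of T^λ.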
The following lemmas will be used for the proof of our main results in this article.
Lemma 2.1 (see [31]). Let {a_n} be a sequence of nonnegative real numbers satisfying the property
a_{n+1} ≤ (1 − s_n)a_n + s_nt_n + r_n, n ≥ 0,
where {s_n} ⊂ (0,1], {t_n}, and {r_n} are such that
(i) ∑_{n=0}^∞ s_n = ∞;
(ii) either lim sup_{n→∞} t_n ≤ 0 or ∑_{n=0}^∞ |s_nt_n| < ∞;
(iii) r_n ≥ 0 for all n ≥ 0 and ∑_{n=0}^∞ r_n < ∞.
Then lim_{n→∞} a_n = 0.
Lemma 2.2 (see [[23], Demiclosedness Principle]). Assume that T is a nonexpansive self-mapping of a closed convex subset C of a Hilbert space H. If T has a fixed point, then I - T is demiclosed. That is, whenever {x n } is a sequence in C weakly converging to some x ∈ C and the sequence {(I - T)x n } strongly converges to some y, it follows that (I - T)x = y. Here I is the identity operator of H.
The following lemma is an immediate consequence of an inner product.
Lemma 2.3. In a real Hilbert space H, there holds the inequality
∥x + y∥² ≤ ∥x∥² + 2〈y, x + y〉 for all x, y ∈ H.
Lemma 2.4. Let {a_n} be a bounded sequence of nonnegative real numbers and {b_n} be a sequence of real numbers such that lim sup_{n→∞} b_n ≤ 0. Then, lim sup_{n→∞} a_nb_n ≤ 0.
Proof. Since {a_n} is a bounded sequence of nonnegative real numbers, there is a constant a > 0 such that 0 ≤ a_n ≤ a for all n ≥ 0. Note that lim sup_{n→∞} b_n ≤ 0. Hence, given ε > 0 arbitrarily, there exists an integer n_0 ≥ 1 such that b_n < ε for all n ≥ n_0. This implies that
a_nb_n ≤ a_nε ≤ aε for all n ≥ n_0.
Therefore, we have
lim sup_{n→∞} a_nb_n ≤ aε.
From the arbitrariness of ε > 0, it follows that lim supn→ ∞a n b n ≤ 0.
3 Relaxed hybrid steepest-descent algorithms
In this section, T : H → H and A i : H → H (i = 1, 2) are assumed to satisfy Assumptions (i)-(iv) in Problem I. First the following algorithm is presented for Problem I.
Algorithm 3.1.
Step 0. Take {α_n}_{n=0}^∞ ⊂ (0,1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], and μ ∈ (0, 2β/L²), choose x_0 ∈ H arbitrarily, and let n := 0.
Step 1. Given x_n ∈ H, compute x_{n+1} ∈ H as
y_n := T(x_n − λ_nA_1x_n),
x_{n+1} := y_n − μα_nA_2y_n.
Update n := n + 1 and go to Step 1.
The following convergence analysis is presented for Algorithm 3.1:
Theorem 3.1. Let {α_n}_{n=0}^∞ ⊂ (0,1] and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that
(i) lim_{n→∞} α_n = 0;
(ii) ∑_{n=0}^∞ α_n = ∞;
(iii) lim_{n→∞}(α_n − α_{n+1})/α_{n+1} = 0 or ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞;
(iv) lim_{n→∞}(λ_n − λ_{n+1})/λ_{n+1} = 0 or ∑_{n=0}^∞ |λ_{n+1} − λ_n| < ∞;
(v) λ_n ≤ α_n for all n ≥ 0.
Then the sequence {x_n} generated by Algorithm 3.1 satisfies the following properties:
(a) {x_n} is bounded;
(b) lim_{n→∞} ∥x_n − y_n∥ = 0 and lim_{n→∞} ∥x_n − Tx_n∥ = 0 hold;
(c) {x_n} converges strongly to the unique solution of Problem I provided ∥x_n − y_n∥ = o(λ_n).
Proof. Let {x*} = VI(VI(Fix(T), A_1), A_2). Assumption (iii) in Problem I guarantees, via Proposition 2.1 (iv), that this set indeed consists of exactly one point.
Putting z_n = x_n − λ_nA_1x_n for all n ≥ 0, we have y_n = Tz_n and x_{n+1} = Tz_n − μα_nA_2(Tz_n).
We divide the rest of the proof into several steps.
Step 1. {x_n} is bounded. Indeed, since A_1 is α-inverse strongly monotone and λ_n ∈ (0, 2α] for all n ≥ 0, we have
Utilizing Proposition 2.4 and Condition (v) we have (note that )
where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1]. By induction, it is then easy to see that {x_n} is bounded. Assumption (ii) in Problem I guarantees that A_1 is 1/α-Lipschitz continuous; that is,
∥A_1x − A_1y∥ ≤ (1/α)∥x − y∥ for all x, y ∈ H.
Thus, the boundedness of {x_n} ensures the boundedness of {A_1x_n}. From y_n = T(x_n − λ_nA_1x_n) and the nonexpansivity of T, it follows that {y_n} is bounded. Since A_2 is L-Lipschitz continuous, {A_2y_n} is also bounded.
Step 2. lim_{n→∞}∥x_n − y_n∥ = lim_{n→∞}∥x_n − Tx_n∥ = 0. Indeed, utilizing Proposition 2.4, we obtain from the α-inverse strong monotonicity of A_1 that
Since both {A_1x_n} and {A_2y_n} are bounded, from Lemma 2.1 and Conditions (iii), (iv) it follows that
lim_{n→∞}∥x_{n+1} − x_n∥ = 0.    (3.2)
In the meantime, from ∥x_{n+1} − y_n∥ = α_nμ∥A_2y_n∥ and Condition (i), we get lim_{n→∞}∥x_{n+1} − y_n∥ = 0. Since ∥x_n − y_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − y_n∥,
lim_{n→∞}∥x_n − y_n∥ = 0    (3.3)
is obtained from (3.2). Moreover, the nonexpansivity of T guarantees that
∥y_n − Tx_n∥ = ∥T(x_n − λ_nA_1x_n) − Tx_n∥ ≤ λ_n∥A_1x_n∥ ≤ α_n∥A_1x_n∥.
Hence, Conditions (i) and (v) lead to lim_{n→∞}∥y_n − Tx_n∥ = 0. Therefore,
lim_{n→∞}∥x_n − Tx_n∥ = 0    (3.4)
is obtained from (3.3).
Step 3. lim sup_{n→∞}〈A_1x*, x* − x_n〉 ≤ 0. Indeed, choose a subsequence {x_{n_k}} of {x_n} such that
lim sup_{n→∞}〈A_1x*, x* − x_n〉 = lim_{k→∞}〈A_1x*, x* − x_{n_k}〉.
The boundedness of {x_{n_k}} implies the existence of a subsequence of {x_{n_k}} and a point x̂ ∈ H such that this subsequence converges weakly to x̂. We may assume without loss of generality that x_{n_k} ⇀ x̂.
First, we can readily see that x̂ ∈ Fix(T). As a matter of fact, utilizing Lemma 2.2 we deduce immediately from (3.4) and x_{n_k} ⇀ x̂ that x̂ ∈ Fix(T). From x* ∈ VI(Fix(T), A_1), we derive
lim sup_{n→∞}〈A_1x*, x* − x_n〉 = 〈A_1x*, x* − x̂〉 ≤ 0.
Step 4. lim sup_{n→∞}〈A_2x*, x* − x_n〉 ≤ 0. Indeed, choose a subsequence {x_{n_k}} of {x_n} such that
lim sup_{n→∞}〈A_2x*, x* − x_n〉 = lim_{k→∞}〈A_2x*, x* − x_{n_k}〉.
The boundedness of {x_{n_k}} implies that there is a subsequence of {x_{n_k}} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that x_{n_k} ⇀ x̄. Utilizing Lemma 2.2 we conclude immediately from (3.4) and x_{n_k} ⇀ x̄ that x̄ ∈ Fix(T).
Let y ∈ Fix(T) be fixed arbitrarily. Then, in terms of Lemma 2.3, we conclude from the nonexpansivity of T and monotonicity of A1 that for all n ≥ 0,
which implies that for all n ≥ 0,
where M_0 := sup{∥x_n − y∥ + ∥y_n − y∥ + ∥A_1x_n∥² : n ≥ 0} < ∞. From ∥x_n − y_n∥ = o(λ_n) and Conditions (i) and (v), for any ε > 0, there exists an integer m_0 ≥ 0 such that M_0(∥x_n − y_n∥/λ_n + λ_n) ≤ ε for all n ≥ m_0. Hence, 0 ≤ ε + 2〈A_1y, y − x_n〉 for all n ≥ m_0. Putting n := n_k, we derive, as k → ∞, 0 ≤ ε + 2〈A_1y, y − x̄〉 from x_{n_k} ⇀ x̄. Since ε > 0 is arbitrary, it is clear that 〈A_1y, y − x̄〉 ≥ 0 for all y ∈ Fix(T). Accordingly, utilizing Proposition 2.1 (i) we deduce from the α-inverse strong monotonicity of A_1 that x̄ ∈ VI(Fix(T), A_1). Therefore, from {x*} = VI(VI(Fix(T), A_1), A_2), we have
lim sup_{n→∞}〈A_2x*, x* − x_n〉 = 〈A_2x*, x* − x̄〉 ≤ 0.
Step 5. limn→∞∥x n - x*∥ = 0. Indeed, observe first that for all n ≥ 0,
Utilizing Lemma 2.3 and Proposition 2.4, we deduce from Inequality (3.6) that for all n ≥ 0,
It is easy to see that both and are bounded and nonnegative sequences. Since , λ n ≤ α n → 0 (n → ∞), lim supn→∞ 〈A1x*,x* - x n ) ≤ 0 and lim supn→∞〈A2x*,x* - xn+1〉 ≤ 0, we conclude that and
(according to Lemma 2.4.) Therefore, utilizing Lemma 2.1 we have
This completes the proof.
On the other hand, T i : H → H (i = 1,2,... ,N) and A i : H → H (i = 1,2) are assumed to satisfy Assumptions (i)-(iv) in Problem II. Then the following algorithm is presented for Problem II.
Algorithm 3.2.
Step 0. Take {α_n}_{n=0}^∞ ⊂ (0,1], {λ_n}_{n=0}^∞ ⊂ (0, 2α], and μ ∈ (0, 2β/L²), choose x_0 ∈ H arbitrarily, and let n := 0.
Step 1. Given x_n ∈ H, compute x_{n+1} ∈ H as
y_n := T_{[n+1]}(x_n − λ_nA_1x_n),
x_{n+1} := y_n − μα_nA_2y_n,
where T_{[k]} := T_{k mod N} for all integers k ≥ 1.
Update n := n + 1 and go to Step 1.
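The next sketch (our own toy instance, not from the paper) shows how the cyclic control T_{[n+1]} is realized in practice for N = 2, with T_1 and T_2 the metric projections onto the half-spaces {x : x_1 ≤ 1} and {x : x_2 ≤ 1}, A_1(x) = x − a (1-inverse strongly monotone), A_2(x) = x, μ = 1, and α_n = λ_n = 1/(n+1). Since projections are firmly nonexpansive, Assumption (3.9) holds automatically (cf. Remark 3.2 below), and the unique solution of the resulting toy Problem II is the projection of a onto Fix(T_1) ∩ Fix(T_2).

```python
import numpy as np

def T1(x):
    """Projection onto the half-space {x : x[0] <= 1}."""
    y = x.copy()
    y[0] = min(y[0], 1.0)
    return y

def T2(x):
    """Projection onto the half-space {x : x[1] <= 1}."""
    y = x.copy()
    y[1] = min(y[1], 1.0)
    return y

T = [T1, T2]              # T[n % N] plays the role of T_{[n+1]}
N = len(T)

a = np.array([3.0, 2.0])
A1 = lambda x: x - a      # 1-inverse strongly monotone, so lambda_n in (0, 2]
A2 = lambda x: x          # beta = L = 1, so mu must lie in (0, 2)
mu = 1.0

x = np.zeros(2)
for n in range(4000):
    alpha_n = 1.0 / (n + 1)
    lam_n = 1.0 / (n + 1)                  # lambda_n <= alpha_n (condition (v))
    y = T[n % N](x - lam_n * A1(x))        # y_n = T_[n+1](x_n - lambda_n A1 x_n)
    x = y - mu * alpha_n * A2(y)           # x_{n+1} = y_n - mu alpha_n A2 y_n

print(x)  # approaches the projection of a onto the intersection, i.e. [1, 1]
```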
The following convergence analysis is presented for Algorithm 3.2:
Theorem 3.2. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0,1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that
(i) lim_{n→∞} α_n = 0;
(ii) ∑_{n=0}^∞ α_n = ∞;
(iii) lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;
(iv) lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;
(v) λ_n ≤ α_n for all n ≥ 0.
Assume in addition that
Fix(T_NT_{N−1} ··· T_1) = Fix(T_1T_N ··· T_3T_2) = ··· = Fix(T_{N−1}T_{N−2} ··· T_1T_N).    (3.9)
Then the sequence {x_n} generated by Algorithm 3.2 satisfies the following properties:
(a) {x_n} is bounded;
(b) lim_{n→∞} ∥x_{n+N} − x_n∥ = 0 and lim_{n→∞} ∥x_n − T_{[n+N]} ··· T_{[n+1]}x_n∥ = 0 hold;
(c) {x_n} converges strongly to the unique solution of Problem II provided ∥x_n − y_n∥ = o(λ_n).
Proof. Let {x*} = VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2). Assumption (iii) in Problem II guarantees, via Proposition 2.1 (iv), that this set indeed consists of exactly one point.
Putting z_n = x_n − λ_nA_1x_n for all n ≥ 0, we have y_n = T_{[n+1]}z_n and x_{n+1} = T_{[n+1]}z_n − μα_nA_2(T_{[n+1]}z_n).
We divide the rest of the proof into several steps.
Step 1. {x_n} is bounded. Indeed, since A_1 is α-inverse strongly monotone and λ_n ∈ (0, 2α] for all n ≥ 0, we have
Utilizing Proposition 2.4 and Condition (v) we have (note that , for all n ≥ 0)
where τ := 1 − √(1 − μ(2β − μL²)) ∈ (0, 1]. From this, we get by induction that {x_n} is bounded. Assumption (ii) in Problem II guarantees that A_1 is 1/α-Lipschitz continuous; that is,
∥A_1x − A_1y∥ ≤ (1/α)∥x − y∥ for all x, y ∈ H.
Thus, the boundedness of {x_n} ensures the boundedness of {A_1x_n}. From y_n = T_{[n+1]}(x_n − λ_nA_1x_n) and the nonexpansivity of T_{[n+1]}, it follows that {y_n} is bounded. Since A_2 is L-Lipschitz continuous, {A_2y_n} is also bounded.
Step 2. lim_{n→∞}∥x_{n+N} − x_n∥ = lim_{n→∞}∥x_n − T_{[n+N]} ··· T_{[n+1]}x_n∥ = 0. Indeed, from the nonexpansivity of each T_i (i = 1, 2, ..., N), Proposition 2.3, and the condition λ_n ≤ 2α (∀n ≥ 0), we conclude that for all n ≥ 0,
where M1 := sup{∥A1x n ∥ : n ≥ 0} < ∞. From Proposition 2.4, it is found that
where M_2 := sup{∥A_2y_n∥ : n ≥ 0} < ∞. Utilizing Lemma 2.1 we deduce from Conditions (iii), (iv) that
lim_{n→∞}∥x_{n+N} − x_n∥ = 0.
From ∥xn+1-y n ∥ = μα n ∥A2y n ∥ ≤ μM2α n and Condition (i), we get limn→∞∥xn+1-y n ∥ = 0. Now we observe that the following relation holds:
Since ∥xn+1- y n ∥ → 0 and λ n → 0 as n → ∞, from the nonexpansivity of each T i (i = 1,2,..., N) and boundedness of {A1x n } it follows that as n → ∞ we have
Hence from (3.10) and (3.11) it follows that
Note that
That is,
Step 3. lim sup_{n→∞}〈A_1x*, x* − x_n〉 ≤ 0. Indeed, choose a subsequence {x_{n_j}} of {x_n} such that
lim sup_{n→∞}〈A_1x*, x* − x_n〉 = lim_{j→∞}〈A_1x*, x* − x_{n_j}〉.
The boundedness of {x_{n_j}} implies the existence of a subsequence of {x_{n_j}} and a point x̂ ∈ H such that this subsequence converges weakly to x̂. We may assume without loss of generality that x_{n_j} ⇀ x̂.
First, we can readily see that . As a matter of fact, since the pool of mappings {T i : 1 ≤ i ≤ N} is finite, we may further assume (passing to a further subsequence if necessary) that, for some integer k ∈ {1,2,... ,N},
Then, it follows from (3.12) that
Hence, by Lemma 2.2, we conclude that
Together with Assumption (3.9), this implies that x̂ ∈ ⋂_{i=1}^N Fix(T_i). Now, since x* ∈ VI(⋂_{i=1}^N Fix(T_i), A_1) and x̂ ∈ ⋂_{i=1}^N Fix(T_i),
we obtain
lim sup_{n→∞}〈A_1x*, x* − x_n〉 = lim_{j→∞}〈A_1x*, x* − x_{n_j}〉 = 〈A_1x*, x* − x̂〉 ≤ 0.
Step 4. lim sup_{n→∞}〈A_2x*, x* − x_n〉 ≤ 0. Indeed, choose a subsequence {x_{n_k}} of {x_n} such that
lim sup_{n→∞}〈A_2x*, x* − x_n〉 = lim_{k→∞}〈A_2x*, x* − x_{n_k}〉.
The boundedness of {x_{n_k}} implies that there is a subsequence of {x_{n_k}} which converges weakly to a point x̄ ∈ H. Without loss of generality, we may assume that x_{n_k} ⇀ x̄. Repeating the same argument as in the proof of x̂ ∈ ⋂_{i=1}^N Fix(T_i), we have x̄ ∈ ⋂_{i=1}^N Fix(T_i).
Let y ∈ ⋂_{i=1}^N Fix(T_i) be fixed arbitrarily. Then, it follows from the nonexpansivity of each T_i (i = 1, 2, ..., N) and the monotonicity of A_1 that for all n ≥ 0,
which implies that for all n ≥ 0,
where M_3 := sup{∥x_n − y∥ + ∥y_n − y∥ : n ≥ 0} < ∞. From ∥x_n − y_n∥ = o(λ_n) and Conditions (i) and (v), for any ε > 0, there exists an integer m_0 > 0 such that M_3∥x_n − y_n∥/λ_n + λ_nM_1² ≤ ε for all n ≥ m_0. Hence, 0 ≤ ε + 2〈A_1y, y − x_n〉 for all n ≥ m_0. Putting n := n_k, we derive, as k → ∞, 0 ≤ ε + 2〈A_1y, y − x̄〉 from x_{n_k} ⇀ x̄. Since ε > 0 is arbitrary, it is clear that 〈A_1y, y − x̄〉 ≥ 0 for all y ∈ ⋂_{i=1}^N Fix(T_i). Accordingly, utilizing Proposition 2.1 (i) we deduce from the α-inverse strong monotonicity of A_1 that x̄ ∈ VI(⋂_{i=1}^N Fix(T_i), A_1). Therefore, from {x*} = VI(VI(⋂_{i=1}^N Fix(T_i), A_1), A_2), we have
lim sup_{n→∞}〈A_2x*, x* − x_n〉 = 〈A_2x*, x* − x̄〉 ≤ 0.
Step 5. lim_{n→∞}∥x_n − x*∥ = 0. Indeed, repeating the same argument as in Step 5 of the proof of Theorem 3.1, from (3.14) we can derive
lim_{n→∞}∥x_n − x*∥ = 0.
This completes the proof.
Remark 3.1. If we set N = 1 in Theorem 3.2, then the limit lim_{n→∞}∥x_{n+N} − x_n∥ = 0 reduces to lim_{n→∞}∥x_{n+1} − x_n∥ = 0. In this case, we have
∥x_n − y_n∥ ≤ ∥x_n − x_{n+1}∥ + ∥x_{n+1} − y_n∥ = ∥x_n − x_{n+1}∥ + μα_n∥A_2y_n∥ → 0 as n → ∞;
that is, limn→∞∥x n -y n ∥ = 0.
Remark 3.2. Recall that a self-mapping T of a nonempty closed convex subset K of a real Hilbert space H is called attracting nonexpansive [32, 33] if T is nonexpansive and if, for x, p ∈ K with x ∉ Fix(T) and p ∈ Fix(T),
∥Tx − p∥ < ∥x − p∥.
Recall also that T is firmly nonexpansive [32, 33] if
∥Tx − Ty∥² ≤ 〈x − y, Tx − Ty〉 for all x, y ∈ K.
It is known that Assumption (3.9) in Theorem 3.2 is automatically satisfied if each T_i is attracting nonexpansive. Since a projection is firmly nonexpansive, and every firmly nonexpansive mapping is attracting nonexpansive, we have the following consequence of Theorem 3.2.
Corollary 3.1. Let μ ∈ (0, 2β/L²), {α_n}_{n=0}^∞ ⊂ (0,1], and {λ_n}_{n=0}^∞ ⊂ (0, 2α] be such that
(i) lim_{n→∞} α_n = 0;
(ii) ∑_{n=0}^∞ α_n = ∞;
(iii) lim_{n→∞}(α_n − α_{n+N})/α_{n+N} = 0 or ∑_{n=0}^∞ |α_{n+N} − α_n| < ∞;
(iv) lim_{n→∞}(λ_n − λ_{n+N})/λ_{n+N} = 0 or ∑_{n=0}^∞ |λ_{n+N} − λ_n| < ∞;
(v) λ_n ≤ α_n for all n ≥ 0.
Take x_0 ∈ H arbitrarily and let the sequence {x_n} be generated by the iterative algorithm
where
and A_1 is the same as in Problem I. Then the sequence {x_n} satisfies the following properties:
(a) {x_n} is bounded;
(b) lim_{n→∞} ∥x_{n+N} − x_n∥ = 0 and lim_{n→∞} ∥x_n − P_{[n+N]} ··· P_{[n+1]}x_n∥ = 0 hold;
(c) {x_n} converges strongly to the unique element of VI(VI(⋂_{i=1}^N Fix(P_i), A_1), A_2) provided ∥x_n − y_n∥ = o(λ_n).
Proof. In Theorem 3.2, putting T i = P i (i = 1, 2,..., N), we have
It is easy to see that Assumption (3.9) is automatically satisfied and that
Therefore, in terms of Theorem 3.2 we obtain the desired result.
4 Applications to constrained pseudoinverse
Let K be a nonempty closed convex subset of a real Hilbert space H. Let A be a bounded linear operator on H. Given an element b ∈ H, consider the minimization problem
min_{x∈K} (1/2)∥Ax − b∥².
Let S_b denote the solution set. Then S_b is closed and convex. It is known that S_b is nonempty if and only if
P_{cl A(K)}(b) ∈ A(K),
where cl A(K) denotes the closure of A(K).
In this case, S_b has a unique element with minimum norm; that is, there exists a unique point x† ∈ S_b satisfying
∥x†∥ = min{∥x∥ : x ∈ S_b}.
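The remainder of Section 4 is not reproduced above, so the following sketch only illustrates, under our own specialization (which may differ in details from the paper's), how Algorithm 3.1 can be applied in this setting: take T = P_K, A_1 = ∇f with f(x) = (1/2)∥Ax − b∥², and A_2 = I. Then Fix(T) = K, A_1 is (1/∥A∥²)-inverse strongly monotone by the fact recalled in Section 2, VI(K, A_1) = S_b by Proposition 2.1 (v), and the unique solution of the resulting Problem I is the minimum-norm element x† of S_b.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))              # bounded linear operator R^5 -> R^3
b = rng.standard_normal(3)

P_K = lambda x: np.maximum(x, 0.0)           # T = P_K, K = nonnegative orthant
grad_f = lambda x: A.T @ (A @ x - b)         # A1 = grad f, Lipschitz constant ||A||^2
alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # inverse-strong-monotonicity constant of A1
A2 = lambda x: x                             # beta = L = 1, so take mu in (0, 2)
mu = 1.0

x = np.zeros(5)
for n in range(20000):
    alpha_n = 1.0 / (n + 1)
    lam_n = min(alpha_n, 2.0 * alpha)        # keep lambda_n in (0, 2*alpha] and <= alpha_n
    y = P_K(x - lam_n * grad_f(x))           # y_n = T(x_n - lambda_n A1 x_n)
    x = y - mu * alpha_n * A2(y)             # x_{n+1} = y_n - mu alpha_n A2 y_n

print(x)  # approximates the minimum-norm element of S_b (constrained pseudoinverse of b)
```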