In this section, we modify algorithms (1.5) and (1.6) by removing the projection and construct two algorithms for finding the minimum norm element of Γ.
Let S be a nonexpansive mapping and A be an α-inverse strongly monotone mapping. Let F be a bifunction which satisfies conditions (H1)-(H4). Let r and μ be two constants such that r > 0 and 0 < μ < 2α. In order to find a solution of the minimization problem (1.1), we construct the following implicit algorithm
(3.1)
We will show that the net defined by (3.1) converges to a solution of the minimization problem (1.1). In fact, in this paper we study the following more general algorithm: taking a ρ-contraction, for each , let be the net defined by
(3.2)
It is clear that if , then (3.2) reduces to (3.1). Next, we show that (3.2) is well defined. From Lemma 2.1, we know that . We define a mapping . From Lemma 2.2, for , the mapping is nonexpansive. Also, since the mappings S and are nonexpansive, we have
This indicates that the mapping is a contraction. By the Banach contraction principle, it has a unique fixed point in C. Hence, (3.2) is well defined.
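The well-definedness argument is the Banach contraction principle: averaging a ρ-contraction with the identity using weight t yields a map with Lipschitz constant 1 - t(1-ρ) < 1, and composing with a nonexpansive map preserves this constant. The following Python sketch illustrates the principle on toy stand-ins (a plane rotation for the nonexpansive map and an affine ρ-contraction; these are illustrative choices, not the paper's operators):

```python
import math

rho, t, theta = 0.5, 0.3, 0.7  # contraction modulus, net parameter, rotation angle

def S(x):
    # plane rotation: an isometry, hence nonexpansive
    c, s = math.cos(theta), math.sin(theta)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def f(x):
    # affine rho-contraction (illustrative choice)
    return (rho * x[0] + 1.0, rho * x[1] - 0.5)

def Phi(x):
    # S applied to the convex combination t*f(x) + (1-t)*x;
    # Lipschitz constant of Phi is at most 1 - t*(1 - rho) = 0.85 < 1
    fx = f(x)
    y = (t * fx[0] + (1 - t) * x[0], t * fx[1] + (1 - t) * x[1])
    return S(y)

x = (5.0, -3.0)
for _ in range(200):        # Banach iteration: geometric convergence at rate 0.85
    x = Phi(x)
px = Phi(x)
residual = math.hypot(px[0] - x[0], px[1] - x[1])  # fixed-point residual
```

After 200 iterations the residual is numerically zero, reflecting the unique fixed point guaranteed by the contraction principle; this is exactly why the implicit scheme is well defined for every fixed parameter value.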
In the sequel, we assume:
(1) C is a nonempty closed convex subset of a real Hilbert space H;
(2) is a nonexpansive mapping, is an α-inverse strongly monotone mapping and is a ρ-contraction;
(3) is a bifunction which satisfies conditions (H1)-(H4);
(4) .
In order to prove our first main result, we need the following propositions.
Proposition 3.1 The net generated by the implicit method (3.2) is bounded.
Proof Take . It is clear that for all . Since and are nonexpansive, we have
(3.3)
It follows from (3.2) that
Hence,
(3.4)
that is,
So, is bounded. Hence , , and are also bounded. This completes the proof. □
Proposition 3.2 The net generated by the implicit method (3.2) is relatively norm compact as .
Proof From (3.3) and Lemma 2.2, we have
(3.5)
From (3.4) and (3.5), we have
Thus,
Since , we derive
(3.6)
From Lemma 2.1 and Lemma 2.2, we obtain
(3.7)
It follows that
By the nonexpansivity of , we have
Thus
Hence
Since (by (3.6)), we deduce
So
(3.8)
Next we show that is relatively norm compact as . Let be a sequence such that as . Put and . From (3.8), we get
By (3.7), we deduce
that is,
Hence,
It follows that
(3.9)
In particular,
(3.10)
Since is bounded, without loss of generality, we may assume that converges weakly to a point . Also and . Noticing (3.9) we can use Lemma 2.3 to get .
Now we show . Since for any , we have
From (H2), we have
(3.11)
Put for all and . Then we have . So, from (3.11), we have
Since A is Lipschitz continuous and , we have . Further, from the monotonicity of A, we have . So, from (H4), we have
(3.12)
From (H1), (H4) and (3.12), we also have
and hence
Letting , we have, for each ,
This implies . Therefore we can substitute for z in (3.10) to get
Consequently, the weak convergence of (and ) to actually implies that . This proves the relative norm-compactness of the net as . This completes the proof. □
Now we show our first main result.
Theorem 3.3 The net generated by the implicit method (3.2) converges in norm, as , to the unique solution of the following variational inequality:
(3.13)
In particular, if we take , then the net converges in norm, as , to a solution of the minimization problem (1.1).
Proof Now we return to (3.10) and take the limit as to get
(3.14)
In particular, solves the following variational inequality
or the equivalent dual variational inequality
Therefore, . That is, is the unique fixed point in Γ of the contraction . Clearly, this is sufficient to conclude that the entire net converges in norm to as .
Finally, if we take , then (3.14) reduces to
Equivalently,
This clearly implies that
Therefore, is a solution of the minimization problem (1.1). This completes the proof. □
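To make the regularization effect of Theorem 3.3 concrete, here is a small Python sketch under toy assumptions (all operators below are illustrative stand-ins, not the paper's composite operator): take the averaged map S = (I + P_C)/2, whose fixed-point set is the closed disk C centered at (2, 0) with radius 1, drop the contraction term, and solve the implicit relation x_t = S((1-t)x_t) by fixed-point iteration. As t → 0+ the net approaches (1, 0), the minimum-norm element of Fix(S):

```python
import math

CENTER, RADIUS = (2.0, 0.0), 1.0  # toy fixed-point set: a disk not containing 0

def proj(x):
    # metric projection onto the disk (firmly nonexpansive)
    dx, dy = x[0] - CENTER[0], x[1] - CENTER[1]
    d = math.hypot(dx, dy)
    if d <= RADIUS:
        return x
    return (CENTER[0] + RADIUS * dx / d, CENTER[1] + RADIUS * dy / d)

def S(x):
    # averaged map (I + proj)/2: nonexpansive with Fix(S) = the disk
    p = proj(x)
    return (0.5 * (x[0] + p[0]), 0.5 * (x[1] + p[1]))

def implicit_net(t, iters=20000):
    # x_t solves x_t = S((1-t) x_t); the map is a (1-t)-contraction,
    # so plain fixed-point iteration converges
    x = (5.0, 4.0)
    for _ in range(iters):
        x = S(((1 - t) * x[0], (1 - t) * x[1]))
    return x

ts = (0.5, 0.1, 0.01)
errors = [math.hypot(x[0] - 1.0, x[1])
          for x in (implicit_net(t) for t in ts)]  # distance to min-norm point (1,0)
```

For this particular toy map one can check by hand that x_t = (1/(1+t), 0), so the error to the minimum-norm point (1, 0) is exactly t/(1+t), shrinking to zero as t → 0+ in accordance with the theorem.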
Next, we introduce an explicit algorithm for finding a solution of the minimization problem (1.1).
Algorithm 3.4 For an arbitrarily given initial point, let the sequence be generated iteratively by
(3.15)
where is a real number sequence in .
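For intuition, here is a hedged Python sketch of an explicit iteration of this Halpern type, x_{n+1} = (1 - α_n) S(x_n) with anchor 0 and step sizes α_n = 1/(n+1), a standard choice satisfying α_n → 0, Σ α_n = ∞ and Σ |α_{n+1} - α_n| < ∞. The map S below is an illustrative averaged projection onto a disk, not the paper's composite operator from (3.15):

```python
import math

CENTER, RADIUS = (2.0, 0.0), 1.0  # toy fixed-point set: a disk not containing 0

def proj(x):
    # metric projection onto the disk
    dx, dy = x[0] - CENTER[0], x[1] - CENTER[1]
    d = math.hypot(dx, dy)
    if d <= RADIUS:
        return x
    return (CENTER[0] + RADIUS * dx / d, CENTER[1] + RADIUS * dy / d)

def S(x):
    # illustrative nonexpansive map with Fix(S) = the disk
    p = proj(x)
    return (0.5 * (x[0] + p[0]), 0.5 * (x[1] + p[1]))

x = (5.0, 4.0)
for n in range(20000):
    a = 1.0 / (n + 1)          # alpha_n -> 0, sum alpha_n = infinity,
                               # sum |alpha_{n+1} - alpha_n| < infinity
    sx = S(x)
    x = ((1 - a) * sx[0], (1 - a) * sx[1])  # anchor 0 selects the min-norm limit

error = math.hypot(x[0] - 1.0, x[1])  # distance to (1, 0), min-norm point of Fix(S)
```

Unlike the implicit net, the explicit scheme converges only sublinearly (the toy error behaves roughly like 2/n here), which is the usual price of a Halpern-type anchor.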
Next, we give our second main result.
Theorem 3.5 Assume that the sequence satisfies the conditions: , and . Then the sequence generated by (3.15) converges strongly to the unique solution of the variational inequality (3.13). In particular, if , then the sequence converges strongly to a solution of the minimization problem (1.1).
Proof Pick . From Lemma 2.2, we know that . Set for all n. From (3.15), we get
(3.16)
Hence,
By induction, we have
Therefore, is bounded. Hence, , , are also bounded.
From (3.16), we obtain
Since A is α-inverse strongly monotone, we know from Lemma 2.2 that
It follows that
(3.17)
Note that
(3.18)
From Lemma 2.2, we know that is nonexpansive for all . Thus, is nonexpansive for all n due to the fact that . Then we get
(3.19)
From (3.15), (3.18) and (3.19), we obtain
By Lemma 2.4, we get
From (3.15) and (3.17), we have
Then we obtain
Since , and , we have
(3.20)
Next, we show . By using the firm nonexpansivity of , we have
From (3.16) and (3.17), we have
Thus,
That is,
It follows that
Hence,
Since , and , we deduce
(3.21)
This together with implies that
(3.22)
Put , where is the net defined by (3.2). We will finally show that .
Set for all n. Take in (3.17) to get . First, we prove . We take a subsequence of such that
It is clear that is bounded due to the boundedness of . Then there exists a subsequence of which converges weakly to some point . Hence, also converges weakly to w. From (3.22), we have
(3.23)
By the demi-closedness principle of nonexpansive mappings (see Lemma 2.3) and (3.23), we deduce . Furthermore, by an argument similar to that of Theorem 3.3, we can show that w also lies in the solution set of the equilibrium problem. Hence, we have . This implies that
From (3.15), we have
It is clear that and
We can therefore apply Lemma 2.4 to conclude that .
Finally, if we take , then by an argument similar to that in Theorem 3.3, we immediately deduce that is the minimum norm element of Γ. This completes the proof. □