Variant extragradient-type method for monotone variational inequalities

Abstract

Korpelevich’s extragradient method has been studied and extended extensively due to its applicability to the whole class of monotone variational inequalities. In the present paper, we propose a variant extragradient-type method for solving monotone variational inequalities. Convergence analysis of the method is presented under reasonable assumptions on the problem data.

MSC:47H05, 47J05, 47J25.

1 Introduction

Let H be a real Hilbert space with the inner product $〈\cdot ,\cdot 〉$ and its induced norm $\parallel \cdot \parallel$. Let C be a nonempty, closed and convex subset of H and let $A:C\to H$ be a nonlinear operator. The variational inequality problem for A and C, denoted by $VI\left(C,A\right)$, is the problem of finding a point ${x}^{\ast }\in C$ satisfying

$〈A{x}^{\ast },x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C.$
(1)

We denote the solution set of this problem by $SVI\left(C,A\right)$. Under the monotonicity assumption on A, the set $SVI\left(C,A\right)$ is always closed and convex.

The variational inequality problem is a fundamental problem in variational analysis and, in particular, in optimization theory. There are several iterative methods for solving it. See, e.g., [1–38]. The basic idea consists of extending the projected gradient method for constrained optimization, i.e., for the problem of minimizing $f\left(x\right)$ subject to $x\in C$. For ${x}_{0}\in C$, compute the sequence $\left\{{x}_{n}\right\}$ in the following manner:

${x}_{n+1}={P}_{C}\left[{x}_{n}-{\alpha }_{n}\mathrm{\nabla }f\left({x}_{n}\right)\right],\phantom{\rule{1em}{0ex}}n\ge 0,$
(2)

where ${\alpha }_{n}>0$ is the stepsize and ${P}_{C}$ is the metric projection onto C. See [1] for convergence properties of this method in the case in which $f:{\mathbb{R}}^{n}\to \mathbb{R}$ is convex and differentiable, which are related to the results in this article. An immediate extension of method (2) to $VI\left(C,A\right)$ is the iterative procedure given by

${x}_{n+1}={P}_{C}\left[{x}_{n}-{\alpha }_{n}A{x}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0.$
(3)

Convergence results for this method require some monotonicity properties of A. Note that for the method given by (3) there is no chance of relaxing the assumption on A to plain monotonicity. The typical example consists of taking $C={\mathbb{R}}^{2}$ and A a rotation by an angle of $\frac{\pi }{2}$. Then A is monotone and the unique solution of $VI\left(C,A\right)$ is ${x}^{\ast }=0$. However, it is easy to check that $\parallel {x}_{n}-{\alpha }_{n}A{x}_{n}\parallel >\parallel {x}_{n}\parallel$ for all ${x}_{n}\ne 0$ and all ${\alpha }_{n}>0$; therefore, the sequence generated by (3) moves away from the solution, regardless of the choice of the stepsize ${\alpha }_{n}$.
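This counterexample is easy to check numerically. The following sketch (Python; the helper names are ours) applies iteration (3) with $C={\mathbb{R}}^{2}$, so that ${P}_{C}$ is the identity, and confirms that every step moves a nonzero iterate away from ${x}^{\ast }=0$:

```python
import math

# Iteration (3) on the counterexample: C = R^2 (so P_C is the identity) and
# A a rotation by pi/2.  A is monotone, since <Ax - Ay, x - y> = 0, and the
# unique solution of VI(C, A) is x* = 0; yet since Ax is orthogonal to x,
# ||x - alpha*Ax||^2 = (1 + alpha^2)||x||^2 > ||x||^2 for every x != 0.

def A(x):
    return (-x[1], x[0])          # rotation by pi/2

def norm(x):
    return math.hypot(x[0], x[1])

def step(x, alpha):
    # x_{n+1} = P_C[x_n - alpha_n * A(x_n)] with P_C = identity
    ax = A(x)
    return (x[0] - alpha * ax[0], x[1] - alpha * ax[1])

x = (1.0, 0.0)
for alpha in (0.01, 0.1, 1.0):
    assert norm(step(x, alpha)) > norm(x)   # moves away from x* = 0
```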

To overcome this weakness of the method defined by (3), Korpelevich [20] proposed a modification of the method, called the extragradient algorithm. It generates iterates using the following formulae:

$\begin{array}{r}{y}_{n}={P}_{C}\left[{x}_{n}-\lambda A{x}_{n}\right],\\ {x}_{n+1}={P}_{C}\left[{x}_{n}-\lambda A{y}_{n}\right],\phantom{\rule{1em}{0ex}}n\ge 0,\end{array}$
(4)

where $\lambda >0$ is a fixed number. The difference in (4) is that A is evaluated twice and the projection is computed twice at each iteration, but the benefit is significant, because the resulting algorithm is applicable to the whole class of monotone variational inequalities. However, we note that Korpelevich assumed that A is Lipschitz continuous and that an estimate of the Lipschitz constant is available. When A is not Lipschitz continuous, or it is Lipschitz but the constant is not known, the fixed parameter λ must be replaced by stepsizes computed through an Armijo-type search, as in the following method, presented in [39] (see also [18] for another related approach).
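For illustration, the sketch below (Python; a toy instance of our own choosing) runs the extragradient iteration (4) on the rotation operator for which the simple projection method (3) fails; with $C={\mathbb{R}}^{2}$ and any fixed $\lambda \in \left(0,1\right)$ the iterates now converge to the solution ${x}^{\ast }=0$:

```python
import math

# Extragradient iteration (4) for the pi/2-rotation operator R on C = R^2
# (P_C = identity).  A is monotone and 1-Lipschitz; one can check that
# x_{n+1} = ((1 - lambda^2)I - lambda*R)x_n, whose norm shrinks by the
# factor sqrt(1 - lambda^2 + lambda^4) < 1 for lambda in (0, 1).

def A(x):
    return (-x[1], x[0])          # rotation by pi/2

def extragradient_step(x, lam):
    ax = A(x)
    y = (x[0] - lam * ax[0], x[1] - lam * ax[1])     # y_n = x_n - lam*A(x_n)
    ay = A(y)
    return (x[0] - lam * ay[0], x[1] - lam * ay[1])  # x_{n+1} = x_n - lam*A(y_n)

x, lam = (1.0, 0.0), 0.5
for _ in range(300):
    x = extragradient_step(x, lam)
assert math.hypot(x[0], x[1]) < 1e-12    # converges to x* = 0
```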

Let $\delta \in \left(0,1\right)$, $\left\{{\beta }_{n}\right\}\subset \left[\stackrel{ˆ}{\beta },\overline{\beta }\right]$ and ${x}_{0}\in C$. Given ${x}_{n}$ define

${z}_{n}={x}_{n}-{\beta }_{n}A{x}_{n}.$

If ${x}_{n}={P}_{C}\left[{z}_{n}\right]$, then stop. Otherwise, let

$\begin{array}{rl}j\left(n\right)& =min\left\{j\ge 0:〈A\left(\frac{1}{{2}^{j}}{P}_{C}\left[{z}_{n}\right]+\left(1-\frac{1}{{2}^{j}}\right){x}_{n}\right),{x}_{n}-{P}_{C}\left[{z}_{n}\right]〉\\ \ge \frac{\delta }{{\beta }_{n}}{\parallel {x}_{n}-{P}_{C}\left[{z}_{n}\right]\parallel }^{2}\right\}\end{array}$
(5)

and

${\alpha }_{n}={2}^{-j\left(n\right)},\phantom{\rule{1em}{0ex}}{y}_{n}={\alpha }_{n}{P}_{C}\left[{z}_{n}\right]+\left(1-{\alpha }_{n}\right){x}_{n}.$

Define

$\begin{array}{c}{H}_{n}=\left\{z\in H:〈A{y}_{n},z-{y}_{n}〉\le 0\right\},\hfill \\ {W}_{n}=\left\{z\in H:〈{x}_{0}-{x}_{n},z-{x}_{n}〉\le 0\right\},\hfill \\ {x}_{n+1}={P}_{{H}_{n}\cap {W}_{n}\cap C}{x}_{0}.\hfill \end{array}$
(6)

It is proved that if A is maximal monotone, point-to-point and uniformly continuous on bounded sets, and if $VI\left(C,A\right)$ is nonempty, then ${\left\{{x}_{n}\right\}}_{n}$ strongly converges to ${P}_{VI\left(C,A\right)}{x}_{0}$.
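The backtracking search (5) is the computational core of this method. A minimal sketch (Python), assuming $C$ is the nonnegative orthant of ${\mathbb{R}}^{2}$ (so ${P}_{C}$ is a componentwise max with 0) and using a hypothetical affine monotone operator $A\left(x\right)=x+\left(1,1\right)$ of our own choosing:

```python
# Armijo-type search (5) for j(n).  Assumptions (not from the paper):
# C = nonnegative orthant of R^2, so P_C is a componentwise max with 0,
# and A(x) = x + (1, 1), an affine strongly monotone operator.

def proj_C(x):
    return tuple(max(v, 0.0) for v in x)

def A(x):
    return (x[0] + 1.0, x[1] + 1.0)

def armijo_j(x, beta, delta, max_j=60):
    z = tuple(xi - beta * ai for xi, ai in zip(x, A(x)))   # z_n = x_n - beta_n*A(x_n)
    p = proj_C(z)                                          # P_C[z_n]
    d = tuple(xi - pi for xi, pi in zip(x, p))             # x_n - P_C[z_n]
    rhs = (delta / beta) * sum(di * di for di in d)
    for j in range(max_j):
        t = 2.0 ** (-j)
        trial = tuple(t * pi + (1.0 - t) * xi for pi, xi in zip(p, x))
        lhs = sum(ai * di for ai, di in zip(A(trial), d))
        if lhs >= rhs:
            return j            # first j satisfying the inequality in (5)
    raise RuntimeError("Armijo search did not terminate within max_j steps")

# At x_n = (2, 2) with beta_n = 1: z_n = (-1, -1) and P_C[z_n] = (0, 0).
assert armijo_j((2.0, 2.0), 1.0, 0.5) == 0
assert armijo_j((2.0, 2.0), 1.0, 0.9) == 1   # a stricter delta forces one halving
```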

The main difficulty in implementing these methods is computational. First, in order to obtain ${\alpha }_{n}$, we have to compute $j\left(n\right)$, which may be time-consuming. Moreover, (6) involves the two half-spaces ${H}_{n}$ and ${W}_{n}$. If the sets C, ${H}_{n}$ and ${W}_{n}$ are simple enough, then ${P}_{C}$, ${P}_{{H}_{n}}$ and ${P}_{{W}_{n}}$ are easy to compute. However, the intersection ${H}_{n}\cap {W}_{n}\cap C$ may be complicated, so that the projection ${P}_{{H}_{n}\cap {W}_{n}\cap C}$ is hard to compute. This may seriously affect the efficiency of the method.

The literature on $VI\left(C,A\right)$ is vast, and Korpelevich’s method has received great attention from many authors, who have improved it in various ways; see, e.g., [33, 39–44] and the references therein. It is known that Korpelevich’s method (4) has only weak convergence in infinite-dimensional Hilbert spaces (see the recent results of Censor et al. [40, 41]). So, to obtain strong convergence, the original method has been modified by several authors. For example, in [4, 43] it was proved that some very interesting Korpelevich-type algorithms converge strongly to a solution of $VI\left(C,A\right)$. Very recently, Yao et al. [33] suggested a modified Korpelevich method which converges strongly to the minimum-norm solution of the variational inequality (1) in infinite-dimensional Hilbert spaces.

Motivated by the works mentioned above, in the present paper we propose a variant extragradient-type method for solving monotone variational inequalities. A strong convergence analysis of the method is presented under reasonable assumptions on the problem data in infinite-dimensional Hilbert spaces.

2 Preliminaries

In this section, we present some definitions and results that are needed for the convergence analysis of the proposed method. Let C be a closed convex subset of a real Hilbert space H.

A mapping $F:C\to H$ is said to be Lipschitz if there exists a constant $L>0$ such that

$\parallel F\left(x\right)-F\left(y\right)\parallel \le L\parallel x-y\parallel$

for all $x,y\in C$. In the case $L\in \left(0,1\right)$, F is called L-contractive. A mapping $A:C\to H$ is called α-inverse-strongly-monotone if there exists a positive real number α such that

$〈Au-Av,u-v〉\ge \alpha {\parallel Au-Av\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }u,v\in C.$

The following result is well known.

Proposition 1 [45]

Let C be a bounded closed convex subset of a real Hilbert space H and let A be an α-inverse strongly monotone operator from C into H. Then $SVI\left(C,A\right)$ is nonempty.

For any $u\in H$, there exists a unique ${u}_{0}\in C$ such that

$\parallel u-{u}_{0}\parallel =inf\left\{\parallel u-x\parallel :x\in C\right\}.$

We denote ${u}_{0}$ by ${P}_{C}u$, where ${P}_{C}$ is called the metric projection of H onto C. The following is a useful characterization of projections.

Proposition 2 For any given $x\in H$, we have

$〈x-{P}_{C}x,y-{P}_{C}x〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,$

which is equivalent to

$〈x-y,{P}_{C}x-{P}_{C}y〉\ge {\parallel {P}_{C}x-{P}_{C}y\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in H.$

Consequently, we deduce immediately that ${P}_{C}$ is nonexpansive, that is,

$\parallel {P}_{C}x-{P}_{C}y\parallel \le \parallel x-y\parallel$

for all $x,y\in H$.

It is well known that $2{P}_{C}-I$ is nonexpansive.
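Both inequalities of Proposition 2, and the nonexpansivity of $2{P}_{C}-I$, can be checked numerically on a set with a closed-form projection; the sketch below (Python, our own toy instance) uses for $C$ the closed unit ball of ${\mathbb{R}}^{2}$, for which ${P}_{C}x=x/max\left\{1,\parallel x\parallel \right\}$:

```python
import math
import random

# Check of Proposition 2 and the nonexpansivity of S = 2*P_C - I for
# C = closed unit ball of R^2, where P_C x = x / max(1, ||x||).

def proj_ball(x):
    s = 1.0 / max(1.0, math.hypot(x[0], x[1]))
    return (s * x[0], s * x[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def S(x):
    p = proj_ball(x)
    return (2.0 * p[0] - x[0], 2.0 * p[1] - x[1])   # reflection 2*P_C - I

def rand():
    return (random.uniform(-3, 3), random.uniform(-3, 3))

random.seed(0)
for _ in range(1000):
    x, z = rand(), rand()
    y = proj_ball(rand())                 # an arbitrary point of C
    px, pz = proj_ball(x), proj_ball(z)
    # <x - P_C x, y - P_C x> <= 0 for all y in C
    assert dot(sub(x, px), sub(y, px)) <= 1e-12
    # P_C and S = 2*P_C - I are nonexpansive
    assert math.hypot(*sub(px, pz)) <= math.hypot(*sub(x, z)) + 1e-12
    assert math.hypot(*sub(S(x), S(z))) <= math.hypot(*sub(x, z)) + 1e-12
```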

Lemma 1 [45]

Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping $A:C\to H$ be α-inverse strongly monotone and $r>0$ be a constant. Then we have

${\parallel \left(I-rA\right)x-\left(I-rA\right)y\parallel }^{2}\le {\parallel x-y\parallel }^{2}+r\left(r-2\alpha \right){\parallel Ax-Ay\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C.$

In particular, if $0\le r\le 2\alpha$, then $I-rA$ is nonexpansive.
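As a quick sanity check (on a toy operator of our own choosing), take $A\left(x\right)=cx$ with $c>0$, which is α-inverse-strongly-monotone with $\alpha =1/c$; for this $A$ the inequality of Lemma 1 holds with equality:

```python
# Numerical check of Lemma 1 for A(x) = c*x, which is alpha-ism with
# alpha = 1/c: here ||(I-rA)x - (I-rA)y||^2 = (1 - r*c)^2 ||x - y||^2
# and the right-hand side of Lemma 1 evaluates to the same quantity.

c = 2.0
alpha = 1.0 / c

def sq_norm(u):
    return sum(v * v for v in u)

def lemma1_sides(x, y, r):
    d = tuple(a - b for a, b in zip(x, y))
    lhs = sq_norm(tuple((1.0 - r * c) * v for v in d))   # ||(I-rA)x - (I-rA)y||^2
    rhs = sq_norm(d) + r * (r - 2.0 * alpha) * sq_norm(tuple(c * v for v in d))
    return lhs, rhs

x, y = (1.0, -2.0), (0.5, 3.0)
for r in (0.0, 0.3, 2.0 * alpha):
    lhs, rhs = lemma1_sides(x, y, r)
    assert lhs <= rhs + 1e-12                    # the inequality of Lemma 1
    assert lhs <= sq_norm((0.5, -5.0)) + 1e-12   # I - rA nonexpansive for r <= 2*alpha
```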

Lemma 2 [46]

Let $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ be bounded sequences in a Banach space X and let $\left\{{\beta }_{n}\right\}$ be a sequence in $\left[0,1\right]$ with $0<{lim inf}_{n\to \mathrm{\infty }}{\beta }_{n}\le {lim sup}_{n\to \mathrm{\infty }}{\beta }_{n}<1$.

Suppose that

1. (1)

${x}_{n+1}=\left(1-{\beta }_{n}\right){y}_{n}+{\beta }_{n}{x}_{n}$ for all $n\ge 0$;

2. (2)

${lim sup}_{n\to \mathrm{\infty }}\left(\parallel {y}_{n+1}-{y}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \right)\le 0$.

Then ${lim}_{n\to \mathrm{\infty }}\parallel {y}_{n}-{x}_{n}\parallel =0$.

Lemma 3 [47]

Assume that $\left\{{a}_{n}\right\}$ is a sequence of nonnegative real numbers, which satisfies

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\delta }_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$

where $\left\{{\gamma }_{n}\right\}$ is a sequence in $\left(0,1\right)$ and $\left\{{\delta }_{n}\right\}$ is a sequence such that

1. (1)

${\sum }_{n=1}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$;

2. (2)

${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}/{\gamma }_{n}\le 0$ or ${\sum }_{n=1}^{\mathrm{\infty }}|{\delta }_{n}|<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
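A quick numerical illustration (with toy sequences of our own choosing): take ${\gamma }_{n}=1/\left(n+1\right)$, so that ${\sum }_{n}{\gamma }_{n}=\mathrm{\infty }$, and ${\delta }_{n}=1/{\left(n+1\right)}^{2}$, so that ${\sum }_{n}|{\delta }_{n}|<\mathrm{\infty }$; the recursion then drives ${a}_{n}$ to zero:

```python
# Lemma 3 with gamma_n = 1/(n+1) and delta_n = 1/(n+1)^2: one can show
# a_n = H_n / n (H_n the n-th harmonic number), so a_n -> 0 like (log n)/n.

a = 1.0
for n in range(200_000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1) ** 2
    a = (1.0 - gamma) * a + delta
assert a < 1e-3     # a_n -> 0, as Lemma 3 guarantees
```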

3 Algorithm and its convergence analysis

In this section, we present the formal statement of our proposed algorithm.

Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let $A:C\to H$ be an α-inverse-strongly-monotone mapping and let $F:C\to H$ be a ρ-contractive mapping. Consider the sequences $\left\{{\alpha }_{n}\right\}\subset \left[0,1\right]$, $\left\{{\lambda }_{n}\right\}\subset \left[0,2\alpha \right]$, $\left\{{\mu }_{n}\right\}\subset \left[0,2\alpha \right]$ and $\left\{{\gamma }_{n}\right\}\subset \left[0,1\right]$.

1. 1.

Initialization:

${x}_{0}\in C.$
2. 2.

Iterative step: Given ${x}_{n}$, define

$\left\{\begin{array}{c}{y}_{n}={P}_{C}\left[{x}_{n}-{\lambda }_{n}A{x}_{n}+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\right],\hfill \\ {x}_{n+1}={P}_{C}\left[{x}_{n}-{\mu }_{n}A{y}_{n}+{\gamma }_{n}\left({y}_{n}-{x}_{n}\right)\right],\phantom{\rule{1em}{0ex}}n\ge 0.\hfill \end{array}$
(7)

Remark 1 Note that algorithm (7) includes Korpelevich’s method (4) as a special case.
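To make the scheme concrete, here is a minimal sketch (Python) on a toy instance of our own choosing: $C={\mathbb{R}}^{2}$ (so ${P}_{C}$ is the identity), $A\left(x\right)=x$ (1-inverse-strongly-monotone), $F\left(x\right)=x/2$ (a $\frac{1}{2}$-contraction), and parameters ${\alpha }_{n}=1/\left(n+1\right)$, ${\lambda }_{n}=0.8$, ${\gamma }_{n}=0.5$, ${\mu }_{n}=0.4$, which satisfy conditions (C1)-(C3) of Theorem 1; the unique solution of $VI\left(C,A\right)$ is ${x}^{\ast }=0$:

```python
import math

# Algorithm (7) on a toy instance: C = R^2 (P_C = identity), A(x) = x
# (alpha = 1), F(x) = x/2 (rho = 1/2).  Parameters: alpha_n = 1/(n+1),
# lambda_n = 0.8 in (0, 2*alpha), gamma_n = 0.5, mu_n = 0.4 <= 2*alpha*gamma_n.
# The unique solution of VI(C, A) is x* = 0.

def P_C(x): return x
def A(x):   return x
def F(x):   return (0.5 * x[0], 0.5 * x[1])

lam, gam, mu = 0.8, 0.5, 0.4
x = (5.0, -3.0)
for n in range(100):
    a_n = 1.0 / (n + 1)
    ax, fx = A(x), F(x)
    # y_n = P_C[x_n - lambda_n*A(x_n) + alpha_n*(F(x_n) - x_n)]
    y = P_C(tuple(xi - lam * ai + a_n * (fi - xi)
                  for xi, ai, fi in zip(x, ax, fx)))
    ay = A(y)
    # x_{n+1} = P_C[x_n - mu_n*A(y_n) + gamma_n*(y_n - x_n)]
    x = P_C(tuple(xi - mu * ai + gam * (yi - xi)
                  for xi, ai, yi in zip(x, ay, y)))
assert math.hypot(x[0], x[1]) < 1e-10   # x_n -> x* = 0
```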

Next, we analyze the convergence of the proposed algorithm (7).

Theorem 1 Suppose that $SVI\left(C,A\right)\ne \mathrm{\varnothing }$. Assume that the algorithm parameters $\left\{{\alpha }_{n}\right\}$, $\left\{{\lambda }_{n}\right\}$, $\left\{{\mu }_{n}\right\}$ and $\left\{{\gamma }_{n}\right\}$ satisfy the following conditions:

1. (C1)

${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

2. (C2)

${\lambda }_{n}\in \left[a,b\right]\subset \left(0,2\alpha \right)$ and ${lim}_{n\to \mathrm{\infty }}\left({\lambda }_{n+1}-{\lambda }_{n}\right)=0$;

3. (C3)

${\gamma }_{n}\in \left(0,1\right)$, ${\mu }_{n}\le 2\alpha {\gamma }_{n}$ and ${lim}_{n\to \mathrm{\infty }}\left({\gamma }_{n+1}-{\gamma }_{n}\right)={lim}_{n\to \mathrm{\infty }}\left({\mu }_{n+1}-{\mu }_{n}\right)=0$.

Then the sequence $\left\{{x}_{n}\right\}$ generated by (7) converges strongly to $\stackrel{˜}{x}\in SVI\left(C,A\right)$, which solves the following variational inequality:

$〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{x}^{\ast }〉\le 0\phantom{\rule{1em}{0ex}}\mathit{\text{for all}}\phantom{\rule{0.5em}{0ex}}{x}^{\ast }\in SVI\left(C,A\right).$

We shall prove our main result in several steps, contained in the propositions given below.

Proposition 3 The sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ are bounded. Therefore, the sequences $\left\{F{x}_{n}\right\}$, $\left\{A{x}_{n}\right\}$ and $\left\{A{y}_{n}\right\}$ are all bounded.

Proof From conditions (C1) and (C2), since ${\alpha }_{n}\to 0$ and ${\lambda }_{n}\in \left[a,b\right]\subset \left(0,2\alpha \right)$, we have ${\alpha }_{n}<1-\frac{{\lambda }_{n}}{2\alpha }$, for n large enough. Without loss of generality, we may assume that, for all $n\in \mathbb{N}$, ${\alpha }_{n}<1-\frac{{\lambda }_{n}}{2\alpha }$. So, $\frac{{\lambda }_{n}}{1-{\alpha }_{n}}\in \left(0,2\alpha \right)$.

Consider any ${x}^{\ast }\in SVI\left(C,A\right)$. By the property of the metric projection, we know ${x}^{\ast }={P}_{C}\left[{x}^{\ast }-\delta A{x}^{\ast }\right]$ for any $\delta >0$. Hence,

$\begin{array}{rl}{x}^{\ast }& ={P}_{C}\left[{x}^{\ast }-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}^{\ast }\right]={P}_{C}\left[{x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right]\\ ={P}_{C}\left[{\alpha }_{n}{x}^{\ast }+\left(1-{\alpha }_{n}\right)\left({x}^{\ast }-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}^{\ast }\right)\right],\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0.\end{array}$
(8)

Thus, by (7) and (8), we have

$\begin{array}{rl}\parallel {y}_{n}-{x}^{\ast }\parallel =& \parallel {P}_{C}\left[{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right){x}_{n}-{\lambda }_{n}A{x}_{n}\right]-{x}^{\ast }\parallel \\ =& \parallel {P}_{C}\left[{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right)\left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)\right]\\ -{P}_{C}\left[{\alpha }_{n}{x}^{\ast }+\left(1-{\alpha }_{n}\right)\left({x}^{\ast }-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}^{\ast }\right)\right]\parallel \\ \le & \parallel {\alpha }_{n}\left(F{x}_{n}-{x}^{\ast }\right)+\left(1-{\alpha }_{n}\right)\left[\left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)-\left({x}^{\ast }-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}^{\ast }\right)\right]\parallel .\end{array}$
(9)

Since $\frac{{\lambda }_{n}}{1-{\alpha }_{n}}\in \left(0,2\alpha \right)$, from Lemma 1, we know that $I-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A$ is nonexpansive. From (9), we get

$\begin{array}{rcl}\parallel {y}_{n}-{x}^{\ast }\parallel & \le & {\alpha }_{n}\parallel F{x}_{n}-{x}^{\ast }\parallel +\left(1-{\alpha }_{n}\right)\parallel \left(I-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\right){x}_{n}-\left(I-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\right){x}^{\ast }\parallel \\ \le & {\alpha }_{n}\parallel F{x}_{n}-F{x}^{\ast }\parallel +{\alpha }_{n}\parallel F{x}^{\ast }-{x}^{\ast }\parallel +\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-{x}^{\ast }\parallel \\ \le & {\alpha }_{n}\rho \parallel {x}_{n}-{x}^{\ast }\parallel +{\alpha }_{n}\parallel F{x}^{\ast }-{x}^{\ast }\parallel +\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-{x}^{\ast }\parallel \\ =& \left[1-\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-{x}^{\ast }\parallel +{\alpha }_{n}\parallel F{x}^{\ast }-{x}^{\ast }\parallel .\end{array}$

By (C3), we obtain $\frac{{\mu }_{n}}{{\gamma }_{n}}\le 2\alpha$. So, $I-\frac{{\mu }_{n}}{{\gamma }_{n}}A$ is also nonexpansive. Therefore,

$\begin{array}{rl}\parallel {x}_{n+1}-{x}^{\ast }\parallel =& \parallel {P}_{C}\left[{x}_{n}-{\mu }_{n}A{y}_{n}+{\gamma }_{n}\left({y}_{n}-{x}_{n}\right)\right]-{x}^{\ast }\parallel \\ =& \parallel {P}_{C}\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\\ -{P}_{C}\left[\left(1-{\gamma }_{n}\right){x}^{\ast }+{\gamma }_{n}\left({x}^{\ast }-\frac{{\mu }_{n}}{{\gamma }_{n}}A{x}^{\ast }\right)\right]\parallel \\ \le & \left(1-{\gamma }_{n}\right)\parallel {x}_{n}-{x}^{\ast }\parallel +{\gamma }_{n}\parallel \left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)-\left({x}^{\ast }-\frac{{\mu }_{n}}{{\gamma }_{n}}A{x}^{\ast }\right)\parallel \\ \le & \left(1-{\gamma }_{n}\right)\parallel {x}_{n}-{x}^{\ast }\parallel +{\gamma }_{n}\parallel {y}_{n}-{x}^{\ast }\parallel \\ \le & \left(1-{\gamma }_{n}\right)\parallel {x}_{n}-{x}^{\ast }\parallel +{\gamma }_{n}{\alpha }_{n}\parallel F{x}^{\ast }-{x}^{\ast }\parallel \\ +{\gamma }_{n}\left[1-\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-{x}^{\ast }\parallel \\ =& \left[1-\left(1-\rho \right){\gamma }_{n}{\alpha }_{n}\right]\parallel {x}_{n}-{x}^{\ast }\parallel +{\gamma }_{n}{\alpha }_{n}\parallel F{x}^{\ast }-{x}^{\ast }\parallel \\ \le & max\left\{\parallel {x}_{n}-{x}^{\ast }\parallel ,\frac{\parallel F{x}^{\ast }-{x}^{\ast }\parallel }{1-\rho }\right\}.\end{array}$
(10)

By induction, we get

$\parallel {x}_{n+1}-{x}^{\ast }\parallel \le max\left\{\parallel {x}_{0}-{x}^{\ast }\parallel ,\frac{\parallel F{x}^{\ast }-{x}^{\ast }\parallel }{1-\rho }\right\}.$

Then $\left\{{x}_{n}\right\}$ is bounded, and so are $\left\{{y}_{n}\right\}$, $\left\{F{x}_{n}\right\}$, $\left\{A{x}_{n}\right\}$ and $\left\{A{y}_{n}\right\}$. Therefore, the proof is complete. □

Proposition 4 The following two properties hold:

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =0,\phantom{\rule{2em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{y}_{n}\parallel =0.$

Proof Let $S=2{P}_{C}-I$. From the property of the metric projection, we know that S is nonexpansive. Therefore, we can rewrite ${x}_{n+1}$ in (7) as

$\begin{array}{rcl}{x}_{n+1}& =& \frac{I+S}{2}\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\\ =& \frac{1-{\gamma }_{n}}{2}{x}_{n}+\frac{{\gamma }_{n}}{2}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)+\frac{1}{2}S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\\ =& \frac{1-{\gamma }_{n}}{2}{x}_{n}+\frac{1+{\gamma }_{n}}{2}{z}_{n},\end{array}$

where

$\begin{array}{rcl}{z}_{n}& =& \frac{\frac{{\gamma }_{n}}{2}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)+\frac{1}{2}S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]}{\frac{1+{\gamma }_{n}}{2}}\\ =& \frac{{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)+S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]}{1+{\gamma }_{n}}.\end{array}$

It follows that

$\begin{array}{r}{z}_{n+1}-{z}_{n}\\ \phantom{\rule{1em}{0ex}}=\frac{{\gamma }_{n+1}\left({y}_{n+1}-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A{y}_{n+1}\right)+S\left[\left(1-{\gamma }_{n+1}\right){x}_{n+1}+{\gamma }_{n+1}\left({y}_{n+1}-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A{y}_{n+1}\right)\right]}{1+{\gamma }_{n+1}}\\ \phantom{\rule{2em}{0ex}}-\frac{{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)+S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]}{1+{\gamma }_{n}}.\end{array}$

So,

$\begin{array}{rl}\parallel {z}_{n+1}-{z}_{n}\parallel \le & \frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel \left({y}_{n+1}-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A{y}_{n+1}\right)-\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\parallel \\ +|\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|\parallel {y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\parallel \\ +\frac{1}{1+{\gamma }_{n+1}}\parallel S\left[\left(1-{\gamma }_{n+1}\right){x}_{n+1}+{\gamma }_{n+1}\left({y}_{n+1}-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A{y}_{n+1}\right)\right]\\ -S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel \\ +|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|\parallel S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel \\ \le & \frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel \left(I-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A\right){y}_{n+1}-\left(I-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A\right){y}_{n}\parallel \\ +\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}|\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}-\frac{{\mu }_{n}}{{\gamma }_{n}}|\parallel A{y}_{n}\parallel \\ +|\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|\parallel {y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\parallel \\ +\frac{1}{1+{\gamma }_{n+1}}\parallel S\left[\left(1-{\gamma }_{n+1}\right){x}_{n+1}+{\gamma }_{n+1}\left({y}_{n+1}-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A{y}_{n+1}\right)\right]\\ -S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel \\ +|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|\parallel S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel .\end{array}$

Again, by using the nonexpansivity of $I-\frac{{\mu }_{n}}{{\gamma }_{n}}A$ and S, we have

$\begin{array}{rl}\parallel {z}_{n+1}-{z}_{n}\parallel \le & \frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel {y}_{n+1}-{y}_{n}\parallel +|\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|\parallel {y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\parallel \\ +\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}|\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}-\frac{{\mu }_{n}}{{\gamma }_{n}}|\parallel A{y}_{n}\parallel +\frac{1}{1+{\gamma }_{n+1}}\parallel \left(1-{\gamma }_{n+1}\right)\left({x}_{n+1}-{x}_{n}\right)\\ +{\gamma }_{n+1}\left[\left(I-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A\right){y}_{n+1}-\left(I-\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}A\right){y}_{n}\right]\\ +\left({\gamma }_{n+1}-{\gamma }_{n}\right)\left({y}_{n}-{x}_{n}\right)+\left({\mu }_{n}-{\mu }_{n+1}\right)A{y}_{n}\parallel \\ +|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|\parallel S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel \\ \le & \frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel {y}_{n+1}-{y}_{n}\parallel +|\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|\parallel {y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\parallel \\ +\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}|\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}-\frac{{\mu }_{n}}{{\gamma }_{n}}|\parallel A{y}_{n}\parallel +\frac{1-{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel {x}_{n+1}-{x}_{n}\parallel \\ +\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}\parallel {y}_{n+1}-{y}_{n}\parallel \\ +\frac{|{\gamma }_{n+1}-{\gamma }_{n}|}{1+{\gamma }_{n+1}}\left(\parallel {x}_{n}\parallel +\parallel {y}_{n}\parallel \right)+\frac{|{\mu }_{n+1}-{\mu }_{n}|}{1+{\gamma }_{n+1}}\parallel A{y}_{n}\parallel \\ +|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|\parallel S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel .\end{array}$

Next, we estimate $\parallel {y}_{n+1}-{y}_{n}\parallel$.

By (7), we have

$\begin{array}{rl}\parallel {y}_{n+1}-{y}_{n}\parallel =& \parallel {P}_{C}\left[{x}_{n+1}-{\lambda }_{n+1}A{x}_{n+1}+{\alpha }_{n+1}\left(F{x}_{n+1}-{x}_{n+1}\right)\right]\\ -{P}_{C}\left[{x}_{n}-{\lambda }_{n}A{x}_{n}+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\right]\parallel \\ \le & \parallel \left[{x}_{n+1}-{\lambda }_{n+1}A{x}_{n+1}\right]-\left[{x}_{n}-{\lambda }_{n}A{x}_{n}\right]\parallel +{\alpha }_{n+1}\parallel F{x}_{n+1}-{x}_{n+1}\parallel \\ +{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \\ =& \parallel \left(I-{\lambda }_{n+1}A\right){x}_{n+1}-\left(I-{\lambda }_{n+1}A\right){x}_{n}+\left({\lambda }_{n}-{\lambda }_{n+1}\right)A{x}_{n}\parallel \\ +{\alpha }_{n+1}\parallel F{x}_{n+1}-{x}_{n+1}\parallel +{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \\ \le & \parallel {x}_{n+1}-{x}_{n}\parallel +|{\lambda }_{n+1}-{\lambda }_{n}|\parallel A{x}_{n}\parallel +{\alpha }_{n+1}\parallel F{x}_{n+1}-{x}_{n+1}\parallel +{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel .\end{array}$

So, we deduce

$\begin{array}{rl}\parallel {z}_{n+1}-{z}_{n}\parallel \le & |\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|\parallel {y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\parallel +\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}|\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}-\frac{{\mu }_{n}}{{\gamma }_{n}}|\parallel A{y}_{n}\parallel \\ +\frac{|{\gamma }_{n+1}-{\gamma }_{n}|}{1+{\gamma }_{n+1}}\left(\parallel {x}_{n}\parallel +\parallel {y}_{n}\parallel \right)+\frac{|{\mu }_{n+1}-{\mu }_{n}|}{1+{\gamma }_{n+1}}\parallel A{y}_{n}\parallel +\parallel {x}_{n+1}-{x}_{n}\parallel \\ +|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|\parallel S\left[\left(1-{\gamma }_{n}\right){x}_{n}+{\gamma }_{n}\left({y}_{n}-\frac{{\mu }_{n}}{{\gamma }_{n}}A{y}_{n}\right)\right]\parallel \\ +|{\lambda }_{n+1}-{\lambda }_{n}|\parallel A{x}_{n}\parallel \\ +{\alpha }_{n+1}\parallel F{x}_{n+1}-{x}_{n+1}\parallel +{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel .\end{array}$

Since ${lim}_{n\to \mathrm{\infty }}\left({\gamma }_{n+1}-{\gamma }_{n}\right)=0$ and ${lim}_{n\to \mathrm{\infty }}\left({\mu }_{n+1}-{\mu }_{n}\right)=0$, we derive that

$\underset{n\to \mathrm{\infty }}{lim}|\frac{{\gamma }_{n+1}}{1+{\gamma }_{n+1}}-\frac{{\gamma }_{n}}{1+{\gamma }_{n}}|=0,\phantom{\rule{2em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}|\frac{{\mu }_{n+1}}{{\gamma }_{n+1}}-\frac{{\mu }_{n}}{{\gamma }_{n}}|=0,\phantom{\rule{2em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}|\frac{1}{1+{\gamma }_{n+1}}-\frac{1}{1+{\gamma }_{n}}|=0.$

At the same time, note that $\left\{{x}_{n}\right\}$, $\left\{F{x}_{n}\right\}$, $\left\{{y}_{n}\right\}$ and $\left\{A{y}_{n}\right\}$ are bounded. Therefore,

$\underset{n\to \mathrm{\infty }}{lim sup}\left(\parallel {z}_{n+1}-{z}_{n}\parallel -\parallel {x}_{n+1}-{x}_{n}\parallel \right)\le 0.$

By Lemma 2, we obtain

$\underset{n\to \mathrm{\infty }}{lim}\parallel {z}_{n}-{x}_{n}\parallel =0.$

Hence,

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =\underset{n\to \mathrm{\infty }}{lim}\frac{1+{\gamma }_{n}}{2}\parallel {z}_{n}-{x}_{n}\parallel =0.$

From (9), (10), Lemma 1 and the convexity of the norm, we deduce

$\begin{array}{rl}{\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}\le & \left(1-{\gamma }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\gamma }_{n}{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}\\ \le & {\gamma }_{n}{\parallel {\alpha }_{n}\left(F{x}_{n}-{x}^{\ast }\right)+\left(1-{\alpha }_{n}\right)\left[\left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)-\left({x}^{\ast }-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}^{\ast }\right)\right]\parallel }^{2}\\ +\left(1-{\gamma }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}\\ \le & {\gamma }_{n}\left[{\alpha }_{n}{\parallel F{x}_{n}-{x}^{\ast }\parallel }^{2}+\left(1-{\alpha }_{n}\right){\parallel \left(I-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\right){x}_{n}-\left(I-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\right){x}^{\ast }\parallel }^{2}\right]\\ +\left(1-{\gamma }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}\\ \le & \left(1-{\alpha }_{n}\right){\gamma }_{n}\left[{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+\frac{{\lambda }_{n}}{1-{\alpha }_{n}}\left(\frac{{\lambda }_{n}}{1-{\alpha }_{n}}-2\alpha \right){\parallel A{x}_{n}-A{x}^{\ast }\parallel }^{2}\right]\\ +\left(1-{\gamma }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\alpha }_{n}{\gamma }_{n}{\parallel F{x}_{n}-{x}^{\ast }\parallel }^{2}\\ \le & {\alpha }_{n}{\gamma }_{n}{\parallel F{x}_{n}-{x}^{\ast }\parallel }^{2}+{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\gamma }_{n}a\left(\frac{b}{1-{\alpha }_{n}}-2\alpha \right){\parallel A{x}_{n}-A{x}^{\ast }\parallel }^{2}.\end{array}$

Therefore, we have

$\begin{array}{r}{\gamma }_{n}a\left(2\alpha -\frac{b}{1-{\alpha }_{n}}\right){\parallel A{x}_{n}-A{x}^{\ast }\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}{\gamma }_{n}{\parallel F{x}_{n}-{x}^{\ast }\parallel }^{2}+{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}-{\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le {\alpha }_{n}{\gamma }_{n}{\parallel F{x}_{n}-{x}^{\ast }\parallel }^{2}+\left(\parallel {x}_{n}-{x}^{\ast }\parallel +\parallel {x}_{n+1}-{x}^{\ast }\parallel \right)×\parallel {x}_{n}-{x}_{n+1}\parallel .\end{array}$

Since ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${lim}_{n\to \mathrm{\infty }}\parallel {x}_{n}-{x}_{n+1}\parallel =0$ and ${lim inf}_{n\to \mathrm{\infty }}{\gamma }_{n}a\left(2\alpha -\frac{b}{1-{\alpha }_{n}}\right)>0$, we deduce

$\underset{n\to \mathrm{\infty }}{lim}\parallel A{x}_{n}-A{x}^{\ast }\parallel =0.$

By the second inequality in Proposition 2, we have

$\begin{array}{rl}{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}=& {\parallel {P}_{C}\left[{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right){x}_{n}-{\lambda }_{n}A{x}_{n}\right]-{P}_{C}\left[{x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right]\parallel }^{2}\\ \le & 〈{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right){x}_{n}-{\lambda }_{n}A{x}_{n}-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right),{y}_{n}-{x}^{\ast }〉\\ =& \frac{1}{2}\left\{{\parallel {x}_{n}-{\lambda }_{n}A{x}_{n}-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel }^{2}+{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}\\ -{\parallel {\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right){x}_{n}-{\lambda }_{n}A{x}_{n}-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)-\left({y}_{n}-{x}^{\ast }\right)\parallel }^{2}\right\}\\ \le & \frac{1}{2}\left\{{\parallel \left({x}_{n}-{\lambda }_{n}A{x}_{n}\right)-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)\parallel }^{2}\\ +2{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{\lambda }_{n}A{x}_{n}-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel \\ +{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}-{\parallel \left({x}_{n}-{y}_{n}\right)-{\lambda }_{n}\left(A{x}_{n}-A{x}^{\ast }\right)+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel }^{2}\right\}\\ \le & \frac{1}{2}\left\{{\parallel \left({x}_{n}-{\lambda }_{n}A{x}_{n}\right)-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)\parallel }^{2}+{\alpha }_{n}M+{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}\\ -{\parallel \left({x}_{n}-{y}_{n}\right)-{\lambda }_{n}\left(A{x}_{n}-A{x}^{\ast }\right)+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel }^{2}\right\}\\ \le & \frac{1}{2}\left\{{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\alpha }_{n}M+{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}-{\parallel {x}_{n}-{y}_{n}\parallel }^{2}+2{\lambda }_{n}〈{x}_{n}-{y}_{n},A{x}_{n}-A{x}^{\ast }〉\\ -2{\alpha }_{n}〈F{x}_{n}-{x}_{n},{x}_{n}-{y}_{n}〉-{\parallel {\lambda }_{n}\left(A{x}_{n}-A{x}^{\ast }\right)-{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel }^{2}\right\}\\ \le & \frac{1}{2}\left\{{\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\alpha }_{n}M+{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}-{\parallel {x}_{n}-{y}_{n}\parallel }^{2}\\ +2{\lambda }_{n}\parallel {x}_{n}-{y}_{n}\parallel \parallel A{x}_{n}-A{x}^{\ast }\parallel +2{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{y}_{n}\parallel \right\},\end{array}$

where $M>0$ is some constant satisfying

$\underset{n}{sup}\left\{2\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{\lambda }_{n}A{x}_{n}-\left({x}^{\ast }-{\lambda }_{n}A{x}^{\ast }\right)+{\alpha }_{n}\left(F{x}_{n}-{x}_{n}\right)\parallel \right\}\le M.$

It follows that

$\begin{array}{rcl}{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}& \le & {\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\alpha }_{n}M-{\parallel {x}_{n}-{y}_{n}\parallel }^{2}+2{\lambda }_{n}\parallel {x}_{n}-{y}_{n}\parallel \parallel A{x}_{n}-A{x}^{\ast }\parallel \\ +2{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{y}_{n}\parallel ,\end{array}$

and hence

$\begin{array}{rcl}{\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}& \le & \left(1-{\gamma }_{n}\right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\gamma }_{n}{\parallel {y}_{n}-{x}^{\ast }\parallel }^{2}\\ \le & {\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{\alpha }_{n}M-{\gamma }_{n}{\parallel {x}_{n}-{y}_{n}\parallel }^{2}+2{\lambda }_{n}\parallel {x}_{n}-{y}_{n}\parallel \parallel A{x}_{n}-A{x}^{\ast }\parallel \\ +2{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{y}_{n}\parallel ,\end{array}$

which implies that

$\begin{array}{rl}{\gamma }_{n}{\parallel {x}_{n}-{y}_{n}\parallel }^{2}\le & \left(\parallel {x}_{n}-{x}^{\ast }\parallel +\parallel {x}_{n+1}-{x}^{\ast }\parallel \right)\parallel {x}_{n+1}-{x}_{n}\parallel +2{\lambda }_{n}\parallel {x}_{n}-{y}_{n}\parallel \parallel A{x}_{n}-A{x}^{\ast }\parallel \\ +{\alpha }_{n}M+2{\alpha }_{n}\parallel F{x}_{n}-{x}_{n}\parallel \parallel {x}_{n}-{y}_{n}\parallel .\end{array}$

Since ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${lim}_{n\to \mathrm{\infty }}\parallel {x}_{n}-{x}_{n+1}\parallel =0$ and ${lim}_{n\to \mathrm{\infty }}\parallel A{x}_{n}-A{x}^{\ast }\parallel =0$, we derive

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{y}_{n}\parallel =0,$

and this concludes the proof. □

Proposition 5 ${lim sup}_{n\to \mathrm{\infty }}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉\le 0$, where $\stackrel{˜}{x}={P}_{SVI\left(C,A\right)}F\stackrel{˜}{x}$.

Proof In order to show that ${lim sup}_{n\to \mathrm{\infty }}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉\le 0$, we choose a subsequence $\left\{{y}_{{n}_{i}}\right\}$ of $\left\{{y}_{n}\right\}$ such that

$\underset{n\to \mathrm{\infty }}{lim sup}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉=\underset{i\to \mathrm{\infty }}{lim}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{{n}_{i}}〉.$

As $\left\{{y}_{{n}_{i}}\right\}$ is bounded, we may extract a further subsequence $\left\{{y}_{{n}_{{i}_{j}}}\right\}$ of $\left\{{y}_{{n}_{i}}\right\}$ which converges weakly to some point z.

Next, we show that $z\in SVI\left(C,A\right)$. The following argument is similar to that in [45]; since the algorithms involved are different, we include the details for completeness. Now, we define a mapping T by the formula

$Tv=\left\{\begin{array}{ll}Av+{N}_{C}v,& v\in C,\\ \mathrm{\varnothing },& v\notin C.\end{array}$

Then T is maximal monotone.

Let $\left(v,w\right)\in G\left(T\right)$. Since $w-Av\in {N}_{C}v$ and ${y}_{n}\in C$, we have $〈v-{y}_{n},w-Av〉\ge 0$. On the other hand, from ${y}_{n}={P}_{C}\left[{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right){x}_{n}-{\lambda }_{n}A{x}_{n}\right]$, we obtain

$〈v-{y}_{n},{y}_{n}-{\alpha }_{n}F{x}_{n}-\left(1-{\alpha }_{n}\right){x}_{n}+{\lambda }_{n}A{x}_{n}〉\ge 0,$

that is,

$〈v-{y}_{n},\frac{{y}_{n}-{x}_{n}}{{\lambda }_{n}}+A{x}_{n}-\frac{{\alpha }_{n}}{{\lambda }_{n}}\left(F{x}_{n}-{x}_{n}\right)〉\ge 0.$

Therefore, we have

$\begin{array}{rcl}〈v-{y}_{{n}_{i}},w〉& \ge & 〈v-{y}_{{n}_{i}},Av〉\\ \ge & 〈v-{y}_{{n}_{i}},Av〉-〈v-{y}_{{n}_{i}},\frac{{y}_{{n}_{i}}-{x}_{{n}_{i}}}{{\lambda }_{{n}_{i}}}+A{x}_{{n}_{i}}-\frac{{\alpha }_{{n}_{i}}}{{\lambda }_{{n}_{i}}}\left(F{x}_{{n}_{i}}-{x}_{{n}_{i}}\right)〉\\ =& 〈v-{y}_{{n}_{i}},Av-A{x}_{{n}_{i}}-\frac{{y}_{{n}_{i}}-{x}_{{n}_{i}}}{{\lambda }_{{n}_{i}}}+\frac{{\alpha }_{{n}_{i}}}{{\lambda }_{{n}_{i}}}\left(F{x}_{{n}_{i}}-{x}_{{n}_{i}}\right)〉\\ =& 〈v-{y}_{{n}_{i}},Av-A{y}_{{n}_{i}}〉+〈v-{y}_{{n}_{i}},A{y}_{{n}_{i}}-A{x}_{{n}_{i}}〉\\ -〈v-{y}_{{n}_{i}},\frac{{y}_{{n}_{i}}-{x}_{{n}_{i}}}{{\lambda }_{{n}_{i}}}-\frac{{\alpha }_{{n}_{i}}}{{\lambda }_{{n}_{i}}}\left(F{x}_{{n}_{i}}-{x}_{{n}_{i}}\right)〉\\ \ge & 〈v-{y}_{{n}_{i}},A{y}_{{n}_{i}}-A{x}_{{n}_{i}}〉-〈v-{y}_{{n}_{i}},\frac{{y}_{{n}_{i}}-{x}_{{n}_{i}}}{{\lambda }_{{n}_{i}}}-\frac{{\alpha }_{{n}_{i}}}{{\lambda }_{{n}_{i}}}\left(F{x}_{{n}_{i}}-{x}_{{n}_{i}}\right)〉.\end{array}$

Noting that ${\alpha }_{{n}_{i}}\to 0$, $\parallel {y}_{{n}_{i}}-{x}_{{n}_{i}}\parallel \to 0$ and A is Lipschitz continuous, we obtain $〈v-z,w〉\ge 0$. Since T is maximal monotone, we have $z\in {T}^{-1}\left(0\right)$ and hence $z\in SVI\left(C,A\right)$. Therefore,

$\underset{n\to \mathrm{\infty }}{lim sup}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉=\underset{i\to \mathrm{\infty }}{lim}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{{n}_{i}}〉=〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-z〉\le 0.$

The proof of this proposition is now complete. □

Finally, by using Propositions 3-5, we prove Theorem 1.

Proof By the property of the metric projection ${P}_{C}$, we have

$\begin{array}{rl}{\parallel {y}_{n}-\stackrel{˜}{x}\parallel }^{2}=& \parallel {P}_{C}\left[{\alpha }_{n}F{x}_{n}+\left(1-{\alpha }_{n}\right)\left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)\right]\\ -{P}_{C}\left[{\alpha }_{n}\stackrel{˜}{x}+\left(1-{\alpha }_{n}\right)\left(\stackrel{˜}{x}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)\right]{\parallel }^{2}\\ \le & 〈{\alpha }_{n}\left(F{x}_{n}-\stackrel{˜}{x}\right)+\left(1-{\alpha }_{n}\right)\left[\left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)-\left(\stackrel{˜}{x}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)\right],{y}_{n}-\stackrel{˜}{x}〉\\ \le & {\alpha }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉+{\alpha }_{n}〈F\stackrel{˜}{x}-F{x}_{n},\stackrel{˜}{x}-{y}_{n}〉\\ +\left(1-{\alpha }_{n}\right)\parallel \left({x}_{n}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A{x}_{n}\right)-\left(\stackrel{˜}{x}-\frac{{\lambda }_{n}}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)\parallel \parallel {y}_{n}-\stackrel{˜}{x}\parallel \\ \le & {\alpha }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉+{\alpha }_{n}\parallel F\stackrel{˜}{x}-F{x}_{n}\parallel \parallel \stackrel{˜}{x}-{y}_{n}\parallel +\left(1-{\alpha }_{n}\right)\parallel {x}_{n}-\stackrel{˜}{x}\parallel \parallel {y}_{n}-\stackrel{˜}{x}\parallel \\ \le & {\alpha }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉+\left[1-\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-\stackrel{˜}{x}\parallel \parallel \stackrel{˜}{x}-{y}_{n}\parallel \\ \le & {\alpha }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉+\frac{1-\left(1-\rho \right){\alpha }_{n}}{2}{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\frac{1}{2}{\parallel {y}_{n}-\stackrel{˜}{x}\parallel }^{2}.\end{array}$

Hence,

${\parallel {y}_{n}-\stackrel{˜}{x}\parallel }^{2}\le \left[1-\left(1-\rho \right){\alpha }_{n}\right]{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+2{\alpha }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉.$

Therefore,

$\begin{array}{rcl}{\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel }^{2}& \le & \left(1-{\gamma }_{n}\right){\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+{\gamma }_{n}{\parallel {y}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \le & \left[1-\left(1-\rho \right){\alpha }_{n}{\gamma }_{n}\right]{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+2{\alpha }_{n}{\gamma }_{n}〈\stackrel{˜}{x}-F\stackrel{˜}{x},\stackrel{˜}{x}-{y}_{n}〉.\end{array}$

We apply Lemma 3 to the last inequality to deduce that ${x}_{n}\to \stackrel{˜}{x}$.

This completes the proof of our main result. □
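Although it plays no role in the argument itself, the mechanism behind the final step can be illustrated numerically, assuming Lemma 3 is the standard recursion lemma of Xu [47]: if ${s}_{n+1}\le \left(1-{t}_{n}\right){s}_{n}+{t}_{n}{\delta }_{n}$ with ${t}_{n}\in \left(0,1\right)$, ${\sum }_{n}{t}_{n}=\mathrm{\infty }$ and ${lim sup}_{n}{\delta }_{n}\le 0$, then ${s}_{n}\to 0$. The concrete sequences below are illustrative choices, not taken from the paper.

```python
import math

# Worst admissible case of the recursion in Lemma 3 (equality instead of <=),
# with illustrative parameter sequences t_n = 1/n (divergent sum, t_n -> 0)
# and delta_n = 1/sqrt(n) (so limsup delta_n <= 0).
s = 10.0  # s_0: any nonnegative starting value
for n in range(1, 100000):
    t = 1.0 / n
    delta = 1.0 / math.sqrt(n)
    s = (1 - t) * s + t * delta

print(s)  # a small value, consistent with s_n -> 0
```

Note that neither ${t}_{n}$ nor ${\delta }_{n}$ needs to be summable; the divergence of ${\sum }_{n}{t}_{n}$ is what forces the contraction factor to accumulate.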

Remark 2 Our algorithm (7) includes Korpelevich’s method (4) as a special case. However, Korpelevich’s algorithm (4) is known to converge only weakly in infinite-dimensional Hilbert spaces, whereas our algorithm (7) converges strongly in that setting.

If we take $F\equiv 0$, then we have the following algorithm:

1. Initialization:

${x}_{0}\in C.$
2. Iterative step: Given ${x}_{n}$, define

$\left\{\begin{array}{c}{y}_{n}={P}_{C}\left[{x}_{n}-{\lambda }_{n}A{x}_{n}-{\alpha }_{n}{x}_{n}\right],\hfill \\ {x}_{n+1}={P}_{C}\left[{x}_{n}-{\mu }_{n}A{y}_{n}+{\gamma }_{n}\left({y}_{n}-{x}_{n}\right)\right],\phantom{\rule{1em}{0ex}}n\ge 0.\hfill \end{array}$
(11)
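As an illustration (not taken from the source), the iteration (11) can be sketched in a few lines of Python. The problem data below are an assumed toy instance: $C={\left[-1,1\right]}^{2}$ and $Ax=x-c$ with c outside C, for which A is 1-inverse-strongly monotone (so one may read $\alpha =1$ in the parameter conditions) and $VI\left(C,A\right)$ has the unique solution ${P}_{C}\left(c\right)$.

```python
import numpy as np

def proj_C(x):
    # Metric projection onto the box C = [-1, 1]^2.
    return np.clip(x, -1.0, 1.0)

c = np.array([2.0, 0.5])          # assumed data; c lies outside C
A = lambda x: x - c               # 1-inverse-strongly monotone operator

x = np.zeros(2)                   # x_0 in C
for n in range(1, 2000):
    alpha_n = 1.0 / n             # (C1): alpha_n -> 0, sum alpha_n = infinity
    lam_n = 0.5                   # (C2): lambda_n in [a, b] subset (0, 2)
    gam_n = 0.5                   # (C3): gamma_n in (0, 1)
    mu_n = 0.5                    #        mu_n <= 2 * gamma_n
    y = proj_C(x - lam_n * A(x) - alpha_n * x)
    x = proj_C(x - mu_n * A(y) + gam_n * (y - x))

print(x)  # ≈ [1.0, 0.5], the unique solution P_C(c)
```

Since $SVI\left(C,A\right)$ is a singleton here, the minimum-norm element promised by Corollary 1 is simply ${P}_{C}\left(c\right)$ itself; in degenerate examples with many solutions, the iterates select the one of least norm.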

Corollary 1 Suppose that $SVI\left(C,A\right)\ne \mathrm{\varnothing }$. Assume that the algorithm parameters $\left\{{\alpha }_{n}\right\}$, $\left\{{\lambda }_{n}\right\}$, $\left\{{\mu }_{n}\right\}$ and $\left\{{\gamma }_{n}\right\}$ satisfy the following conditions:

1. (C1)

${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$;

2. (C2)

${\lambda }_{n}\in \left[a,b\right]\subset \left(0,2\alpha \right)$ and ${lim}_{n\to \mathrm{\infty }}\left({\lambda }_{n+1}-{\lambda }_{n}\right)=0$;

3. (C3)

${\gamma }_{n}\in \left(0,1\right)$, ${\mu }_{n}\le 2\alpha {\gamma }_{n}$ and ${lim}_{n\to \mathrm{\infty }}\left({\gamma }_{n+1}-{\gamma }_{n}\right)={lim}_{n\to \mathrm{\infty }}\left({\mu }_{n+1}-{\mu }_{n}\right)=0$.

Then the sequence $\left\{{x}_{n}\right\}$ generated by (11) converges strongly to the minimum norm element $\stackrel{˜}{x}$ in $SVI\left(C,A\right)$.

Proof It is clear that algorithm (11) is a special case of algorithm (7). So, from Theorem 1, we have that the sequence $\left\{{x}_{n}\right\}$ defined by (11) converges strongly to $\stackrel{˜}{x}\in SVI\left(C,A\right)$, which solves

$〈\stackrel{˜}{x},x-\stackrel{˜}{x}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in SVI\left(C,A\right).$
(12)

Applying the characterization of the metric projection, we can deduce from (12) that

$\stackrel{˜}{x}={P}_{SVI\left(C,A\right)}\left(0\right).$

This indicates that $\stackrel{˜}{x}$ is the minimum-norm element in $SVI\left(C,A\right)$. This completes the proof. □

Remark 3 Corollary 1 includes the main result in [1] as a special case.

References

1. Alber YI, Iusem AN: Extension of subgradient techniques for nonsmooth optimization in Banach spaces. Set-Valued Anal. 2001, 9: 315–335. 10.1023/A:1012665832688

2. Bao TQ, Khanh PQ: A projection-type algorithm for pseudomonotone non-Lipschitzian multivalued variational inequalities. Nonconvex Optim. Appl. 2005, 77: 113–129. 10.1007/0-387-23639-2_6

3. Bauschke HH: The approximation of fixed points of composition of nonexpansive mapping in Hilbert space. J. Math. Anal. Appl. 1996, 202: 150–159. 10.1006/jmaa.1996.0308

4. Bello Cruz JY, Iusem AN: A strongly convergent direct method for monotone variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 2009, 30: 23–36. 10.1080/01630560902735223

5. Bello Cruz JY, Iusem AN: Convergence of direct methods for paramonotone variational inequalities. Comput. Optim. Appl. 2010, 46: 247–263. 10.1007/s10589-009-9246-5

6. Bnouhachem A, Noor MA, Hao Z: Some new extragradient iterative methods for variational inequalities. Nonlinear Anal. 2009, 70: 1321–1329. 10.1016/j.na.2008.02.014

7. Bruck RE: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 1977, 61: 159–164. 10.1016/0022-247X(77)90152-4

8. Cegielski A, Zalas R: Methods for variational inequality problem over the intersection of fixed point sets of quasi-nonexpansive operators. Numer. Funct. Anal. Optim. 2013, 34: 255–283. 10.1080/01630563.2012.716807

9. Censor Y, Gibali A, Reich S: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26: 827–845. 10.1080/10556788.2010.551536

10. Censor Y, Gibali A, Reich S: Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59: 301–323. 10.1007/s11075-011-9490-5

11. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, vols. I and II. Springer, New York; 2003.

12. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.

13. He BS: A new method for a class of variational inequalities. Math. Program. 1994, 66: 137–144. 10.1007/BF01581141

14. He BS, Yang ZH, Yuan XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 2004, 300: 362–374. 10.1016/j.jmaa.2004.04.068

15. Hirstoaga SA: Iterative selection methods for common fixed point problems. J. Math. Anal. Appl. 2006, 324: 1020–1035. 10.1016/j.jmaa.2005.12.064

16. Iiduka H, Takahashi W: Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 2008, 339: 668–679. 10.1016/j.jmaa.2007.07.019

17. Iusem AN: An iterative algorithm for the variational inequality problem. Comput. Appl. Math. 1994, 13: 103–114.

18. Khobotov EN: Modification of the extra-gradient method for solving variational inequalities and certain optimization problems. U.S.S.R. Comput. Math. Math. Phys. 1989, 27: 120–127.

19. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.

20. Korpelevich GM: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 1976, 12: 747–756.

21. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517. 10.1002/cpa.3160200302

22. Lu X, Xu HK, Yin X: Hybrid methods for a class of monotone variational inequalities. Nonlinear Anal. 2009, 71: 1032–1041. 10.1016/j.na.2008.11.067

23. Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. 10.1109/78.709518

24. Solodov MV, Svaiter BF: A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37: 765–776. 10.1137/S0363012997317475

25. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34: 1814–1830. 10.1137/S0363012994268655

26. Verma RU: General convergence analysis for two-step projection methods and applications to variational problems. Appl. Math. Lett. 2005, 18: 1286–1292. 10.1016/j.aml.2005.02.026

27. Xu HK, Kim TH: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201.

28. Yamada I, Ogura N: Hybrid steepest descent method for the variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings. Numer. Funct. Anal. Optim. 2004, 25: 619–655.

29. Yao JC: Variational inequalities with generalized monotone operators. Math. Oper. Res. 1994, 19: 691–705. 10.1287/moor.19.3.691

30. Yao Y, Chen R, Xu HK: Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72: 3447–3456. 10.1016/j.na.2009.12.029

31. Yao Y, Liou YC, Kang SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. 2013, 55(4):801–811. 10.1007/s10898-011-9804-0

32. Yao Y, Liou YC, Shahzad N: Construction of iterative methods for variational inequality and fixed point problems. Numer. Funct. Anal. Optim. 2012, 33: 1250–1267. 10.1080/01630563.2012.660796

33. Yao Y, Marino G, Muglia L: A modified Korpelevich’s method convergent to the minimum norm solution of a variational inequality. Optimization 2012. 10.1080/02331934.2012.674947

34. Yao Y, Noor MA, Liou YC, Kang SM: Iterative algorithms for general multi-valued variational inequalities. Abstr. Appl. Anal. 2012., 2012: Article ID 768272

35. Yao Y, Noor MA: On viscosity iterative methods for variational inequalities. J. Math. Anal. Appl. 2007, 325: 776–787. 10.1016/j.jmaa.2006.01.091

36. Zeng LC, Hadjisavvas N, Wong NC: Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46: 635–646. 10.1007/s10898-009-9454-7

37. Zeng LC, Teboulle M, Yao JC: Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed point problems. J. Optim. Theory Appl. 2010, 146: 19–31. 10.1007/s10957-010-9650-0

38. Zhu D, Marcotte P: New classes of generalized monotonicity. J. Optim. Theory Appl. 1995, 87: 457–471. 10.1007/BF02192574

39. Iusem AN, Svaiter BF: A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42: 309–321. 10.1080/02331939708844365

40. Censor Y, Gibali A, Reich S: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2010. 10.1080/02331934.2010.539689

41. Censor Y, Gibali A, Reich S: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148: 318–335. 10.1007/s10957-010-9757-3

42. Iusem AN, Lucambio Pérez LR: An extragradient-type algorithm for non-smooth variational inequalities. Optimization 2000, 48: 309–332. 10.1080/02331930008844508

43. Mashreghi J, Nasri M: Forcing strong convergence of Korpelevich’s method in Banach spaces with its applications in game theory. Nonlinear Anal. 2010, 72: 2086–2099. 10.1016/j.na.2009.10.009

44. Noor MA: New extragradient-type methods for general variational inequalities. J. Math. Anal. Appl. 2003, 277: 379–394. 10.1016/S0022-247X(03)00023-4

45. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

46. Suzuki T: Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005, 2005: 103–123.

47. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

Acknowledgements

Yonghong Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105. Yeong-Cheng Liou was partially supported by NSC 100-2221-E-230-012.

Author information

Corresponding author

Correspondence to Mihai Postolache.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Yao, Y., Postolache, M. & Liou, YC. Variant extragradient-type method for monotone variational inequalities. Fixed Point Theory Appl 2013, 185 (2013). https://doi.org/10.1186/1687-1812-2013-185