# Conversion of algorithms by releasing projection for minimization problems

## Abstract

Projection methods for solving minimization problems have been considered extensively in many practical settings, for example, in the least-squares problem. However, the computational cost of the projection might seriously affect the efficiency of the method. The purpose of this paper is to construct two algorithms, by releasing the projection, for solving the minimization problem whose feasibility set is the intersection of the set of fixed points of a nonexpansive mapping and the solution set of an equilibrium problem.

MSC: 47J05, 47J25, 47H09, 65J15.

## 1 Introduction

In the present paper, our main purpose is to solve the following minimization problem of finding ${x}^{\ast }$ such that

$\parallel {x}^{\ast }\parallel =\underset{x\in Fix\left(S\right)\cap \mathit{EPA}}{min}\parallel x\parallel ,$
(1.1)

where $Fix\left(S\right)$ is the set of fixed points of a nonexpansive mapping S and EPA is the solution set of the following equilibrium problem of finding $x\in C$ such that

$F\left(x,y\right)+〈Ax,y-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,$

(1.2)

where C is a nonempty closed convex subset of a real Hilbert space H, $F:C×C\to R$ is a bifunction and $A:C\to H$ is an α-inverse-strongly monotone mapping. The reasons why we focus on the above minimization problem (1.1) are mainly twofold.

Reason 1 This problem is motivated by the following least-square problem:

$\left\{\begin{array}{c}Bx=b,\hfill \\ x\in \mathrm{\Omega },\hfill \end{array}$
(1.3)

where Ω is a nonempty closed convex subset of a real Hilbert space H, B is a bounded linear operator from H to another real Hilbert space ${H}_{1}$, ${B}^{\ast }$ is the adjoint of B and b is a given point in ${H}_{1}$. The least-squares solution to (1.3) is the least-norm minimizer of the minimization problem

$\underset{x\in \mathrm{\Omega }}{min}\parallel Bx-b\parallel .$
(1.4)

For some related works, please see Reich and Xu, Sabharwal and Potter, Xu and Yao et al.

Reason 2 The problem (1.2) is very general in the sense that it includes optimization problems, variational inequalities, minimax problems and the Nash equilibrium problem in noncooperative games as special cases. At the same time, fixed point algorithms for nonexpansive mappings have been investigated extensively due to their applications in a variety of applied areas such as inverse problems, partial differential equations, image recovery and signal processing.

Based on the above facts, it is an interesting topic to construct algorithms for solving the above problems. We next briefly review some approaches in the literature related to problems (1.2) and (1.4).

For solving the equilibrium problem, Combettes and Hirstoaga introduced an iterative algorithm of finding the best approximation to the initial data and proved a strong convergence theorem. Moudafi introduced an iterative algorithm and proved a weak convergence theorem. In 2007, Takahashi and Takahashi introduced the following new scheme for finding a common element of the set of solutions of the equilibrium problem and the set of fixed points of a nonexpansive mapping:

$\left\{\begin{array}{c}F\left({u}_{n},y\right)+〈A{x}_{n},y-{u}_{n}〉+\frac{1}{{\lambda }_{n}}〈y-{u}_{n},{u}_{n}-{x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}={\beta }_{n}{x}_{n}+\left(1-{\beta }_{n}\right)S\left[{\alpha }_{n}u+\left(1-{\alpha }_{n}\right){u}_{n}\right],\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\in N.\hfill \end{array}$

Subsequently, algorithms constructed for solving the equilibrium problems and fixed point problems have been further developed by some authors. For some works related to the equilibrium problem, fixed point problems and the variational inequality problem, please see Blum and Oettli, Chang et al., Chantarangsi et al., Cianciaruso et al., Colao et al. [12, 13], Fang et al., Jung, Mainge, Mainge and Moudafi, Moudafi and Théra, Nadezhkina and Takahashi, Noor et al., Peng et al., Peng and Yao, Plubtieng and Punpaeng, Takahashi and Takahashi, Yao et al., Yao and Liou and the references therein.

We observe that the solution set of (1.3) has a unique element with a minimum norm and finding the least-squares solution of the constrained linear inverse problem is equivalent to finding the minimum-norm fixed point of the nonexpansive mapping $x↦{P}_{\mathrm{\Omega }}\left(x-\lambda {B}^{\ast }\left(Bx-b\right)\right)$. Hence, a natural idea is to use the projection to construct algorithms for finding the minimum-norm solution. By using this idea, Yao and Liou constructed two algorithms for solving the minimization problem (1.1):

${x}_{t}=\mu {P}_{C}\left[\left(1-t\right)S{x}_{t}\right]+\left(1-\mu \right){T}_{r}\left({x}_{t}-rA{x}_{t}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\in \left(0,1\right),$
(1.5)

and

${x}_{n+1}={\mu }_{n}{P}_{C}\left[{\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right)S{x}_{n}\right]+\left(1-{\mu }_{n}\right){T}_{r}\left({x}_{n}-rA{x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0.$
(1.6)
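As a small numerical illustration of the fixed-point viewpoint behind these schemes (with B, b, Ω and the step size λ chosen by us for this example, not taken from the paper), iterating $x↦{P}_{\mathrm{\Omega }}\left(x-\lambda {B}^{\ast }\left(Bx-b\right)\right)$ does converge to a least-squares solution, but which one depends on the starting point; this is precisely the selection issue that regularized schemes such as (1.5)-(1.6) resolve by singling out the minimum-norm solution.

```python
import numpy as np

# Iterate the nonexpansive mapping x -> P_Omega(x - lam * B^T (Bx - b))
# for the box Omega = [0, 1]^2, whose projection is componentwise clipping.
# (B, b, Omega and lam are illustrative choices of ours, not from the paper.)
B = np.array([[1.0, 0.0], [1.0, 0.0]])   # rank-deficient: (1.3) has many solutions
b = np.array([1.0, 1.0])
lam = 0.5 / np.linalg.norm(B.T @ B, 2)   # step size below 2 / ||B^T B||

x = np.array([0.9, 0.9])                 # starting point
for _ in range(200):
    x = np.clip(x - lam * B.T @ (B @ x - b), 0.0, 1.0)   # P_Omega = clip
```

Starting from (0.9, 0.9) the iteration converges to (1, 0.9), a least-squares solution over Ω, whereas the minimum-norm least-squares solution in Ω is (1, 0); the second coordinate is simply inherited from the initial guess.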

Remark 1.1 It is well known that projection methods are used extensively in a variety of methods in optimization theory. Apart from theoretical interest, the main advantage of projection methods, which makes them successful in real-world applications, is computational. The field of projection methods is vast; see, e.g., Bauschke and Borwein, Combettes, and Combettes and Pesquet. If the set C is simple enough that the projection onto it is easily computed, then such methods are particularly useful; but if C is a general closed convex set, then a minimal distance problem has to be solved in order to obtain the next iterate, which might seriously affect the efficiency of the method. Hence, it is of considerable interest to solve (1.1) without involving projections.

Motivated and inspired by the results in the literature, in this paper we suggest two algorithms:

$\left\{\begin{array}{c}F\left({u}_{t},y\right)+\frac{1}{r}〈y-{u}_{t},{u}_{t}-\left(tf+\left(1-t\right)I-rA\right){x}_{t}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{t}=\mu S{x}_{t}+\left(1-\mu \right){u}_{t},\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\in \left(0,1-\frac{r}{2\alpha }\right),\hfill \end{array}$

and

$\left\{\begin{array}{c}F\left({u}_{n},y\right)+\frac{1}{r}〈y-{u}_{n},{u}_{n}-\left({\alpha }_{n}f+\left(1-{\alpha }_{n}\right)I-rA\right){x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}=\mu S{x}_{n}+\left(1-\mu \right){u}_{n},\phantom{\rule{1em}{0ex}}n\ge 0.\hfill \end{array}$

It is shown that under some mild conditions, the net $\left\{{x}_{t}\right\}$ and the sequence $\left\{{x}_{n}\right\}$ converge strongly to $\stackrel{˜}{x}$, which is the unique solution of the variational inequality

$\stackrel{˜}{x}\in Fix\left(S\right)\cap \mathit{EPA},\phantom{\rule{1em}{0ex}}〈\left(I-f\right)\stackrel{˜}{x},x-\stackrel{˜}{x}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in Fix\left(S\right)\cap \mathit{EPA}.$

In particular, if we take $f=0$, then the net $\left\{{x}_{t}\right\}$ and the sequence $\left\{{x}_{n}\right\}$ converge in norm to a solution of the minimization problem (1.1). It should be pointed out that our suggested algorithms solve the above minimization problem (1.1) without involving the metric projection.

## 2 Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. Recall that a mapping $A:C\to H$ is called α-inverse-strongly monotone if there exists a positive real number α such that $〈Ax-Ay,x-y〉\ge \alpha {\parallel Ax-Ay\parallel }^{2}$, $\mathrm{\forall }x,y\in C$. It is clear that any α-inverse-strongly monotone mapping is monotone and $\frac{1}{\alpha }$-Lipschitz continuous. A mapping $S:C\to C$ is said to be nonexpansive if $\parallel Sx-Sy\parallel \le \parallel x-y\parallel$, $\mathrm{\forall }x,y\in C$. Denote the set of fixed points of S by $Fix\left(S\right)$.
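For completeness, the two asserted consequences of α-inverse-strong monotonicity follow directly from the definition and the Cauchy-Schwarz inequality:

```latex
% monotonicity is immediate from the definition:
\langle Ax-Ay,\,x-y\rangle \;\ge\; \alpha\|Ax-Ay\|^{2} \;\ge\; 0,
% and Lipschitz continuity follows by Cauchy-Schwarz:
\alpha\|Ax-Ay\|^{2} \;\le\; \langle Ax-Ay,\,x-y\rangle \;\le\; \|Ax-Ay\|\,\|x-y\|
\quad\Longrightarrow\quad
\|Ax-Ay\| \;\le\; \tfrac{1}{\alpha}\,\|x-y\|.
```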

Throughout this paper, we assume that the bifunction $F:C×C\to R$ satisfies the following conditions:

1. (H1)

$F\left(x,x\right)=0$ for all $x\in C$;

2. (H2)

F is monotone, i.e., $F\left(x,y\right)+F\left(y,x\right)\le 0$ for all $x,y\in C$;

3. (H3)

for each $x,y,z\in C$, ${lim}_{t↓0}F\left(tz+\left(1-t\right)x,y\right)\le F\left(x,y\right)$;

4. (H4)

for each $x\in C$, $y↦F\left(x,y\right)$ is convex and lower semicontinuous.

The metric (or nearest point) projection from H onto C is the mapping ${P}_{C}:H\to C$ which assigns to each point $x\in H$ the unique point ${P}_{C}x\in C$ satisfying the property

$\parallel x-{P}_{C}x\parallel =\underset{y\in C}{inf}\parallel x-y\parallel =:d\left(x,C\right).$

It is well known that ${P}_{C}$ is a nonexpansive mapping and satisfies

$〈x-y,{P}_{C}x-{P}_{C}y〉\ge {\parallel {P}_{C}x-{P}_{C}y\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in H.$
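This firm-nonexpansiveness inequality can be checked numerically for a concrete set whose projection is explicit; a box (an illustrative choice of ours), for which ${P}_{C}$ is componentwise clipping:

```python
import numpy as np

# Numerically verify <x - y, P_C x - P_C y> >= ||P_C x - P_C y||^2
# for the box C = [-1, 1]^5, whose metric projection is componentwise clipping.
rng = np.random.default_rng(0)

def proj(x):
    return np.clip(x, -1.0, 1.0)   # metric projection onto the box

for _ in range(1000):
    x, y = rng.normal(size=5) * 3, rng.normal(size=5) * 3
    px, py = proj(x), proj(y)
    lhs = np.dot(x - y, px - py)
    rhs = np.dot(px - py, px - py)
    assert lhs >= rhs - 1e-12      # firm nonexpansiveness holds
```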

We need the following lemmas for proving our main results.

Lemma 2.1

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $F:C×C\to R$ be a bifunction which satisfies conditions (H1)-(H4). Let $r>0$ and $x\in C$. Then there exists $z\in C$ such that

$F\left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

Further, if ${T}_{r}\left(x\right)=\left\{z\in C:F\left(z,y\right)+\frac{1}{r}〈y-z,z-x〉\ge 0,\mathrm{\forall }y\in C\right\}$, then the following hold:

1. (i)

${T}_{r}$ is single-valued and ${T}_{r}$ is firmly nonexpansive, i.e., for any $x,y\in H$, ${\parallel {T}_{r}x-{T}_{r}y\parallel }^{2}\le 〈{T}_{r}x-{T}_{r}y,x-y〉$;

2. (ii)

$EP\left(F\right)$, the solution set of the equilibrium problem of finding $x\in C$ such that $F\left(x,y\right)\ge 0$ for all $y\in C$, is closed and convex and $EP\left(F\right)=Fix\left({T}_{r}\right)$.

Lemma 2.2

Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping $A:C\to H$ be α-inverse strongly monotone and $r>0$ be a constant. Then we have

${\parallel \left(I-rA\right)x-\left(I-rA\right)y\parallel }^{2}\le {\parallel x-y\parallel }^{2}+r\left(r-2\alpha \right){\parallel Ax-Ay\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in C.$

In particular, if $0\le r\le 2\alpha$, then $I-rA$ is nonexpansive.
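The estimate in Lemma 2.2 is a one-line expansion of the square, using the α-inverse-strong monotonicity of A:

```latex
\begin{aligned}
\|(I-rA)x-(I-rA)y\|^{2}
&= \|x-y\|^{2} - 2r\langle Ax-Ay,\,x-y\rangle + r^{2}\|Ax-Ay\|^{2}\\
&\le \|x-y\|^{2} - 2r\alpha\|Ax-Ay\|^{2} + r^{2}\|Ax-Ay\|^{2}\\
&= \|x-y\|^{2} + r(r-2\alpha)\|Ax-Ay\|^{2}.
\end{aligned}
```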

Lemma 2.3

Let C be a closed convex subset of a real Hilbert space H, and $S:C\to C$ be a nonexpansive mapping. Then the mapping $I-S$ is demiclosed. That is, if $\left\{{x}_{n}\right\}$ is a sequence in C such that ${x}_{n}\to {x}^{\ast }$ weakly and $\left(I-S\right){x}_{n}\to y$ strongly, then $\left(I-S\right){x}^{\ast }=y$.

Lemma 2.4

Assume that $\left\{{a}_{n}\right\}$ is a sequence of nonnegative real numbers such that

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\delta }_{n}{\gamma }_{n},$

where $\left\{{\gamma }_{n}\right\}$ is a sequence in $\left(0,1\right)$ and $\left\{{\delta }_{n}\right\}$ is a sequence such that

1. (1)

${\sum }_{n=1}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$;

2. (2)

${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ or ${\sum }_{n=1}^{\mathrm{\infty }}|{\delta }_{n}{\gamma }_{n}|<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
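A quick numerical sanity check of Lemma 2.4, with the illustrative choices ${\gamma }_{n}={\delta }_{n}=\frac{1}{n+1}$ (so that ${\sum }_{n}{\gamma }_{n}=\mathrm{\infty }$ and ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}=0$):

```python
# Run the recursion a_{n+1} = (1 - gamma_n) a_n + delta_n gamma_n
# with gamma_n = delta_n = 1/(n+1); Lemma 2.4 predicts a_n -> 0.
a = 5.0
for n in range(1, 200000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + delta * gamma

# a decays slowly (roughly like log(n)/n for these choices), but it does go to 0
```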

## 3 Main results

In this section, we convert algorithms (1.5) and (1.6) by releasing projection ${P}_{C}$ and construct two algorithms for finding the minimum norm element ${x}^{\ast }$ of $\mathrm{\Gamma }:=\mathit{EPA}\cap Fix\left(S\right)$.

Let $S:C\to C$ be a nonexpansive mapping and $A:C\to H$ be an α-inverse strongly monotone mapping. Let $F:C×C\to R$ be a bifunction which satisfies conditions (H1)-(H4). Let r and μ be two constants such that $r\in \left(0,2\alpha \right)$ and $\mu \in \left(0,1\right)$. In order to find a solution of the minimization problem (1.1), we construct the following implicit algorithm

$\left\{\begin{array}{c}F\left({u}_{t},y\right)+\frac{1}{r}〈y-{u}_{t},{u}_{t}-\left(\left(1-t\right)I-rA\right){x}_{t}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{t}=\mu S{x}_{t}+\left(1-\mu \right){u}_{t},\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\in \left(0,1-\frac{r}{2\alpha }\right).\hfill \end{array}$
(3.1)

We will show that the net $\left\{{x}_{t}\right\}$ defined by (3.1) converges to a solution of the minimization problem (1.1). As a matter of fact, we study the following more general algorithm: taking a ρ-contraction $f:C\to H$ with $\rho \in \left[0,1\right)$, for each $t\in \left(0,1-\frac{r}{2\alpha }\right)$, let $\left\{{x}_{t}\right\}$ be the net defined by

$\left\{\begin{array}{c}F\left({u}_{t},y\right)+\frac{1}{r}〈y-{u}_{t},{u}_{t}-\left(tf+\left(1-t\right)I-rA\right){x}_{t}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{t}=\mu S{x}_{t}+\left(1-\mu \right){u}_{t},\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\in \left(0,1-\frac{r}{2\alpha }\right).\hfill \end{array}$
(3.2)

It is clear that if $f=0$, then (3.2) reduces to (3.1). Next, we show that (3.2) is well defined. From Lemma 2.1, we know that ${u}_{t}={T}_{r}\left[tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}\right]$. We define a mapping ${W}_{t}:=\mu S+\left(1-\mu \right){T}_{r}\left[tf+\left(1-t\right)I-rA\right]$. From Lemma 2.2, for $0<t<1-\frac{r}{2\alpha }$, we have $\frac{r}{1-t}\in \left(0,2\alpha \right)$, so the mapping $I-\frac{r}{1-t}A$ is nonexpansive. Since the mappings S and ${T}_{r}$ are also nonexpansive, we have

$\begin{array}{r}\parallel {W}_{t}x-{W}_{t}y\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel \mu \left(Sx-Sy\right)+\left(1-\mu \right)\left({T}_{r}\left[tf\left(x\right)+\left(1-t\right)x-rAx\right]-{T}_{r}\left[tf\left(y\right)+\left(1-t\right)y-rAy\right]\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \mu \parallel Sx-Sy\parallel +\left(1-\mu \right)\parallel {T}_{r}\left[tf\left(x\right)+\left(1-t\right)\left(x-\frac{r}{1-t}Ax\right)\right]\\ \phantom{\rule{2em}{0ex}}-{T}_{r}\left[tf\left(y\right)+\left(1-t\right)\left(y-\frac{r}{1-t}Ay\right)\right]\parallel \\ \phantom{\rule{1em}{0ex}}\le \mu \parallel x-y\parallel +\left(1-\mu \right)t\parallel f\left(x\right)-f\left(y\right)\parallel \\ \phantom{\rule{2em}{0ex}}+\left(1-\mu \right)\left(1-t\right)\parallel \left(x-\frac{r}{1-t}Ax\right)-\left(y-\frac{r}{1-t}Ay\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \mu \parallel x-y\parallel +\left(1-\mu \right)t\rho \parallel x-y\parallel +\left(1-\mu \right)\left(1-t\right)\parallel x-y\parallel \\ \phantom{\rule{1em}{0ex}}=\left[1-\left(1-\mu \right)\left(1-\rho \right)t\right]\parallel x-y\parallel .\end{array}$

This indicates that ${W}_{t}$ is a contraction. By the Banach contraction principle, ${W}_{t}$ has a unique fixed point, denoted by ${x}_{t}$, in C. Hence, (3.2) is well defined.
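The contraction argument can be traced numerically in a minimal instance of our own choosing (not from the paper): $F\equiv 0$ with $C=H={R}^{2}$, so that ${T}_{r}$ reduces to the identity; S a rotation (nonexpansive), $A=I$ (which is 1-inverse-strongly monotone) and $f\left(x\right)=x/2$ (a $\frac{1}{2}$-contraction). Iterating ${W}_{t}$ then converges geometrically to its unique fixed point ${x}_{t}$.

```python
import numpy as np

# Banach iteration for the contraction W_t of (3.2) in a toy instance:
# F = 0 and C = H = R^2 (so T_r = I), S = rotation by 0.4 rad,
# A = I (alpha = 1), f(x) = x/2 (rho = 1/2).
theta = 0.4
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mu, r, t = 0.5, 0.5, 0.2       # r in (0, 2*alpha), t in (0, 1 - r/(2*alpha))

def W(x):
    u = t * (x / 2) + (1 - t) * x - r * x   # u_t = T_r[(t f + (1-t)I - rA)x], T_r = I
    return mu * (S @ x) + (1 - mu) * u

x = np.array([3.0, -1.0])
for _ in range(100):           # geometric convergence to the unique fixed point
    x = W(x)

residual = np.linalg.norm(x - W(x))   # ~ 0 at the fixed point x_t
```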

In the sequel, we assume:

1. (1)

C is a nonempty closed convex subset of a real Hilbert space H;

2. (2)

$S:C\to C$ is a nonexpansive mapping, $A:C\to H$ is an α-inverse strongly monotone mapping and $f:C\to H$ is a ρ-contraction;

3. (3)

$F:C×C\to R$ is a bifunction which satisfies conditions (H1)-(H4);

4. (4)

$\mathrm{\Gamma }\ne \mathrm{\varnothing }$.

In order to prove our first main result, we need the following propositions.

Proposition 3.1 The net $\left\{{x}_{t}\right\}$ generated by the implicit method (3.2) is bounded.

Proof Take $z\in \mathrm{\Gamma }$. It is clear that $Sz=z={T}_{r}\left(z-rAz\right)={T}_{r}\left[tz+\left(1-t\right)\left(z-\frac{r}{1-t}Az\right)\right]$ for all $t\in \left(0,1-\frac{r}{2\alpha }\right)$. Since ${T}_{r}$ and $I-\frac{r}{1-t}A$ are nonexpansive, we have

$\begin{array}{rl}\parallel {u}_{t}-z\parallel & =\parallel {T}_{r}\left[tf\left({x}_{t}\right)+\left(1-t\right)\left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)\right]-{T}_{r}\left[tz+\left(1-t\right)\left(z-\frac{r}{1-t}Az\right)\right]\parallel \\ \le \parallel t\left(f\left({x}_{t}\right)-z\right)+\left(1-t\right)\left[\left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)-\left(z-\frac{r}{1-t}Az\right)\right]\parallel \\ \le t\parallel f\left({x}_{t}\right)-z\parallel +\left(1-t\right)\parallel \left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)-\left(z-\frac{r}{1-t}Az\right)\parallel \\ \le t\parallel f\left({x}_{t}\right)-f\left(z\right)\parallel +t\parallel f\left(z\right)-z\parallel +\left(1-t\right)\parallel {x}_{t}-z\parallel \\ \le t\rho \parallel {x}_{t}-z\parallel +\left(1-t\right)\parallel {x}_{t}-z\parallel +t\parallel f\left(z\right)-z\parallel \\ =\left[1-\left(1-\rho \right)t\right]\parallel {x}_{t}-z\parallel +t\parallel f\left(z\right)-z\parallel .\end{array}$
(3.3)

It follows from (3.2) that

$\begin{array}{rl}\parallel {x}_{t}-z\parallel & =\parallel \mu \left(S{x}_{t}-z\right)+\left(1-\mu \right)\left({u}_{t}-z\right)\parallel \le \mu \parallel S{x}_{t}-z\parallel +\left(1-\mu \right)\parallel {u}_{t}-z\parallel \\ \le \mu \parallel {x}_{t}-z\parallel +\left(1-\mu \right)\parallel {u}_{t}-z\parallel .\end{array}$

Hence,

$\parallel {x}_{t}-z\parallel \le \parallel {u}_{t}-z\parallel \le \left[1-\left(1-\rho \right)t\right]\parallel {x}_{t}-z\parallel +t\parallel f\left(z\right)-z\parallel ,$
(3.4)

that is,

$\parallel {x}_{t}-z\parallel \le \frac{\parallel f\left(z\right)-z\parallel }{1-\rho }.$

So, $\left\{{x}_{t}\right\}$ is bounded. Hence $\left\{{u}_{t}\right\}$, $\left\{S{x}_{t}\right\}$, $\left\{A{x}_{t}\right\}$ and $\left\{f\left({x}_{t}\right)\right\}$ are also bounded. This completes the proof. □

Proposition 3.2 The net $\left\{{x}_{t}\right\}$ generated by the implicit method (3.2) is relatively norm compact as $t\to 0$.

Proof From (3.3) and Lemma 2.2, we have

$\begin{array}{rl}{\parallel {u}_{t}-z\parallel }^{2}& \le t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}+\left(1-t\right){\parallel \left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)-\left(z-\frac{r}{1-t}Az\right)\parallel }^{2}\\ \le t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}+\left(1-t\right)\left[{\parallel {x}_{t}-z\parallel }^{2}+\frac{r}{1-t}\left(\frac{r}{1-t}-2\alpha \right){\parallel A{x}_{t}-Az\parallel }^{2}\right]\\ \le \left(1-t\right){\parallel {x}_{t}-z\parallel }^{2}+r\left(\frac{r}{1-t}-2\alpha \right){\parallel A{x}_{t}-Az\parallel }^{2}+t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}.\end{array}$
(3.5)

From (3.4) and (3.5), we have

$\begin{array}{rl}{\parallel {x}_{t}-z\parallel }^{2}& \le {\parallel {u}_{t}-z\parallel }^{2}\\ \le \left(1-t\right){\parallel {x}_{t}-z\parallel }^{2}+r\left(\frac{r}{1-t}-2\alpha \right){\parallel A{x}_{t}-Az\parallel }^{2}+t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}.\end{array}$

Thus,

$r\left(2\alpha -\frac{r}{1-t}\right){\parallel A{x}_{t}-Az\parallel }^{2}\le t\left({\parallel f\left({x}_{t}\right)-z\parallel }^{2}-{\parallel {x}_{t}-z\parallel }^{2}\right)\to 0.$

Since ${lim inf}_{t\to 0+}r\left(2\alpha -\frac{r}{1-t}\right)>0$, we derive

$\underset{t\to 0+}{lim}\parallel A{x}_{t}-Az\parallel =0.$
(3.6)

From Lemma 2.1 and Lemma 2.2, we obtain

$\begin{array}{rl}{\parallel {u}_{t}-z\parallel }^{2}=& {\parallel {T}_{r}\left(tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}\right)-{T}_{r}\left(z-rAz\right)\parallel }^{2}\\ \le & 〈tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}-\left(z-rAz\right),{u}_{t}-z〉\\ =& \frac{1}{2}\left({\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}-\left(z-rAz\right)\parallel }^{2}+{\parallel {u}_{t}-z\parallel }^{2}\\ -{\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-r\left(A{x}_{t}-Az\right)-{u}_{t}\parallel }^{2}\right).\end{array}$
(3.7)

It follows that

$\begin{array}{rl}{\parallel {u}_{t}-z\parallel }^{2}\le & {\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}-\left(z-rAz\right)\parallel }^{2}\\ -{\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-r\left(A{x}_{t}-Az\right)-{u}_{t}\parallel }^{2}.\end{array}$

By the nonexpansivity of $I-\frac{r}{1-t}A$, we have

$\begin{array}{r}{\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-rA{x}_{t}-\left(z-rAz\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}={\parallel \left(1-t\right)\left(\left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)-\left(z-\frac{r}{1-t}Az\right)\right)+t\left(f\left({x}_{t}\right)-z\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \left(1-t\right){\parallel \left({x}_{t}-\frac{r}{1-t}A{x}_{t}\right)-\left(z-\frac{r}{1-t}Az\right)\parallel }^{2}+t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \left(1-t\right){\parallel {x}_{t}-z\parallel }^{2}+t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}.\end{array}$

Thus

$\begin{array}{rl}{\parallel {x}_{t}-z\parallel }^{2}\le & {\parallel {u}_{t}-z\parallel }^{2}\le \left(1-t\right){\parallel {x}_{t}-z\parallel }^{2}+t{\parallel f\left({x}_{t}\right)-z\parallel }^{2}\\ -{\parallel tf\left({x}_{t}\right)+\left(1-t\right){x}_{t}-r\left(A{x}_{t}-Az\right)-{u}_{t}\parallel }^{2}.\end{array}$

Hence

${\parallel t\left(f\left({x}_{t}\right)-{x}_{t}\right)-r\left(A{x}_{t}-Az\right)-\left({u}_{t}-{x}_{t}\right)\parallel }^{2}\le t\left({\parallel f\left({x}_{t}\right)-z\parallel }^{2}-{\parallel {x}_{t}-z\parallel }^{2}\right)\to 0.$

Since $\parallel A{x}_{t}-Az\parallel \to 0$ (by (3.6)), we deduce

$\underset{t\to 0+}{lim}\parallel {x}_{t}-{u}_{t}\parallel =0.$

So

$\underset{t\to 0+}{lim}\parallel {x}_{t}-S{x}_{t}\parallel =\underset{t\to 0+}{lim}\frac{1-\mu }{\mu }\parallel {x}_{t}-{u}_{t}\parallel =0.$
(3.8)

Next we show that $\left\{{x}_{t}\right\}$ is relatively norm compact as $t\to 0+$. Let $\left\{{t}_{n}\right\}\subset \left(0,1-\frac{r}{2\alpha }\right)$ be a sequence such that ${t}_{n}\to 0$ as $n\to \mathrm{\infty }$. Put ${x}_{n}:={x}_{{t}_{n}}$ and ${u}_{n}:={u}_{{t}_{n}}$. From (3.8), we get

$\parallel {x}_{n}-S{x}_{n}\parallel \to 0.$
(3.9)

By (3.7), we deduce

$\begin{array}{rl}{\parallel {u}_{t}-z\parallel }^{2}\le & t〈f\left({x}_{t}\right)-f\left(z\right),{u}_{t}-z〉+t〈f\left(z\right)-z,{u}_{t}-z〉\\ +\left(1-t\right)〈{x}_{t}-\frac{r}{1-t}A{x}_{t}-\left(z-\frac{r}{1-t}Az\right),{u}_{t}-z〉\\ \le & \left[1-\left(1-\rho \right)t\right]\parallel {x}_{t}-z\parallel \parallel {u}_{t}-z\parallel +t〈f\left(z\right)-z,{u}_{t}-z〉\\ \le & \frac{1-\left(1-\rho \right)t}{2}{\parallel {x}_{t}-z\parallel }^{2}+\frac{1}{2}{\parallel {u}_{t}-z\parallel }^{2}+t〈f\left(z\right)-z,{u}_{t}-z〉,\end{array}$

that is,

${\parallel {u}_{t}-z\parallel }^{2}\le \left[1-\left(1-\rho \right)t\right]{\parallel {x}_{t}-z\parallel }^{2}+2t〈f\left(z\right)-z,{u}_{t}-z〉.$

Hence,

${\parallel {x}_{t}-z\parallel }^{2}\le {\parallel {u}_{t}-z\parallel }^{2}\le \left[1-\left(1-\rho \right)t\right]{\parallel {x}_{t}-z\parallel }^{2}+2t〈f\left(z\right)-z,{u}_{t}-z〉.$

It follows that

${\parallel {x}_{t}-z\parallel }^{2}\le \frac{2}{1-\rho }〈f\left(z\right)-z,{u}_{t}-z〉.$

In particular,

${\parallel {x}_{n}-z\parallel }^{2}\le \frac{2}{1-\rho }〈f\left(z\right)-z,{u}_{n}-z〉.$
(3.10)

Since $\left\{{x}_{n}\right\}$ is bounded, without loss of generality, we may assume that $\left\{{x}_{n}\right\}$ converges weakly to a point ${x}^{\ast }\in C$. Also $S{x}_{n}⇀{x}^{\ast }$ and ${u}_{n}⇀{x}^{\ast }$. Noticing (3.9) we can use Lemma 2.3 to get ${x}^{\ast }\in Fix\left(S\right)$.

Now we show ${x}^{\ast }\in \mathit{EPA}$. Since ${u}_{n}={T}_{r}\left({t}_{n}f\left({x}_{n}\right)+\left(1-{t}_{n}\right){x}_{n}-rA{x}_{n}\right)$, for any $y\in C$ we have

$F\left({u}_{n},y\right)+〈A{x}_{n},y-{u}_{n}〉+\frac{1}{r}〈y-{u}_{n},{u}_{n}-\left({t}_{n}f\left({x}_{n}\right)+\left(1-{t}_{n}\right){x}_{n}\right)〉\ge 0.$

From (H2), we have

$〈A{x}_{n},y-{u}_{n}〉+\frac{1}{r}〈y-{u}_{n},{u}_{n}-\left({t}_{n}f\left({x}_{n}\right)+\left(1-{t}_{n}\right){x}_{n}\right)〉\ge F\left(y,{u}_{n}\right).$
(3.11)

Put ${z}_{t}=ty+\left(1-t\right){x}^{\ast }$ for all $t\in \left(0,1-\frac{r}{2\alpha }\right)$ and $y\in C$. Then we have ${z}_{t}\in C$. So, from (3.11), we have

$\begin{array}{rl}〈{z}_{t}-{u}_{n},A{z}_{t}〉\ge & 〈{z}_{t}-{u}_{n},A{z}_{t}〉-〈{z}_{t}-{u}_{n},A{x}_{n}〉\\ -\frac{1}{r}〈{z}_{t}-{u}_{n},{u}_{n}-\left({t}_{n}f\left({x}_{n}\right)+\left(1-{t}_{n}\right){x}_{n}\right)〉+F\left({z}_{t},{u}_{n}\right)\\ =& 〈{z}_{t}-{u}_{n},A{z}_{t}-A{u}_{n}〉+〈{z}_{t}-{u}_{n},A{u}_{n}-A{x}_{n}〉\\ -\frac{1}{r}〈{z}_{t}-{u}_{n},{u}_{n}-{x}_{n}-{t}_{n}\left(f\left({x}_{n}\right)-{x}_{n}\right)〉+F\left({z}_{t},{u}_{n}\right).\end{array}$

Since A is Lipschitz continuous and $\parallel {u}_{n}-{x}_{n}\parallel \to 0$, we have $\parallel A{u}_{n}-A{x}_{n}\parallel \to 0$. Further, from the monotonicity of A, we have $〈{z}_{t}-{u}_{n},A{z}_{t}-A{u}_{n}〉\ge 0$. So, letting $n\to \mathrm{\infty }$ and using (H4), we have

$〈{z}_{t}-{x}^{\ast },A{z}_{t}〉\ge F\left({z}_{t},{x}^{\ast }\right).$

(3.12)

From (H1), (H4) and (3.12), we also have

$\begin{array}{rl}0& =F\left({z}_{t},{z}_{t}\right)\\ \le tF\left({z}_{t},y\right)+\left(1-t\right)F\left({z}_{t},{x}^{\ast }\right)\\ \le tF\left({z}_{t},y\right)+\left(1-t\right)〈{z}_{t}-{x}^{\ast },A{z}_{t}〉\\ =tF\left({z}_{t},y\right)+\left(1-t\right)t〈y-{x}^{\ast },A{z}_{t}〉\end{array}$

and hence

$0\le F\left({z}_{t},y\right)+\left(1-t\right)〈y-{x}^{\ast },A{z}_{t}〉.$

Letting $t\to 0$, we have, for each $y\in C$,

$0\le F\left({x}^{\ast },y\right)+〈y-{x}^{\ast },A{x}^{\ast }〉.$

This implies ${x}^{\ast }\in \mathit{EPA}$. Hence ${x}^{\ast }\in Fix\left(S\right)\cap \mathit{EPA}$, and we can substitute ${x}^{\ast }$ for z in (3.10) to get

${\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}\le \frac{2}{1-\rho }〈f\left({x}^{\ast }\right)-{x}^{\ast },{u}_{n}-{x}^{\ast }〉.$

Consequently, the weak convergence of $\left\{{x}_{n}\right\}$ (and $\left\{{u}_{n}\right\}$) to ${x}^{\ast }$ actually implies that ${x}_{n}\to {x}^{\ast }$ strongly. This proves the relative norm-compactness of the net $\left\{{x}_{t}\right\}$ as $t\to 0+$. This completes the proof. □

Now we show our first main result.

Theorem 3.3 The net $\left\{{x}_{t}\right\}$ generated by the implicit method (3.2) converges in norm, as $t\to 0+$, to the unique solution ${x}^{\ast }$ of the following variational inequality:

${x}^{\ast }\in \mathrm{\Gamma },\phantom{\rule{1em}{0ex}}〈\left(I-f\right){x}^{\ast },x-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}x\in \mathrm{\Gamma }.$
(3.13)

In particular, if we take $f=0$, then the net $\left\{{x}_{t}\right\}$ converges in norm, as $t\to 0+$, to a solution of the minimization problem (1.1).

Proof Now we return to (3.10) and take the limit as $n\to \mathrm{\infty }$ to get

${\parallel {x}^{\ast }-z\parallel }^{2}\le \frac{2}{1-\rho }〈z-f\left(z\right),z-{x}^{\ast }〉,\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }.$
(3.14)

In particular, ${x}^{\ast }$ solves the following variational inequality

${x}^{\ast }\in \mathrm{\Gamma },\phantom{\rule{1em}{0ex}}〈\left(I-f\right)z,z-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }$

or the equivalent dual variational inequality

${x}^{\ast }\in \mathrm{\Gamma },\phantom{\rule{1em}{0ex}}〈\left(I-f\right){x}^{\ast },z-{x}^{\ast }〉\ge 0,\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }.$

Therefore, ${x}^{\ast }=\left({P}_{\mathrm{\Gamma }}f\right){x}^{\ast }$. That is, ${x}^{\ast }$ is the unique fixed point in Γ of the contraction ${P}_{\mathrm{\Gamma }}f$. Clearly, this is sufficient to conclude that the entire net $\left\{{x}_{t}\right\}$ converges in norm to ${x}^{\ast }$ as $t\to 0$.

Finally, if we take $f=0$ (and hence $\rho =0$), then (3.14) is reduced to

${\parallel {x}^{\ast }-z\parallel }^{2}\le 2〈z,z-{x}^{\ast }〉,\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }.$

Expanding both sides, this is equivalent to

${\parallel {x}^{\ast }\parallel }^{2}\le {\parallel z\parallel }^{2},\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }.$

This clearly implies that

$\parallel {x}^{\ast }\parallel \le \parallel z\parallel ,\phantom{\rule{1em}{0ex}}z\in \mathrm{\Gamma }.$

Therefore, ${x}^{\ast }$ is a solution of the minimization problem (1.1). This completes the proof. □

Next, we introduce an explicit algorithm for finding a solution of the minimization problem (1.1).

Algorithm 3.4 For given ${x}_{0}\in C$ arbitrarily, let the sequence $\left\{{x}_{n}\right\}$ be generated iteratively by

$\left\{\begin{array}{c}F\left({u}_{n},y\right)+\frac{1}{r}〈y-{u}_{n},{u}_{n}-\left({\alpha }_{n}f+\left(1-{\alpha }_{n}\right)I-rA\right){x}_{n}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,\hfill \\ {x}_{n+1}=\mu S{x}_{n}+\left(1-\mu \right){u}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,\hfill \end{array}$
(3.15)

where $\left\{{\alpha }_{n}\right\}$ is a real number sequence in $\left[0,1\right]$.

Next, we give our second main result.

Theorem 3.5 Assume that the sequence $\left\{{\alpha }_{n}\right\}$ satisfies the conditions: ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=0}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$ and ${lim}_{n\to \mathrm{\infty }}\frac{{\alpha }_{n+1}}{{\alpha }_{n}}=1$. Then the sequence $\left\{{x}_{n}\right\}$ generated by (3.15) converges strongly to $\stackrel{˜}{x}$ which is the unique solution of the variational inequality (3.13). In particular, if $f=0$, then the sequence $\left\{{x}_{n}\right\}$ converges strongly to a solution of the minimization problem (1.1).
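Theorem 3.5 can be checked numerically in a toy instance of our own choosing (not from the paper): $F\equiv 0$ with $C=H={R}^{2}$ (so ${T}_{r}$ is the identity), S a rotation, $A=I$ and $f=0$. Here $Fix\left(S\right)=\mathit{EPA}=\left\{0\right\}$, so the minimum-norm solution of (1.1) is the origin, and the iterates of (3.15) should approach it.

```python
import numpy as np

# Explicit algorithm (3.15) in a toy instance: F = 0 and C = H = R^2
# (so T_r = I), S = rotation by 0.4 rad, A = I (alpha = 1), f = 0,
# r = 0.5 in (0, 2*alpha), mu = 0.5, alpha_n = 1/(n+1).
theta = 0.4
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mu, r = 0.5, 0.5

x = np.array([3.0, -1.0])                  # arbitrary x_0
for n in range(2000):
    alpha = 1.0 / (n + 1)
    u = (1.0 - alpha) * x - r * x          # u_n = T_r[((1 - alpha_n)I - rA)x_n], T_r = I
    x = mu * (S @ x) + (1.0 - mu) * u      # x_{n+1} = mu*S x_n + (1 - mu)*u_n

# ||x_n|| shrinks toward 0, the minimum-norm element of Fix(S) ∩ EPA
```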

Proof Pick $z\in \mathrm{\Gamma }$. From Lemma 2.1, we know that ${u}_{n}={T}_{r}\left[{\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}-rA{x}_{n}\right]$. Set ${z}_{n}={\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}-rA{x}_{n}$ for all n. From (3.15), we get

$\begin{array}{r}\parallel {u}_{n}-z\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel {T}_{r}{z}_{n}-{T}_{r}\left(z-rAz\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \parallel {z}_{n}-\left(z-rAz\right)\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel \left({\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right)\left({x}_{n}-\frac{rA{x}_{n}}{1-{\alpha }_{n}}\right)\right)-\left({\alpha }_{n}z+\left(1-{\alpha }_{n}\right)\left(z-\frac{rAz}{1-{\alpha }_{n}}\right)\right)\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel \left(1-{\alpha }_{n}\right)\left(\left({x}_{n}-\frac{rA{x}_{n}}{1-{\alpha }_{n}}\right)-\left(z-\frac{rAz}{1-{\alpha }_{n}}\right)\right)+{\alpha }_{n}\left(f\left({x}_{n}\right)-z\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n}\right)\parallel {x}_{n}-z\parallel +{\alpha }_{n}\parallel f\left({x}_{n}\right)-f\left(z\right)\parallel +{\alpha }_{n}\parallel f\left(z\right)-z\parallel \\ \phantom{\rule{1em}{0ex}}\le \left[1-\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-z\parallel +{\alpha }_{n}\parallel f\left(z\right)-z\parallel .\end{array}$
(3.16)

Hence,

$\begin{array}{rl}\parallel {x}_{n+1}-z\parallel & \le \mu \parallel S{x}_{n}-z\parallel +\left(1-\mu \right)\parallel {u}_{n}-z\parallel \\ \le \mu \parallel {x}_{n}-z\parallel +\left(1-\mu \right)\left[1-\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-z\parallel +\left(1-\mu \right){\alpha }_{n}\parallel f\left(z\right)-z\parallel \\ =\left[1-\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n}\right]\parallel {x}_{n}-z\parallel +\left(1-\mu \right){\alpha }_{n}\parallel f\left(z\right)-z\parallel .\end{array}$

By induction, we have

$\parallel {x}_{n+1}-z\parallel \le max\left\{\parallel {x}_{0}-z\parallel ,\frac{\parallel f\left(z\right)-z\parallel }{1-\rho }\right\}.$

Therefore, $\left\{{x}_{n}\right\}$ is bounded. Hence, $\left\{A{x}_{n}\right\}$, $\left\{{u}_{n}\right\}$, $\left\{S{x}_{n}\right\}$ are also bounded.

From (3.16), we obtain

${\parallel {u}_{n}-z\parallel }^{2}\le \left(1-{\alpha }_{n}\right){\parallel \left({x}_{n}-\frac{rA{x}_{n}}{1-{\alpha }_{n}}\right)-\left(z-\frac{rAz}{1-{\alpha }_{n}}\right)\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}.$

Since A is α-inverse strongly monotone, we know from Lemma 2.2 that

${\parallel \left({x}_{n}-\frac{rA{x}_{n}}{1-{\alpha }_{n}}\right)-\left(z-\frac{rAz}{1-{\alpha }_{n}}\right)\parallel }^{2}\le {\parallel {x}_{n}-z\parallel }^{2}+\frac{r\left(r-2\left(1-{\alpha }_{n}\right)\alpha \right)}{{\left(1-{\alpha }_{n}\right)}^{2}}{\parallel A{x}_{n}-Az\parallel }^{2}.$

It follows that

${\parallel {u}_{n}-z\parallel }^{2}\le \left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+\frac{r\left(r-2\left(1-{\alpha }_{n}\right)\alpha \right)}{\left(1-{\alpha }_{n}\right)}{\parallel A{x}_{n}-Az\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}.$
(3.17)

Note that

$\parallel {u}_{n+1}-{u}_{n}\parallel =\parallel {T}_{r}{z}_{n+1}-{T}_{r}{z}_{n}\parallel \le \parallel {z}_{n+1}-{z}_{n}\parallel .$
(3.18)

From Lemma 2.3, we know that $I-\lambda A$ is nonexpansive for all $\lambda \in \left(0,2\alpha \right)$. Thus, $I-\frac{r}{1-{\alpha }_{n+1}}A$ is nonexpansive for all n since $\frac{r}{1-{\alpha }_{n+1}}\in \left(0,2\alpha \right)$. Then we get
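The nonexpansiveness fact used here can be checked numerically. The sketch below is illustrative only (not part of the proof): it takes $A={M}^{T}M$ for a random matrix M, which is $\frac{1}{L}$-inverse-strongly monotone with L the largest eigenvalue of A, and verifies that $I-\lambda A$ contracts distances for $\lambda \in \left(0,2\alpha \right)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A = M^T M is symmetric positive semidefinite; such an operator is
# (1/L)-inverse-strongly monotone, where L is its largest eigenvalue
# (an assumption of this sketch, not a quantity from the paper).
M = rng.standard_normal((5, 5))
A = M.T @ M
L = np.linalg.eigvalsh(A).max()
alpha = 1.0 / L  # inverse-strong-monotonicity constant

# For every lam in (0, 2*alpha), I - lam*A should be nonexpansive.
worst = 0.0
for lam in np.linspace(0.05, 0.95, 20) * (2 * alpha):
    T = np.eye(5) - lam * A
    for _ in range(200):
        x, y = rng.standard_normal(5), rng.standard_normal(5)
        ratio = np.linalg.norm(T @ (x - y)) / np.linalg.norm(x - y)
        worst = max(worst, ratio)

print(worst)  # stays at most 1, up to rounding
```

Any symmetric positive semidefinite operator with spectrum in $\left[0,L\right]$ satisfies $〈Ad,d〉\ge \frac{1}{L}{\parallel Ad\parallel }^{2}$, which is exactly the inverse-strong monotonicity the lemma needs.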

$\begin{array}{r}\parallel {z}_{n+1}-{z}_{n}\parallel \\ \phantom{\rule{1em}{0ex}}=\parallel {\alpha }_{n+1}f\left({x}_{n+1}\right)+\left(1-{\alpha }_{n+1}\right){x}_{n+1}-rA{x}_{n+1}-\left({\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}-rA{x}_{n}\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \parallel \left(1-{\alpha }_{n+1}\right)\left({x}_{n+1}-\frac{r}{1-{\alpha }_{n+1}}A{x}_{n+1}\right)-\left(1-{\alpha }_{n}\right)\left({x}_{n}-\frac{r}{1-{\alpha }_{n}}A{x}_{n}\right)\parallel \\ \phantom{\rule{2em}{0ex}}+{\alpha }_{n+1}\parallel f\left({x}_{n+1}\right)-f\left({x}_{n}\right)\parallel +|{\alpha }_{n+1}-{\alpha }_{n}|\parallel f\left({x}_{n}\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \left(1-{\alpha }_{n+1}\right)\parallel \left(I-\frac{r}{1-{\alpha }_{n+1}}A\right){x}_{n+1}-\left(I-\frac{r}{1-{\alpha }_{n+1}}A\right){x}_{n}\parallel \\ \phantom{\rule{2em}{0ex}}+\parallel \left(1-{\alpha }_{n+1}\right)\left({x}_{n}-\frac{r}{1-{\alpha }_{n+1}}A{x}_{n}\right)-\left(1-{\alpha }_{n}\right)\left({x}_{n}-\frac{r}{1-{\alpha }_{n}}A{x}_{n}\right)\parallel \\ \phantom{\rule{2em}{0ex}}+{\alpha }_{n+1}\rho \parallel {x}_{n+1}-{x}_{n}\parallel +|{\alpha }_{n+1}-{\alpha }_{n}|\parallel f\left({x}_{n}\right)\parallel \\ \phantom{\rule{1em}{0ex}}\le \left[1-\left(1-\rho \right){\alpha }_{n+1}\right]\parallel {x}_{n+1}-{x}_{n}\parallel +|{\alpha }_{n+1}-{\alpha }_{n}|\left(\parallel f\left({x}_{n}\right)\parallel +\parallel {x}_{n}\parallel \right).\end{array}$
(3.19)

From (3.15), (3.18) and (3.19), we obtain

$\begin{array}{rl}\parallel {x}_{n+2}-{x}_{n+1}\parallel \le & \mu \parallel S{x}_{n+1}-S{x}_{n}\parallel +\left(1-\mu \right)\parallel {u}_{n+1}-{u}_{n}\parallel \\ \le & \mu \parallel {x}_{n+1}-{x}_{n}\parallel +\left(1-\mu \right)\parallel {u}_{n+1}-{u}_{n}\parallel \\ \le & \mu \parallel {x}_{n+1}-{x}_{n}\parallel +\left(1-\mu \right)\left[1-\left(1-\rho \right){\alpha }_{n+1}\right]\parallel {x}_{n+1}-{x}_{n}\parallel \\ +\left(1-\mu \right)|{\alpha }_{n+1}-{\alpha }_{n}|\left(\parallel f\left({x}_{n}\right)\parallel +\parallel {x}_{n}\parallel \right)\\ =& \left[1-\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n+1}\right]\parallel {x}_{n+1}-{x}_{n}\parallel \\ +\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n+1}\frac{|{\alpha }_{n+1}-{\alpha }_{n}|}{\left(1-\rho \right){\alpha }_{n+1}}\left(\parallel f\left({x}_{n}\right)\parallel +\parallel {x}_{n}\parallel \right).\end{array}$

By Lemma 2.4, we get

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n+1}-{x}_{n}\parallel =0.$
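Lemma 2.4 is used here in the standard form (we assume it is the recursion lemma of Xu [32]): if ${s}_{n+1}\le \left(1-{\gamma }_{n}\right){s}_{n}+{\gamma }_{n}{\delta }_{n}$ with ${\gamma }_{n}\in \left(0,1\right)$, ${\sum }_{n}{\gamma }_{n}=\mathrm{\infty }$ and ${lim sup}_{n}{\delta }_{n}\le 0$, then ${s}_{n}\to 0$. A quick numerical sketch of this mechanism, with hypothetical sequences chosen only for illustration:

```python
# Sketch of the recursion s_{n+1} = (1 - g_n) s_n + g_n d_n with
# sum of g_n divergent and d_n -> 0, which forces s_n -> 0
# (assumed form of Lemma 2.4; cf. Xu [32]).
s = 10.0
for n in range(1, 1_000_000):
    g = 1.0 / (n + 1)          # step sizes: sum over n diverges
    d = 1.0 / (n + 1) ** 0.5   # perturbations tending to 0
    s = (1 - g) * s + g * d

print(s)  # close to 0
```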

From (3.15) and (3.17), we have

$\begin{array}{rl}{\parallel {x}_{n+1}-z\parallel }^{2}\le & \mu {\parallel S{x}_{n}-z\parallel }^{2}+\left(1-\mu \right){\parallel {u}_{n}-z\parallel }^{2}\\ \le & \mu {\parallel {x}_{n}-z\parallel }^{2}+\left(1-\mu \right)\left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}{\parallel f\left(z\right)-z\parallel }^{2}\\ +\left(1-\mu \right)\frac{r\left(r-2\left(1-{\alpha }_{n}\right)\alpha \right)}{\left(1-{\alpha }_{n}\right)}{\parallel A{x}_{n}-Az\parallel }^{2}.\end{array}$

Then we obtain

$\begin{array}{r}\left(1-\mu \right)\frac{r\left(2\left(1-{\alpha }_{n}\right)\alpha -r\right)}{\left(1-{\alpha }_{n}\right)}{\parallel A{x}_{n}-Az\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le {\parallel {x}_{n}-z\parallel }^{2}-{\parallel {x}_{n+1}-z\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}{\parallel f\left(z\right)-z\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \left(\parallel {x}_{n}-z\parallel +\parallel {x}_{n+1}-z\parallel \right)\parallel {x}_{n+1}-{x}_{n}\parallel +\left(1-\mu \right){\alpha }_{n}{\parallel f\left(z\right)-z\parallel }^{2}.\end{array}$

Since ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${lim}_{n\to \mathrm{\infty }}\parallel {x}_{n+1}-{x}_{n}\parallel =0$ and ${lim inf}_{n\to \mathrm{\infty }}\left(1-\mu \right)\frac{r\left(2\left(1-{\alpha }_{n}\right)\alpha -r\right)}{\left(1-{\alpha }_{n}\right)}>0$, we have

$\underset{n\to \mathrm{\infty }}{lim}\parallel A{x}_{n}-Az\parallel =0.$
(3.20)

Next, we show $\parallel {x}_{n}-{u}_{n}\parallel \to 0$. By the firm nonexpansivity of ${T}_{r}$, we have

$\begin{array}{rl}{\parallel {u}_{n}-z\parallel }^{2}=& {\parallel {T}_{r}{z}_{n}-{T}_{r}\left(z-rAz\right)\parallel }^{2}\\ \le & 〈{z}_{n}-\left(z-rAz\right),{u}_{n}-z〉\\ =& \frac{1}{2}\left({\parallel {z}_{n}-\left(z-rAz\right)\parallel }^{2}+{\parallel {u}_{n}-z\parallel }^{2}\\ -{\parallel {z}_{n}-\left(z-rAz\right)-\left({u}_{n}-z\right)\parallel }^{2}\right)\\ =& \frac{1}{2}\left({\parallel {z}_{n}-\left(z-rAz\right)\parallel }^{2}+{\parallel {u}_{n}-z\parallel }^{2}\\ -{\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-{x}_{n}\right)+\left({x}_{n}-{u}_{n}\right)-r\left(A{x}_{n}-Az\right)\parallel }^{2}\right).\end{array}$

From (3.16) and (3.17), we have

${\parallel {z}_{n}-\left(z-rAz\right)\parallel }^{2}\le \left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}.$

Thus,

$\begin{array}{rl}{\parallel {u}_{n}-z\parallel }^{2}\le & \frac{1}{2}\left(\left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}+{\parallel {u}_{n}-z\parallel }^{2}\\ -{\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-{x}_{n}\right)+\left({x}_{n}-{u}_{n}\right)-r\left(A{x}_{n}-Az\right)\parallel }^{2}\right).\end{array}$

That is,

$\begin{array}{rcl}{\parallel {u}_{n}-z\parallel }^{2}& \le & \left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}\\ -{\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-{x}_{n}\right)+\left({x}_{n}-{u}_{n}\right)-r\left(A{x}_{n}-Az\right)\parallel }^{2}\\ =& \left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}-{\parallel {x}_{n}-{u}_{n}\parallel }^{2}\\ +2r〈{x}_{n}-{u}_{n},A{x}_{n}-Az〉-2{\alpha }_{n}〈f\left({x}_{n}\right)-{x}_{n},{x}_{n}-{u}_{n}〉\\ -{\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-{x}_{n}\right)-r\left(A{x}_{n}-Az\right)\parallel }^{2}\\ \le & \left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+{\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}-{\parallel {x}_{n}-{u}_{n}\parallel }^{2}\\ +2r\parallel {x}_{n}-{u}_{n}\parallel \parallel A{x}_{n}-Az\parallel +2{\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \parallel {x}_{n}-{u}_{n}\parallel .\end{array}$

It follows that

$\begin{array}{rcl}{\parallel {x}_{n+1}-z\parallel }^{2}& \le & \mu {\parallel {x}_{n}-z\parallel }^{2}+\left(1-\mu \right)\left(1-{\alpha }_{n}\right){\parallel {x}_{n}-z\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}\\ -\left(1-\mu \right){\parallel {x}_{n}-{u}_{n}\parallel }^{2}+2r\parallel {x}_{n}-{u}_{n}\parallel \parallel A{x}_{n}-Az\parallel \\ +2{\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \parallel {x}_{n}-{u}_{n}\parallel \\ =& \left[1-\left(1-\mu \right){\alpha }_{n}\right]{\parallel {x}_{n}-z\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}-\left(1-\mu \right){\parallel {x}_{n}-{u}_{n}\parallel }^{2}\\ +2r\parallel {x}_{n}-{u}_{n}\parallel \parallel A{x}_{n}-Az\parallel +2{\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \parallel {x}_{n}-{u}_{n}\parallel .\end{array}$

Hence,

$\begin{array}{rcl}\left(1-\mu \right){\parallel {x}_{n}-{u}_{n}\parallel }^{2}& \le & {\parallel {x}_{n}-z\parallel }^{2}-{\parallel {x}_{n+1}-z\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}\\ +2r\parallel {x}_{n}-{u}_{n}\parallel \parallel A{x}_{n}-Az\parallel +2{\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \parallel {x}_{n}-{u}_{n}\parallel \\ \le & \left(\parallel {x}_{n}-z\parallel +\parallel {x}_{n+1}-z\parallel \right)\parallel {x}_{n+1}-{x}_{n}\parallel +\left(1-\mu \right){\alpha }_{n}{\parallel f\left({x}_{n}\right)-z\parallel }^{2}\\ +2r\parallel {x}_{n}-{u}_{n}\parallel \parallel A{x}_{n}-Az\parallel +2{\alpha }_{n}\parallel f\left({x}_{n}\right)-{x}_{n}\parallel \parallel {x}_{n}-{u}_{n}\parallel .\end{array}$

Since $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$, ${\alpha }_{n}\to 0$ and $\parallel A{x}_{n}-Az\parallel \to 0$, we deduce

$\underset{n\to \mathrm{\infty }}{lim}\parallel {x}_{n}-{u}_{n}\parallel =0.$
(3.21)

Since ${x}_{n+1}-{x}_{n}=\mu \left(S{x}_{n}-{x}_{n}\right)+\left(1-\mu \right)\left({u}_{n}-{x}_{n}\right)$ by (3.15), this together with $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$ implies that

$\underset{n\to \mathrm{\infty }}{lim}\parallel S{x}_{n}-{x}_{n}\parallel =0.$
(3.22)

Put $\stackrel{˜}{x}={lim}_{t\to 0+}{x}_{t}$, where $\left\{{x}_{t}\right\}$ is the net defined by (3.2). Finally, we show that ${x}_{n}\to \stackrel{˜}{x}$.

Set ${v}_{n}={x}_{n}-\frac{r}{1-{\alpha }_{n}}\left(A{x}_{n}-A\stackrel{˜}{x}\right)$ for all n. Taking $z=\stackrel{˜}{x}$ in (3.17) gives $\parallel A{x}_{n}-A\stackrel{˜}{x}\parallel \to 0$. First, we prove ${lim sup}_{n\to \mathrm{\infty }}〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),{x}_{n}-\stackrel{˜}{x}〉\ge 0$. Take a subsequence $\left\{{x}_{{n}_{i}}\right\}$ of $\left\{{x}_{n}\right\}$ such that

$\underset{n\to \mathrm{\infty }}{lim sup}〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),{x}_{n}-\stackrel{˜}{x}〉=\underset{i\to \mathrm{\infty }}{lim}〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),{x}_{{n}_{i}}-\stackrel{˜}{x}〉.$

Since $\left\{{x}_{n}\right\}$ is bounded, so is $\left\{{x}_{{n}_{i}}\right\}$. Then there exists a subsequence $\left\{{x}_{{n}_{{i}_{j}}}\right\}$ of $\left\{{x}_{{n}_{i}}\right\}$ which converges weakly to some point $w\in C$. By (3.21), $\left\{{u}_{{n}_{{i}_{j}}}\right\}$ also converges weakly to w. From (3.22), we have

$\underset{j\to \mathrm{\infty }}{lim}\parallel {x}_{{n}_{{i}_{j}}}-S{x}_{{n}_{{i}_{j}}}\parallel =0.$
(3.23)

By the demiclosedness principle for nonexpansive mappings (see Lemma 2.3) and (3.23), we deduce $w\in Fix\left(S\right)$. Furthermore, by an argument similar to that of Theorem 3.3, we can show that w also lies in EPA. Hence, we have $w\in Fix\left(S\right)\cap \mathit{EPA}$. This implies that

$\begin{array}{rl}\underset{n\to \mathrm{\infty }}{lim sup}〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),{x}_{n}-\stackrel{˜}{x}〉& =\underset{j\to \mathrm{\infty }}{lim}〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),{x}_{{n}_{{i}_{j}}}-\stackrel{˜}{x}〉\\ =〈\stackrel{˜}{x}-f\left(\stackrel{˜}{x}\right),w-\stackrel{˜}{x}〉\ge 0.\end{array}$

From (3.15), we have

$\begin{array}{r}{\parallel {x}_{n+1}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right){\parallel {u}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right){\parallel {T}_{r}{z}_{n}-{T}_{r}\left(\stackrel{˜}{x}-rA\stackrel{˜}{x}\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right){\parallel {z}_{n}-\left(\stackrel{˜}{x}-rA\stackrel{˜}{x}\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}=\mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right){\parallel {\alpha }_{n}f\left({x}_{n}\right)+\left(1-{\alpha }_{n}\right){x}_{n}-rA{x}_{n}-\left(\stackrel{˜}{x}-rA\stackrel{˜}{x}\right)\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}=\left(1-\mu \right){\parallel {\alpha }_{n}\left(f\left({x}_{n}\right)-\stackrel{˜}{x}\right)+\left(1-{\alpha }_{n}\right)\left(\left({x}_{n}-\frac{r}{1-{\alpha }_{n}}A{x}_{n}\right)-\left(\stackrel{˜}{x}-\frac{r}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)\right)\parallel }^{2}\\ \phantom{\rule{2em}{0ex}}+\mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}=\left(1-\mu \right)\left({\left(1-{\alpha }_{n}\right)}^{2}{\parallel \left({x}_{n}-\frac{r}{1-{\alpha }_{n}}A{x}_{n}\right)-\left(\stackrel{˜}{x}-\frac{r}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)\parallel }^{2}\\ \phantom{\rule{2em}{0ex}}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈f\left({x}_{n}\right)-\stackrel{˜}{x},\left({x}_{n}-\frac{r}{1-{\alpha }_{n}}A{x}_{n}\right)-\left(\stackrel{˜}{x}-\frac{r}{1-{\alpha }_{n}}A\stackrel{˜}{x}\right)〉\\ \phantom{\rule{2em}{0ex}}+{\alpha }_{n}^{2}{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)+\mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right)\left({\left(1-{\alpha }_{n}\right)}^{2}{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈f\left({x}_{n}\right)-f\left(\stackrel{˜}{x}\right),{x}_{n}-\stackrel{˜}{x}〉\\ \phantom{\rule{2em}{0ex}}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈f\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉-2r{\alpha }_{n}〈f\left({x}_{n}\right)-\stackrel{˜}{x},A{x}_{n}-A\stackrel{˜}{x}〉+{\alpha }_{n}^{2}{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)\\ \phantom{\rule{1em}{0ex}}\le \mu {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right)\left({\left(1-{\alpha }_{n}\right)}^{2}{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)\rho {\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{2em}{0ex}}+2{\alpha }_{n}\left(1-{\alpha }_{n}\right)〈f\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉+2r{\alpha }_{n}\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel \parallel A{x}_{n}-A\stackrel{˜}{x}\parallel +{\alpha }_{n}^{2}{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)\\ \phantom{\rule{1em}{0ex}}\le \left[1-2\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n}\right]{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+\left(1-\mu \right){\alpha }_{n}^{2}\left({\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)\\ \phantom{\rule{2em}{0ex}}+2\left(1-\mu \right){\alpha }_{n}\left(1-{\alpha }_{n}\right)〈f\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉+2r\left(1-\mu \right){\alpha }_{n}\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel \parallel A{x}_{n}-A\stackrel{˜}{x}\parallel \\ \phantom{\rule{1em}{0ex}}=\left[1-2\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n}\right]{\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}\\ \phantom{\rule{2em}{0ex}}+2\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n}\left\{\frac{{\alpha }_{n}}{1-\rho }\left({\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)\\ \phantom{\rule{2em}{0ex}}+\frac{1-{\alpha }_{n}}{1-\rho }〈f\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉+\frac{r}{1-\rho }\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel \parallel A{x}_{n}-A\stackrel{˜}{x}\parallel \right\}.\end{array}$

It is clear that ${\sum }_{n}2\left(1-\mu \right)\left(1-\rho \right){\alpha }_{n}=\mathrm{\infty }$ and

$\begin{array}{r}\underset{n}{lim sup}\left\{\frac{{\alpha }_{n}}{1-\rho }\left({\parallel {x}_{n}-\stackrel{˜}{x}\parallel }^{2}+{\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel }^{2}\right)+\frac{1-{\alpha }_{n}}{1-\rho }〈f\left(\stackrel{˜}{x}\right)-\stackrel{˜}{x},{x}_{n}-\stackrel{˜}{x}〉\\ \phantom{\rule{2em}{0ex}}+\frac{r}{1-\rho }\parallel f\left({x}_{n}\right)-\stackrel{˜}{x}\parallel \parallel A{x}_{n}-A\stackrel{˜}{x}\parallel \right\}\le 0.\end{array}$

We can therefore apply Lemma 2.4 to conclude that ${x}_{n}\to \stackrel{˜}{x}$.

Finally, if we take $f=0$, then by an argument similar to that of Theorem 3.3, we deduce immediately that $\stackrel{˜}{x}$ is the minimum-norm element of Γ. This completes the proof. □
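As a standalone illustration (not from the paper), the iteration of the theorem can be run in ${R}^{2}$ with deliberately simple hypothetical choices: $S=I$, $A\equiv 0$, $f=0$ and $F\equiv 0$, in which case ${T}_{r}$ reduces to the metric projection ${P}_{C}$ and the scheme becomes ${x}_{n+1}=\mu {x}_{n}+\left(1-\mu \right){P}_{C}\left(\left(1-{\alpha }_{n}\right){x}_{n}\right)$, which should drive ${x}_{n}$ to the minimum-norm element of C.

```python
import numpy as np

# Minimal sketch of x_{n+1} = mu*S(x_n) + (1-mu)*T_r(z_n) under the
# hypothetical choices S = I, A = 0, f = 0, F = 0, so that T_r is the
# metric projection onto C = {x in R^2 : x_i >= 1}.
# The minimum-norm element of C is (1, 1).

def proj_C(x):
    # metric projection onto C: componentwise clipping from below at 1
    return np.maximum(x, 1.0)

mu = 0.5
x = np.array([3.0, 5.0])            # arbitrary starting point
for n in range(20_000):
    alpha = 1.0 / (n + 2)           # alpha_n -> 0, sum alpha_n = infinity
    z = (1 - alpha) * x             # z_n with f = 0 and A = 0
    x = mu * x + (1 - mu) * proj_C(z)

print(x)  # approaches the minimum-norm element (1, 1)
```

The factor $\left(1-{\alpha }_{n}\right)$ is what pulls the iterates toward the origin; the projection keeps them feasible, and the balance between the two selects the minimum-norm point, mirroring the role of $f=0$ in the corollary above.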

## References

1. Reich S, Xu HK: An iterative approach to a constrained least squares problem. Abstr. Appl. Anal. 2003, 8: 503–512.

2. Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. 10.1109/78.709518

3. Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116: 659–678. 10.1023/A:1023073621589

4. Yao Y, Liou YC, Yao JC: Convergence theorem for equilibrium problems and fixed point problems of infinite family of nonexpansive mappings. Fixed Point Theory Appl. 2007., 2007: Article ID 64363

5. Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6: 117–136.

6. Moudafi A: Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 2008, 9: 37–43.

7. Takahashi S, Takahashi W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331: 506–515. 10.1016/j.jmaa.2006.08.036

8. Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.

9. Chang SS, Lee HWJ, Chan CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal. 2009, 70: 3307–3319. 10.1016/j.na.2008.04.035

10. Chantarangsi W, Jaiboon C, Kumam P: A viscosity hybrid steepest descent method for generalized mixed equilibrium problems and variational inequalities for relaxed cocoercive mapping in Hilbert spaces. Abstr. Appl. Anal. 2010., 2010: Article ID 390972

11. Cianciaruso F, Marino G, Muglia L, Yao Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010., 2010: Article ID 383740

12. Colao V, Acedo GL, Marino G: An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings. Nonlinear Anal. 2009, 71: 2708–2715. 10.1016/j.na.2009.01.115

13. Colao V, Marino G, Xu HK: An iterative method for finding common solutions of equilibrium and fixed point problems. J. Math. Anal. Appl. 2008, 344: 340–352. 10.1016/j.jmaa.2008.02.041

14. Fang YP, Huang NJ, Yao JC: Well-posedness by perturbations of mixed variational inequalities in Banach spaces. Eur. J. Oper. Res. 2010, 201: 682–692. 10.1016/j.ejor.2009.04.001

15. Jung JS: Strong convergence of composite iterative methods for equilibrium problems and fixed point problems. Appl. Math. Comput. 2009, 213: 498–505. 10.1016/j.amc.2009.03.048

16. Mainge PE: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 2010, 205: 501–506. 10.1016/j.ejor.2010.01.042

17. Mainge PE, Moudafi A: Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 2008, 9: 283–294.

18. Moudafi A, Théra M Lecture Notes in Economics and Mathematical Systems 477. In Proximal and Dynamical Approaches to Equilibrium Problems. Springer, Berlin; 1999:187–201.

19. Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z

20. Noor MA, Yao Y, Chen R, Liou YC: An iterative method for fixed point problems and variational inequality problems. Math. Commun. 2007, 12: 121–132.

21. Peng JW, Wu SY, Yao JC: A new iterative method for finding common solutions of a system of equilibrium problems, fixed-point problems, and variational inequalities. Abstr. Appl. Anal. 2010., 2010: Article ID 428293

22. Peng JW, Yao JC: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1433.

23. Plubtieng S, Punpaeng R: A new iterative method for equilibrium problems and fixed point problems of nonexpansive mappings and monotone mappings. Appl. Math. Comput. 2008, 197: 548–558. 10.1016/j.amc.2007.07.075

24. Takahashi S, Takahashi W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 2008, 69: 1025–1033. 10.1016/j.na.2008.02.042

25. Yao Y, Cho YJ, Liou YC: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 2011, 212: 242–250. 10.1016/j.ejor.2011.01.042

26. Yao Y, Liou YC: Composite algorithms for minimization over the solutions of equilibrium problems and fixed point problems. Abstr. Appl. Anal. 2010., 2010: Article ID 763506

27. Bauschke HH, Borwein JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710

28. Combettes PL: Strong convergence of block-iterative outer approximation methods for convex optimization. SIAM J. Control Optim. 2000, 38: 538–565. 10.1137/S036301299732626X

29. Combettes PL, Pesquet JC: Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim. 2007, 18: 1351–1376.

30. Takahashi W, Toyoda M: Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003, 118: 417–428. 10.1023/A:1025407607560

31. Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.

32. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

## Acknowledgements

The first author was supported in part by NSFC 11071279 and NSFC 71161001-G0105. The second author was supported in part by NSC 101-2628-E-230-001-MY3.

## Author information


### Corresponding author

Correspondence to Shin Min Kang.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors read and approved the final manuscript.


Yao, Y., Liou, YC. & Kang, S.M. Conversion of algorithms by releasing projection for minimization problems. Fixed Point Theory Appl 2013, 114 (2013). https://doi.org/10.1186/1687-1812-2013-114


### Keywords

• minimization
• equilibrium problem
• fixed point problem
• nonexpansive mapping 