# Iterative methods for constrained convex minimization problem in Hilbert spaces

## Abstract

In this paper, based on Yamada’s hybrid steepest descent method, a general iterative method is proposed for solving a constrained convex minimization problem. It is proved that the sequences generated by the proposed implicit and explicit schemes converge strongly to a solution of the constrained convex minimization problem, which also solves a certain variational inequality.

MSC: 58E35, 47H09, 65J15.

## 1 Introduction

Let H be a real Hilbert space with inner product $〈\cdot ,\cdot 〉$ and induced norm $\parallel \cdot \parallel$. Let C be a nonempty, closed and convex subset of H. We need some nonlinear operators which are introduced below.

Let $T,A:H\to H$ be nonlinear operators.

• T is nonexpansive if $\parallel Tx-Ty\parallel \le \parallel x-y\parallel$ for all $x,y\in H$.

• T is Lipschitz continuous if there exists a constant $L>0$ such that $\parallel Tx-Ty\parallel \le L\parallel x-y\parallel$, for all $x,y\in H$.

• $A:H\to H$ is monotone if $〈x-y,Ax-Ay〉\ge 0$, for all $x,y\in H$.

• Given a number $\eta >0$, $A:H\to H$ is η-strongly monotone if $〈x-y,Ax-Ay〉\ge \eta {\parallel x-y\parallel }^{2}$, for all $x,y\in H$.

• Given a number $\upsilon >0$, $A:H\to H$ is υ-inverse strongly monotone (υ-ism) if $〈x-y,Ax-Ay〉\ge \upsilon {\parallel Ax-Ay\parallel }^{2}$, for all $x,y\in H$.

Inverse strongly monotone operators have been studied widely (see ) and applied to solve practical problems in various fields, for instance, traffic assignment problems (see [4, 5]).

• $T:H\to H$ is said to be an averaged mapping if $T=\left(1-\alpha \right)I+\alpha S$, where α is a number in $\left(0,1\right)$ and $S:H\to H$ is nonexpansive. In particular, projections are ($1/2$)-averaged mappings.
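In finite dimensions, the fact that projections are ($1/2$)-averaged can be checked numerically: a mapping is ($1/2$)-averaged exactly when it is firmly nonexpansive, i.e., $〈x-y,Px-Py〉\ge {\parallel Px-Py\parallel }^{2}$. The sketch below samples this inequality for the projection onto an assumed box $C={\left[-1,1\right]}^{5}$ (our illustrative choice, not from the text).

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^n (a closed convex set)."""
    return np.clip(x, lo, hi)

# Firm nonexpansiveness <x - y, Px - Py> >= ||Px - Py||^2
# is equivalent to P being (1/2)-averaged.
ok = True
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    px, py = proj_box(x), proj_box(y)
    if np.dot(x - y, px - py) < np.dot(px - py, px - py) - 1e-12:
        ok = False
print(ok)  # True: the inequality holds on all sampled pairs
```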

Averaged mappings have been investigated extensively; see .

Consider the following constrained convex minimization problem:

$\underset{x\in C}{min}f\left(x\right),$
(1.1)

where $f:C\to R$ is a real-valued convex function. Assume that the minimization problem (1.1) is consistent, and let S denote its solution set. The gradient-projection algorithm is one of the most powerful methods for solving the minimization problem (1.1) (see ). Since (1.1) may have more than one solution, regularization is needed; the idea of regularization can be used to design an iterative algorithm for finding the minimum-norm solution of (1.1).

We consider the regularized minimization problem:

$\underset{x\in C}{min}{f}_{\alpha }\left(x\right)=f\left(x\right)+\frac{\alpha }{2}{\parallel x\parallel }^{2}.$
(1.2)

Here, $\alpha >0$ is the regularization parameter, and f is a convex function with L-Lipschitz continuous gradient $\mathrm{\nabla }f$. Let ${x}_{\mathrm{min}}$ be the minimum-norm solution of (1.1); namely, ${x}_{\mathrm{min}}$ satisfies the property:

$\parallel {x}_{\mathrm{min}}\parallel =min\left\{\parallel x\parallel :x\in S\right\}.$

${x}_{\mathrm{min}}$ can be obtained by two steps. First, observing that the gradient $\mathrm{\nabla }{f}_{\alpha }=\mathrm{\nabla }f+\alpha I$ is $\left(L+\alpha \right)$-Lipschitzian and α-strongly monotone, the mapping ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)$ is a contraction with coefficient $\sqrt{1-\gamma \left(2\alpha -\gamma {\left(L+\alpha \right)}^{2}\right)}\le 1-\frac{1}{2}\alpha \gamma$, where $0<\gamma \le \frac{\alpha }{{\left(L+\alpha \right)}^{2}}$. So, the regularized problem (1.2) has a unique solution, which is denoted as ${x}_{\alpha }\in C$ and which can be obtained via the Banach contraction principle. Secondly, letting $\alpha \to 0$ yields ${x}_{\alpha }\to {x}_{\mathrm{min}}$ in norm. The following result shows that for suitable choices of γ and α, the minimum-norm solution ${x}_{\mathrm{min}}$ can be obtained by a single step.
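The two-step procedure can be illustrated numerically on a hypothetical toy problem (our choice, not from the text): minimize $f\left(x\right)=\frac{1}{2}{\left({x}_{1}+{x}_{2}-2\right)}^{2}$ over the box $C={\left[-3,3\right]}^{2}$, for which $L=2$ and the minimum-norm solution is $\left(1,1\right)$. Step 1 runs the Banach iteration with the contraction ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{\alpha }\right)$ and $\gamma =\alpha /{\left(L+\alpha \right)}^{2}$ as in the text; step 2 lets α decrease.

```python
# Toy problem: f(x) = 0.5*(x1 + x2 - 2)^2 on C = [-3, 3]^2, L = 2;
# S = {x in C : x1 + x2 = 2}, minimum-norm solution (1, 1).

def proj_C(x):
    return tuple(min(3.0, max(-3.0, t)) for t in x)

def x_alpha(alpha, iters=100_000):
    """Step 1: solve the regularized problem (1.2) by Banach iteration
    with gamma = alpha / (L + alpha)^2, as in the text (here L = 2)."""
    gamma = alpha / (2.0 + alpha) ** 2
    x = (0.0, 0.0)
    for _ in range(iters):
        g = x[0] + x[1] - 2.0            # scalar factor of grad f
        x = proj_C((x[0] - gamma * (g + alpha * x[0]),
                    x[1] - gamma * (g + alpha * x[1])))
    return x

# Step 2: let alpha -> 0; x_alpha tends to the minimum-norm solution.
for a in (1.0, 0.3, 0.1):
    print(a, x_alpha(a))   # approaches (1, 1) as a decreases
```

For this toy problem the regularized minimizer is known in closed form, ${x}_{\alpha }=\frac{2}{2+\alpha }\left(1,1\right)$, so the printed iterates can be checked directly.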

Theorem 1.1 

Assume that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\mathrm{\nabla }f$ is L-Lipschitz continuous. Let ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ be generated by the following iterative algorithm:

${x}_{n+1}={Proj}_{C}\left(I-{\gamma }_{n}\mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}={Proj}_{C}\left(I-{\gamma }_{n}\left(\mathrm{\nabla }f+{\alpha }_{n}I\right)\right){x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0.$
(1.3)

Let $\left\{{\gamma }_{n}\right\}$ and $\left\{{\alpha }_{n}\right\}$ satisfy the following conditions:

1. (i)

$0<{\gamma }_{n}\le {\alpha }_{n}/{\left(L+{\alpha }_{n}\right)}^{2}$ for all n;

2. (ii)

${\alpha }_{n}\to 0$ (and ${\gamma }_{n}\to 0$) as $n\to \mathrm{\infty }$;

3. (iii)

${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}{\gamma }_{n}=\mathrm{\infty }$;

4. (iv)

$\left(|{\gamma }_{n}-{\gamma }_{n-1}|+|{\alpha }_{n}{\gamma }_{n}-{\alpha }_{n-1}{\gamma }_{n-1}|\right)/{\left({\alpha }_{n}{\gamma }_{n}\right)}^{2}\to 0$ as $n\to \mathrm{\infty }$.

Then ${x}_{n}\to {x}_{\mathrm{min}}$ as $n\to \mathrm{\infty }$.
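Conditions (i)-(iv) leave the parameters free. One admissible choice (our assumption, not prescribed by the theorem) is ${\alpha }_{n}={\left(n+1\right)}^{-p}$ with $p\in \left(0,1/3\right)$ and ${\gamma }_{n}={\alpha }_{n}/{\left(L+{\alpha }_{n}\right)}^{2}$. The sketch below runs algorithm (1.3) with $p=0.3$ on a hypothetical toy problem: $f\left(x\right)=\frac{1}{2}{\left({x}_{1}+{x}_{2}-2\right)}^{2}$ over $C={\left[-3,3\right]}^{2}$, where $L=2$ and the minimum-norm solution is $\left(1,1\right)$.

```python
def proj_C(x):
    return tuple(min(3.0, max(-3.0, t)) for t in x)

def gpa_regularized(iters=100_000):
    """Algorithm (1.3): x_{n+1} = Proj_C(x_n - gamma_n*(grad f + alpha_n*I)x_n)
    with alpha_n = (n+1)**(-0.3), gamma_n = alpha_n/(L+alpha_n)**2 (L = 2)."""
    x = (3.0, -2.0)
    for n in range(iters):
        alpha = (n + 1) ** (-0.3)
        gamma = alpha / (2.0 + alpha) ** 2
        g = x[0] + x[1] - 2.0            # scalar factor of grad f
        x = proj_C((x[0] - gamma * (g + alpha * x[0]),
                    x[1] - gamma * (g + alpha * x[1])))
    return x

x = gpa_regularized()
print(x)  # close to the minimum-norm solution (1, 1)
```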

In the assumptions of Theorem 1.1, the sequence $\left\{{\gamma }_{n}\right\}$ is forced to tend to zero. If we keep it as a constant, then we have weak convergence as follows.

Theorem 1.2 

Assume that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\mathrm{\nabla }f$ is L-Lipschitz continuous. Let ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ be generated by the following iterative algorithm:

${x}_{n+1}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\alpha }_{n}}\right){x}_{n}={Proj}_{C}\left(I-\gamma \left(\mathrm{\nabla }f+{\alpha }_{n}I\right)\right){x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0.$
(1.4)

Assume that $0<\gamma <2/L$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}<\mathrm{\infty }$. Then ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges weakly to a solution of the minimization problem (1.1).
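Scheme (1.4), with a constant step and a summable regularization sequence, can be sketched on the same kind of hypothetical toy problem (our choice): $f\left(x\right)=\frac{1}{2}{\left({x}_{1}+{x}_{2}-2\right)}^{2}$ over $C={\left[-3,3\right]}^{2}$, $L=2$. In this finite-dimensional illustration the weak limit is an ordinary limit: some (not necessarily minimum-norm) solution of (1.1).

```python
def proj_C(x):
    return tuple(min(3.0, max(-3.0, t)) for t in x)

def gpa_constant_step(gamma=0.5, iters=5000):
    """Scheme (1.4): constant gamma in (0, 2/L) and a summable
    regularization sequence alpha_n = (n+1)**(-2)."""
    x = (3.0, -2.0)
    for n in range(iters):
        alpha = (n + 1.0) ** (-2)        # summable, per Theorem 1.2
        g = x[0] + x[1] - 2.0
        x = proj_C((x[0] - gamma * (g + alpha * x[0]),
                    x[1] - gamma * (g + alpha * x[1])))
    return x

x = gpa_constant_step()
print(x[0] + x[1])  # ~2: the limit lies in the solution set S
```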

In 2001, Yamada  introduced the following hybrid steepest descent method:

${x}_{n+1}=\left(I-{s}_{n}\mu F\right)T{x}_{n},$
(1.5)

where $F:H\to H$ is k-Lipschitzian and η-strongly monotone, and $0<\mu <2\eta /{k}^{2}$. It is proved that the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ generated by (1.5) converges strongly to ${x}^{\ast }\in Fix\left(T\right)$, which solves the variational inequality:

$〈F\left({x}^{\ast }\right),{x}^{\ast }-z〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in Fix\left(T\right).$
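A minimal sketch of Yamada’s scheme (1.5) follows; every concrete ingredient is our illustrative assumption. Take $T={Proj}_{{\left[0,3\right]}^{2}}$ (so $Fix\left(T\right)$ is the box), $F\left(x\right)=x-b$ with $b=\left(5,-1\right)$ (1-Lipschitzian and 1-strongly monotone, so any $\mu \in \left(0,2\right)$ is admissible), $\mu =1$, and ${s}_{n}={\left(n+1\right)}^{-1/2}$. The variational inequality solution is then ${Proj}_{Fix\left(T\right)}\left(b\right)=\left(3,0\right)$.

```python
def T(x):
    """Nonexpansive map: projection onto the box [0, 3]^2."""
    return tuple(min(3.0, max(0.0, t)) for t in x)

def hybrid_steepest_descent(b=(5.0, -1.0), mu=1.0, iters=10_000):
    """Scheme (1.5): x_{n+1} = (I - s_n*mu*F) T x_n with F(x) = x - b."""
    x = (0.0, 0.0)
    for n in range(iters):
        s = (n + 1) ** (-0.5)
        tx = T(x)
        x = tuple(t - s * mu * (t - bi) for t, bi in zip(tx, b))
    return x

x = hybrid_steepest_descent()
print(x)  # approaches (3, 0), the solution of the variational inequality
```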

In this paper, we introduce a modification of algorithm (1.4) which is based on Yamada’s method. It is proved that the sequence generated by our proposed algorithm converges strongly to a minimizer of (1.1), which is also a solution of a certain variational inequality.

## 2 Preliminaries

In this section, we introduce some useful properties and lemmas which will be used in the proofs for the main results in the next section.

Proposition 2.1 [7, 8]

Let the operators $S,T,V:H\to H$ be given:

1. (i)

If $T=\left(1-\alpha \right)S+\alpha V$, for some $\alpha \in \left(0,1\right)$ and if S is averaged and V is nonexpansive, then T is averaged.

2. (ii)

The composition of finitely many averaged mappings is averaged. That is, if each of the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ is averaged, then so is the composite ${T}_{1}\cdots {T}_{N}$. In particular, if ${T}_{1}$ is ${\alpha }_{1}$-averaged and ${T}_{2}$ is ${\alpha }_{2}$-averaged, where ${\alpha }_{1},{\alpha }_{2}\in \left(0,1\right)$, then the composite ${T}_{1}{T}_{2}$ is α-averaged, where $\alpha ={\alpha }_{1}+{\alpha }_{2}-{\alpha }_{1}{\alpha }_{2}$.

3. (iii)

If the mappings ${\left\{{T}_{i}\right\}}_{i=1}^{N}$ are averaged and have a common fixed point, then

$\bigcap _{i=1}^{N}Fix\left({T}_{i}\right)=Fix\left({T}_{1}\cdots {T}_{N}\right).$

Here, the notation $Fix\left(T\right)$ denotes the set of fixed points of the mapping T; that is, $Fix\left(T\right):=\left\{x\in H:Tx=x\right\}$.

Proposition 2.2 [7, 20]

Let $T:H\to H$ be given. We have:

1. (i)

T is nonexpansive, if and only if the complement $I-T$ is ($1/2$)-ism;

2. (ii)

If T is υ-ism, then for $\gamma >0$, γT is ($\upsilon /\gamma$)-ism;

3. (iii)

T is averaged, if and only if the complement $I-T$ is υ-ism for some $\upsilon >1/2$; indeed, for $\alpha \in \left(0,1\right)$, T is α-averaged, if and only if $I-T$ is ($1/2\alpha$)-ism.
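Proposition 2.2(iii) can be checked numerically. The sketch below builds an α-averaged map from an assumed nonexpansive map S (a rotation scaled by $0.9$, our choice) and samples the ism inequality for $I-T$.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha = 0.25                      # averagedness constant in (0, 1)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def S(x):
    """A nonexpansive map: a rotation scaled by 0.9 (0.9-Lipschitz)."""
    return 0.9 * (R @ x)

def Tmap(x):
    """T = (1 - alpha)*I + alpha*S is alpha-averaged by construction."""
    return (1 - alpha) * x + alpha * S(x)

# Proposition 2.2(iii): I - T should be 1/(2*alpha)-ism.
ok = True
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    u = (x - Tmap(x)) - (y - Tmap(y))
    if np.dot(x - y, u) < (1.0 / (2 * alpha)) * np.dot(u, u) - 1e-12:
        ok = False
print(ok)  # True: the ism inequality holds on all samples
```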

The so-called demiclosed principle for nonexpansive mappings will often be used.

Lemma 2.3 (Demiclosed Principle )

Let C be a closed and convex subset of a Hilbert space H and let $T:C\to C$ be a nonexpansive mapping with $Fix\left(T\right)\ne \mathrm{\varnothing }$. If ${\left\{{x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ is a sequence in C weakly converging to x and if ${\left\{\left(I-T\right){x}_{n}\right\}}_{n=1}^{\mathrm{\infty }}$ converges strongly to y, then $\left(I-T\right)x=y$. In particular, if $y=0$, then $x\in Fix\left(T\right)$.

Recall that the metric (nearest point) projection ${Proj}_{C}$ from a real Hilbert space H onto a closed convex subset C of H is defined as follows: given $x\in H$, ${Proj}_{C}x$ is the unique point in C with the property

$\parallel x-{Proj}_{C}x\parallel =inf\left\{\parallel x-y\parallel :y\in C\right\}.$

${Proj}_{C}$ is characterized as follows.

Lemma 2.4 Let C be a closed and convex subset of a real Hilbert space H. Given $x\in H$ and $y\in C$, we have $y={Proj}_{C}x$ if and only if the following inequality holds:

$〈x-y,y-z〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in C.$

Lemma 2.5 Assume that ${\left\{{a}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ is a sequence of nonnegative real numbers such that

${a}_{n+1}\le \left(1-{\gamma }_{n}\right){a}_{n}+{\gamma }_{n}{\delta }_{n}+{\beta }_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$

where ${\left\{{\gamma }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ and ${\left\{{\beta }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ are sequences in $\left(0,1\right)$ and ${\left\{{\delta }_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ is a sequence in $\mathbb{R}$ such that

1. (i)

${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}=\mathrm{\infty }$;

2. (ii)

either ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$ or ${\sum }_{n=0}^{\mathrm{\infty }}{\gamma }_{n}|{\delta }_{n}|<\mathrm{\infty }$;

3. (iii)

${\sum }_{n=0}^{\mathrm{\infty }}{\beta }_{n}<\mathrm{\infty }$.

Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
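The mechanism of Lemma 2.5 can be illustrated with one admissible choice of sequences (our assumption): ${\gamma }_{n}=1/\left(n+2\right)$, ${\delta }_{n}=1/\left(n+1\right)$ (so ${lim sup}_{n}{\delta }_{n}=0\le 0$), and ${\beta }_{n}=1/{\left(n+1\right)}^{2}$ (summable). The recursion then forces ${a}_{n}\to 0$.

```python
def run(a0=5.0, iters=100_000):
    """Simulate a_{n+1} = (1 - gamma_n)*a_n + gamma_n*delta_n + beta_n
    with gamma_n = 1/(n+2), delta_n = 1/(n+1), beta_n = 1/(n+1)^2."""
    a = a0
    for n in range(iters):
        gamma = 1.0 / (n + 2)
        delta = 1.0 / (n + 1)
        beta = 1.0 / (n + 1) ** 2
        a = (1 - gamma) * a + gamma * delta + beta
    return a

print(run())  # decays toward 0
```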

Throughout, we use the following notation:

• ${x}_{n}\to x$ means that ${x}_{n}\to x$ strongly;

• ${x}_{n}⇀x$ means that ${x}_{n}\to x$ weakly.

## 3 Main results

Recall that throughout this paper, we use S to denote the solution set of constrained convex minimization problem (1.1).

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let $F:C\to H$ be a k-Lipschitzian and η-strongly monotone operator with constants $k>0$ and $\eta >0$ such that $0<\mu <2\eta /{k}^{2}$. Suppose that the gradient $\mathrm{\nabla }f$ is L-Lipschitz continuous. We now consider a mapping ${Q}_{s}$ on C defined by:

${Q}_{s}\left(x\right)={Proj}_{C}\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left(x\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in C,$

where $s\in \left(0,1\right)$, and ${T}_{{\lambda }_{s}}$ is nonexpansive. Let ${T}_{{\lambda }_{s}}$ and ${\lambda }_{s}$ satisfy the following conditions:

1. (i)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right)=\left(1-{\theta }_{s}\right)I+{\theta }_{s}{T}_{{\lambda }_{s}}$ and $\gamma \in \left(0,2/L\right)$;

2. (ii)

${\theta }_{s}=\frac{2+\gamma \left(L+{\lambda }_{s}\right)}{4}$;

3. (iii)

${\lambda }_{s}$ is continuous with respect to s and ${\lambda }_{s}=o\left(s\right)$.

It is easy to see that ${Q}_{s}$ is a contraction. Indeed, we have for each $x,y\in C$,

$\begin{array}{rcl}\parallel {Q}_{s}\left(x\right)-{Q}_{s}\left(y\right)\parallel & =& \parallel {Proj}_{C}\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left(x\right)-{Proj}_{C}\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left(y\right)\parallel \\ \le & \parallel \left(I-s\mu F\right){T}_{{\lambda }_{s}}\left(x\right)-\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left(y\right)\parallel \\ \le & \left(1-s\tau \right)\parallel x-y\parallel ,\end{array}$

where $\tau =\frac{1}{2}\mu \left(2\eta -\mu {k}^{2}\right)$. Hence, ${Q}_{s}$ has a unique fixed point in C, denoted by ${x}_{s}$ which uniquely solves the fixed-point equation

${x}_{s}={Proj}_{C}\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left({x}_{s}\right).$
(3.1)

The following proposition summarizes the properties of the net $\left\{{x}_{s}\right\}$.

Proposition 3.1 Let ${x}_{s}$ be defined by (3.1). Then the following properties for the net $\left\{{x}_{s}\right\}$ hold:

1. (a)

$\left\{{x}_{s}\right\}$ is bounded for $s\in \left(0,1\right)$;

2. (b)

${lim}_{s\to 0}\parallel {x}_{s}-{T}_{{\lambda }_{s}}{x}_{s}\parallel =0$;

3. (c)

${x}_{s}$ defines a continuous curve from $\left(0,1\right)$ into C.

Proof It is well known that: $\stackrel{˜}{x}\in C$ solves the minimization problem (1.1) if and only if $\stackrel{˜}{x}$ solves the fixed-point equation

$\stackrel{˜}{x}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\stackrel{˜}{x}=\frac{2-\gamma L}{4}\stackrel{˜}{x}+\frac{2+\gamma L}{4}T\stackrel{˜}{x},$

where $0<\gamma <2/L$ is a constant. It is clear that $\stackrel{˜}{x}=T\stackrel{˜}{x}$, i.e., $\stackrel{˜}{x}\in S=Fix\left(T\right)$.

(a) Take a fixed $p\in S$. It follows that

$\parallel {x}_{s}-p\parallel \le \frac{\left(1+s\mu k\right)\parallel {T}_{{\lambda }_{s}}\left(p\right)-T\left(p\right)\parallel }{s\tau }+\frac{\mu }{\tau }\parallel F\left(p\right)\parallel .$
(3.2)

For $x\in C$, note that

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right)x=\left(1-{\theta }_{s}\right)x+{\theta }_{s}{T}_{{\lambda }_{s}}x$

and

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)x=\left(1-\theta \right)x+\theta Tx,$

where ${\theta }_{s}=\frac{2+\gamma \left(L+{\lambda }_{s}\right)}{4}$ and $\theta =\frac{2+\gamma L}{4}$.
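The value of ${\theta }_{s}$ is not arbitrary; it can be derived from Propositions 2.1 and 2.2 (a short verification, using the standard fact that the gradient of a convex function with an $\left(L+{\lambda }_{s}\right)$-Lipschitz gradient is $\frac{1}{L+{\lambda }_{s}}$-ism):

```latex
% Proj_C is (1/2)-averaged. Since \nabla f_{\lambda_s} is
% 1/(L+\lambda_s)-ism, \gamma\nabla f_{\lambda_s} is
% 1/(\gamma(L+\lambda_s))-ism (Proposition 2.2(ii)), hence
% I - \gamma\nabla f_{\lambda_s} is \gamma(L+\lambda_s)/2-averaged
% (Proposition 2.2(iii)). By Proposition 2.1(ii), the composite
% Proj_C(I - \gamma\nabla f_{\lambda_s}) is \theta_s-averaged with
\theta_s \;=\; \frac{1}{2} + \frac{\gamma(L+\lambda_s)}{2}
              - \frac{1}{2}\cdot\frac{\gamma(L+\lambda_s)}{2}
        \;=\; \frac{2+\gamma(L+\lambda_s)}{4}.
```

Setting ${\lambda }_{s}=0$ recovers $\theta =\frac{2+\gamma L}{4}$ for ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)$.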

Then we get

$\parallel \left(\theta -{\theta }_{s}\right)x+{\theta }_{s}{T}_{{\lambda }_{s}}x-\theta Tx\parallel =\parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right)x-{Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)x\parallel \le \gamma {\lambda }_{s}\parallel x\parallel .$

Since ${\theta }_{s}=\frac{2+\gamma \left(L+{\lambda }_{s}\right)}{4}$ and $\theta =\frac{2+\gamma L}{4}$, there exists a real positive number $M>0$ such that

$\parallel {T}_{{\lambda }_{s}}x-Tx\parallel \le \frac{{\lambda }_{s}\gamma \left(5\parallel x\parallel +\parallel Tx\parallel \right)}{2+\gamma \left(L+{\lambda }_{s}\right)}\le {\lambda }_{s}M\parallel x\parallel .$
(3.3)

It follows from (3.2) and (3.3) that

$\parallel {x}_{s}-p\parallel \le \frac{1+s\mu k}{\tau }\cdot \frac{{\lambda }_{s}}{s}\cdot M\parallel p\parallel +\frac{\mu }{\tau }\parallel F\left(p\right)\parallel .$

Since ${\lambda }_{s}=o\left(s\right)$, there exists a real positive number ${M}^{\mathrm{\prime }}>0$ such that $\frac{{\lambda }_{s}}{s}\le {M}^{\mathrm{\prime }}$. Hence, $\left\{{x}_{s}\right\}$ is bounded.

(b) Note that the boundedness of $\left\{{x}_{s}\right\}$ implies that $\left\{F{T}_{{\lambda }_{s}}\left({x}_{s}\right)\right\}$ is also bounded. Hence, by the definition of $\left\{{x}_{s}\right\}$, we have $\parallel {x}_{s}-{T}_{{\lambda }_{s}}{x}_{s}\parallel \le s\mu \parallel F{T}_{{\lambda }_{s}}\left({x}_{s}\right)\parallel \to 0$ as $s\to 0$.

(c) For $\gamma \in \left(0,2/L\right)$, we have

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right)=\left(1-{\theta }_{s}\right)I+{\theta }_{s}{T}_{{\lambda }_{s}}$

and

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{{s}_{0}}}\right)=\left(1-{\theta }_{{s}_{0}}\right)I+{\theta }_{{s}_{0}}{T}_{{\lambda }_{{s}_{0}}},$

where ${\theta }_{s}=\frac{2+\gamma \left(L+{\lambda }_{s}\right)}{4}$ and ${\theta }_{{s}_{0}}=\frac{2+\gamma \left(L+{\lambda }_{{s}_{0}}\right)}{4}$.

So, for ${x}_{s}\in C$, there exists an appropriate constant $N>0$ such that

$N\ge \gamma \parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right){x}_{s}\parallel +5\gamma \parallel {x}_{s}\parallel .$

Now take $s,{s}_{0}\in \left(0,1\right)$. A direct calculation shows that

$\parallel {x}_{s}-{x}_{{s}_{0}}\parallel \le \frac{\mu \parallel F{T}_{{\lambda }_{s}}\left({x}_{s}\right)\parallel }{{s}_{0}\tau }|s-{s}_{0}|+\frac{\left(1+{s}_{0}\mu k\right)N}{{s}_{0}\tau }|{\lambda }_{s}-{\lambda }_{{s}_{0}}|.$

Since $\left\{F{T}_{{\lambda }_{s}}\left({x}_{s}\right)\right\}$ is bounded, and ${\lambda }_{s}$ is continuous with respect to s, ${x}_{s}$ defines a continuous curve from $\left(0,1\right)$ into C. □

The following theorem shows that the net $\left\{{x}_{s}\right\}$ converges strongly as $s\to 0$ to a minimizer of (1.1), which solves some variational inequality.

Theorem 3.2 Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Let $F:C\to H$ be a k-Lipschitzian and η-strongly monotone operator with constants $k>0$ and $\eta >0$ such that $0<\mu <2\eta /{k}^{2}$. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\mathrm{\nabla }f$ is Lipschitzian with constant $L>0$. Let ${x}_{s}$ be defined by (3.1), where the parameter $s\in \left(0,1\right)$ and ${T}_{{\lambda }_{s}}$ is nonexpansive. Let ${T}_{{\lambda }_{s}}$ and ${\lambda }_{s}$ satisfy the following conditions:

1. (i)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{s}}\right)=\left(1-{\theta }_{s}\right)I+{\theta }_{s}{T}_{{\lambda }_{s}}$ and $\gamma \in \left(0,2/L\right)$;

2. (ii)

${\theta }_{s}=\frac{2+\gamma \left(L+{\lambda }_{s}\right)}{4}$;

3. (iii)

${\lambda }_{s}$ is continuous with respect to s and ${\lambda }_{s}=o\left(s\right)$.

Then the net $\left\{{x}_{s}\right\}$ converges strongly as $s\to 0$ to a minimizer ${x}^{\ast }$ of (1.1), which solves the variational inequality

$〈F{x}^{\ast },{x}^{\ast }-z〉\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in S.$
(3.4)

Equivalently, we have ${Proj}_{S}\left(I-\mu F\right){x}^{\ast }={x}^{\ast }$.

Proof It is easy to see the uniqueness of a solution of the variational inequality (3.4). Indeed, suppose that both $\stackrel{˜}{x}\in S$ and $\stackrel{ˆ}{x}\in S$ are solutions to (3.4). Then

$〈F\stackrel{˜}{x},\stackrel{˜}{x}-\stackrel{ˆ}{x}〉\le 0$
(3.5)

and

$〈F\stackrel{ˆ}{x},\stackrel{ˆ}{x}-\stackrel{˜}{x}〉\le 0.$
(3.6)

Adding up (3.5) and (3.6) yields

$〈F\stackrel{˜}{x}-F\stackrel{ˆ}{x},\stackrel{˜}{x}-\stackrel{ˆ}{x}〉\le 0.$

The strong monotonicity of F implies that $\stackrel{˜}{x}=\stackrel{ˆ}{x}$ and the uniqueness is proved. Below we use ${x}^{\ast }\in S$ to denote the unique solution of the variational inequality (3.4).

Let us show that ${x}_{s}\to {x}^{\ast }$ as $s\to 0$. Set

${y}_{s}=\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left({x}_{s}\right).$

Then we have ${x}_{s}={Proj}_{C}{y}_{s}$. For any given $z\in S$, we obtain estimate (3.7).

Since ${Proj}_{C}$ is the metric projection from H onto C, we have

$〈{y}_{s}-{x}_{s},z-{x}_{s}〉\le 0.$

Note that ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)z=z$ and ${Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)=\frac{2-\gamma L}{4}I+\frac{2+\gamma L}{4}T$, so we get $z=Tz$, i.e., $z\in S=Fix\left(T\right)$.

It follows from (3.7), together with (3.3), that (3.8) holds.

Since $\left\{{x}_{s}\right\}$ is bounded, there exists a sequence $\left\{{s}_{n}\right\}$ in $\left(0,1\right)$ with ${s}_{n}\to 0$ such that ${x}_{{s}_{n}}⇀\overline{x}$.

By Proposition 3.1(b) and (3.3), we have $\parallel {x}_{{s}_{n}}-T{x}_{{s}_{n}}\parallel \le \parallel {x}_{{s}_{n}}-{T}_{{\lambda }_{{s}_{n}}}{x}_{{s}_{n}}\parallel +{\lambda }_{{s}_{n}}M\parallel {x}_{{s}_{n}}\parallel \to 0$. So, by Lemma 2.3, we get $\overline{x}\in Fix\left(T\right)=S$.

Since ${\lambda }_{s}=o\left(s\right)$, we obtain from (3.8) that ${x}_{{s}_{n}}\to \overline{x}\in S$.

Next, we show that $\overline{x}$ solves the variational inequality (3.4). Observe that

${x}_{s}={Proj}_{C}{y}_{s}={Proj}_{C}{y}_{s}-{y}_{s}+\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left({x}_{s}\right).$

Hence, we conclude that

$\mu F\left({x}_{s}\right)=\frac{1}{s}\left({Proj}_{C}{y}_{s}-{y}_{s}\right)+\frac{1}{s}\left[\left(I-s\mu F\right){T}_{{\lambda }_{s}}\left({x}_{s}\right)-\left(I-s\mu F\right)\left({x}_{s}\right)\right].$

Since ${T}_{{\lambda }_{s}}$ is nonexpansive, $I-{T}_{{\lambda }_{s}}$ is monotone. Note that, for any given $z\in S$, $z=Tz$ and $〈{Proj}_{C}{y}_{s}-{y}_{s},{Proj}_{C}{y}_{s}-z〉\le 0$.

By (3.3), it follows that (3.9) holds.

Since ${\lambda }_{s}=o\left(s\right)$, by Proposition 3.1(b), we obtain from (3.9) that

$〈\mu F\left(\overline{x}\right),\overline{x}-z〉\le 0.$

So $\overline{x}\in S$ is a solution of the variational inequality (3.4). We get $\overline{x}={x}^{\ast }$ by uniqueness. Therefore, ${x}_{s}\to {x}^{\ast }$ as $s\to 0$.

The variational inequality (3.4) can be rewritten as

$〈\left(I-\mu F\right){x}^{\ast }-{x}^{\ast },{x}^{\ast }-z〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }z\in S.$

So in terms of Lemma 2.4, it is equivalent to the following fixed point equation:

${Proj}_{S}\left(I-\mu F\right){x}^{\ast }={x}^{\ast }.$ □

Next, we study the following iterative method. For an arbitrary initial guess ${x}_{0}\in C$, we propose the following scheme that generates a sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ in an explicit way:

${x}_{n+1}={Proj}_{C}\left(I-{s}_{n}\mu F\right){T}_{{\lambda }_{n}}\left({x}_{n}\right),$
(3.10)

where the parameters $\left\{{s}_{n}\right\}\subset \left(0,1\right)$. Let ${T}_{{\lambda }_{n}}$ and ${\lambda }_{n}$ satisfy the following conditions:

1. (i)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\left(1-{\theta }_{n}\right)I+{\theta }_{n}{T}_{{\lambda }_{n}}$ and $0<\gamma <2/L$;

2. (ii)

${\theta }_{n}=\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$;

3. (iii)

${\lambda }_{n}=o\left({s}_{n}\right)$.

We shall prove that the sequence ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ converges strongly to a minimizer ${x}^{\ast }\in S$ of (1.1), which also solves the variational inequality (3.4).

Theorem 3.3 Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Let $F:C\to H$ be a k-Lipschitzian and η-strongly monotone operator with constants $k>0$ and $\eta >0$ such that $0<\mu <2\eta /{k}^{2}$. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient $\mathrm{\nabla }f$ is Lipschitzian with constant $L>0$. Let ${\left\{{x}_{n}\right\}}_{n=0}^{\mathrm{\infty }}$ be generated by algorithm (3.10) with parameters $\left\{{s}_{n}\right\}\subset \left(0,1\right)$. Let ${T}_{{\lambda }_{n}}$, ${\lambda }_{n}$ and ${s}_{n}$ satisfy the following conditions:

1. (C1)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\left(1-{\theta }_{n}\right)I+{\theta }_{n}{T}_{{\lambda }_{n}}$ and $\gamma \in \left(0,2/L\right)$;

2. (C2)

${\theta }_{n}=\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$ for all n;

3. (C3)

${lim}_{n\to \mathrm{\infty }}{s}_{n}=0$ and ${\sum }_{n=0}^{\mathrm{\infty }}{s}_{n}=\mathrm{\infty }$;

4. (C4)

${\sum }_{n=0}^{\mathrm{\infty }}|{s}_{n+1}-{s}_{n}|<\mathrm{\infty }$;

5. (C5)

${\lambda }_{n}=o\left({s}_{n}\right)$ and ${\sum }_{n=0}^{\mathrm{\infty }}|{\lambda }_{n+1}-{\lambda }_{n}|<\mathrm{\infty }$.

Then the sequence $\left\{{x}_{n}\right\}$ generated by the explicit scheme (3.10) converges strongly to a minimizer ${x}^{\ast }$ of (1.1), which is also a solution of the variational inequality (3.4).

Proof It is well known that:

1. (a)

$\stackrel{˜}{x}\in C$ solves the minimization problem (1.1) if and only if $\stackrel{˜}{x}$ solves the fixed-point equation

$\stackrel{˜}{x}={Proj}_{C}\left(I-\gamma \mathrm{\nabla }f\right)\stackrel{˜}{x}=\frac{2-\gamma L}{4}\stackrel{˜}{x}+\frac{2+\gamma L}{4}T\stackrel{˜}{x},$

where $0<\gamma <2/L$ is a constant. It is clear that $\stackrel{˜}{x}=T\stackrel{˜}{x}$, i.e., $\stackrel{˜}{x}\in S=Fix\left(T\right)$.

1. (b)

the gradient $\mathrm{\nabla }f$ is ($1/L$)-ism.

2. (c)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)$ is $\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$-averaged for $\gamma \in \left(0,2/L\right)$; in particular, the following relation holds:

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}I+\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}{T}_{{\lambda }_{n}}=\left(1-{\theta }_{n}\right)I+{\theta }_{n}{T}_{{\lambda }_{n}}.$

We observe that $\left\{{x}_{n}\right\}$ is bounded. Indeed, take a fixed $p\in S$. It follows that

$\parallel {x}_{n+1}-p\parallel \le \left(1-{s}_{n}\tau \right)\parallel {x}_{n}-p\parallel +\left(1+{s}_{n}\mu k\right)\parallel {T}_{{\lambda }_{n}}\left(p\right)-T\left(p\right)\parallel +{s}_{n}\parallel \mu F\left(p\right)\parallel .$

Note that, by using the same argument as in the proof of (3.3), there exists a real positive number $M>0$ such that

$\parallel {T}_{{\lambda }_{n}}p-Tp\parallel \le \frac{{\lambda }_{n}\gamma \left(5\parallel p\parallel +\parallel Tp\parallel \right)}{2+\gamma \left(L+{\lambda }_{n}\right)}\le {\lambda }_{n}M\parallel p\parallel .$
(3.11)

Since ${\lambda }_{n}=o\left({s}_{n}\right)$, there exists a real positive number ${M}^{\mathrm{\prime }}>0$ such that $\frac{{\lambda }_{n}}{{s}_{n}}\le {M}^{\mathrm{\prime }}$. By (3.11) and induction, it follows that

$\parallel {x}_{n}-p\parallel \le max\left\{\parallel {x}_{0}-p\parallel ,\frac{\parallel \mu F\left(p\right)\parallel +\left(1+\mu k\right){M}^{\mathrm{\prime }}M\parallel p\parallel }{\tau }\right\},\phantom{\rule{1em}{0ex}}n\ge 0.$
(3.12)

Consequently, $\left\{{x}_{n}\right\}$ is bounded. It implies that $\left\{{T}_{{\lambda }_{n}}\left({x}_{n}\right)\right\}$ is also bounded.

We claim that

$\parallel {x}_{n+1}-{x}_{n}\parallel \to 0.$
(3.13)

Indeed, since

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\frac{2-\gamma \left(L+{\lambda }_{n}\right)}{4}I+\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}{T}_{{\lambda }_{n}},$

we obtain that

${T}_{{\lambda }_{n}}=\frac{4{Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)-\left[2-\gamma \left(L+{\lambda }_{n}\right)\right]I}{2+\gamma \left(L+{\lambda }_{n}\right)}.$

By using the same argument as in the proof of Proposition 3.1(c), we obtain that there exists an appropriate constant $K>0$ such that

$K\ge \gamma \parallel {Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)\left({x}_{n-1}\right)\parallel +5\gamma \parallel {x}_{n-1}\parallel ,\phantom{\rule{1em}{0ex}}n\ge 0.$

Thus, there exists an appropriate constant $E>0$ such that

$E\ge \parallel F{T}_{{\lambda }_{n}}\left({x}_{n-1}\right)\parallel ,\phantom{\rule{1em}{0ex}}n\ge 0.$

Consequently, we get

$\parallel {x}_{n+1}-{x}_{n}\parallel \le \left(1-{s}_{n}\tau \right)\parallel {x}_{n}-{x}_{n-1}\parallel +\mu E|{s}_{n}-{s}_{n-1}|+|{\lambda }_{n}-{\lambda }_{n-1}|\left(K+\mu k\cdot K\right).$

By Lemma 2.5, we obtain $\parallel {x}_{n+1}-{x}_{n}\parallel \to 0$.

Next, we show that

$\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \to 0.$
(3.14)

Indeed, since ${x}_{n+1}={Proj}_{C}\left(I-{s}_{n}\mu F\right){T}_{{\lambda }_{n}}\left({x}_{n}\right)$, it follows from (3.13) and condition (C3) that $\parallel {x}_{n}-{T}_{{\lambda }_{n}}{x}_{n}\parallel \le \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {x}_{n+1}-{T}_{{\lambda }_{n}}\left({x}_{n}\right)\parallel \le \parallel {x}_{n}-{x}_{n+1}\parallel +{s}_{n}\mu \parallel F{T}_{{\lambda }_{n}}\left({x}_{n}\right)\parallel \to 0$.

Now we show that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-{x}^{\ast },-\mu F\left({x}^{\ast }\right)〉\le 0,$
(3.15)

where ${x}^{\ast }\in S$ is a solution of the variational inequality (3.4).

Indeed, take a subsequence $\left\{{x}_{{n}_{k}}\right\}$ of $\left\{{x}_{n}\right\}$ such that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-{x}^{\ast },-\mu F\left({x}^{\ast }\right)〉=\underset{k\to \mathrm{\infty }}{lim}〈{x}_{{n}_{k}}-{x}^{\ast },-\mu F\left({x}^{\ast }\right)〉.$
(3.16)

Without loss of generality, we may assume that ${x}_{{n}_{k}}⇀\stackrel{˜}{x}$.

We observe that

$\parallel {x}_{n}-T{x}_{n}\parallel \le \parallel {x}_{n}-{T}_{{\lambda }_{n}}\left({x}_{n}\right)\parallel +\parallel {T}_{{\lambda }_{n}}\left({x}_{n}\right)-T{x}_{n}\parallel .$

It follows from (3.11) that

$\parallel {x}_{n}-T{x}_{n}\parallel \le \parallel {x}_{n}-{T}_{{\lambda }_{n}}\left({x}_{n}\right)\parallel +{\lambda }_{n}M\parallel {x}_{n}\parallel .$

By (3.14), we get $\parallel {x}_{n}-T{x}_{n}\parallel \to 0$.

In terms of Lemma 2.3, we get $\stackrel{˜}{x}\in Fix\left(T\right)=S$.

Consequently, from (3.16) and the variational inequality (3.4), it follows that

$\underset{n\to \mathrm{\infty }}{lim sup}〈{x}_{n}-{x}^{\ast },-\mu F\left({x}^{\ast }\right)〉=〈\stackrel{˜}{x}-{x}^{\ast },-\mu F\left({x}^{\ast }\right)〉\le 0.$

Finally, we show that ${x}_{n}\to {x}^{\ast }$.

As a matter of fact, set

${y}_{n}=\left(I-{s}_{n}\mu F\right){T}_{{\lambda }_{n}}\left({x}_{n}\right),\phantom{\rule{1em}{0ex}}n\ge 0.$

Then, ${x}_{n+1}={Proj}_{C}{y}_{n}-{y}_{n}+{y}_{n}$.

In terms of Lemma 2.4 and (3.11), since $\left\{{x}_{n}\right\}$ is bounded, we can take a constant ${L}^{\mathrm{\prime }}>0$ such that

${L}^{\mathrm{\prime }}\ge \left(M+\mu kM\right)\parallel {x}^{\ast }\parallel \parallel {x}_{n+1}-{x}^{\ast }\parallel ,\phantom{\rule{1em}{0ex}}n\ge 0.$

It then follows that

${\parallel {x}_{n+1}-{x}^{\ast }\parallel }^{2}\le \left(1-{s}_{n}\tau \right){\parallel {x}_{n}-{x}^{\ast }\parallel }^{2}+{s}_{n}{\delta }_{n},$
(3.17)

where ${\delta }_{n}=\frac{2}{1+{s}_{n}\tau }〈-\mu F\left({x}^{\ast }\right),{x}_{n+1}-{x}^{\ast }〉+\frac{2{\lambda }_{n}}{{s}_{n}}{L}^{\mathrm{\prime }}$.

By (3.15) and ${\lambda }_{n}=o\left({s}_{n}\right)$, we get ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$. Now applying Lemma 2.5 to (3.17) concludes that ${x}_{n}\to {x}^{\ast }$ as $n\to \mathrm{\infty }$. □
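In finite dimensions, scheme (3.10) can be run directly; everything concrete below is our illustrative assumption. Take $f\left(x\right)=\frac{1}{2}{\left({x}_{1}+{x}_{2}-2\right)}^{2}$ on $C={\left[-3,3\right]}^{2}$ ($L=2$), $\gamma =0.5\in \left(0,2/L\right)$, $F=I$ ($k=\eta =1$, $\mu =1\in \left(0,2\right)$), ${s}_{n}={\left(n+1\right)}^{-1/2}$ and ${\lambda }_{n}=1/\left(n+1\right)=o\left({s}_{n}\right)$, which satisfy (C1)-(C5). The map ${T}_{{\lambda }_{n}}$ is recovered from condition (C1). With $F=I$, the limit solves $〈{x}^{\ast },{x}^{\ast }-z〉\le 0$ on S, i.e., it is the minimum-norm minimizer $\left(1,1\right)$.

```python
GAMMA, L, MU = 0.5, 2.0, 1.0

def proj_C(x):
    return tuple(min(3.0, max(-3.0, t)) for t in x)

def T_lam(x, lam):
    """T_lam = [4*Proj_C(I - gamma*grad f_lam) - (2 - gamma*(L+lam))*I]
               / (2 + gamma*(L+lam)), as in the proof of Theorem 3.3."""
    g = x[0] + x[1] - 2.0
    p = proj_C((x[0] - GAMMA * (g + lam * x[0]),
                x[1] - GAMMA * (g + lam * x[1])))
    c1, c2 = 2.0 - GAMMA * (L + lam), 2.0 + GAMMA * (L + lam)
    return tuple((4.0 * pi - c1 * xi) / c2 for pi, xi in zip(p, x))

def scheme_310(iters=10_000):
    x = (3.0, -2.0)
    for n in range(iters):
        s, lam = (n + 1) ** (-0.5), 1.0 / (n + 1)
        y = T_lam(x, lam)
        x = proj_C(tuple((1.0 - s * MU) * yi for yi in y))  # F = I
    return x

print(scheme_310())  # approaches the minimum-norm solution (1, 1)
```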

## 4 Application

In this section, we give an application of Theorem 3.3 to the split feasibility problem (SFP for short), which was introduced by Censor and Elfving . Since its inception in 1994, the SFP has received much attention (see [7, 23, 24]) due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy.

The SFP can mathematically be formulated as the problem of finding a point x with the property

$x\in C\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}Bx\in Q,$
(4.1)

where C and Q are nonempty, closed and convex subsets of Hilbert spaces ${H}_{1}$ and ${H}_{2}$, respectively, and $B:{H}_{1}\to {H}_{2}$ is a bounded linear operator.

It is clear that ${x}^{\ast }$ is a solution to the split feasibility problem (4.1) if and only if ${x}^{\ast }\in C$ and $B{x}^{\ast }-{Proj}_{Q}B{x}^{\ast }=0$. We define the proximity function f by

$f\left(x\right)=\frac{1}{2}{\parallel Bx-{Proj}_{Q}Bx\parallel }^{2},$

and consider the constrained convex minimization problem

$\underset{x\in C}{min}f\left(x\right)=\underset{x\in C}{min}\frac{1}{2}{\parallel Bx-{Proj}_{Q}Bx\parallel }^{2}.$
(4.2)

Then ${x}^{\ast }$ solves the split feasibility problem (4.1) if and only if ${x}^{\ast }$ solves the minimization problem (4.2) with the minimum value equal to 0. Byrne  introduced the so-called CQ algorithm to solve the SFP:

${x}_{n+1}={Proj}_{C}\left(I-\gamma {B}^{\ast }\left(I-{Proj}_{Q}\right)B\right){x}_{n},\phantom{\rule{1em}{0ex}}n\ge 0,$
(4.3)

where $0<\gamma <2/{\parallel B\parallel }^{2}$. He proved that the sequence $\left\{{x}_{n}\right\}$ generated by (4.3) converges weakly to a solution of the SFP.
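The CQ iteration (4.3) can be sketched on hypothetical data (our choice, not from the text): $C={\left[0,1\right]}^{2}$, $Q=\left[1,3\right]×\left[0,0.5\right]$, and $B=diag\left(2,1\right)$, so ${\parallel B\parallel }^{2}=4$ and any $\gamma \in \left(0,0.5\right)$ is admissible.

```python
import numpy as np

B = np.diag([2.0, 1.0])                # ||B||^2 = 4
GAMMA = 0.4                            # in (0, 2/||B||^2) = (0, 0.5)

def proj_C(x):                         # C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def proj_Q(y):                         # Q = [1, 3] x [0, 0.5]
    return np.array([np.clip(y[0], 1.0, 3.0), np.clip(y[1], 0.0, 0.5)])

def cq(iters=200):
    """CQ algorithm (4.3): x <- Proj_C(x - gamma*B^T(I - Proj_Q)Bx)."""
    x = np.zeros(2)
    for _ in range(iters):
        r = B @ x - proj_Q(B @ x)      # gradient factor of the proximity function
        x = proj_C(x - GAMMA * (B.T @ r))
    return x

x = cq()
print(x, B @ x)  # x in C with Bx in Q: a solution of the SFP
```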

In order to obtain an iterative sequence that converges strongly to a solution of the SFP, we propose the following algorithm:

${x}_{n+1}={Proj}_{C}\left(I-{s}_{n}\mu F\right){T}_{{\lambda }_{n}}\left({x}_{n}\right),$
(4.4)

where the parameters $\left\{{s}_{n}\right\}\subset \left(0,1\right)$ and $\left\{{T}_{{\lambda }_{n}}\right\}$ satisfy the following conditions:

1. (C1)

${Proj}_{C}\left(I-\gamma \left({B}^{\ast }\left(I-{Proj}_{Q}\right)B+{\lambda }_{n}I\right)\right)=\left(1-{\theta }_{n}\right)I+{\theta }_{n}{T}_{{\lambda }_{n}}$ and $\gamma \in \left(0,2/L\right)$;

2. (C2)

${\theta }_{n}=\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$ for all n,

where $F:C\to H$ is a k-Lipschitzian and η-strongly monotone operator with constants $k>0$ and $\eta >0$ such that $0<\mu <2\eta /{k}^{2}$. We can show that the sequence $\left\{{x}_{n}\right\}$ generated by (4.4) converges strongly to a solution of the SFP (4.1) provided the sequences $\left\{{s}_{n}\right\}\subset \left(0,1\right)$ and $\left\{{\lambda }_{n}\right\}$ satisfy appropriate conditions.

Applying Theorem 3.3, we obtain the following result.

Theorem 4.1 Assume that the split feasibility problem (4.1) is consistent. Let the sequence $\left\{{x}_{n}\right\}$ be generated by (4.4), where the sequence $\left\{{s}_{n}\right\}\subset \left(0,1\right)$ and the sequence $\left\{{\lambda }_{n}\right\}$ satisfy conditions (C3)-(C5). Then $\left\{{x}_{n}\right\}$ converges strongly to a solution of the split feasibility problem (4.1).

Proof By the definition of the proximity function f, we have

$\mathrm{\nabla }f\left(x\right)={B}^{\ast }\left(I-{Proj}_{Q}\right)Bx,$

and $\mathrm{\nabla }f$ is Lipschitz continuous, i.e.,

$\parallel \mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(y\right)\parallel \le L\parallel x-y\parallel ,$

where $L={\parallel B\parallel }^{2}$.
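The closed form $\mathrm{\nabla }f\left(x\right)={B}^{\ast }\left(I-{Proj}_{Q}\right)Bx$ can be sanity-checked numerically at a point where ${Proj}_{Q}\left(Bx\right)$ is locally constant (so that f is smooth there); the matrix B, the box Q, and the test point below are hypothetical.

```python
import numpy as np

B = np.array([[1.0, 2.0], [0.5, 1.0]])
proj_Q = lambda y: np.clip(y, 0.0, 1.0)            # Q = [0,1]^2, a hypothetical box

f = lambda x: 0.5 * np.linalg.norm(B @ x - proj_Q(B @ x)) ** 2
grad_f = lambda x: B.T @ (B @ x - proj_Q(B @ x))   # B^*(I - Proj_Q)B x

# Central finite differences at x = (3, 3), where Bx = (9, 4.5) lies strictly
# outside Q, so Proj_Q(Bx) = (1, 1) is locally constant and f is smooth.
x, eps = np.array([3.0, 3.0]), 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(2)])

# The Lipschitz constant of grad f is L = ||B||^2 (squared spectral norm).
L = np.linalg.norm(B, 2) ** 2
```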

Set ${f}_{{\lambda }_{n}}\left(x\right)=f\left(x\right)+\frac{{\lambda }_{n}}{2}{\parallel x\parallel }^{2}$; consequently,

$\begin{array}{rcl}\mathrm{\nabla }{f}_{{\lambda }_{n}}\left(x\right)& =& \mathrm{\nabla }f\left(x\right)+{\lambda }_{n}I\left(x\right)\\ =& {B}^{\ast }\left(I-{Proj}_{Q}\right)Bx+{\lambda }_{n}x.\end{array}$

Then the iterative scheme (4.4) is equivalent to

${x}_{n+1}={Proj}_{C}\left(I-{s}_{n}\mu F\right){T}_{{\lambda }_{n}}\left({x}_{n}\right),$

where the parameters $\left\{{s}_{n}\right\}\subset \left(0,1\right)$ and $\left\{{T}_{{\lambda }_{n}}\right\}$ satisfy the following conditions:

1. (C1)

${Proj}_{C}\left(I-\gamma \mathrm{\nabla }{f}_{{\lambda }_{n}}\right)=\left(1-{\theta }_{n}\right)I+{\theta }_{n}{T}_{{\lambda }_{n}}$ and $\gamma \in \left(0,2/L\right)$;

2. (C2)

${\theta }_{n}=\frac{2+\gamma \left(L+{\lambda }_{n}\right)}{4}$ for all n.

Due to Theorem 3.3, we have the conclusion immediately. □

## References

1. Brezis H: Operateurs Maximaux Monotones et Semi-Groups de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam; 1973.

2. Jaiboon C, Kumam P: A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Fixed Point Theory Appl. 2009. doi:10.1155/2009/374815

3. Jitpeera T, Katchang P, Kumam P: A viscosity of Cesàro mean approximation methods for a mixed equilibrium, variational inequalities, and fixed point problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/945051

4. Bertsekas DP, Gafni EM: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 1982, 17: 139–159. 10.1007/BFb0120965

5. Han D, Lo HK: Solving non-additive traffic assignment problems, a descent method for cocoercive variational inequalities. Eur. J. Oper. Res. 2004, 159: 529–544. 10.1016/S0377-2217(03)00423-5

6. Bauschke H, Borwein J: On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38: 367–426. 10.1137/S0036144593251710

7. Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006

8. Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53: 475–504. 10.1080/02331930412331327157

9. Xu HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z

10. Yamada I: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Application. Edited by: Butnariu D, Censor Y, Reich S. Elsevier, New York; 2001:473–504.

11. Levitin ES, Polyak BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 1966, 6: 787–823.

12. Calamai PH, Moré JJ: Projected gradient methods for linearly constrained problems. Math. Program. 1987, 39: 93–116. 10.1007/BF02592073

13. Polyak BT: Introduction to Optimization. Optimization Software, New York; 1987.

14. Su M, Xu HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 2010, 1: 35–43.

15. Jung JS: A general iterative approach to variational inequality problems and optimization problems. Fixed Point Theory Appl. 2011. doi:10.1155/2011/284363

16. Jung JS: A general composite iterative method for generalized mixed equilibrium problems, variational inequalities problems and optimization problems. J. Inequal. Appl. 2011. doi:10.1186/1029-242X-2011-51

17. Jitpeera T, Kumam P: A new iterative algorithm for solving common solutions of generalized mixed equilibrium problems, fixed point problems and variational inclusion problems with minimization problems. Fixed Point Theory Appl. 2012., 2012: Article ID 111. doi:10.1186/1687-1812-2012-111

18. Witthayarat U, Jitpeera T, Kumam P: A new modified hybrid steepest-descent by using a viscosity approximation method with a weakly contractive mapping for a system of equilibrium problems and fixed point problems with minimization problems. Abstr. Appl. Anal. 2012., 2012: Article ID 206345

19. Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018

20. Martinez-Yanes C, Xu HK: Strong convergence of the CQ method for fixed-point iteration processes. Nonlinear Anal. 2006, 64: 2400–2411. 10.1016/j.na.2005.08.018

21. Hundal H: An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57: 35–61. 10.1016/j.na.2003.11.004

22. Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692

23. López G, Martin-Márquez V, Wang FH, Xu HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012., 28: Article ID 085004

24. Zhao JL, Zhang YJ, Yang QZ: Modified projection methods for the split feasibility problem and the multiple-set split feasibility problem. Appl. Math. Comput. 2012, 219: 1644–1653. 10.1016/j.amc.2012.08.005

## Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported in part by The Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. ZXH2012K001), and by the Science Research Foundation of Civil Aviation University of China (No. 2012KYM03).

## Author information


### Corresponding author

Correspondence to Ming Tian.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All the authors read and approved the final manuscript.

## Rights and permissions


Tian, M., Huang, LH. Iterative methods for constrained convex minimization problem in Hilbert spaces. Fixed Point Theory Appl 2013, 105 (2013). https://doi.org/10.1186/1687-1812-2013-105


### Keywords

• iterative algorithm
• constrained convex minimization
• nonexpansive mapping
• fixed point
• variational inequality 