Recall that throughout this paper, we use S to denote the solution set of the constrained convex minimization problem (1.1).
Let H be a real Hilbert space and C a nonempty closed convex subset of H. Let F:C\to H be a k-Lipschitzian and η-strongly monotone operator with constants k>0 and \eta >0 such that 0<\mu <2\eta /{k}^{2}. Suppose that ∇f is L-Lipschitz continuous. We now consider a mapping {Q}_{s} on C defined by
{Q}_{s}(x)={Proj}_{C}(I-s\mu F){T}_{{\lambda}_{s}}(x),\phantom{\rule{1em}{0ex}}\mathrm{\forall}x\in C,
where s\in (0,1) and {T}_{{\lambda}_{s}} is nonexpansive. Let {T}_{{\lambda}_{s}} and {\lambda}_{s} satisfy the following conditions:

(i)
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})=(1-{\theta}_{s})I+{\theta}_{s}{T}_{{\lambda}_{s}} and \gamma \in (0,2/L);

(ii)
{\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4};

(iii)
{\lambda}_{s} is continuous with respect to s and {\lambda}_{s}=o(s).
It is easy to see that {Q}_{s} is a contraction. Indeed, for each x,y\in C we have
\begin{array}{rcl}\parallel {Q}_{s}(x)-{Q}_{s}(y)\parallel & =& \parallel {Proj}_{C}(I-s\mu F){T}_{{\lambda}_{s}}(x)-{Proj}_{C}(I-s\mu F){T}_{{\lambda}_{s}}(y)\parallel \\ & \le & \parallel (I-s\mu F){T}_{{\lambda}_{s}}(x)-(I-s\mu F){T}_{{\lambda}_{s}}(y)\parallel \\ & \le & (1-s\tau )\parallel x-y\parallel ,\end{array}
where \tau =\frac{1}{2}\mu (2\eta -\mu {k}^{2}). Hence, {Q}_{s} has a unique fixed point in C, denoted by {x}_{s}, which uniquely solves the fixed-point equation
{x}_{s}={Proj}_{C}(I-s\mu F){T}_{{\lambda}_{s}}({x}_{s}).
(3.1)
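To make the implicit scheme concrete, here is a small numerical sketch (purely illustrative, not part of the paper's argument). All data are assumptions chosen for the demonstration: H={\mathbb{R}}^{2}, C={[0,1]}^{2}, f(x)=\frac{1}{2}{\parallel x-b\parallel}^{2} with b=(2,3), so L=1 and S=\{(1,1)\}; F=I, so k=\eta =1 and \mu =1<2\eta /{k}^{2}; \gamma =1\in (0,2/L); and {\lambda}_{s}={s}^{2}=o(s). The mapping {T}_{{\lambda}_{s}} is recovered from condition (i), and Banach iteration on the contraction {Q}_{s} produces {x}_{s}:

```python
import numpy as np

# Illustrative instance (our assumption, not from the paper):
# H = R^2, C = [0,1]^2, f(x) = 0.5*||x - b||^2 with b = (2, 3) outside C,
# so grad f(x) = x - b is 1-Lipschitz (L = 1) and S = {Proj_C(b)} = {(1, 1)}.
# F = I is k-Lipschitzian and eta-strongly monotone with k = eta = 1.
b = np.array([2.0, 3.0])
L, k, eta = 1.0, 1.0, 1.0
gamma = 1.0                          # gamma in (0, 2/L)
mu = 1.0                             # mu in (0, 2*eta/k**2)
tau = 0.5 * mu * (2 * eta - mu * k**2)

def proj_C(x):                       # metric projection onto the box C
    return np.clip(x, 0.0, 1.0)

def T(lam, x):
    # T_lambda recovered from condition (i):
    # Proj_C(I - gamma*grad f_lambda) = (1 - theta)I + theta*T_lambda,
    # with grad f_lambda = grad f + lam*I and theta = (2 + gamma*(L + lam))/4.
    theta = (2 + gamma * (L + lam)) / 4
    return (proj_C(x - gamma * ((x - b) + lam * x)) - (1 - theta) * x) / theta

def Q(s, x):
    lam = s**2                       # lambda_s = o(s), condition (iii)
    return proj_C((1 - s * mu) * T(lam, x))   # Proj_C(I - s*mu*F) with F = I

# Q_s is a (1 - s*tau)-contraction, so Banach iteration finds x_s:
s = 0.05
x = np.zeros(2)
for _ in range(200):
    x = Q(s, x)
# x is now close to the minimizer (1, 1), consistent with x_s -> x* as s -> 0.
```

For this instance the fixed point {x}_{s} lies close to the minimizer (1,1), and moves closer as s decreases, in line with the strong convergence {x}_{s}\to {x}^{\ast} established below.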
The following proposition summarizes the properties of the net \{{x}_{s}\}.
Proposition 3.1 Let {x}_{s} be defined by (3.1). Then the following properties for the net \{{x}_{s}\} hold:

(a)
\{{x}_{s}\} is bounded for s\in (0,1);

(b)
{lim}_{s\to 0}\parallel {x}_{s}-{T}_{{\lambda}_{s}}{x}_{s}\parallel =0;

(c)
{x}_{s} defines a continuous curve from (0,1) into C.
Proof It is well known that \tilde{x}\in C solves the minimization problem (1.1) if and only if \tilde{x} solves the fixed-point equation
\tilde{x}={Proj}_{C}(I-\gamma \mathrm{\nabla}f)\tilde{x}=\frac{2-\gamma L}{4}\tilde{x}+\frac{2+\gamma L}{4}T\tilde{x},
where 0<\gamma <2/L is a constant. It follows that \tilde{x}=T\tilde{x}, i.e., \tilde{x}\in S=Fix(T).
(a) Take a fixed p\in S. Estimating \parallel {x}_{s}-p\parallel as in the contraction argument above, it follows that
\parallel {x}_{s}-p\parallel \le \frac{(1+s\mu k)\parallel {T}_{{\lambda}_{s}}(p)-T(p)\parallel}{s\tau}+\frac{\mu}{\tau}\parallel F(p)\parallel .
(3.2)
For x\in C, note that
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})x=(1-{\theta}_{s})x+{\theta}_{s}{T}_{{\lambda}_{s}}x
and
{Proj}_{C}(I-\gamma \mathrm{\nabla}f)x=(1-\theta )x+\theta Tx,
where {\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4} and \theta =\frac{2+\gamma L}{4}.
Then we get
\parallel (\theta -{\theta}_{s})x+{\theta}_{s}{T}_{{\lambda}_{s}}x-\theta Tx\parallel =\parallel {Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})x-{Proj}_{C}(I-\gamma \mathrm{\nabla}f)x\parallel \le \gamma {\lambda}_{s}\parallel x\parallel .
Since {\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4} and \theta =\frac{2+\gamma L}{4}, there exists a real positive number M>0 such that
\parallel {T}_{{\lambda}_{s}}x-Tx\parallel \le \frac{{\lambda}_{s}\gamma (5\parallel x\parallel +\parallel Tx\parallel )}{2+\gamma (L+{\lambda}_{s})}\le {\lambda}_{s}M\parallel x\parallel .
(3.3)
It follows from (3.2) and (3.3) that
\parallel {x}_{s}-p\parallel \le \frac{1+s\mu k}{\tau}\cdot \frac{{\lambda}_{s}}{s}\cdot M\parallel p\parallel +\frac{\mu}{\tau}\parallel F(p)\parallel .
Since {\lambda}_{s}=o(s), there exists a real positive number {M}^{\mathrm{\prime}}>0 such that \frac{{\lambda}_{s}}{s}\le {M}^{\mathrm{\prime}}. Hence, \{{x}_{s}\} is bounded.
(b) Note that the boundedness of \{{x}_{s}\} implies that \{F{T}_{{\lambda}_{s}}({x}_{s})\} is also bounded. Hence, by the definition of \{{x}_{s}\}, we have
\parallel {x}_{s}-{T}_{{\lambda}_{s}}{x}_{s}\parallel =\parallel {Proj}_{C}(I-s\mu F){T}_{{\lambda}_{s}}({x}_{s})-{T}_{{\lambda}_{s}}({x}_{s})\parallel \le s\mu \parallel F{T}_{{\lambda}_{s}}({x}_{s})\parallel \to 0
as s\to 0.
(c) For \gamma \in (0,2/L), we have
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})=(1-{\theta}_{s})I+{\theta}_{s}{T}_{{\lambda}_{s}}
and
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{{s}_{0}}})=(1-{\theta}_{{s}_{0}})I+{\theta}_{{s}_{0}}{T}_{{\lambda}_{{s}_{0}}},
where {\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4} and {\theta}_{{s}_{0}}=\frac{2+\gamma (L+{\lambda}_{{s}_{0}})}{4}.
So, for {x}_{s}\in C, we get
\parallel {T}_{{\lambda}_{s}}({x}_{s})-{T}_{{\lambda}_{{s}_{0}}}({x}_{s})\parallel \le |{\lambda}_{s}-{\lambda}_{{s}_{0}}|N
for some appropriate constant N>0 such that
N\ge \gamma \parallel {Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}}){x}_{s}\parallel +5\gamma \parallel {x}_{s}\parallel .
Now take s,{s}_{0}\in (0,1) and estimate \parallel {x}_{s}-{x}_{{s}_{0}}\parallel . It follows that
\parallel {x}_{s}-{x}_{{s}_{0}}\parallel \le \frac{\mu \parallel F{T}_{{\lambda}_{s}}({x}_{s})\parallel}{{s}_{0}\tau}|s-{s}_{0}|+\frac{(1+{s}_{0}\mu k)N}{{s}_{0}\tau}|{\lambda}_{s}-{\lambda}_{{s}_{0}}|.
Since \{F{T}_{{\lambda}_{s}}({x}_{s})\} is bounded, and {\lambda}_{s} is continuous with respect to s, {x}_{s} defines a continuous curve from (0,1) into C. □
The following theorem shows that the net \{{x}_{s}\} converges strongly as s\to 0 to a minimizer of (1.1), which solves some variational inequality.
Theorem 3.2 Let H be a real Hilbert space and C a nonempty, closed and convex subset of H. Let F:C\to H be a k-Lipschitzian and η-strongly monotone operator with constants k>0 and \eta >0 such that 0<\mu <2\eta /{k}^{2}. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient ∇f is Lipschitzian with constant L>0. Let {x}_{s} be defined by (3.1), where the parameter s\in (0,1) and {T}_{{\lambda}_{s}} is nonexpansive. Let {T}_{{\lambda}_{s}} and {\lambda}_{s} satisfy the following conditions:

(i)
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{s}})=(1-{\theta}_{s})I+{\theta}_{s}{T}_{{\lambda}_{s}} and \gamma \in (0,2/L);

(ii)
{\theta}_{s}=\frac{2+\gamma (L+{\lambda}_{s})}{4};

(iii)
{\lambda}_{s} is continuous with respect to s and {\lambda}_{s}=o(s).
Then the net \{{x}_{s}\} converges strongly as s\to 0 to a minimizer {x}^{\ast} of (1.1), which solves the variational inequality
\u3008F{x}^{\ast},{x}^{\ast}-z\u3009\le 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}z\in S.
(3.4)
Equivalently, we have {Proj}_{S}(I-\mu F){x}^{\ast}={x}^{\ast}.
Proof It is easy to see that the variational inequality (3.4) has at most one solution. Indeed, suppose that both \tilde{x}\in S and \stackrel{\u02c6}{x}\in S are solutions to (3.4); then
\u3008F\tilde{x},\tilde{x}-\stackrel{\u02c6}{x}\u3009\le 0
(3.5)
and
\u3008F\stackrel{\u02c6}{x},\stackrel{\u02c6}{x}-\tilde{x}\u3009\le 0.
(3.6)
Adding (3.5) and (3.6) yields
\u3008F\tilde{x}-F\stackrel{\u02c6}{x},\tilde{x}-\stackrel{\u02c6}{x}\u3009\le 0.
Since F is η-strongly monotone, \eta {\parallel \tilde{x}-\stackrel{\u02c6}{x}\parallel}^{2}\le \u3008F\tilde{x}-F\stackrel{\u02c6}{x},\tilde{x}-\stackrel{\u02c6}{x}\u3009\le 0, which implies that \tilde{x}=\stackrel{\u02c6}{x}, and the uniqueness is proved. Below we use {x}^{\ast}\in S to denote the unique solution of the variational inequality (3.4).
Let us show that {x}_{s}\to {x}^{\ast} as s\to 0. Set
{y}_{s}=(I-s\mu F){T}_{{\lambda}_{s}}({x}_{s}).
Then we have {x}_{s}={Proj}_{C}{y}_{s}. For any given z\in S, we get
Since {Proj}_{C} is the metric projection from H onto C, we have
\u3008{y}_{s}-{x}_{s},z-{x}_{s}\u3009\le 0.
Note that {Proj}_{C}(I-\gamma \mathrm{\nabla}f)z=z and {Proj}_{C}(I-\gamma \mathrm{\nabla}f)=\frac{2-\gamma L}{4}I+\frac{2+\gamma L}{4}T, so we get z=Tz, i.e., z\in S=Fix(T).
It follows from (3.7) that
By (3.3), we obtain that
Since \{{x}_{s}\} is bounded, if \{{s}_{n}\} is a sequence in (0,1) such that {s}_{n}\to 0, then, passing to a subsequence if necessary, we may assume that {x}_{{s}_{n}}\rightharpoonup \overline{x}.
By Proposition 3.1(b) and (3.3), we have
\parallel {x}_{{s}_{n}}-T{x}_{{s}_{n}}\parallel \le \parallel {x}_{{s}_{n}}-{T}_{{\lambda}_{{s}_{n}}}{x}_{{s}_{n}}\parallel +{\lambda}_{{s}_{n}}M\parallel {x}_{{s}_{n}}\parallel \to 0.
So, by Lemma 2.3, we get \overline{x}\in Fix(T)=S.
Since {\lambda}_{s}=o(s), we obtain from (3.8) that {x}_{{s}_{n}}\to \overline{x}\in S.
Next, we show that \overline{x} solves the variational inequality (3.4). Observe that
{x}_{s}={Proj}_{C}{y}_{s}={Proj}_{C}{y}_{s}-{y}_{s}+(I-s\mu F){T}_{{\lambda}_{s}}({x}_{s}).
Hence, we conclude that
\mu F({x}_{s})=\frac{1}{s}({Proj}_{C}{y}_{s}-{y}_{s})+\frac{1}{s}[(I-s\mu F){T}_{{\lambda}_{s}}({x}_{s})-(I-s\mu F)({x}_{s})].
Since {T}_{{\lambda}_{s}} is nonexpansive, I-{T}_{{\lambda}_{s}} is monotone. Note that, for any given z\in S, z=Tz and \u3008{Proj}_{C}{y}_{s}-{y}_{s},{Proj}_{C}{y}_{s}-z\u3009\le 0.
By (3.3), it follows that
Since {\lambda}_{s}=o(s), by Proposition 3.1(b), we obtain from (3.9) that
\u3008\mu F(\overline{x}),\overline{x}-z\u3009\le 0.
So \overline{x}\in S is a solution of the variational inequality (3.4). We get \overline{x}={x}^{\ast} by uniqueness. Therefore, {x}_{s}\to {x}^{\ast} as s\to 0.
The variational inequality (3.4) can be rewritten as
\u3008(I-\mu F){x}^{\ast}-{x}^{\ast},{x}^{\ast}-z\u3009\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}z\in S.
So, in terms of Lemma 2.4, it is equivalent to the following fixed-point equation:
{Proj}_{S}(I-\mu F){x}^{\ast}={x}^{\ast}. □
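The equivalence between the variational inequality (3.4) and the projection equation can be checked numerically on a toy one-dimensional instance (all data here are assumptions chosen for illustration): S=[0,1]\subset \mathbb{R}, F(x)=x-2, which is 1-Lipschitzian and 1-strongly monotone, and \mu =1. On a grid of candidates, a point solves the variational inequality exactly when it satisfies the fixed-point equation:

```python
import numpy as np

# Illustrative 1-D check of the equivalence (our assumption: S = [0, 1],
# F(x) = x - 2, mu = 1; F is 1-Lipschitzian and 1-strongly monotone).
S_lo, S_hi = 0.0, 1.0
mu = 1.0
F = lambda x: x - 2.0
proj_S = lambda y: min(max(y, S_lo), S_hi)

def solves_vi(x, zs):
    # <F(x), x - z> <= 0 for all z in S (checked on a grid zs)
    return all(F(x) * (x - z) <= 1e-12 for z in zs)

def solves_fixed_point(x):
    # Proj_S(I - mu*F)x = x
    return abs(proj_S(x - mu * F(x)) - x) <= 1e-12

zs = np.linspace(S_lo, S_hi, 101)
for x in zs:
    # The two characterizations agree at every grid point.
    assert solves_vi(x, zs) == solves_fixed_point(x)
# Here F(x) = x - 2 < 0 on S, so the unique solution is x* = 1.
```

In this instance F points toward 2, so both characterizations single out the boundary point {x}^{\ast}=1 of S.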
Next, we study the following iterative method. For an arbitrary initial guess {x}_{0}\in C, we propose the following explicit scheme that generates a sequence {\{{x}_{n}\}}_{n=0}^{\mathrm{\infty}}:
{x}_{n+1}={Proj}_{C}(I-{s}_{n}\mu F){T}_{{\lambda}_{n}}({x}_{n}),
(3.10)
where the parameters \{{s}_{n}\}\subset (0,1). Let {T}_{{\lambda}_{n}} and {\lambda}_{n} satisfy the following conditions:

(i)
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}} and 0<\gamma <2/L;

(ii)
{\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4};

(iii)
{\lambda}_{n}=o({s}_{n}).
Below we prove that the sequence {\{{x}_{n}\}}_{n=0}^{\mathrm{\infty}} converges strongly to a minimizer {x}^{\ast}\in S of (1.1), which also solves the variational inequality (3.4).
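The following sketch runs the explicit scheme (3.10) on a toy instance (assumed data, for illustration only): H={\mathbb{R}}^{2}, C={[0,1]}^{2}, f(x)=\frac{1}{2}{\parallel x-b\parallel}^{2} with b=(2,3) (so L=1 and S=\{(1,1)\}), F=I, \mu =\gamma =1, with the parameter choices {s}_{n}=1/(n+1) and {\lambda}_{n}={s}_{n}^{2}, which satisfy conditions (i)-(iii):

```python
import numpy as np

# Toy instance (illustrative assumption): H = R^2, C = [0,1]^2,
# f(x) = 0.5*||x - b||^2 with b = (2, 3), so L = 1 and S = {(1, 1)};
# F = I (k = eta = 1), mu = 1 in (0, 2*eta/k**2), gamma = 1 in (0, 2/L).
b = np.array([2.0, 3.0])
L, gamma, mu = 1.0, 1.0, 1.0
proj_C = lambda x: np.clip(x, 0.0, 1.0)

def T(lam, x):
    # T_lambda from (i): Proj_C(I - gamma*grad f_lambda) = (1-theta)I + theta*T_lambda,
    # with grad f_lambda = grad f + lam*I and theta = (2 + gamma*(L + lam))/4 as in (ii).
    theta = (2 + gamma * (L + lam)) / 4
    return (proj_C(x - gamma * ((x - b) + lam * x)) - (1 - theta) * x) / theta

x = np.zeros(2)                      # arbitrary initial guess x_0 in C
for n in range(5000):
    s = 1.0 / (n + 1)                # s_n -> 0 and sum s_n = infinity
    lam = s**2                       # lambda_n = o(s_n), condition (iii)
    x = proj_C((1 - s * mu) * T(lam, x))   # scheme (3.10) with F = I
# x approaches the minimizer x* = (1, 1).
```

The iterates track the implicit net {x}_{{s}_{n}} and approach the unique minimizer (1,1), as the strong convergence result below asserts.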
Theorem 3.3 Let H be a real Hilbert space and C a nonempty, closed and convex subset of H. Let F:C\to H be a k-Lipschitzian and η-strongly monotone operator with constants k>0 and \eta >0 such that 0<\mu <2\eta /{k}^{2}. Suppose that the minimization problem (1.1) is consistent and let S denote its solution set. Assume that the gradient ∇f is Lipschitzian with constant L>0. Let {\{{x}_{n}\}}_{n=0}^{\mathrm{\infty}} be generated by algorithm (3.10) with parameters \{{s}_{n}\}\subset (0,1). Let {T}_{{\lambda}_{n}}, {\lambda}_{n} and {s}_{n} satisfy the following conditions:

(C1)
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}} and \gamma \in (0,2/L);

(C2)
{\theta}_{n}=\frac{2+\gamma (L+{\lambda}_{n})}{4} for all n;

(C3)
{lim}_{n\to \mathrm{\infty}}{s}_{n}=0 and {\sum}_{n=0}^{\mathrm{\infty}}{s}_{n}=\mathrm{\infty};

(C4)
{\sum}_{n=0}^{\mathrm{\infty}}|{s}_{n+1}-{s}_{n}|<\mathrm{\infty};

(C5)
{\lambda}_{n}=o({s}_{n}) and {\sum}_{n=0}^{\mathrm{\infty}}|{\lambda}_{n+1}-{\lambda}_{n}|<\mathrm{\infty}.
Then the sequence \{{x}_{n}\} generated by the explicit scheme (3.10) converges strongly to a minimizer {x}^{\ast} of (1.1), which is also a solution of the variational inequality (3.4).
Proof It is well known that:

(a)
\tilde{x}\in C solves the minimization problem (1.1) if and only if \tilde{x} solves the fixedpoint equation
\tilde{x}={Proj}_{C}(I-\gamma \mathrm{\nabla}f)\tilde{x}=\frac{2-\gamma L}{4}\tilde{x}+\frac{2+\gamma L}{4}T\tilde{x},
where 0<\gamma <2/L is a constant. It is clear that \tilde{x}=T\tilde{x}, i.e., \tilde{x}\in S=Fix(T).

(b)
the gradient ∇f is 1/L-ism (inverse strongly monotone).

(c)
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}}) is \frac{2+\gamma (L+{\lambda}_{n})}{4}-averaged for \gamma \in (0,2/L); in particular, the following relation holds:
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=\frac{2-\gamma (L+{\lambda}_{n})}{4}I+\frac{2+\gamma (L+{\lambda}_{n})}{4}{T}_{{\lambda}_{n}}=(1-{\theta}_{n})I+{\theta}_{n}{T}_{{\lambda}_{n}}.
We observe that \{{x}_{n}\} is bounded. Indeed, take a fixed p\in S; it follows that
\parallel {x}_{n+1}-p\parallel \le (1-{s}_{n}\tau )\parallel {x}_{n}-p\parallel +(1+{s}_{n}\mu k)\parallel {T}_{{\lambda}_{n}}(p)-T(p)\parallel +{s}_{n}\parallel \mu F(p)\parallel .
Note that, by using the same argument as in the proof of (3.3), there exists a real positive number M>0 such that
\parallel {T}_{{\lambda}_{n}}p-Tp\parallel \le \frac{{\lambda}_{n}\gamma (5\parallel p\parallel +\parallel Tp\parallel )}{2+\gamma (L+{\lambda}_{n})}\le {\lambda}_{n}M\parallel p\parallel .
(3.11)
Since {\lambda}_{n}=o({s}_{n}), there exists a real positive number {M}^{\mathrm{\prime}}>0 such that \frac{{\lambda}_{n}}{{s}_{n}}\le {M}^{\mathrm{\prime}}. Combining this with (3.11), it follows by induction that
\parallel {x}_{n}-p\parallel \le max\{\parallel {x}_{0}-p\parallel ,\frac{\parallel \mu F(p)\parallel +(1+\mu k){M}^{\mathrm{\prime}}M\parallel p\parallel}{\tau}\},\phantom{\rule{1em}{0ex}}n\ge 0.
(3.12)
Consequently, \{{x}_{n}\} is bounded. This implies that \{{T}_{{\lambda}_{n}}({x}_{n})\} is also bounded.
We claim that
\parallel {x}_{n+1}-{x}_{n}\parallel \to 0.
(3.13)
Indeed, since
{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})=\frac{2-\gamma (L+{\lambda}_{n})}{4}I+\frac{2+\gamma (L+{\lambda}_{n})}{4}{T}_{{\lambda}_{n}},
we obtain that
{T}_{{\lambda}_{n}}=\frac{4{Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})-[2-\gamma (L+{\lambda}_{n})]I}{2+\gamma (L+{\lambda}_{n})}.
By using the same argument as in the proof of Proposition 3.1(c), we obtain that
\parallel {T}_{{\lambda}_{n}}({x}_{n-1})-{T}_{{\lambda}_{n-1}}({x}_{n-1})\parallel \le |{\lambda}_{n}-{\lambda}_{n-1}|K
for some appropriate constant K>0 such that
K\ge \gamma \parallel {Proj}_{C}(I-\gamma \mathrm{\nabla}{f}_{{\lambda}_{n}})({x}_{n-1})\parallel +5\gamma \parallel {x}_{n-1}\parallel ,\phantom{\rule{1em}{0ex}}n\ge 1.
Thus, we get
for some appropriate constant E>0 such that
E\ge \parallel F{T}_{{\lambda}_{n}}({x}_{n-1})\parallel ,\phantom{\rule{1em}{0ex}}n\ge 1.
Consequently, we get
\parallel {x}_{n+1}-{x}_{n}\parallel \le (1-{s}_{n}\tau )\parallel {x}_{n}-{x}_{n-1}\parallel +\mu E|{s}_{n}-{s}_{n-1}|+|{\lambda}_{n}-{\lambda}_{n-1}|(K+\mu k\cdot K).
By Lemma 2.5, we obtain \parallel {x}_{n+1}{x}_{n}\parallel \to 0.
Next, we show that
\parallel {x}_{n}-{T}_{{\lambda}_{n}}{x}_{n}\parallel \to 0.
(3.14)
Indeed, it follows from (3.13) that
\parallel {x}_{n}-{T}_{{\lambda}_{n}}{x}_{n}\parallel \le \parallel {x}_{n}-{x}_{n+1}\parallel +\parallel {x}_{n+1}-{T}_{{\lambda}_{n}}{x}_{n}\parallel \le \parallel {x}_{n}-{x}_{n+1}\parallel +{s}_{n}\mu \parallel F{T}_{{\lambda}_{n}}({x}_{n})\parallel \to 0.
Now we show that
\underset{n\to \mathrm{\infty}}{lim\hspace{0.17em}sup}\u3008{x}_{n}-{x}^{\ast},-\mu F\left({x}^{\ast}\right)\u3009\le 0,
(3.15)
where {x}^{\ast}\in S is a solution of the variational inequality (3.4).
Indeed, take a subsequence \{{x}_{{n}_{k}}\} of \{{x}_{n}\} such that
\underset{n\to \mathrm{\infty}}{lim\hspace{0.17em}sup}\u3008{x}_{n}-{x}^{\ast},-\mu F\left({x}^{\ast}\right)\u3009=\underset{k\to \mathrm{\infty}}{lim}\u3008{x}_{{n}_{k}}-{x}^{\ast},-\mu F\left({x}^{\ast}\right)\u3009.
(3.16)
Without loss of generality, we may assume that {x}_{{n}_{k}}\rightharpoonup \tilde{x}.
We observe that
\parallel {x}_{n}-T{x}_{n}\parallel \le \parallel {x}_{n}-{T}_{{\lambda}_{n}}({x}_{n})\parallel +\parallel {T}_{{\lambda}_{n}}({x}_{n})-T{x}_{n}\parallel .
It follows from (3.11) that
\parallel {x}_{n}-T{x}_{n}\parallel \le \parallel {x}_{n}-{T}_{{\lambda}_{n}}({x}_{n})\parallel +{\lambda}_{n}M\parallel {x}_{n}\parallel .
By (3.14), we get \parallel {x}_{n}-T{x}_{n}\parallel \to 0.
In terms of Lemma 2.3, we get \tilde{x}\in Fix(T)=S.
Consequently, from (3.16) and the variational inequality (3.4), it follows that
\underset{n\to \mathrm{\infty}}{lim\hspace{0.17em}sup}\u3008{x}_{n}-{x}^{\ast},-\mu F\left({x}^{\ast}\right)\u3009=\u3008\tilde{x}-{x}^{\ast},-\mu F\left({x}^{\ast}\right)\u3009\le 0.
Finally, we show that {x}_{n}\to {x}^{\ast}.
As a matter of fact, set
{y}_{n}=(I-{s}_{n}\mu F){T}_{{\lambda}_{n}}({x}_{n}),\phantom{\rule{1em}{0ex}}n\ge 0.
Then, {x}_{n+1}={Proj}_{C}{y}_{n}-{y}_{n}+{y}_{n}.
In terms of Lemma 2.4 and (3.11), we obtain
It follows that, since \{{x}_{n}\} is bounded, we can take a constant {L}^{\mathrm{\prime}}>0 such that
{L}^{\mathrm{\prime}}\ge (M+\mu kM)\parallel {x}^{\ast}\parallel \parallel {x}_{n+1}-{x}^{\ast}\parallel ,\phantom{\rule{1em}{0ex}}n\ge 0.
It then follows that
{\parallel {x}_{n+1}-{x}^{\ast}\parallel}^{2}\le (1-{s}_{n}\tau ){\parallel {x}_{n}-{x}^{\ast}\parallel}^{2}+{s}_{n}{\delta}_{n},
(3.17)
where {\delta}_{n}=\frac{2}{1+{s}_{n}\tau}\u3008-\mu F({x}^{\ast}),{x}_{n+1}-{x}^{\ast}\u3009+\frac{2{\lambda}_{n}}{{s}_{n}}{L}^{\mathrm{\prime}}.
By (3.15) and {\lambda}_{n}=o({s}_{n}), we get {lim\hspace{0.17em}sup}_{n\to \mathrm{\infty}}{\delta}_{n}\le 0. Now applying Lemma 2.5 to (3.17), we conclude that {x}_{n}\to {x}^{\ast} as n\to \mathrm{\infty}. □