Let H be a real Hilbert space with norm \(\|\cdot\|_{H}\) and inner product \((\cdot,\cdot)\). Let C be a nonempty closed and convex subset of H. Let T be a nonlinear mapping of H into itself. Let I denote the identity mapping on H. Denote by \(\mathfrak{F}(T)\) the set of fixed points of T.
Moreover, the symbols ⇀ and → stand for weak and strong convergence, respectively.
We say that T is generalized Lipschitzian iff there exists a nonnegative real valued function \(r(x,y)\) satisfying \(\sup_{x,y\in H} \{r(x,y)\}=\lambda<\infty\) such that
$$ \Vert Tx-Ty\Vert _{H} \leq r(x,y) \Vert x-y\Vert _{H},\quad \forall x, y \in H. $$
(1.1)
Recently, this class of mappings has been studied by Saddeek and Ahmed [1], and Saddeek [2].
For \(r(x,y)=\lambda\in(0,1)\) (resp., \(r(x,y)=1\)) such mappings are said to be λ-contractive (resp., nonexpansive) mappings.
If \(r(x,y)=\lambda> 0\), then the class of generalized Lipschitzian mappings coincides with the class of λ-Lipschitzian mappings.
We say that T is generalized strictly pseudocontractive iff for each pair of points x, y in H there exist nonnegative real valued functions \(r_{i}(x,y)\), \(i=1,2\), satisfying
$$\sup_{x,y\in H} \Biggl\{ \sum_{i=1}^{2}r_{i}(x, y)\Biggr\} =\lambda' < \infty $$
such that
$$ \Vert Tx-Ty\Vert _{H}^{2} \leq r_{1}(x,y) \Vert x-y\Vert _{H}^{2}+r_{2}(x,y) \bigl\Vert (I-T) (x)-(I-T) (y)\bigr\Vert _{H}^{2}. $$
(1.2)
By letting \(r_{1}(x,y)=1\) and \(r_{2}(x,y)=\lambda\in[0,1)\) (resp., \(r_{i}(x,y)=1\), \(i=1,2\)) in (1.2), we may derive the class of λ-strictly pseudocontractive (resp., pseudocontractive) mappings, which is due to Browder and Petryshyn [3].
The class of λ-strictly pseudocontractive mappings has been studied recently by various authors (see, for example, [4–9]).
It is worth noting that the class of generalized strictly pseudocontractive mappings includes generalized Lipschitzian mappings, λ-strictly pseudocontractive mappings, λ-Lipschitzian mappings, pseudocontractive mappings, and nonexpansive (or 0-strictly pseudocontractive) mappings.
These mappings appear in nonlinear analysis and its applications.
Definition 1.1
For any \(x, y, z \in H\) the mapping T is said to be

(i)
demiclosed at 0 (see, for example, [10]) if \(Tx=0\) whenever \(\{x_{n}\}\subset H\) with \(x_{n}\rightharpoonup x\) and \(Tx_{n}\rightarrow0\), as \(n \rightarrow \infty\);

(ii)
pseudomonotone (see, for example, [11]) if it is bounded and, whenever \(x_{n}\rightharpoonup x \in H\),
$$ \limsup_{n\rightarrow \infty} ( Tx_{n},x_{n}-x) \leq0 \quad \Longrightarrow\quad \liminf_{n\rightarrow\infty} ( Tx_{n},x_{n}-y) \geq (Tx,x-y), \quad \forall y \in H; $$

(iii)
coercive (see, for example, [12]) if
$$ (Tx,x) \geq \rho\bigl(\Vert x\Vert _{H}\bigr)\Vert x\Vert _{H},\qquad \lim_{\xi\rightarrow+\infty} \rho (\xi)=+\infty; $$

(iv)
potential (see, for example, [13]) if
$$ \int^{1}_{0}\bigl(\bigl(T\bigl(t(x+y)\bigr),x+y\bigr)-\bigl(T(tx),x\bigr)\bigr) \,dt= \int^{1}_{0}\bigl( T(x+ty),y\bigr) \,dt; $$

(v)
hemicontinuous (see, for example, [12]) if
$$ \lim_{t\rightarrow 0} \bigl( T(x+ty),z\bigr)=(Tx,z); $$

(vi)
demicontinuous (see, for example, [12]) if
$$ \lim_{\Vert x_{n}-x\Vert _{H}\rightarrow 0} ( Tx_{n},y)=(Tx,y); $$

(vii)
uniformly monotone (see, for example, [11]) if there exist \(p\geq2\), \(\alpha>0\) such that
$$ ( Tx-Ty,x-y) \geq\alpha\Vert x-y\Vert _{H}^{p}; $$

(viii)
bounded Lipschitz continuous (see, for example, [13]) if there exist \(p\geq2\), \(M>0\) such that
$$ \Vert Tx-Ty\Vert _{H} \leq M \bigl(\Vert x\Vert _{H}+\Vert y\Vert _{H}\bigr)^{p-2} \Vert x-y\Vert _{H}. $$
It should be noted that any demicontinuous mapping is hemicontinuous, every uniformly monotone mapping is monotone (i.e., \((Tx-Ty,x-y)\geq 0\), \(\forall x, y\in H\)), and every monotone hemicontinuous mapping is pseudomonotone.
If T is uniformly monotone (resp., bounded Lipschitz continuous) with \(p=2\), then T is called strongly monotone (resp., M-Lipschitzian).
For \(x_{0}\in C\) the Krasnoselskii iterative process (see, for example, [14]) starting at \(x_{0}\) is defined by
$$ x_{n+1}=(1-\tau)x_{n}+ \tau T x_{n}, \quad n\geq0, $$
(1.3)
where \(\tau\in(0,1)\).
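As a concrete numerical illustration of (1.3) (with a hypothetical contractive mapping T and a step size τ chosen only for demonstration, not taken from the source), the process can be sketched as follows:

```python
# Minimal sketch of the Krasnoselskii iteration (1.3):
#   x_{n+1} = (1 - tau) x_n + tau * T(x_n).
# The mapping T and the parameter tau below are illustrative assumptions.

def krasnoselskii(T, x0, tau=0.5, n_iter=200):
    """Run the Krasnoselskii process starting at x0."""
    x = x0
    for _ in range(n_iter):
        x = (1 - tau) * x + tau * T(x)
    return x

# Example: T(x) = 0.5*x + 1 is a contraction on R with unique fixed point x* = 2.
T = lambda x: 0.5 * x + 1.0
x = krasnoselskii(T, x0=0.0)
```

For this choice the averaged map \(x\mapsto(1-\tau)x+\tau Tx\) is again a contraction, so the iterates converge to the fixed point \(x^{*}=2\).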
Recently, in a real Hilbert space setting, Saddeek and Ahmed [1] proved that the Krasnoselskii iterative sequence given by (1.3) converges weakly to a fixed point of T under the basic assumptions that \(I-T\) is generalized Lipschitzian, demiclosed at 0, coercive, bounded, and potential. Moreover, they also applied their result to the stationary filtration problem with a discontinuous law.
However, the convergence in [1] is in general not strong. Very recently, motivated and inspired by the work in He and Zhu [15], Saddeek [2] introduced the following modified Krasnoselskii iterative algorithm by the boundary method:
$$ x_{n+1}=\bigl(1-\tau h(x_{n})\bigr) x_{n}+ \tau T_{\tau} x_{n},\quad n\geq0, $$
(1.4)
where \(x_{0}=x \in C\), \(\tau\in(0,1)\), \(T_{\tau} = (1- \tau) I + \tau T\), and \(h:C\rightarrow[0,1]\) is the function defined by
$$ h(x) =\inf\bigl\{ \alpha\in[0,1] : \alpha x \in C\bigr\} ,\quad \forall x \in C. $$
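To get a concrete feel for h, consider the hypothetical choice \(C=[1,3]\subset\mathbb{R}\) (an assumed example, not from the source): for \(x\in C\), \(\alpha x\in C\) holds exactly when \(\alpha\in[1/x,1]\), so \(h(x)=1/x\). A brute-force evaluation of the defining infimum confirms this:

```python
# Numerically evaluate h(x) = inf{alpha in [0,1] : alpha*x in C} for the
# illustrative choice C = [a, b] = [1, 3] (an assumption for demonstration).

def h(x, a=1.0, b=3.0, grid=100_001):
    """Approximate the infimum by scanning alpha over a fine grid of [0, 1]."""
    return min(i / (grid - 1) for i in range(grid)
               if a <= (i / (grid - 1)) * x <= b)
```

For \(x=2\), the admissible set is \(\{\alpha\in[0,1]:2\alpha\in[1,3]\}=[0.5,1]\), so the infimum is \(0.5\); geometrically, \(h(x)x\) is the point of C on the ray from the origin through x that is nearest the origin.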
By replacing \(T_{\tau}\) by T and taking \(h(x_{n})=1\), \(\forall n\geq0\) in (1.4), we can obtain (1.3).
Saddeek [2] obtained some strong convergence theorems of the iterative algorithm (1.4) for finding the minimum norm solutions of certain nonlinear operator equations.
The class of uniformly convex Banach spaces plays an important role in both the geometry of Banach spaces and related topics in nonlinear functional analysis (see, for example, [16, 17]).
Let X be a real Banach space with its dual \(X^{\ast}\). Denote by \(\langle\cdot,\cdot\rangle\) the duality pairing between \(X^{\ast}\) and X. Let \(\|\cdot\|_{X}\) be a norm in X, and \(\|\cdot\|_{X^{\ast}}\) be a norm in \(X^{\ast}\).
A Banach space X is said to be strictly convex if \(\|x+y\|_{X}<2\) for every \(x, y \in X\) with \(\|x\|_{X}\leq1\), \(\|y\|_{X}\leq1\), and \(x\neq y\).
A Banach space X is said to be uniformly convex if for every \(\varepsilon>0\) there exists an increasing positive function \(\delta(\varepsilon)\) with \(\delta(0)=0\) such that \(\|x\|_{X}\leq 1\), \(\|y\|_{X}\leq1\) with \(\|x-y\|_{X}\geq\varepsilon\) imply \(\|x+y\|_{X}\leq2(1-\delta(\varepsilon))\) for every \(x, y \in X\).
It is well known that every Hilbert space is uniformly convex and every uniformly convex Banach space is reflexive and strictly convex.
A Banach space X is said to have a Gateaux differentiable norm (see, for example, [10], p.69) if for every \(x, y \in X\) with \(\|x\|_{X}= 1\), \(\|y\|_{X}= 1\) the following limit exists:
$$ \lim_{t\rightarrow0^{+}} \frac{\Vert x+ty\Vert _{X}-\Vert x\Vert _{X}}{t}. $$
X is said to have a uniformly Gateaux differentiable norm if for all \(y \in X\) with \(\|y\|_{X}= 1\), the limit is attained uniformly for \(\|x\|_{X}= 1\).
Hilbert spaces, \(L^{p}\) (or \(l_{p}\)) spaces, and Sobolev spaces \(W_{p}^{1}\) (\(1< p<\infty\)) are uniformly convex and have a uniformly Gateaux differentiable norm.
The generalized duality mapping \(J_{p}\), \(p>1\) from X to \(2^{X^{\ast}}\) is defined by
$$ J_{p}(x)=\bigl\{ x^{\ast} \in X^{\ast}: \bigl\langle x^{\ast},x\bigr\rangle = \Vert x\Vert _{X}^{p}, \bigl\Vert x^{\ast}\bigr\Vert _{X^{\ast}} =\Vert x\Vert _{X}^{p-1}\bigr\} , \quad \forall x \in X. $$
It is well known (see, for example, [18, 19]) that if the uniformly convex Banach space X has a uniformly Gateaux differentiable norm, then \(J_{p}\) is single-valued (we denote it by \(j_{p}\)), one-to-one, and onto. In this case the inverse of \(j_{p}\) will be denoted by \(j_{p}^{-1}\).
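For intuition, in \(\mathbb{R}^{n}\) equipped with the p-norm (a finite-dimensional stand-in for \(\ell_{p}\), used here only as an illustration), the generalized duality mapping acts componentwise as \(j_{p}(x)_{i}=|x_{i}|^{p-2}x_{i}\); the two defining identities \(\langle j_{p}x,x\rangle=\|x\|_{p}^{p}\) and \(\|j_{p}x\|_{q}=\|x\|_{p}^{p-1}\) (with \(1/p+1/q=1\)) can be verified numerically:

```python
import numpy as np

# In (R^n, ||.||_p) the generalized duality mapping has the explicit form
# j_p(x)_i = |x_i|^{p-2} x_i; we check the identities in the definition of J_p.
p = 3.0
q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1

rng = np.random.default_rng(0)
x = rng.standard_normal(5)

jx = np.abs(x) ** (p - 2) * x                 # j_p(x), componentwise
pairing = float(np.dot(jx, x))                # <j_p x, x>
norm_x_p = float(np.linalg.norm(x, ord=p))    # ||x||_p
norm_jx_q = float(np.linalg.norm(jx, ord=q))  # ||j_p x||_q
```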
Definition 1.1 above can easily be stated for mappings T from C to \(X^{\ast}\). The only change here is that one replaces the inner product \((\cdot,\cdot)\) by the bilinear form \(\langle\cdot,\cdot\rangle\).
Given a nonlinear mapping A of C into \(X^{\ast}\) and an element \(f\in X^{\ast}\), the variational inequality problem associated with C and A is to find
$$ x \in C : \langle Ax-f, y-x\rangle\geq0,\quad \forall y\in C. $$
(1.5)
The set of solutions of the variational inequality (1.5) is denoted by \(\operatorname{VI}(C,A)\).
It is well known (see, for example, [12, 20, 21]) that if A is pseudomonotone and coercive, then \(\operatorname{VI}(C,A)\) is a nonempty, closed, and convex subset of X. Further, if \(A=j_{p}-T\), then \(\tilde{\mathfrak{F}}(j_{p},T)=\{x \in C: j_{p}x=Tx\}=A^{-1}0\). In addition, there also exists a unique element \(z=\operatorname{proj}_{A^{-1}0}(0) \in \operatorname{VI}(A^{-1}0,j_{p})\), called the minimum norm solution of the variational inequality (1.5) (or the metric projection of the origin onto \(A^{-1}0\)). If \(X=H\), then \(j_{p}=I\) and hence \(\tilde{\mathfrak{F}}=\mathfrak{F}\).
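A minimal numerical sketch of (1.5) (with C, A, and f chosen hypothetically, and solved by the classical projected iteration \(x\mapsto P_{C}(x-\lambda(Ax-f))\) rather than by any method from this paper): for \(C=[1,3]\subset\mathbb{R}\), \(A(x)=x\), and \(f=0\), the solution is the boundary point \(x=1\), which is also the metric projection of the origin onto C:

```python
# Solve the 1-D variational inequality  <Ax - f, y - x> >= 0 for all y in C
# via the classical projected iteration x <- P_C(x - lam*(A(x) - f)).
# C = [1, 3], A(x) = x, f = 0 are illustrative assumptions, not from the source.

def proj_C(x, a=1.0, b=3.0):
    """Metric projection of x onto the interval C = [a, b]."""
    return min(max(x, a), b)

def solve_vi(A, f=0.0, x0=3.0, lam=0.5, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = proj_C(x - lam * (A(x) - f))
    return x

x_star = solve_vi(A=lambda x: x)
```

Here \(A(x)=x\) is strongly monotone, so the VI has a unique solution; since \(\langle A(1), y-1\rangle = y-1 \geq 0\) for all \(y\in[1,3]\), that solution is \(x=1\).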
Example 1.1
Let Ω be a bounded domain in \(\mathbb{R}^{n}\) with Lipschitz continuous boundary. Let us consider \(p\geq2\), \(\frac{1}{p}+\frac{1}{q}=1\), and \(X=\mathring{W}_{p}^{(1)}(\Omega)\), \(X^{\ast }={W}_{q}^{(-1)}(\Omega)\). The p-Laplacian is the mapping \(\Delta_{p}:\mathring{W}_{p}^{(1)}(\Omega)\rightarrow {W}_{q}^{(-1)}(\Omega)\), \(\Delta_{p}u=\operatorname{div}(\vert\nabla u\vert^{p-2}\nabla u)\) for \(u\in \mathring{W}_{p}^{(1)}(\Omega)\).
It is well known that the p-Laplacian is in fact the generalized duality mapping \(j_{p}\) (more specifically, \(j_{p}=-\Delta_{p}\)), i.e., \(\langle j_{p}u,v\rangle=\int_{\Omega}{\vert\nabla u\vert^{p-2}(\nabla u, \nabla v) \,dx}\), \(\forall u, v \in \mathring{W}_{p}^{(1)}(\Omega)\).
From [22], p.312, we have
$$\begin{aligned} \langle j_{p}u-j_{p}v,u-v\rangle =& \int_{\Omega}\bigl(\bigl(\vert\nabla u\vert^{p-2}\nabla u-\vert\nabla v\vert^{p-2}\nabla v\bigr), \nabla (u-v)\bigr) \,dx \\ \geq& M \int_{\Omega}\vert\nabla u-\nabla v\vert^{p} \,dx \quad \text{for some } M>0, \end{aligned}$$
which implies that \(j_{p}\) is uniformly monotone.
By [22], p.314, we have
$$ \bigl\vert \langle j_{p}u-j_{p}v,w\rangle\bigr\vert \leq M\Vert u-v\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}\bigl(\Vert u\Vert _{\mathring{W}_{p}^{(1)}(\Omega)} +\Vert v\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}\bigr)^{p-2}\Vert w\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}, $$
or
$$\begin{aligned} \Vert j_{p}u-j_{p}v\Vert _{{W}_{q}^{(-1)}(\Omega)} =&\sup_{0\neq w\in \mathring{W}_{p}^{(1)}(\Omega)} \frac{\langle j_{p}u-j_{p}v,w\rangle}{\Vert w\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}} \\ \leq& M \Vert u-v\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}\bigl(\Vert u\Vert _{\mathring{W}_{p}^{(1)}(\Omega)} +\Vert v\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}\bigr)^{p-2}, \end{aligned}$$
which shows that \(j_{p}\) is bounded Lipschitz continuous.
The generalized duality mapping \(j_{p}=\Delta_{p}\) is bounded, demicontinuous (and hence hemicontinuous) and monotone, and hence \(j_{p}\) is pseudomonotone.
From the definition of \(j_{p}\), it follows that \(j_{p}\) is coercive.
Since \(j_{p}u\in{W}_{q}^{(-1)}(\Omega)\), \(\forall u\in \mathring{W}_{p}^{(1)}(\Omega)\), is the subgradient of \(\frac{1}{p}\Vert u\Vert _{\mathring{W}_{p}^{(1)}(\Omega)}^{p}\), it follows that \(j_{p}\) is potential.
Since \(j_{p}\) is pseudomonotone and coercive (hence surjective), \(j_{p}\) is demiclosed at 0 (see Saddeek [2] for an explanation).
The mapping \(j_{p}\) is generalized strictly pseudocontractive with \(r_{1}(x,y)=1\).
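As an informal sanity check on Example 1.1 (using a standard finite-difference discretization of the 1-D p-Laplacian on \((0,1)\) with zero boundary values, an illustrative construction not taken from the source), the pairing \(\langle j_{p}u,v\rangle=\int\vert u'\vert^{p-2}u'v'\,dx\) can be evaluated on grid functions, and the monotonicity \(\langle j_{p}u-j_{p}v,u-v\rangle\geq0\) and coercivity \(\langle j_{p}u,u\rangle\geq0\) observed directly:

```python
import numpy as np

# Discrete analogue of <j_p u, v> = int_0^1 |u'|^{p-2} u' v' dx with zero
# Dirichlet boundary values (finite differences; illustrative assumption only).
def jp_pairing(u, v, p, h):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h  # u' with u(0)=u(1)=0
    dv = np.diff(np.concatenate(([0.0], v, [0.0]))) / h
    return float(np.sum(np.abs(du) ** (p - 2) * du * dv) * h)

p, n = 4.0, 50
h = 1.0 / (n + 1)
rng = np.random.default_rng(1)
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Monotonicity <j_p u - j_p v, u - v> >= 0 holds because t |-> |t|^{p-2} t is
# monotone pointwise; coercivity follows from <j_p u, u> = ||u'||_p^p >= 0.
mono = jp_pairing(u, u - v, p, h) - jp_pairing(v, u - v, p, h)
coer = jp_pairing(u, u, p, h)
```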
The following two lemmas play an important role in the sequel.
Lemma 1.1
([23])
Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be nonnegative real sequences satisfying
$$ a_{n+1} \leq(1-\gamma_{n}) a_{n}+ b_{n}+ c_{n}, \quad \forall n\geq0, $$
where \(\{\gamma_{n}\}\subset(0,1)\), \(\sum_{n=0}^{\infty} \gamma_{n} = \infty\), \(\limsup_{n \rightarrow\infty} \frac{b_{n}}{\gamma_{n}} \leq0\), and \(\sum_{n=0}^{\infty} c_{n} < \infty\). Then \(\lim_{n\rightarrow\infty}a_{n}=0\).
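A quick numerical illustration of Lemma 1.1, with hypothetical sequences \(\gamma_{n}=1/(n+2)\), \(b_{n}=\gamma_{n}^{2}\), and \(c_{n}=1/(n+2)^{2}\) chosen only to satisfy the hypotheses:

```python
# Illustrate Lemma 1.1: a_{n+1} <= (1 - gamma_n) a_n + b_n + c_n, with
# sum gamma_n = inf, b_n/gamma_n -> 0, and sum c_n < inf, forces a_n -> 0.
# The sequences below are hypothetical choices satisfying these hypotheses.

a = 1.0
N = 100_000
for n in range(N):
    gamma = 1.0 / (n + 2)          # gamma_n in (0,1); sum gamma_n diverges
    b = gamma ** 2                 # b_n / gamma_n = gamma_n -> 0
    c = 1.0 / (n + 2) ** 2         # sum c_n < infinity
    a = (1 - gamma) * a + b + c    # run the inequality as an equality
```

Running the recursion as an equality is the worst case permitted by the inequality; even so, a decays toward 0 (roughly like \(\log n / n\) for these choices).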
Lemma 1.2
([24])
Let X be a real uniformly convex Banach space with a uniformly Gateaux differentiable norm, and let \(X^{\ast}\) be its dual. Then, for all \(x^{\ast}, y^{\ast} \in X^{\ast}\), the following inequality holds:
$$ \bigl\Vert x^{\ast}+y^{\ast}\bigr\Vert ^{2}_{X^{\ast}} \leq \bigl\Vert x^{\ast}\bigr\Vert ^{2}_{X^{\ast}}+2 \bigl\langle y^{\ast}, j_{p}^{-1}\bigl(x^{\ast}+y^{\ast}\bigr)\bigr\rangle , $$
where \(j_{p}^{-1}\) is the inverse of the duality mapping \(j_{p}\).
Let us now generalize the algorithm (1.4) for a pair of mappings as follows:
$$ j_{p}x_{n+1}=\bigl(1-\tau h(x_{n}) \bigr) j_{p}x_{n}+ \tau T_{\tau}^{j_{p}} x_{n}, \quad n\geq0, $$
(1.6)
where \(x_{0}=x \in C\), \(\tau\in(0,1)\), \(T_{\tau}^{j_{p}} = (1- \tau) j_{p} + \tau T\), \(T:C\rightarrow X^{\ast}\) is a suitable mapping, and \(j_{p}: X\rightarrow X^{\ast}\) is the generalized duality mapping.
This algorithm can also be regarded as a modification of algorithm (3) in [1]. We shall call this algorithm the generalized modified Krasnoselskii iterative algorithm.
In the case when X is a uniformly convex Banach space, the generalized strict pseudocontractivity condition (1.2) can be written as follows:
$$\begin{aligned} \Vert Tx-Ty\Vert _{X^{\ast}}^{p} \leq& r_{1}(x,y) \Vert j_{p}x-j_{p}y\Vert _{X^{\ast}}^{p} \\ &{}+r_{2}(x,y) \bigl\Vert (j_{p}-T) (x)-(j_{p}-T) (y)\bigr\Vert _{X^{\ast}}^{p}, \quad p\in[2,\infty), \end{aligned}$$
(1.7)
where \(r_{1}(x,y)\) and \(r_{2}(x,y)\) satisfy the same conditions as above.
Obviously, (1.6) and (1.7) reduce to (1.4) and (1.2), respectively, when X is a Hilbert space.
The main purpose of this paper is to extend the results in [2] to uniformly convex Banach spaces and to generalized modified iterative processes with generalized strictly pseudocontractive mappings.