
A Tseng-type algorithm for approximating zeros of monotone inclusion and J-fixed-point problems with applications

Abstract

In this paper, a Halpern–Tseng-type algorithm for approximating zeros of the sum of two monotone operators, where the zeros are also J-fixed points of relatively J-nonexpansive mappings, is introduced and studied. A strong convergence theorem is established in Banach spaces that are uniformly smooth and 2-uniformly convex. Furthermore, applications of the theorem to convex minimization and image-restoration problems are presented. In addition, the proposed algorithm is used in solving some classical image-recovery problems, and a numerical example in a Banach space is presented to support the main theorem. Finally, the performance of the proposed algorithm is compared with that of some existing algorithms in the literature.

1 Introduction

Let E be a real Banach space with dual space, \(E^{*}\). Let \(A: E \to E^{*}\) and \(B: E \to 2^{E^{*}}\) be single-valued and multivalued monotone operators, respectively. The following monotone inclusion problem:

$$ \text{find } u\in E \quad \text{such that } 0\in (A+B)u, $$
(1)

has been of interest to several authors due to its numerous applications in solving problems arising from image restoration, signal recovery, and machine learning. One of the early methods used for approximating solutions of the inclusion problem (1) is the forward–backward algorithm (FBA), which was introduced by Passty [37] and studied extensively by many authors (see, e.g., [1, 2, 6, 15, 16, 20, 21, 28, 49]).

Recently, there has been growing interest in the study of the monotone inclusion problem (1) whose solutions are fixed points of some nonexpansive-type mapping. In general, the problem is stated as follows:

$$ \text{find } u \in E \quad \text{such that } 0\in (A+B)u \text{ and } Tu=u, $$
(2)

where \(T: E \to E\) is a nonexpansive-type mapping.

In 2010, Takahashi et al. [44] introduced and studied an iterative algorithm that approximates solutions of problem (2) in the setting of real Hilbert spaces. They proved the following strong convergence theorem:

Theorem 1.1

Let C be a closed and convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of C into H and let B be a maximal monotone operator on H, such that the domain of B is included in C. Let \(J_{\lambda}= (I +\lambda B)^{-1}\) be the resolvent of B for \(\lambda >0\) and let T be a nonexpansive mapping of C into itself, such that \(F(T) \cap (A + B)^{-1}0 \neq \emptyset \). Let \(x_{1} = x \in C\) and let \(\{x_{n}\} \subset C\) be a sequence generated by

$$ x_{n+1}=\beta _{n}x_{n}+(1-\beta _{n})T \bigl(\alpha _{n}x+(1-\alpha _{n})J_{ \lambda _{n}}(x_{n}- \lambda _{n}Ax_{n}) \bigr), $$
(3)

for all \(n\in \mathbb{N}\), where \(\{\lambda _{n}\}\subset (0,2\alpha )\), \(\{\alpha _{n}\}, \{\beta _{n} \}\subset (0,1)\) satisfy

$$\begin{aligned}& 0< a\leq \lambda _{n} \leq b < 2\alpha , \qquad 0< c\leq \beta _{n} \leq d< 1,\\& \lim_{n\to \infty} (\lambda _{n}-\lambda _{n+1})=0, \qquad \lim_{n\to \infty}\alpha _{n}=0, \qquad \sum_{n=1}^{\infty }\alpha _{n}= \infty . \end{aligned}$$

Then, \(\{x_{n}\}\) converges strongly to a point of \(F(T)\cap (A+B)^{-1}0\).

In recent years, many authors have exploited the inertial technique in order to accelerate the convergence of sequences generated by existing algorithms in the literature. The inertial extrapolation technique was first introduced by Polyak [39] as an acceleration process in solving smooth, convex minimization problems. An algorithm of inertial type is an iterative procedure in which subsequent terms are obtained using the preceding two terms. Many authors have shown numerically that adding the inertial extrapolation term to many existing algorithms improves their performance (see, e.g., [3, 12, 17, 18, 25, 30, 36, 38, 42, 43]).
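To illustrate the inertial technique, the following minimal Python sketch implements a Polyak-type (heavy-ball) iteration for a smooth objective; the callable grad_f and the parameter values lam and theta are illustrative assumptions, not choices prescribed in [39].

```python
import numpy as np

def heavy_ball(grad_f, x0, x1, lam=0.01, theta=0.9, n_iters=200):
    """Inertial (heavy-ball) iteration:
    x_{n+1} = x_n - lam * grad_f(x_n) + theta * (x_n - x_{n-1})."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for _ in range(n_iters):
        x_prev, x = x, x - lam * grad_f(x) + theta * (x - x_prev)
    return x

# Example: minimize f(x) = ||x||^2 / 2, whose gradient is x.
x_min = heavy_ball(lambda x: x, x0=[5.0, -3.0], x1=[4.0, -2.0])
```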

In 2021, Adamu et al. [4] introduced and studied the following inertial algorithm that approximates solutions of problem (2) in real Hilbert spaces. They proved the following strong convergence theorem:

Theorem 1.2

Let H be a real Hilbert space. Let \(A:H\to H\) be α-inverse strongly monotone, \(B: H \to 2^{H}\) be a set-valued maximal monotone operator, and \(T:H\to H\) be a nonexpansive mapping. Assume \(F(T) \cap (A + B)^{-1}0 \neq \emptyset \). Let \(x_{0},x_{1} , u \in H\) and let \(\{x_{n}\} \subset H\) be a sequence generated by:

$$ \begin{aligned} \textstyle\begin{cases} w_{n}= x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ z_{n}=\gamma _{n}w_{n}+(1-\gamma _{n})(I+\lambda _{n}B)^{-1}(I- \lambda _{n}A)w_{n}, \\ y_{n}=s_{n}w_{n}+(1-s_{n})(I+\lambda _{n}B)^{-1}(I-\lambda _{n}A)z_{n}, \\ x_{n+1}=\tau _{n}u+\sigma _{n}w_{n}+\mu _{n}Ty_{n}, \end{cases}\displaystyle \end{aligned} $$
(4)

where the control parameters satisfy some appropriate conditions. Then, \(\{x_{n}\}\) converges strongly to a point in \(F(T)\cap (A+B)^{-1}0\).

Remark 1

We recall that in Algorithms (3) and (4) the operator A is required to be α-inverse strongly monotone, i.e., A satisfies the following inequality:

$$ \langle x-y, Ax-Ay \rangle \geq \alpha \Vert Ax-Ay \Vert ^{2}. $$

This requirement rules out some important applications (see, e.g., Sect. 4 of [45]).

To dispense with the α-inverse strong monotonicity assumption on A, using the idea of the extragradient method of Korpelevich [27] for monotone variational inequalities, Tseng [45] introduced the following algorithm in real Hilbert spaces:

$$ \textstyle\begin{cases} x_{1}\in C; \\ y_{n}=(I+\lambda _{n}B)^{-1}(I-\lambda _{n}A)x_{n}; \\ x_{n+1}=P_{C} (y_{n}-\lambda _{n}(Ay_{n}-Ax_{n}) ), \end{cases} $$
(5)

where \(C\subset H\) is nonempty, closed, and convex such that \(C\cap (A+B)^{-1}0\neq \emptyset \), A is maximal monotone and Lipschitz continuous with constant \(L>0\) and B is maximal monotone. He proved weak convergence of the sequence generated by his algorithm to a solution of problem (1).
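In a real Hilbert space such as \(\mathbb{R}^{N}\), one iteration of scheme (5) can be sketched as follows; this is a minimal sketch assuming user-supplied callables A (the single-valued operator), res_B (the resolvent \((I+\lambda B)^{-1}\)), and proj_C (the metric projection onto C).

```python
import numpy as np

def tseng_step(x, A, res_B, proj_C, lam):
    """One forward-backward-forward iteration of Tseng's scheme (5):
    y  = (I + lam*B)^{-1}(x - lam*A(x))     (forward-backward step)
    x+ = P_C(y - lam*(A(y) - A(x)))         (correction and projection)."""
    y = res_B(x - lam * A(x), lam)
    return proj_C(y - lam * (A(y) - A(x)))
```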

Remark 2

We note here that the class of monotone operators that are Lipschitz continuous properly contains the class of monotone operators that are α-inverse strongly monotone, since every α-inverse strongly monotone operator is \(\frac{1}{\alpha}\)-Lipschitz continuous.

Recently, in 2021, Padcharoen et al. [35] proposed an inertial version of Tseng’s Algorithm (5) in the setting of real Hilbert spaces. They proved the following theorem:

Theorem 1.3

Let H be a real Hilbert space. Let \(A: H \to H\) be an L-Lipschitz continuous and monotone mapping and \(B: H \to 2^{H}\) be a maximal monotone map. Assume that the solution set \((A+B)^{-1}0\) is nonempty. Given \(x_{0},x_{1}\in H\), let \(\{x_{n}\}\) be a sequence defined by:

$$ \textstyle\begin{cases} w_{n}=x_{n}+\alpha _{n} (x_{n}-x_{n-1}), \\ y_{n}= (I+\lambda _{n}B)^{-1}(I-\lambda _{n}A)w_{n}, \\ x_{n+1}=y_{n}-\lambda _{n} (Ay_{n}-Aw_{n}), \end{cases} $$
(6)

where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (6) converges weakly to a solution of problem (1).

In 2019, Shehu [41] extended the inclusion problem (1) involving monotone operators to Banach spaces. He introduced and studied a modified version of Tseng’s algorithm and proved the following theorem:

Theorem 1.4

Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0\) is nonempty and the normalized duality mapping J on E is weakly sequentially continuous. Let \(\{x_{n}\}\) be a sequence in E generated by:

$$ \textstyle\begin{cases} x_{1}\in E, \\ y_{n}=(J+\lambda _{n}B)^{-1}(Jx_{n}-\lambda _{n}Ax_{n}) , \\ x_{n+1}=J^{-1}(Jy_{n}-\lambda _{n}(Ay_{n}-Ax_{n})), \end{cases} $$
(7)

where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (7) converges weakly to a point \(x\in (A+B)^{-1}0\).

To obtain a strong convergence theorem and dispense with the weak sequential continuity assumption on the normalized duality mapping J in Theorem 1.4, in the same paper [41], Shehu introduced and studied a Halpern modification of Algorithm (7). He proved the following theorem:

Theorem 1.5

Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0\) is nonempty. Let \(\{x_{n}\}\) be a sequence in E generated by:

$$ \textstyle\begin{cases} x_{1}\in E, \\ y_{n}=(J+\lambda _{n}B)^{-1}(Jx_{n}-\lambda _{n}Ax_{n}) , \\ w_{n}=J^{-1}(Jy_{n}-\lambda _{n}(Ay_{n}-Ax_{n})), \\ x_{n+1}= J^{-1} (\alpha _{n} Jx_{1}+(1-\alpha _{n})Jw_{n} ), \end{cases} $$
(8)

where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (8) converges strongly to a point \(x\in (A+B)^{-1}0\).

Recently, Cholamjiak et al. [23] introduced and studied a Halpern–Tseng-type algorithm for approximating solutions of the inclusion problem (2) in the setting of Banach spaces. They proved the following theorem:

Theorem 1.6

Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping and \(T: E \to E\) be relatively nonexpansive. Suppose the solution set \(\Omega = (A+B)^{-1}0\cap F(T)\neq \emptyset \). Let \(\{x_{n}\}\) be a sequence in E generated by:

$$ \textstyle\begin{cases} u,x_{1}\in E, \\ y_{n}=(J+\lambda _{n}B)^{-1}(Jx_{n}-\lambda _{n}Ax_{n}) , \\ z_{n}= J^{-1}(Jy_{n}-\lambda _{n}(Ay_{n}-Ax_{n})), \\ x_{n+1}= J^{-1} (\alpha _{n} Ju+(1-\alpha _{n})(\beta _{n}Jz_{n}+(1- \beta _{n})JTz_{n}) ), \end{cases} $$
(9)

where \(\{\lambda _{n} \}\subset (0, \frac{\sqrt{c}}{\sqrt{\kappa}L})\), for some \(c,\kappa >0\); \(\{\alpha _{n}\}, \{\beta _{n} \}\subset (0,1)\) with \(\lim_{n\to \infty} \alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \); \(0< a\leq \beta _{n}\leq b <1\). Then, the sequence \(\{x_{n}\}\) generated by (9) converges strongly to a solution of problem (2).

In 2016, Chidume and Idu [22] reintroduced a fixed-point notion for operators mapping a uniformly convex and uniformly smooth real Banach space E to its dual space \(E^{*}\). Given a map \(T: E \to E^{*}\), let J be the normalized duality mapping on E. Chidume and Idu [22] called a point \(u\in E\) a J-fixed point of T if \(Tu=Ju\) and denoted the set of J-fixed points of T by \(F_{J}(T):=\{x\in E: Tx=Jx \}\). An intriguing property of J-fixed points is their connection with optimization problems (see, e.g., [22]). Currently, there is growing interest in the study of J-fixed points (see, e.g., [11, 13, 33, 34], for some interesting results in the literature).

Remark 3

We note here that this notion has also been defined by Zegeye [50] who called it a semifixed point. Also, Liu [29] called it a duality fixed point.

In line with the current interest in the inclusion problems (1) and (2) involving monotone operators on Banach spaces, in J-fixed points, and in the inertial acceleration technique, it is our purpose in this paper to propose an inertial Halpern–Tseng-type algorithm for approximating solutions of the inclusion problem (1) that are J-fixed points of a relatively J-nonexpansive mapping. Furthermore, we prove the strong convergence of the sequence generated by our algorithm in the setting of real Banach spaces that are uniformly smooth and 2-uniformly convex. In addition, we present applications of our theorem to convex minimization and use our algorithm to solve some classical problems arising from image restoration. Finally, we present a numerical example on a real Banach space to support our main theorem.

2 Preliminaries

In this section, we define some notions and state some results that will be needed in our subsequent analysis.

Let E be a real normed space and let \(J: E \to 2^{E^{*}} \) be the normalized duality map (see, e.g., [8] for the explicit definition of J and its properties on certain Banach spaces). The following functional \(\phi :E\times E \to \mathbb{R}\) defined on a smooth real Banach space by

$$\begin{aligned} \phi (x,y):= \Vert x \Vert ^{2}-2\langle x,Jy \rangle + \Vert y \Vert ^{2},\quad \forall x,y\in E, \end{aligned}$$
(10)

will be needed in our estimations in the following. The functional ϕ was first introduced by Alber [8] and has been extensively studied by many authors (see, for example, [8, 14, 26, 32] and the references contained in them). Observe that on a real Hilbert space H, the definition of ϕ above reduces to \(\phi (x,y)=\Vert x-y\Vert ^{2}\), \(\forall x,y\in H\). Furthermore, given \(x,y,z\in E \) and \(\tau \in [0,1]\), using the definition of ϕ, one can easily deduce the following (see, e.g., [22, 32]):

  1. D1:

\((\Vert x \Vert -\Vert y\Vert )^{2} \leq \phi (x,y)\leq (\Vert x \Vert + \Vert y\Vert )^{2}\),

  2. D2:

\(\phi (x, J^{-1}(\tau Jy+(1-\tau )Jz))\leq \tau \phi (x,y) + (1-\tau ) \phi (x,z) \),

  3. D3:

    \(\phi (x,y)=\phi (x,z)+\phi (z,y)+2\langle z-x,Jy-Jz\rangle \),

where J and \(J^{-1}\) are the duality maps on E and \(E^{*}\), respectively.
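For instance, property D3 can be verified directly from definition (10), using \(\langle z,Jz\rangle = \Vert z\Vert ^{2}\):

$$\begin{aligned} \phi (x,z)+\phi (z,y)+2\langle z-x,Jy-Jz\rangle &= \bigl( \Vert x \Vert ^{2}-2\langle x,Jz \rangle + \Vert z \Vert ^{2} \bigr)+ \bigl( \Vert z \Vert ^{2}-2\langle z,Jy \rangle + \Vert y \Vert ^{2} \bigr) \\ & \quad {}+2\langle z,Jy \rangle -2 \Vert z \Vert ^{2}-2\langle x,Jy \rangle +2\langle x,Jz \rangle \\ &= \Vert x \Vert ^{2}-2\langle x,Jy \rangle + \Vert y \Vert ^{2}=\phi (x,y). \end{aligned}$$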

We shall use ϕ interchangeably with the functional \(V: E\times E^{*} \to \mathbb{R}\) defined by

$$ V\bigl(x,y^{*}\bigr):= \Vert x \Vert ^{2}-2\bigl\langle x,y^{*}\bigr\rangle + \bigl\Vert y^{*} \bigr\Vert ^{2},\quad \forall x \in E, y^{*}\in E^{*}, $$

since \(V(x,y^{*})=\phi (x,J^{-1}y^{*})\).

The following ideas will be used in the subsequent discussion.

Definition 2.1

Let \(T:E\to E^{*}\) be a map. A point \(x^{*}\in E\) is called an asymptotic J-fixed point of T if there exists a sequence \(\{x_{n}\}\subset E\) such that \(x_{n}\rightharpoonup x^{*}\) and \(\|Jx_{n}-Tx_{n}\|\to 0 \), as \(n \to \infty \). Let \(\widehat{F}_{J}(T)\) be the set of asymptotic J-fixed points of T.

Definition 2.2

A map \(T:E\to E^{*}\) is said to be relatively J-nonexpansive if

\((i)\):

\(\widehat{F}_{J}(T)=F_{J}(T) \neq \emptyset \),

\((ii)\):

\(\phi (u,J^{-1}Tx)\leq \phi (u,x)\), \(\forall x\in E\), \(u\in F_{J}(T)\).

Remark 4

See Chidume et al. [19] for a nontrivial example of a relatively J-nonexpansive mapping. One can easily verify from the definition above that if an operator T is relatively J-nonexpansive, then the operator \(J^{-1}T\) is relatively nonexpansive in the usual sense, and vice versa. Furthermore, \(x^{*}\in F_{J} (T ) \Leftrightarrow x^{*}\in F (J^{-1} T )\).

Definition 2.3

Let E be a smooth, strictly convex, and reflexive real Banach space and let C be a nonempty, closed, and convex subset of E. Following Alber [8], the generalized projection map, \(\Pi _{C} : E\to C\) is defined by

$$ \Pi _{C}(u) =\arg \min_{v\in C}\phi (v,u), \quad \forall u \in E. $$

Clearly, in a real Hilbert space, the generalized projection \(\Pi _{C}\) coincides with the metric projection \(P_{C}\) from E onto C.

Definition 2.4

Let E be a reflexive, strictly convex, and smooth real Banach space and let \(B: E \to 2^{E^{*}}\) be a maximal monotone operator. Then, for any \(\lambda >0\) and \(u\in E\), there exists a unique element \(u_{\lambda }\in E\) such that \(Ju \in (Ju_{\lambda }+\lambda Bu_{\lambda})\). The element \(u_{\lambda}\) is called the resolvent of B and it is denoted by \(J_{\lambda}^{B}u\). Alternatively, \(J_{\lambda}^{B}= (J+\lambda B)^{-1}J\), for all \(\lambda >0\). It is easy to verify that \(B^{-1}0=F(J_{\lambda}^{B})\), \(\forall \lambda >0\), where \(F(J_{\lambda }^{B})\) denotes the set of fixed points of \(J_{\lambda}^{B}\).
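As a simple illustration, in a real Hilbert space J is the identity, so \(J_{\lambda}^{B}\) reduces to the classical resolvent \((I+\lambda B)^{-1}\). Taking \(B=\partial g\) with \(g(t)=\vert t \vert \) on \(\mathbb{R}\), the inclusion \(u\in u_{\lambda}+\lambda \partial \vert u_{\lambda} \vert \) gives the soft-thresholding operator

$$ J_{\lambda}^{\partial \vert \cdot \vert }(u)= \operatorname{sign}(u)\max \bigl\{ \vert u \vert -\lambda , 0 \bigr\} ,\quad u\in \mathbb{R}; $$

applied componentwise, this is the resolvent used for the \(\ell _{1}\)-regularizer in Sect. 4.2.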

Now, we recall some fundamental and useful results that will be needed in the proof of our main theorem and its corollaries.

Lemma 2.5

([7])

Let C be a nonempty, closed, and convex subset of a smooth, strictly convex, and reflexive real Banach space E. For any \(x\in E\), \(\tilde{x} =\Pi _{C}x\) if and only if \(\langle \tilde{x}-y, Jx-J\tilde{x} \rangle \geq 0\) for all \(y\in C\).

Lemma 2.6

([8])

Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\) as its dual. Then,

$$\begin{aligned} V\bigl(u,u^{*}\bigr)+2\bigl\langle J^{-1}u^{*}-u,v^{*} \bigr\rangle \leq V\bigl(u,u^{*}+v^{*}\bigr), \end{aligned}$$
(11)

for all \(u\in E\) and \(u^{*},v^{*}\in E^{*}\).

Lemma 2.7

([10])

Let E be a reflexive Banach space. Let \(A: E \to E^{*}\) be a monotone, hemicontinuous, and bounded mapping. Let \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Then, \(A+B\) is a maximal monotone mapping.

Lemma 2.8

Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p,q>1\). The space E is q-uniformly smooth if and only if its dual space \(E^{*}\) is p-uniformly convex.

Lemma 2.9

([48])

Let E be a 2-uniformly smooth, real Banach space. Then, there exists a constant \(\rho >0\) such that \(\forall x,y\in E\)

$$ \Vert x+y \Vert ^{2}\leq \Vert x \Vert ^{2}+2\langle y, Jx\rangle +\rho \Vert y \Vert ^{2}. $$

In a real Hilbert space, \(\rho =1\).

Lemma 2.10

([46])

Let E be a 2-uniformly convex and smooth real Banach space. Then, there exists a positive constant μ such that

$$\begin{aligned} \mu \Vert x-y \Vert ^{2}\le \phi (x,y),\quad \forall x,y\in E. \end{aligned}$$
(12)

Lemma 2.11

([26])

Let E be a uniformly convex and smooth real Banach space, and let \(\{u_{n}\}\) and \(\{v_{n}\}\) be two sequences of E. If either \(\{u_{n}\}\) or \(\{v_{n}\}\) is bounded and \(\phi (u_{n},v_{n} )\to 0 \) then \(\Vert u_{n}-v_{n}\Vert \to 0 \).

Lemma 2.12

([32])

Let E be a uniformly smooth Banach space and \(r > 0\). Then, there exists a continuous, strictly increasing, and convex function \(g : [0, 2r] \rightarrow [0, \infty )\) such that \(g(0) = 0\) and

$$\begin{aligned} \phi \bigl(u,J^{-1}\bigl[\beta Jx + (1-\beta )Jy\bigr] \bigr)\le \beta \phi (u,x)+(1- \beta )\phi (u,y)-\beta (1-\beta )g\bigl( \Vert Jx-Jy \Vert \bigr), \end{aligned}$$

for all \(\beta \in [0, 1]\), \(u \in E\) and \(x, y \in B_{r}:=\{z\in E : \|z\|\leq r\}\).

Lemma 2.13

([47])

Let \(\{ a_{n} \}\) be a sequence of nonnegative numbers satisfying the condition

$$ a_{n+1} \leq (1-\alpha _{n})a_{n} +\alpha _{n} \beta _{n} +c_{n},\quad n \geq 0, $$

where \(\{ \alpha _{n} \}\), \(\{ \beta _{n} \}\), and \(\{c_{n}\}\) are sequences of real numbers such that

$$ \begin{gathered} \textit{(i)} \quad \{\alpha _{n}\}\subset [0,1]\quad \textit{s.t. } \sum_{n=0}^{\infty }\alpha _{n}= \infty ; \qquad \textit{(ii)}\quad \limsup_{n\to \infty} \beta _{n} \leq 0; \\ \textit{(iii)}\quad c_{n} \geq 0, \sum_{n=0}^{\infty }c_{n} < \infty . \end{gathered} $$

Then, \(\lim_{n \to \infty } a_{n}=0 \).

Lemma 2.14

([31])

Let \(\Gamma _{n}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\Gamma _{n_{j}}\}_{j\geq 0}\) of \(\{\Gamma _{n}\}\) that satisfies \(\Gamma _{n_{j}}<\Gamma _{n_{j}+1}\) for all \(j\geq 0\). Also, consider the sequence of integers \(\{ \tau (n)\}_{n\geq n_{0}}\) defined by

$$ \tau (n)=\max\{ k\leq n | \Gamma _{k}< \Gamma _{k+1}\}. $$

Then, \(\{\tau (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n\to \infty}\tau (n)=\infty \) and, for all \(n \geq n_{0}\), it holds that \(\Gamma _{\tau (n)}\leq \Gamma _{\tau (n)+1}\) and we have

$$ \Gamma _{n}\leq \Gamma _{\tau (n)+1}. $$

Lemma 2.15

([9])

Let \(\{\Gamma _{n}\}\), \(\{\delta _{n}\}\), and \(\{\alpha _{n}\}\) be sequences in \([0,\infty ) \) such that

$$ \Gamma _{n+1} \leq \Gamma _{n} + \alpha _{n}( \Gamma _{n} - \Gamma _{n-1}) + \delta _{n}, $$

for all \(n \geq 1\), \(\sum_{n=1}^{\infty} \delta _{n} < +\infty \) and there exists a real number α with \(0 \leq \alpha _{n} \leq \alpha <1\), for all \(n \in \mathbb{N}\). Then, the following hold:

  1. (i)

    \(\sum_{n \geq 1}[\Gamma _{n} - \Gamma _{n-1}]_{+} < + \infty \), where \([t]_{+}=\max \{t,0\} \);

  2. (ii)

    there exists \(\Gamma ^{*} \in [0, \infty )\) such that \(\lim_{n \rightarrow \infty} \Gamma _{n}= \Gamma ^{*}\).

Lemma 2.16

([5])

Let E be a 2-uniformly convex and uniformly smooth real Banach space and let \(x_{0},x_{1},w\in E\). Let \(\{v_{n}\}\subset E\) be a sequence defined by \(v_{n}:=J^{-1} (Jx_{n}+\mu _{n}(Jx_{n}-Jx_{n-1}) )\). Then,

$$\begin{aligned} \phi (w,v_{n}) &\leq \phi (w,x_{n})+\rho \mu _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\mu _{n}\phi (x_{n},x_{n-1}) \\ & \quad {}+ \mu _{n} \bigl( \phi (w,x_{n})-\phi (w,x_{n-1}) \bigr), \end{aligned}$$

where \(\{\mu _{n}\}\subset (0,1)\) and ρ is the constant appearing in Lemma 2.9. For completeness, we shall give the proof here.

Proof

Using property D3, we have

$$\begin{aligned} \phi (w,v_{n})&=\phi (w,x_{n})+\phi (x_{n},v_{n})+2 \langle x_{n}-w,Jv_{n}-Jx_{n} \rangle \\ &=\phi (w,x_{n})+\phi (x_{n},v_{n})+2\mu _{n} \langle x_{n}-w, Jx_{n}-Jx_{n-1} \rangle \end{aligned}$$
(13)
$$\begin{aligned} &=\phi (w,x_{n})+\phi (x_{n},v_{n})+\mu _{n}\phi (x_{n},x_{n-1})+\mu _{n} \phi (w,x_{n})-\mu _{n}\phi (w,x_{n-1}). \end{aligned}$$
(14)

Also, by Lemma 2.9, one can estimate \(v_{n}\) as follows:

$$\begin{aligned} \phi (w,v_{n})&=\phi \bigl( w, J^{-1}\bigl(Jx_{n}+ \mu _{n}(Jx_{n}-Jx_{n-1})\bigr) \bigr) \\ &= \Vert w \Vert ^{2}+ \bigl\Vert Jx_{n}+\mu _{n}(Jx_{n}-Jx_{n-1}) \bigr\Vert ^{2}-2\bigl\langle w, Jx_{n}+ \mu _{n}(Jx_{n}-Jx_{n-1}) \bigr\rangle \\ &= \Vert w \Vert ^{2}+ \bigl\Vert Jx_{n}+\mu _{n}(Jx_{n}-Jx_{n-1}) \bigr\Vert ^{2}-2\langle w, Jx_{n} \rangle -2\mu _{n} \langle w, Jx_{n}-Jx_{n-1} \rangle \\ &\leq \phi (w, x_{n}) +\rho \mu _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2}+2\mu _{n} \langle x_{n}-w, Jx_{n}-Jx_{n-1} \rangle . \end{aligned}$$
(15)

Putting together equation (13) and inequality (15), we obtain

$$ \phi (x_{n},v_{n}) \leq \rho \mu _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2}. $$

From (14), this implies that

$$\begin{aligned} \phi (w,v_{n}) &\leq \phi (w,x_{n})+\rho \mu _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\mu _{n}\phi (x_{n},x_{n-1}) \\ &\quad {} + \mu _{n} \bigl( \phi (w,x_{n})- \phi (w,x_{n-1}) \bigr). \end{aligned}$$
(16)

 □

3 Main result

The setting for Algorithm 3.1.

  1. 1.

    The space E is a 2-uniformly convex and uniformly smooth real Banach space with dual space, \(E^{*}\).

  2. 2.

    The operator \(A: E \to E^{*}\) is monotone and L-Lipschitz continuous, \(B: E \to 2^{E^{*}}\) is maximal monotone, and \(T: E \to E^{*}\) is relatively J-nonexpansive.

  3. 3.

    The solution set \(\Omega = (A+B)^{-1}0\cap F_{J}(T)\) is nonempty.

  4. 4.

    The control parameters \(\{\beta _{n}\}\subset (0,1)\), \(\{\gamma _{n}\}\subset (0,1)\) such that \(\lim_{n\to \infty}\gamma _{n}=0\) and \(\sum_{n=1}^{\infty }\gamma _{n}=\infty \), \(\{\epsilon _{n}\}\subset (0,1)\) such that \(\sum_{n=1}^{\infty}\epsilon _{n}<\infty \), and \(\{\lambda _{n}\} \subset (\lambda , \frac{\sqrt{\mu}}{\sqrt{\rho}L} )\), where \(\lambda \in (0,\frac{\sqrt{\mu}}{\sqrt{\rho}L})\), ρ and μ are the constants appearing in Lemmas 2.9 and 2.10, respectively.

Algorithm 3.1

Inertial Halpern–Tseng-type algorithm:

Step 0. (Initialization) Choose arbitrary points \(u,x_{0},x_{1}\in E\) and \(\theta \in (0,1)\), and set \(n=1\).

Step 1. Choose \(\theta _{n}\) such that \(0\leq \theta _{n} \leq \bar{\theta}_{n}\), where

$$ \bar{\theta}_{n}= \textstyle\begin{cases} \min \{ \theta , \frac{\epsilon _{n}}{ \Vert Jx_{n}-Jx_{n-1} \Vert ^{2}}, \frac{\epsilon _{n}}{\phi (x_{n},x_{n-1})} \}, & x_{n}\neq x_{n-1} , \\ \theta , & \text{otherwise}. \end{cases} $$

Step 2. Compute

$$ \textstyle\begin{cases} w_{n}= J^{-1} ( Jx_{n}+\theta _{n}(Jx_{n}-Jx_{n-1}) ),\\ y_{n}= J_{\lambda _{n}}^{B}J^{-1} (Jw_{n}-\lambda _{n} Aw_{n} ), \\ z_{n}= J^{-1} (Jy_{n}-\lambda _{n}(Ay_{n}-Aw_{n}) ), \\ u_{n}= J^{-1} ( \beta _{n} Jz_{n} + (1-\beta _{n}) Tz_{n} ), \\ x_{n+1}= J^{-1} ( \gamma _{n} Ju + (1-\gamma _{n})Ju_{n} ). \end{cases} $$

Step 3. Set \(n=n+1\) and go to Step 1.
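For readers who wish to experiment with the scheme, the following is a minimal Python sketch of Algorithm 3.1 in its Hilbert-space specialization, where J and \(J^{-1}\) are the identity and \(\phi (x,y)=\Vert x-y\Vert ^{2}\); it is not the Banach-space algorithm itself, and the callables A, res_B (the resolvent \((I+\lambda B)^{-1}\)), T, and the parameter choices shown are user-supplied assumptions intended to satisfy the conditions listed above.

```python
import numpy as np

def algorithm_3_1(A, res_B, T, u, x0, x1, lam, theta=0.9, beta=0.5, n_iters=500):
    """Sketch of Algorithm 3.1 with J = I (real Hilbert space).
    A     : monotone, L-Lipschitz operator; here lam should satisfy lam < 1/L.
    res_B : res_B(x, lam) = (I + lam*B)^{-1}(x), the resolvent of B.
    T     : nonexpansive map;  u : Halpern anchor point."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        gamma_n = 1.0 / (n + 1)               # gamma_n -> 0 and sum(gamma_n) = infinity
        eps_n = 1.0 / (n + 5) ** 2            # summable sequence
        d2 = np.linalg.norm(x - x_prev) ** 2  # ||Jx_n - Jx_{n-1}||^2 = phi(x_n, x_{n-1}) when J = I
        theta_n = theta if d2 == 0 else min(theta, eps_n / d2)
        w = x + theta_n * (x - x_prev)        # inertial extrapolation
        y = res_B(w - lam * A(w), lam)        # forward-backward step
        z = y - lam * (A(y) - A(w))           # Tseng-type correction
        u_n = beta * z + (1 - beta) * T(z)    # relaxation with T
        x_prev, x = x, gamma_n * u + (1 - gamma_n) * u_n   # Halpern step
    return x
```

With \(A=\nabla f\) for a smooth convex f, res_B the proximal map of a convex g, and T the identity, this sketch reduces to an inertial, Halpern-regularized variant of the forward-backward-forward iteration.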

Lemma 3.2

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then, \(\{x_{n}\}\) is bounded.

Proof

Let \(x\in \Omega \). Using Lemma 2.9 and D3, we have

$$\begin{aligned} \phi (x,z_{n}) &\leq \phi \bigl(x, J^{-1} \bigl(Jy_{n}-\lambda _{n}(Ay_{n}-Aw_{n}) \bigr) \bigr) \\ &= \Vert x \Vert ^{2}-2\bigl\langle x, Jy_{n}-\lambda _{n}(Ay_{n}-Aw_{n})\bigr\rangle + \bigl\Vert Jy_{n}- \lambda _{n}(Ay_{n}-Aw_{n}) \bigr\Vert ^{2} \\ &\leq \Vert x \Vert ^{2}-2\langle x, Jy_{n}\rangle +2 \lambda _{n} \langle x, Ay_{n}-Aw_{n} \rangle + \Vert Jy_{n} \Vert ^{2} \\ & \quad {}-2\lambda _{n} \langle y_{n}, Ay_{n}-Aw_{n} \rangle +\rho \bigl\Vert \lambda _{n} (Ay_{n}-Aw_{n}) \bigr\Vert ^{2} \\ &=\phi (x,y_{n}) -2\lambda _{n} \langle y_{n}-x, Ay_{n}-Aw_{n} \rangle +\rho \bigl\Vert \lambda _{n} (Ay_{n}-Aw_{n}) \bigr\Vert ^{2} \\ &=\phi (x,w_{n})+\phi (w_{n},y_{n}) +2\langle w_{n}-x,Jy_{n}-Jw_{n} \rangle \\ & \quad {}-2\lambda _{n} \langle y_{n}-x, Ay_{n}-Aw_{n} \rangle +\rho \bigl\Vert \lambda _{n} (Ay_{n}-Aw_{n}) \bigr\Vert ^{2} \\ &=\phi (x,w_{n})+\phi (w_{n},y_{n}) -2\langle y_{n}- w_{n},Jy_{n}-Jw_{n} \rangle +2 \langle y_{n}- x,Jy_{n}-Jw_{n} \rangle \\ & \quad {}-2\lambda _{n} \langle y_{n}-x, Ay_{n}-Aw_{n} \rangle +\rho \bigl\Vert \lambda _{n} (Ay_{n}-Aw_{n}) \bigr\Vert ^{2} \\ &\leq \phi (x,w_{n})-\phi (y_{n},w_{n}) +\rho \lambda _{n}^{2}L^{2} \Vert y_{n}-w_{n} \Vert ^{2} \\ & \quad {}-2 \bigl\langle y_{n}-x, Jw_{n}-Jy_{n} -\lambda _{n}(Aw_{n}-Ay_{n}) \bigr\rangle . \end{aligned}$$
(17)

Claim.

$$ \bigl\langle y_{n}-x, Jw_{n}-Jy_{n} -\lambda _{n}(Aw_{n}-Ay_{n})\bigr\rangle \geq 0 . $$
(18)

Proof of claim. Observe that \(y_{n}=J_{\lambda _{n}}^{B}J^{-1} (Jw_{n}-\lambda _{n} Aw_{n} )\) implies \((Jw_{n}-\lambda _{n}Aw_{n})\in (Jy_{n}+\lambda _{n} By_{n})\). Since B is maximal monotone, there exists \(b_{n}\in By_{n}\) such that \(Jw_{n}-\lambda _{n}Aw_{n}=Jy_{n}+\lambda _{n} b_{n}\). Thus,

$$ b_{n}=\frac{1}{\lambda _{n}}(Jw_{n}-Jy_{n}- \lambda _{n} Aw_{n}). $$
(19)

Furthermore, since \(0\in (A+B)x\) and \((Ay_{n}+b_{n})\in (A+B)y_{n}\), by the monotonicity of \((A+B)\), we have

$$ \langle y_{n}-x, Ay_{n}+b_{n} \rangle \geq 0. $$

Substituting equation (19) into this inequality, we obtain

$$ \bigl\langle y_{n}-x,Jw_{n}-Jy_{n}-\lambda _{n}(Aw_{n}-Ay_{n})\bigr\rangle \geq 0, $$

which justifies our claim.

Now, substituting inequality (18) into inequality (17) and using Lemma 2.10, we deduce that

$$ \phi (x,z_{n})\leq \phi (x,w_{n}) - \biggl(1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu} \biggr)\phi (y_{n},w_{n}). $$
(20)

Since \(\lambda _{n} \in (0, \frac{\sqrt{\mu}}{\sqrt{\rho}L} )\), we have \(1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu}>0\). Thus,

$$ \phi (x,z_{n}) \leq \phi (x,w_{n}). $$
(21)

Also, using D2 and the fact that T is relatively J-nonexpansive, we have

$$\begin{aligned} \phi (x,u_{n}) &\leq \beta _{n}\phi (x,z_{n})+(1- \beta _{n})\phi \bigl(x, J^{-1}Tz_{n}\bigr) \\ & \leq \beta _{n} \phi (x,z_{n})+(1-\beta _{n}) \phi (x,z_{n})=\phi (x,z_{n}). \end{aligned}$$
(22)

Next, using D2, inequalities (22) and (21), Lemma 2.16, and the fact that \(\{\theta _{n}\}\subset (0,1)\), we obtain

$$\begin{aligned} \phi (x,x_{n+1})&=\phi \bigl(x, J^{-1}\bigl(\gamma _{n}Ju+(1-\gamma _{n})Ju_{n}\bigr) \bigr) \\ &\leq \gamma _{n} \phi (x,u)+ (1-\gamma _{n})\phi (x,u_{n}) \\ &\leq \gamma _{n} \phi (x,u)+ (1-\gamma _{n})\phi (x,z_{n}) \\ &\leq \gamma _{n} \phi (x,u)+ (1-\gamma _{n})\phi (x,w_{n}) \\ &\leq \gamma _{n} \phi (x,u)+ (1-\gamma _{n}) \bigl( \phi (x,x_{n}) + \theta _{n} \bigl(\phi (x,x_{n})- \phi (x,x_{n-1}) \bigr) \\ & \quad {}+\rho \theta _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\theta _{n} \phi (x_{n},x_{n-1}) \bigr) \\ &\leq \max \bigl\{ \phi (x,u),\phi (x,x_{n}) +\theta _{n} \bigl(\phi (x,x_{n})- \phi (x,x_{n-1}) \bigr) \\ & \quad {}+\rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\theta _{n} \phi (x_{n},x_{n-1}) \bigr\} . \end{aligned}$$
(23)

If the maximum is \(\phi (x,u)\), for all \(n\geq 1\), we are done. Else, there exists an \(n_{0}\geq 1\) such that for all \(n\geq n_{0}\), we have that

$$\begin{aligned} \phi (x,x_{n+1})& \leq \phi (x,x_{n}) +\theta _{n} \bigl(\phi (x,x_{n})- \phi (x,x_{n-1}) \bigr) +\rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} \\ & \quad {}+\theta _{n} \phi (x_{n},x_{n-1}). \end{aligned}$$

From Step 1 and condition 4 in the setting for Algorithm 3.1, we obtain

$$ \rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} \leq \rho \epsilon _{n}, \qquad \theta _{n} \phi (x_{n},x_{n-1})\leq \epsilon _{n} \quad \text{and} \quad \sum_{n=1}^{\infty }\epsilon _{n} < \infty . $$

Hence, by Lemma 2.15, \(\{\phi (x,x_{n})\}\) is convergent and thus, bounded. Furthermore, by D1, \(\{x_{n}\}\) is bounded. This implies that \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), and \(\{u_{n}\}\) are bounded. □

Now, we are ready to state our main convergence theorem.

Theorem 3.3

Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then, \(\{x_{n}\}\) converges strongly to \(x\in \Omega \).

Proof

Let \(x\in \Omega \). First, we estimate \(\phi (x,u_{n})\) using Lemma 2.12, the fact that T is relatively J-nonexpansive, and inequalities (20) and (21). Now,

$$\begin{aligned} \phi (x,u_{n}) &\leq \beta _{n} \phi (x,z_{n})+(1- \beta _{n})\phi \bigl(x,J^{-1}Tz_{n}\bigr)- \beta _{n}(1-\beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr) \\ &\leq \beta _{n} \phi (x,z_{n}) +(1-\beta _{n}) \phi (x,z_{n})-\beta _{n}(1- \beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr) \\ &\leq \beta _{n} \phi (x,z_{n}) +(1-\beta _{n}) \biggl[ \phi (x,w_{n})- \biggl(1-\frac{\rho \lambda _{n}^{2}L^{2}}{\mu} \biggr)\phi (y_{n},w_{n}) \biggr] \\ & \quad {}-\beta _{n}(1-\beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr) \\ &\leq \phi (x,w_{n})-(1-\beta _{n}) \biggl(1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu} \biggr)\phi (y_{n},w_{n}) \\ & \quad {}-\beta _{n}(1-\beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr). \end{aligned}$$
(24)

Next, we estimate \(\phi (x,x_{n+1})\) using Lemma 2.12 and inequality (24). Hence,

$$\begin{aligned} \phi (x,x_{n+1})& \leq \gamma _{n}\phi (x,u)+(1-\gamma _{n}) \phi (x,u_{n}) \\ &\leq \gamma _{n}\phi (x,u)+(1-\gamma _{n}) \biggl[ \phi (x,w_{n})-(1- \beta _{n}) \biggl(1-\frac{\rho \lambda _{n}^{2}L^{2}}{\mu} \biggr)\phi (y_{n},w_{n}) \\ & \quad {}-\beta _{n}(1-\beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr) \biggr] \\ &= \gamma _{n}\phi (x,u)+(1-\gamma _{n}) \phi (x,w_{n})-(1-\gamma _{n}) (1- \beta _{n}) \biggl(1-\frac{\rho \lambda _{n}^{2}L^{2}}{\mu} \biggr)\phi (y_{n},w_{n}) \\ & \quad {}-(1-\gamma _{n})\beta _{n}(1-\beta _{n})g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr). \end{aligned}$$
(25)

Set \(\eta _{n}=(1-\gamma _{n})(1-\beta _{n}) (1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu} )\) and \(\zeta _{n}=(1-\gamma _{n})\beta _{n}(1-\beta _{n})\). By rearranging the terms in inequality (25) and using Lemma 2.16, we obtain

$$\begin{aligned} &\eta _{n}\phi (y_{n},w_{n})+\zeta _{n}g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr) \\ &\quad\leq \gamma _{n} \bigl(\phi (x,u)-\phi (x,w_{n}) \bigr)+ \phi (x,w_{n})-\phi (x,x_{n+1}) \\ &\quad\leq \gamma _{n} \bigl(\phi (x,u)-\phi (x,w_{n}) \bigr)+ \phi (x,x_{n})+ \rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} \\ &\quad \quad {}+\theta _{n}\phi (x_{n},x_{n-1}) + \theta _{n} \bigl(\phi (x,x_{n})- \phi (x,x_{n-1}) \bigr) -\phi (x,x_{n+1}) \\ &\quad= \gamma _{n} \bigl(\phi (x,u)-\phi (x,w_{n}) \bigr)+ \phi (x,x_{n}) - \phi (x,x_{n+1})+\theta _{n}\phi (x_{n},x_{n-1}) \\ &\quad \quad {}+\rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\theta _{n} \bigl(\phi (x,x_{n})- \phi (x,x_{n-1}) \bigr). \end{aligned}$$
(26)

To complete the proof, we consider the following two cases:

Case 1. Assume there exists an \(n_{0}\in \mathbb{N}\) such that for all \(n\geq n_{0}\),

$$ \phi (x,x_{n+1}) \leq \phi (x,x_{n}) ,\quad \forall n\geq n_{0}. $$

Then, \(\{\phi (x,x_{n})\}\) is convergent.

From inequality (26), using the fact that \(\lim_{n\to \infty}\gamma _{n}=0\), the boundedness of \(\{w_{n}\}\), the existence of \(\lim_{n\to \infty}\phi (x,x_{n})\), and the fact that \(\lim_{n\to \infty} \rho \theta _{n}\|Jx_{n}-Jx_{n-1} \|^{2}=0= \lim_{n\to \infty} \theta _{n} \phi (x_{n},x_{n-1}) \), we obtain the following:

$$ \lim_{n\to \infty} \phi (y_{n},w_{n})=0 \quad \text{and} \quad \lim_{n \to \infty} g\bigl( \Vert Jz_{n}-Tz_{n} \Vert \bigr)=0. $$

This implies by Lemma 2.11 and the properties of g that

$$ \lim_{n\to \infty} \Vert y_{n}-w_{n} \Vert =0 \quad \text{and} \quad \lim_{n \to \infty} \Vert Jz_{n}-Tz_{n} \Vert =0. $$
(27)

Furthermore, since \(\Vert Jx_{n}-Jw_{n} \Vert =\theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert \) and \((\theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert )^{2}\leq \theta \bigl(\theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2}\bigr)\leq \theta \epsilon _{n}\to 0\), we have

$$ \lim _{n\to \infty} \Vert Jx_{n}-Jw_{n} \Vert =0. $$

Moreover, by the uniform continuity of \(J^{-1}\) on bounded sets, \(\lim_{n\to \infty}\|x_{n}-w_{n}\|=0\). This and equation (27) imply that \(\lim_{n\to \infty}\|x_{n}-y_{n}\|=0\). By the uniform continuity of J on bounded sets, this implies \(\lim_{n\to \infty} \|Jx_{n}-Jy_{n}\|=0\). Also, the Lipschitz continuity of A and equation (27) imply that \(\lim_{n\to \infty} \|Aw_{n}-Ay_{n}\|=0\). Therefore,

$$ \lim_{n\to \infty} \Vert Jz_{n}-Jy_{n} \Vert =\lim_{n\to \infty}\lambda _{n} \Vert Aw_{n}-Ay_{n} \Vert =0. $$
(28)

By the uniform continuity of \(J^{-1}\), equation (28) implies that \(\lim_{n\to \infty}\| z_{n}-y_{n}\|=0\). Thus,

$$ \lim_{n\to \infty} \Vert x_{n}-z_{n} \Vert =0. $$
(29)

Now, observe that

$$\begin{aligned} \Vert Jx_{n+1}-Jx_{n} \Vert &\leq \Vert Jx_{n+1}-Ju_{n} \Vert + \Vert Ju_{n}-Jz_{n} \Vert + \Vert Jz_{n}-Jx_{n} \Vert \\ &\leq \gamma _{n} \Vert Ju-Ju_{n} \Vert +(1-\beta _{n}) \Vert Tz_{n}-Jz_{n} \Vert + \Vert Jz_{n}-Jx_{n} \Vert \\ &\leq \gamma _{n} \Vert Ju-Ju_{n} \Vert +(1-\beta _{n}) \Vert Tz_{n}-Jz_{n} \Vert + \Vert Jz_{n}-Jy_{n} \Vert \\ & \quad {}+ \Vert Jy_{n}-Jx_{n} \Vert . \end{aligned}$$

This implies that

$$ \lim_{n\to \infty} \Vert Jx_{n+1}-Jx_{n} \Vert =0. $$
(30)

Now, we prove that \(\Omega _{w}(x_{n}) \subset \Omega \), where \(\Omega _{w}(x_{n})\) denotes the set of weak subsequential limits of \(\{x_{n}\}\). Since \(\{x_{n}\}\) is bounded, \(\Omega _{w}(x_{n})\neq \emptyset \). Let \(x^{*}\in \Omega _{w}(x_{n})\). Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\). From equation (29), we have \(z_{n_{k}}\rightharpoonup x^{*}\). This and (27) imply that \(x^{*}\in \widehat{F_{J}}(T)\). Since T is relatively J-nonexpansive, \(x^{*}\in F_{J}(T)\).

Next, we show that \(x^{*}\in (A+B)^{-1}0\). Let \((v,w)\in G(A+B):=\{ (x,y)\in E\times E^{*} : y\in (Ax+Bx)\} \). Then, \((w-Av)\in Bv\). By the definition of \(y_{n}\) in Algorithm 3.1, we have that \((Jw_{n_{k}}-\lambda _{n_{k}}Aw_{n_{k}})\in (Jy_{n_{k}}+\lambda _{n_{k}}By_{n_{k}})\). Thus, \(\frac{1}{\lambda _{n_{k}}}(Jw_{n_{k}}-Jy_{n_{k}}-\lambda _{n_{k}}Aw_{n_{k}}) \in By_{n_{k}}\). By the monotonicity of B, we have

$$ \biggl\langle v-y_{n_{k}}, w-Av-\frac{1}{\lambda _{n_{k}}}(Jw_{n_{k}}-Jy_{n_{k}}- \lambda _{n_{k}}Aw_{n_{k}})\biggr\rangle \geq 0. $$

Using the fact that A is monotone, we estimate this as follows

$$\begin{aligned} \langle v-y_{n_{k}}, w\rangle &\geq \biggl\langle v-y_{n_{k}}, Av+ \frac{1}{\lambda _{n_{k}}}(Jw_{n_{k}}-Jy_{n_{k}}-\lambda _{n_{k}}Aw_{n_{k}}) \biggr\rangle \\ &= \langle v- y_{n_{k}}, Av-Aw_{n_{k}}\rangle + \frac{1}{\lambda _{n_{k}}} \langle v-y_{n_{k}}, Jw_{n_{k}}-Jy_{n_{k}} \rangle \\ &= \langle v- y_{n_{k}}, Av-Ay_{n_{k}}\rangle + \langle v-y_{n_{k}}, Ay_{n_{k}}-Aw_{n_{k}} \rangle \\ & \quad {}+\frac{1}{\lambda _{n_{k}}}\langle v-y_{n_{k}}, Jw_{n_{k}}-Jy_{n_{k}} \rangle \\ & \geq \langle v-y_{n_{k}}, Ay_{n_{k}}-Aw_{n_{k}} \rangle + \frac{1}{\lambda _{n_{k}}}\langle v-y_{n_{k}}, Jw_{n_{k}}-Jy_{n_{k}} \rangle . \end{aligned}$$

Since \(\lim_{n\to \infty} \|Aw_{n}-Ay_{n}\|= \lim_{n\to \infty} \|Jy_{n}-Jw_{n}\|=0\), \(\{\frac{1}{\lambda _{n}} \}\) is bounded and \(y_{n_{k}}\rightharpoonup x^{*}\), it follows that

$$ \bigl\langle v-x^{*}, w \bigr\rangle \geq 0. $$

By Lemma 2.7, \(A+B\) is maximal monotone. This implies that \(0\in (A+B)x^{*}\), i.e., \(x^{*}\in (A+B)^{-1}0\). Hence, \(x^{*}\in \Omega =F_{J}(T) \cap (A+B)^{-1} 0\).

Now, we show that \(\{x_{n}\}\) converges strongly to the point \(x=\Pi _{\Omega}u\). Observe that if \(x=x^{*}\), we are done. Suppose \(x\neq x^{*}\). Using the boundedness of \(\{x_{n}\}\), Lemma 2.5, and the fact that Ω is closed and convex (see, e.g., [23]), there exists a subsequence \(\{x_{n_{k}}\}\subset \{x_{n}\}\) such that

$$ \limsup_{n\to \infty} \langle x_{n}-x,Ju-Jx\rangle =\lim _{k \to \infty} \langle x_{n_{k}}-x,Ju-Jx \rangle = \bigl\langle x^{*}-x, Ju-Jx \bigr\rangle \leq 0. $$

Using (30) and the uniform continuity of \(J^{-1}\) on bounded sets, we deduce that

$$ \limsup_{n\to \infty} \langle x_{n+1}-x, Ju-Jx \rangle \leq 0. $$

Next, using Lemma 2.6, D2, inequalities (22), (21), and Lemma 2.16, we have

$$\begin{aligned} \phi (x, x_{n+1}) &= \phi \bigl(x, J^{-1} \bigl(\gamma _{n}Ju+(1-\gamma _{n})Ju_{n}\bigr) \bigr) \\ &= V \bigl(x, \gamma _{n} Ju+(1-\gamma _{n})Ju_{n} \bigr) \\ &\leq V \bigl(x, \gamma _{n}Ju+(1-\gamma _{n})Ju_{n}- \gamma _{n}(Ju-Jx) \bigr) \\ & \quad {}+ 2\gamma _{n} \langle x_{n+1}-x,Ju-Jx \rangle \\ &= V\bigl(x, \gamma _{n}Jx+(1-\gamma _{n}) Ju_{n}\bigr)+2\gamma _{n} \langle x_{n+1}-x,Ju-Jx \rangle \\ &=\phi \bigl(x, J^{-1}\bigl(\gamma _{n}Jx+(1-\gamma _{n}) Ju_{n}\bigr)\bigr) +2 \gamma _{n} \langle x_{n+1}-x,Ju-Jx \rangle \\ &\leq \gamma _{n} \phi (x,x) +(1-\gamma _{n}) \phi (x,u_{n})+2\gamma _{n} \langle x_{n+1}-x, Ju-Jx \rangle \\ &\leq (1-\gamma _{n}) \bigl( \phi (x,x_{n})+\rho \theta _{n}^{2} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\theta _{n}\phi (x_{n},x_{n-1}) \\ & \quad {}+ \theta _{n} \bigl( \phi (x,x_{n})-\phi (x,x_{n-1}) \bigr) \bigr) +2 \gamma _{n} \langle x_{n+1}-x, Ju-Jx \rangle \end{aligned}$$
(31)
$$\begin{aligned} &\leq (1-\gamma _{n}) \phi (x,x_{n}) + 2\gamma _{n} \langle x_{n+1}-x, Ju-Jx \rangle \\ & \quad {}+ \rho \theta _{n} \Vert Jx_{n}-Jx_{n-1} \Vert ^{2} +\theta _{n}\phi (x_{n},x_{n-1}). \end{aligned}$$
(32)

By Lemma 2.13, inequality (32) implies that \(\lim_{n\to \infty}\phi (x,x_{n})= 0\). Using Lemma 2.11, we obtain that \(\lim_{n\to \infty} x_{n} = x\).

Case 2. If Case 1 does not hold, then there exists a subsequence \(\{x_{m_{j}}\} \subset \{x_{n}\}\) such that

$$ \phi (x,x_{m_{j}+1})>\phi (x,x_{m_{j}}),\quad \forall j\in \mathbb{N}. $$

By Lemma 2.14, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\), such that \(\lim_{k \to \infty} m_{k}=\infty \) and the following inequalities hold

$$ \phi (x,x_{m_{k}})\leq \phi (x, x_{m_{k}+1}) \quad \text{and} \quad \phi (x,x_{k}) \leq \phi (x,x_{m_{k}}), \quad \forall k\in \mathbb{N}. $$

From inequality (26) we have

$$\begin{aligned} & \eta _{m_{k}}\phi (y_{m_{k}},x_{m_{k}}) +\zeta _{m_{k}} g\bigl( \Vert Jz_{m_{k}}-Tz_{m_{k}} \Vert \bigr) \\ &\quad\leq \gamma _{m_{k}} \bigl(\phi (x,u)-\phi (x,w_{m_{k}}) \bigr)+ \phi (x,x_{m_{k}}) \\ &\quad \quad {}-\phi (x,x_{m_{k}+1}) +\theta _{m_{k}}\phi (x_{m_{k}},x_{m_{k}+1})+ \rho \theta _{m_{k}} \Vert Jx_{m_{k}}-Jx_{m_{k}-1} \Vert ^{2} \\ &\quad \quad {}+\theta _{m_{k}} \bigl(\phi (x,x_{m_{k}})-\phi (x,x_{m_{k}-1}) \bigr) \\ &\quad\leq \gamma _{m_{k}} \bigl(\phi (x,u)-\phi (x,w_{m_{k}}) \bigr)+ \theta _{m_{k}} \phi (x_{m_{k}},x_{m_{k}+1}) \\ &\quad \quad {}+\rho \theta _{m_{k}} \Vert Jx_{m_{k}}-Jx_{m_{k}-1} \Vert ^{2}. \end{aligned}$$

Following a similar argument as in Case 1, one can establish the following

$$\begin{aligned}& \lim_{k\to \infty} \Vert y_{m_{k}}-x_{m_{k}} \Vert =0 \quad \text{and} \quad \lim_{k\to \infty} \Vert Jz_{m_{k}}-Tz_{m_{k}} \Vert =0,\\& \lim_{k\to \infty} \Vert x_{m_{k}+1}-x_{m_{k}} \Vert =0 \quad \text{and} \quad \limsup_{k\to \infty} \langle x_{m_{k}+1}-x, Ju-Jx \rangle \leq 0. \end{aligned}$$

From (31) we have

$$\begin{aligned} \phi (x,x_{m_{k}+1})&\leq (1-\gamma _{m_{k}}) \bigl( \phi (x,x_{m_{k}})+ \rho \theta _{m_{k}}^{2} \Vert Jx_{m_{k}}-Jx_{m_{k}-1} \Vert ^{2} \\ & \quad {}+\theta _{m_{k}}\phi (x_{m_{k}},x_{m_{k}-1}) + \theta _{m_{k}} \bigl( \phi (x,x_{m_{k}})-\phi (x,x_{m_{k}-1}) \bigr) \bigr) \\ & \quad {}+2\gamma _{m_{k}} \langle x_{m_{k}+1}-x, Ju-Jx \rangle \\ &\leq (1-\gamma _{m_{k}})\phi (x,x_{m_{k}}) +2\gamma _{m_{k}} \langle x_{m_{k}+1}-x, Ju-Jx \rangle \\ & \quad {}+ \rho \theta _{m_{k}} \Vert Jx_{m_{k}}-Jx_{m_{k}-1} \Vert ^{2} +\theta _{m_{k}} \phi (x_{m_{k}},x_{m_{k}-1}) \\ & \quad {}+ \bigl( \phi (x,x_{m_{k}})-\phi (x,x_{m_{k}-1}) \bigr) . \end{aligned}$$
(33)

By Lemma 2.13, inequality (33) implies that \(\lim_{k\to \infty}\phi (x,x_{m_{k}})= 0\). Thus,

$$ \limsup_{k \to \infty}\phi (x,x_{k})\leq \lim _{k\to \infty} \phi (x, x_{m_{k}})= 0. $$

Therefore, \(\limsup_{k \to \infty}\phi (x,x_{k})=0\) and so, by Lemma 2.11, \(\lim_{k \to \infty} x_{k} = x\). This completes the proof. □

4 Applications and numerical illustrations

In this section, we give applications of Theorem 3.3 to a structured, nonsmooth, convex minimization problem and to image-denoising and deblurring problems, together with a numerical illustration on the classical Banach space \(l_{\frac{3}{2}}\). Finally, we compare the performance of Algorithm 3.1 with that of Algorithms (3) and (9).

4.1 Application to a convex minimization problem

In this subsection, we shall give an application of our theorem to the structured, nonsmooth, convex minimization problem, that is, to

$$ \text{find } x^{*}\in E\quad \text{with } f \bigl(x^{*}\bigr)+g\bigl(x^{*}\bigr) = \min _{x\in E} \bigl\{ f(x)+g(x) \bigr\} , $$
(34)

where f is a real-valued function on E that is smooth and convex and g is an extended real-valued function that is convex and lower-semicontinuous (E is a real Banach space). Problem (34) can be recast as:

$$ \text{find } x^{*}\in E \quad \text{with } 0 \in \bigl( \nabla f\bigl(x^{*}\bigr)+\partial g\bigl(x^{*} \bigr) \bigr), $$
(35)

where ∇f is the gradient of f and ∂g is the subdifferential of g. Suppose ∇f is monotone and Lipschitz continuous. Then, setting \(A=\nabla f\) and \(B=\partial g\) in Algorithm 3.1 and assuming that the solution set \(\Omega :=F_{J}(T) \cap ( \nabla f+\partial g)^{-1}0 \neq \emptyset \), it follows from Theorem 3.3 that \(\{x_{n}\}\) converges strongly to a point \(x\in \Omega \).

4.2 Application to image-restoration problems

The general image-recovery problem can be modeled as the following underdetermined linear system:

$$ y=Dx+\varrho , $$
(36)

where \(x\in \mathbb{R}^{N}\) is an original image, \(y\in \mathbb{R}^{M}\) is the observed image with noise ϱ, and \(D:\mathbb{R}^{N}\to \mathbb{R}^{M}\) (\(M< N\)) is a bounded linear operator. It is well known that solving (36) can be viewed as solving the LASSO problem:

$$ \min_{x\in \mathbb{R}^{N}}\frac {1}{2} \Vert Dx-y \Vert _{2}^{2}+\lambda \Vert x \Vert _{1}, $$
(37)

where \(\lambda >0\). Following [24], we define \(Ax:=\nabla (\frac {1}{2}\|Dx-y\|_{2}^{2})=D^{T}(Dx-y)\) and \(Bx:=\partial (\lambda \|x\|_{1})\). It is known that A is \(\|D\|^{2}\)-Lipschitz continuous and monotone. Moreover, B is maximal monotone (see [40]).
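A minimal sketch of the operator A and of the resolvent of B in the Euclidean setting of \(\mathbb{R}^{N}\) (where J is the identity) is given below; the function name make_lasso_operators and the variable names reg and lam are our own illustrative conventions.

```python
import numpy as np

def make_lasso_operators(D, y, reg):
    """Operators for the splitting of the LASSO problem (37):
    A(x)          = D^T (D x - y), the gradient of (1/2)||Dx - y||_2^2;
    res_B(x, lam) = resolvent of B = subdifferential of reg*||.||_1, i.e.
                    componentwise soft-thresholding with threshold lam*reg."""
    A = lambda x: D.T @ (D @ x - y)
    res_B = lambda x, lam: np.sign(x) * np.maximum(np.abs(x) - lam * reg, 0.0)
    L = np.linalg.norm(D, 2) ** 2   # A is ||D||^2-Lipschitz, as noted above
    return A, res_B, L
```

These callables can, for instance, be passed to the Hilbert-space sketch of Algorithm 3.1 given in Sect. 3, with the step size chosen below 1/L.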

Remark 5

For the purpose of existence, one can take

$$ D= \begin{pmatrix} 3 & 1 \\ -1 & 5 \end{pmatrix} \quad \text{and} \quad y= \begin{pmatrix} 2 \\ 3 \end{pmatrix} $$

to see that there indeed exists a matrix D such that \(Ax:=\nabla (\frac {1}{2} \Vert Dx-y \Vert _{2}^{2} )=D^{T}(Dx-y)\) is Lipschitz continuous and monotone.

In Algorithm (3) of Takahashi et al. [44], we set \(\alpha _{n}=\frac{1}{1000n}\), \(\beta _{n}=\frac{n}{2n+1}\), \(\lambda _{n}=0.001\), and \(Sx=\frac{nx}{n+1}\); in Algorithm (9) of Cholamjiak et al. [23], we set \(\lambda _{n} = 0.03\), \(\beta _{n} = 0.999\), \(\gamma _{n}= \frac{1}{(n+1)^{2}}\), \(\theta =0.999\), \(\varepsilon _{n}= \frac{1}{(n+5)^{2}}\), \(\theta _{n}=0.95\), and \(Tx:=\frac{nx}{n+1}\); and in our proposed Algorithm 3.1, we set \(\lambda _{n} = 0.03\), \(\beta _{n} = 0.999\), \(\alpha _{n}= \frac{1}{(n+1)^{2}}\), \(\theta =0.999\), \(\varepsilon _{n}= \frac{1}{(n+5)^{2}}\), \(\theta _{n}=0.95\), and \(Tx:=\frac{nx}{n+1}\). The test images were degraded using the MATLAB blur functions “fspecial(’motion’,9,15)” and “fspecial(’gaussian’, 5,5)”, and random noise was then added. Finally, we used a tolerance of \(10^{-4}\) and a maximum of 300 iterations for all the algorithms. The results are presented in Figs. 1 and 2 and Table 1.

Figure 1: Degradation of the test images and their restorations via Algorithms (3) and (9)

Figure 2: Test images and their restorations via Algorithms (9) and 3.1

Table 1 SNR values of the restored images in Figs. 1 and 2

Looking at the restored images in Fig. 2, it is difficult to tell by eye which algorithm performs better in the restoration process. To quantify the comparison, we use the signal-to-noise ratio (SNR), a standard tool for measuring the quality of restored images: the higher the SNR value of a restored image, the better the restoration achieved by the algorithm. The SNR is defined as follows:

$$ \text{SNR}= 10\log \frac { \Vert x \Vert _{2}^{2}}{ \Vert x-x_{n} \Vert _{2}^{2}}, $$

where x and \(x_{n}\) are the original image and estimated image at iteration n, respectively. The SNR values for the restored images via Algorithms (3), (9), and 3.1 are presented in Table 1.
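Assuming the logarithm above is taken to base 10, so that the SNR is measured in decibels as is customary, the value can be computed as follows.

```python
import numpy as np

def snr(x_true, x_est):
    """Signal-to-noise ratio (dB) of a restored image x_est
    against the original image x_true, as defined above."""
    x_true, x_est = np.asarray(x_true, dtype=float), np.asarray(x_est, dtype=float)
    return 10.0 * np.log10(np.sum(x_true ** 2) / np.sum((x_true - x_est) ** 2))
```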

Discussion of the numerical results. For the restored images in Figs. 1 and 2, with regard to both the number of iterations and the quality of the restored images (SNR values), our proposed Algorithm 3.1 outperforms Algorithm (3) of Takahashi et al. [44] and Algorithm (9) of Cholamjiak et al. [23]. In particular, for the brain image, Algorithm (3) failed to restore the image before the maximum number of iterations was exhausted; however, our proposed Algorithm 3.1 took just 131 iterations to restore the brain image degraded by motion blur and 105 iterations to restore the brain image degraded by Gaussian blur. From the above experiment, our proposed method appears to be competitive and promising.

4.3 An example in \(l_{\frac{3}{2}}\)

In this subsection we present a numerical implementation of our proposed Algorithm 3.1 on the Banach space

$$ l_{p}= \Biggl\{ \{x_{n}\}\subset \mathbb{R}: \sum _{n=1}^{\infty } \vert x_{n} \vert ^{p}< \infty \Biggr\} \quad \text{with norm } \Vert x \Vert = \Biggl(\sum_{n=1}^{\infty } \vert x_{n} \vert ^{p} \Biggr)^{\frac{1}{p}}. $$

It is well known that for \(1< p\leq 2\), \(l_{p}\) spaces are uniformly smooth and 2-uniformly convex. Now, let \(p=\frac{3}{2}\). Since we cannot sum to infinity on a computer, for the purpose of numerical illustration, we consider the subspace of \(l_{\frac{3}{2}}\) consisting of sequences with only finitely many nonzero terms:

$$ S^{k}_{\frac{3}{2}}:= \bigl\{ \{x_{n}\}\subset \mathbb{R}: \{x_{n}\}=\{x_{1},x_{2}, \ldots , x_{k}, 0,0,0, \ldots \} \bigr\} , \quad \text{for some } k\geq 1. $$
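In this setting the normalized duality map is available in closed form: for \(x\in l_{p}\) with \(1< p<\infty \), \((Jx)_{i}=\Vert x\Vert _{p}^{2-p}\vert x_{i}\vert ^{p-2}x_{i}\) (a standard formula for \(l_{p}\) spaces), and \(J^{-1}\) is the duality map of the dual space \(l_{q}\), \(\frac{1}{p}+\frac{1}{q}=1\). A minimal Python sketch for vectors in \(S^{k}_{\frac{3}{2}}\) follows; the function name duality_map is our own.

```python
import numpy as np

def duality_map(x, p):
    """Normalized duality map on l_p (1 < p < infinity), applied to a vector
    with finitely many nonzero entries:
    (Jx)_i = ||x||_p^(2-p) * |x_i|^(p-2) * x_i,  and  J(0) = 0."""
    x = np.asarray(x, dtype=float)
    norm = np.sum(np.abs(x) ** p) ** (1.0 / p)
    if norm == 0.0:
        return np.zeros_like(x)
    return norm ** (2.0 - p) * np.sign(x) * np.abs(x) ** (p - 1.0)

# For E = l_{3/2} with dual E* = l_3:
#   J     = lambda x: duality_map(x, 1.5)   # maps E into E*
#   J_inv = lambda s: duality_map(s, 3.0)   # maps E* back into E
```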

Example 1

Consider the space \(S^{3}_{\frac{3}{2}}\) with dual space \(S^{3}_{3}\). Let \(A,B,T: S^{3}_{\frac{3}{2}} \to S_{3}^{3}\) be defined by

$$ Ax:=3x+(1,0.5,0.25,0,0,\ldots ), \qquad Bx:= 2x, \qquad Tx:=x. $$

It is not difficult to verify that the map A is 3-Lipschitz, B is maximal monotone, and T is nonexpansive and, at the same time, relatively nonexpansive and relatively J-nonexpansive. Furthermore, the point \(x^{*}= (0.2, 0.1, 0.125, 0, 0,\ldots )\) is the only point in the solution set \(\Omega =(A+B)^{-1}0\cap F_{J}(T)\). In the numerical experiment, we compared the performance of Algorithms (3), (9), and 3.1. For a fair comparison, since these algorithms have similar control parameters, we used the same value for each parameter appearing in all the algorithms. For the step size \(\lambda _{n}\), we used 0.02 in all the algorithms. For \(\alpha _{n}\), which is defined in Algorithms (3) and (9) and required to satisfy the same conditions as \(\gamma _{n}\) in our Algorithm 3.1, we used \(\frac{1}{(50{,}000\times n)+1}\) in all three algorithms. Next, for \(\beta _{n}\), which appears in all the algorithms with the same condition, we used 0.999 in Algorithms (9) and 3.1; however, for Algorithm (3), the choice \(\beta _{n}=0.5\) gave a better approximation, so we used it for that algorithm. Finally, to obtain the inertial parameter in our Algorithm 3.1, we chose \(\theta =0.999\) and \(\varepsilon _{n}=\frac{1}{(n+5)^{2}}\). We set the Halpern vector (x or u) to be zero in all the algorithms. The iteration process was started with the initial point \(x_{0}=(2,1,3,0,0,0,\ldots )\), and we observed the behavior of the algorithms as we varied \(x_{1}\): \(\text{First } x_{1}= (1,1,3, 0,0,0, \ldots ) \) and \(\text{Second } x_{1}= (2,0,1,0,0,0, \ldots )\). The iteration process was terminated when \(\|x_{n}-x^{*}\|<10^{-6}\) or \(n>1999\). The results of the experiment are presented in Table 2 and Fig. 3.

Figure 3: First 423 iterations of Algorithms (3), (9), and 3.1 illustrated graphically

Table 2 Numerical results for the varied initial point \(x_{1}\)

Discussion of the numerical results. From the numerical illustrations presented in Example 1, we observe that the iterates generated by Algorithm (3) of Takahashi et al. [44] failed to satisfy the stopping criterion before the prescribed maximum number of iterations was exhausted. While Algorithm (9) of Cholamjiak et al. [23] took 1275 iterations to satisfy the tolerance for the First initial point \(x_{1}\) and 1244 for the Second \(x_{1}\), our proposed Algorithm 3.1 took just 422 iterations for the First initial point \(x_{1}\) and 423 for the Second \(x_{1}\). Thus, in this example, our proposed algorithm outperforms the algorithms of Takahashi et al. [44] and Cholamjiak et al. [23].

4.4 Conclusion

This paper presents an inertial extension of the algorithm of Cholamjiak et al. [23] for approximating solutions of the monotone inclusion problem that are J-fixed points of relatively J-nonexpansive mappings. Applications of the theorem to convex minimization and image restoration are presented. Furthermore, some interesting numerical implementations of our proposed algorithm in solving image-recovery problems and an example on \(l_{\frac{3}{2}}\) are presented. Finally, the performance of our proposed method is compared with that of Takahashi et al. [44] and Cholamjiak et al. [23], and from the numerical illustrations, our proposed Algorithm 3.1 appears to be competitive and promising.

Availability of data and materials

Not applicable.

References

  1. Abass, H.A., Aremu, K.O., Jolaoso, L.O., Mewomo, O.T.: An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, 6, 1–20 (2020)


  2. Abubakar, A.B., Kumam, P., Awwal, A.M.: A modified self-adaptive conjugate gradient method for solving convex constrained monotone nonlinear equations with applications to signal recovery problems. Bangmod Int. J. Math. Comput. Sci. 5(2), 1–26 (2019)


  3. Adamu, A., Adam, A.A.: Approximation of solutions of split equality fixed point problems with applications. Carpath. J. Math. 37(3), 23–34 (2021)


  4. Adamu, A., Deepho, J., Ibrahim, A.H., Abubakar, A.B.: Approximation of zeros of sum of monotone mappings with applications to variational inequality and image restoration problems. Nonlinear Funct. Anal. Appl. 26(2), 411–432 (2021)


  5. Adamu, A., Kitkuan, D., Kumam, P., Padcharoen, A., Seangwattana, T.: Approximation method for monotone inclusion problems in real Banach spaces with applications. J. Inequal. Appl. 2022(1), 70, 1–20 (2022). https://doi.org/10.1186/s13660-022-02805-0


  6. Adamu, A., Kitkuan, D., Padcharoen, A., Chidume, C.E., Kumam, P.: Inertial viscosity-type iterative method for solving inclusion problems with applications. Math. Comput. Simul. 194, 445–459 (2022)


  7. Alber, Y.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Lecture Notes in Pure and Applied Mathematics, pp. 15–50 (1996)


  8. Alber, Y., Ryazantseva, I.: Nonlinear Ill Posed Problems of Monotone Type. Springer, Netherlands (2006)


  9. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)


  10. Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Springer, New York (1976)


  11. Cheng, Q., Su, Y., Zhang, J.: Duality fixed point and zero point theorems and applications. Abstr. Appl. Anal. 2012, Article ID 391301, 1–11 (2012). https://doi.org/10.1155/2012/391301


  12. Chidume, C., Ikechukwu, S., Adamu, A.: Inertial algorithm for approximating a common fixed point for a countable family of relatively nonexpansive maps. Fixed Point Theory Appl. 2018, 9, 1–9 (2018). https://doi.org/10.1186/s13663-018-0634-3


  13. Chidume, C., Kumam, P., Adamu, A.: A hybrid inertial algorithm for approximating solution of convex feasibility problems with applications. Fixed Point Theory Appl. 2020, 12, 1–17 (2020). https://doi.org/10.1186/s13663-020-00678-w


  14. Chidume, C.E., Adamu, A., Chinwendu, L.O.: Approximation of solutions of Hammerstein equations with monotone mappings in real Banach spaces. Carpath. J. Math. 35(3), 305–316 (2019)


  15. Chidume, C.E., Adamu, A., Kumam, P., Kitkuan, D.: Generalized hybrid viscosity-type forward-backward splitting method with application to convex minimization and image restoration problems. Numer. Funct. Anal. Optim. 42, 1586–1607 (2021)


  16. Chidume, C.E., Adamu, A., Minjibir, M., Nnyaba, U.: On the strong convergence of the proximal point algorithm with an application to Hammerstein equations. J. Fixed Point Theory Appl. 22(3), 1–21 (2020)


  17. Chidume, C.E., Adamu, A., Nnakwe, M.O.: Strong convergence of an inertial algorithm for maximal monotone inclusions with applications. Fixed Point Theory Appl. 2020(1), 13 (2020)


  18. Chidume, C.E., Adamu, A., Nnakwe, M.O.: An inertial algorithm for solving Hammerstein equations. Symmetry 13(3), 376 (2021)


  19. Chidume, C.E., Adamu, A., Okereke, L.C.: Strong convergence theorem for some nonexpansive-type mappings in certain Banach spaces. Thai J. Math. 18(3), 1537–1548 (2020)


  20. Chidume, C.E., Adamu, A., Okereke, L.C.: Iterative algorithms for solutions of Hammerstein equations in real Banach spaces. Fixed Point Theory Appl. 2020, 4, 1–23 (2020)


  21. Chidume, C.E., De Souza, G.S., Nnyaba, U.V., Romanus, O.M., Adamu, A.: Approximation of zeros of m-accretive mappings, with applications to Hammerstein integral equations. Carpath. J. Math. 36, 45–55 (2020)


  22. Chidume, C.E., Idu, K.: Approximation of zeros of bounded maximal monotone maps, solutions of Hammerstein integral equations and convex minimization problems. Fixed Point Theory Appl. 97, 1–28 (2016). https://doi.org/10.1186/s13663-016-0582-8


  23. Cholamjiak, P., Sunthrayuth, P., Singta, A., Muangchoo, K.: Iterative methods for solving the monotone inclusion problem and the fixed point problem in Banach spaces. Thai J. Math. 18(3), 1225–1246 (2020)


  24. Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55(4), 1–22 (2018)


  25. Ibrahim, A.H., Kumam, P., Abubakar, A.B., Adamu, A.: Accelerated derivative-free method for nonlinear monotone equations with application. Numer. Linear Algebra Appl. 29(3), e2424 (2022). https://doi.org/10.1002/nla.2424


  26. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2002)


  27. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)


  28. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)


  29. Liu, B.: Fixed point of strong duality pseudocontractive mappings and applications. Abstr. Appl. Anal. 2012, Article ID 623625, 1–7 (2012). https://doi.org/10.1155/2012/623625

  30. Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)

  31. Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 59(1), 74–79 (2010)

  32. Nilsrakoo, W., Saejung, S.: On the fixed-point set of a family of relatively nonexpansive and generalized nonexpansive mappings. Fixed Point Theory Appl. 2010, 414232, 1–14 (2010)

  33. Nnakwe, M.O.: An algorithm for approximating a common solution of variational inequality and convex minimization problems. Optimization 70, 2227–2246 (2021). https://doi.org/10.1080/02331934.2020.1777995

  34. Nnakwe, M.O., Okeke, C.C.: A common solution of generalized equilibrium problems and fixed points of pseudo-contractive-type maps. J. Appl. Math. Comput. 66(1), 701–716 (2021)

  35. Padcharoen, A., Kitkuan, D., Kumam, W., Kumam, P.: Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems. Comput. Math. Methods 3, 1–14 (2021)

  36. Pan, C., Wang, Y.: Convergence theorems for modified inertial viscosity splitting methods in Banach spaces. Mathematics 7(2), 1–12 (2019)

  37. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)

  38. Phairatchatniyom, P., ur Rehman, H., Abubakar, J., Kumam, P., Martinez-Moreno, J.: An inertial iterative scheme for solving split variational inclusion problems in real Hilbert spaces. Bangmod Int. J. Math. Comput. Sci. 7(2), 35–52 (2021)

  39. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  40. Rockafellar, R.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)

  41. Shehu, Y.: Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces. Results Math. 74(4), 1–24 (2019)

  42. Taddele, G.H., Gebrie, A.G., Abubakar, J.: An iterative method with inertial effect for solving multiple-set split feasibility problem. Bangmod Int. J. Math. Comput. Sci. 7(2), 53–73 (2021)

  43. Taiwo, A., Mewomo, O.T.: Inertial viscosity with alternative regularization for certain optimization and fixed point problems. J. Appl. Numer. Optim. 4(3), 405–423 (2022)

  44. Takahashi, S., Takahashi, W., Toyoda, M.: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27–41 (2010)

  45. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000)

  46. Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal., Theory Methods Appl. 16(12), 1127–1138 (1991)

  47. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)

  48. Xu, Z.B., Roach, G.F.: Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 157, 189–210 (1991)

  49. Yodjai, P., Kumam, P., Kitkuan, D., Jirakitpuwapat, W., Plubtieng, W.: The Halpern approximation of three operators splitting method for convex minimization problems with an application to image inpainting. Bangmod Int. J. Math. Comput. Sci. 5(2), 58–75 (2019)

  50. Zegeye, H.: Strong convergence theorems for maximal monotone mappings in Banach spaces. J. Math. Anal. Appl. 343(2), 663–671 (2008)

Acknowledgements

The authors would like to thank the referees for their valuable comments and suggestions. The authors dedicate this manuscript to the memory of the late Professor Charles Ejike Chidume, who contributed to the original draft of the manuscript but passed away before the final version was compiled and submitted to this journal. The first author acknowledges with thanks the King Mongkut’s University of Technology Thonburi Postdoctoral Fellowship and the Center of Excellence in Theoretical and Computational Science (TaCS-CoE) for their financial support.

Funding

This research was supported by the King Mongkut’s University of Technology Thonburi’s Postdoctoral Fellowship and the National Research Council of Thailand (NRCT) under Research Grants for Talented Mid-Career Researchers (Contract no. N41A640089).

Author information

Contributions

AA and PK formulated the problem and discussed the formulation with DK and AP. The analysis, the proof of the main theorem, and the draft manuscript were prepared jointly by AA, PK, DK, and AP. Proofreading and writing of the original manuscript were done jointly by AA and PK. Software and numerical simulations were carried out jointly by DK and AP. Finally, PK secured the grant for the research.

Corresponding author

Correspondence to Poom Kumam.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

The authors gave their consent for the publication of the personal images used in this article.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Adamu, A., Kumam, P., Kitkuan, D. et al. A Tseng-type algorithm for approximating zeros of monotone inclusion and J-fixed-point problems with applications. Fixed Point Theory Algorithms Sci Eng 2023, 3 (2023). https://doi.org/10.1186/s13663-023-00741-2

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13663-023-00741-2

MSC

Keywords