
Self-adaptive forward–backward splitting algorithm for the sum of two monotone operators in Banach spaces

Abstract

In this work, we prove the weak convergence of a one-step self-adaptive algorithm to a zero of the sum of two monotone operators in 2-uniformly convex and uniformly smooth real Banach spaces. We give numerical examples in infinite-dimensional spaces to compare our algorithm with some existing algorithms. Finally, our results extend and complement several existing results in the literature.

1 Introduction

Let \(\mathcal{E}\) be a real Banach space and \(\mathcal{E}^{*}\) be its topological dual. A problem of significant interest in nonlinear analysis is to find

$$\begin{aligned} x\in (A+B)^{-1}(0) \end{aligned}$$
(1.1)

with \((A+B)^{-1}(0)\neq \emptyset \), where \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) is a maximal monotone operator and \(B: \mathcal{E} \to {\mathcal{E}^{*}}\) is a monotone and Lipschitz map. Interest in problem (1.1) stems from its diverse applications in different areas of nonlinear analysis such as optimization, variational inequalities, split feasibility problems, and saddle-point problems, with applications to signal and image processing and machine learning; see, for instance, Attouch et al. [6], Bruck [10], Censor and Elfving [11], Chen and Rockafellar [12], Combettes and Wajs [15], Davis and Yin [16], Lions and Mercier [19], Moudafi and Thera [22], Passty [23], Peaceman and Rachford [24] for more treatments of problem (1.1). Consider, for instance, the split-feasibility problem, introduced by Censor and Elfving [11], which is to find

$$\begin{aligned} x\in \mathcal{C}_{1} \quad\text{such that } Tx \in \mathcal{C}_{2}, \end{aligned}$$
(1.2)

where \(\mathcal{C}_{1}\subset {\mathcal{H}_{1}}\), \(\mathcal{C}_{2}\subset \mathcal{H}_{2}\) are nonempty, closed, and convex subsets of the Hilbert spaces \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, and \(T:\mathcal{H}_{1}\to \mathcal{H}_{2}\) is a bounded linear map. Then, (1.2) can be transformed into the monotone inclusion

$$\begin{aligned} \text{find } x \in \mathcal{C}_{1} \quad\text{such that } 0 \in N_{\mathcal{C}_{1}}(x) + T^{*}(I-P_{\mathcal{C}_{2}})Tx. \end{aligned}$$

By setting

$$\begin{aligned} A(x)=N_{\mathcal{C}_{1}}(x) \quad\text{and}\quad B(x)=\nabla \biggl( \frac{1}{2} \Vert Tx-P_{\mathcal{C}_{2}}Tx \Vert ^{2} \biggr)=T^{*}(I-P_{ \mathcal{C}_{2}})Tx, \end{aligned}$$

where \(N_{\mathcal{C}_{1}}(x)\) is the normal cone of \(\mathcal{C}_{1}\) at x and \(T^{*}\) is the adjoint operator of T, (1.2) can be reformulated as (1.1).
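
As a small finite-dimensional illustration (the sets, the operator T, and all names below are illustrative choices, not taken from the paper), the forward operator B of this reformulation can be evaluated directly:

```python
import numpy as np

# Illustrative split-feasibility data: C1 = nonnegative orthant of R^n, C2 = unit ball of R^m
rng = np.random.default_rng(0)
n, m = 5, 3
T = rng.standard_normal((m, n))

def proj_C2(y, radius=1.0):
    # metric projection onto the closed Euclidean ball of the given radius
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else (radius / nrm) * y

def B(x):
    # B(x) = T^*(I - P_{C2}) T x, the gradient of (1/2)||Tx - P_{C2}(Tx)||^2
    Tx = T @ x
    return T.T @ (Tx - proj_C2(Tx))

# A = N_{C1} is the normal cone of C1; its resolvent (I + lam*A)^{-1} is the projection onto C1
resolvent_A = lambda z, lam: np.maximum(z, 0.0)
```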

There are several methods of approximating solutions of (1.1), see, e.g., [1, 6, 9, 10, 16, 19, 20, 23, 30]. One of the most efficient methods is the forward–backward splitting method introduced by Passty [23], and Lions and Mercier [19]. The method generates a sequence \(\{x_{n}\}\) iteratively defined by

$$\begin{aligned} x_{n+1}=J_{\lambda _{n}}^{A}(x_{n}- \lambda _{n} Bx_{n}), \quad n\geq 0. \end{aligned}$$
(1.3)
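
For orientation, in a Hilbert space with \(A=\partial g\) the resolvent \(J_{\lambda _{n}}^{A}\) is the proximal map of g, and (1.3) becomes a proximal-gradient loop. Below is a minimal sketch under that assumption; the concrete choices of g and B (an \(\ell _{1}\) term and the gradient of a least-squares term, which is cocoercive) are illustrative and not taken from the paper.

```python
import numpy as np

def forward_backward(prox_g, B, x0, lam, n_iters=500):
    # x_{n+1} = prox_{lam*g}(x_n - lam*B(x_n)), i.e., (1.3) with a constant step size
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - lam * B(x), lam)
    return x

# Illustration: B = gradient of (1/2)||Mx - b||^2 (cocoercive with mu = 1/||M||^2),
# g = tau*||.||_1, whose proximal map is soft-thresholding.
rng = np.random.default_rng(1)
M, b, tau = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
B = lambda x: M.T @ (M @ x - b)
prox_g = lambda z, lam: np.sign(z) * np.maximum(np.abs(z) - lam * tau, 0.0)
L = np.linalg.norm(M, 2) ** 2                                      # Lipschitz constant of B
x_sol = forward_backward(prox_g, B, np.zeros(10), lam=1.0 / L)     # lam < 2*mu is required
```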

Passty [23] and Lions and Mercier [19] proved that if the operator B is μ-cocoercive, that is, there exists \(\mu >0\) such that

$$\begin{aligned} \bigl\langle x -y, B(x) - B(y) \bigr\rangle \geq \mu \bigl\Vert B(x) - B(y) \bigr\Vert ^{2}\quad \forall x,y\in \mathcal{E}, \end{aligned}$$

and \(\liminf \lambda _{n}>0\) with \(\limsup \lambda _{n}<2\mu \), then the sequence \(\{x_{n}\}\) generated by (1.3) converges weakly to a solution of (1.1). The cocoercivity requirement imposed on the operator B limits the class of operators for which the forward–backward splitting method is applicable. In fact, there are important problems in applications where the forward–backward splitting method fails to converge due to the lack of cocoercivity of one of the operators. Consider, for instance, saddle-point problems of the form

$$\begin{aligned} \min_{x\in \mathcal{H}_{1}}\max_{y\in \mathcal{H}_{2}} f_{1}(x)+ \Phi (x,y)+f_{2}(y), \end{aligned}$$
(1.4)

where \(f_{1}:\mathcal{H}_{1}\to \mathbb{R}\cup \{+\infty \}\) and \(f_{2}:\mathcal{H}_{2}\to \mathbb{R}\cup \{+\infty \}\) are proper, convex, and lower semicontinuous functions and \(\Phi:\mathcal{H}_{1}\times \mathcal{H}_{2}\to \mathbb{R}\) is a smooth convex–concave function. The first-order optimality condition for (1.4) can then be expressed as

$$\begin{aligned} \text{find } \begin{pmatrix} x \\ y \end{pmatrix} \in \mathcal{H}_{1} \times \mathcal{H}_{2}, \quad\text{such that } \begin{pmatrix} 0 \\ 0 \end{pmatrix} \in \begin{pmatrix} \partial f_{1}(x) \\ \partial f_{2}(y) \end{pmatrix} + \begin{pmatrix} \nabla _{x}\Phi (x,y) \\ -\nabla _{y}\Phi (x,y) \end{pmatrix} . \end{aligned}$$

This can be seen as (1.1) with

$$\begin{aligned} A= \begin{pmatrix} \partial f_{1}(x) \\ \partial f_{2}(y) \end{pmatrix} , \quad \text{and} \quad B= \begin{pmatrix} \nabla _{x}\Phi (x,y) \\ -\nabla _{y}\Phi (x,y) \end{pmatrix} . \end{aligned}$$

Problem (1.4) arises naturally in different areas of application such as statistics, machine learning, and optimization, to mention but a few. Although the operator B, in this case, is Lipschitz whenever \(\nabla \Phi \) is, B is in general not cocoercive, even when Φ is bilinear. Thus, the development of an iterative method in which the cocoercivity of B is dispensed with is desirable.
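
To see why cocoercivity fails, consider the bilinear case \(\Phi (x,y)=\langle Kx,y\rangle \), for which \(B(x,y)=(K^{*}y,-Kx)\) is a skew (hence monotone and Lipschitz) operator with \(\langle B(u)-B(v), u-v\rangle =0\) for all u, v; the inequality \(\langle B(u)-B(v),u-v\rangle \geq \mu \Vert B(u)-B(v) \Vert ^{2}\) therefore cannot hold. A quick numerical check (the matrix K below is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((4, 3))   # Phi(x, y) = <Kx, y> with x in R^3, y in R^4

def B(z):
    # z = (x, y); B(z) = (K^T y, -K x), the saddle operator of Phi
    x, y = z[:3], z[3:]
    return np.concatenate([K.T @ y, -K @ x])

u, v = rng.standard_normal(7), rng.standard_normal(7)
print(np.dot(B(u) - B(v), u - v))        # = 0 up to rounding: B is monotone but not cocoercive
print(np.linalg.norm(B(u) - B(v)) ** 2)  # > 0, so no mu > 0 can satisfy the cocoercivity bound
```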

In [28], Tseng introduced the forward–backward–forward splitting method (FBFSM) for approximating solutions of (1.1). The method generates a sequence \(\{x_{n}\}\) iteratively defined by

$$\begin{aligned} \textstyle\begin{cases} y_{n} = J_{\lambda _{n}}^{A}(x_{n} - \lambda _{n} B(x_{n})), \\ x_{n+1} = y_{n} - \lambda _{n} B(y_{n}) + \lambda _{n} B(x_{n}), \quad \forall n\in \mathbb{N}, \end{cases}\displaystyle \end{aligned}$$
(1.5)

with \(\lambda _{n}\in (0,\frac{1}{L})\), where L is the Lipschitz constant of B. Tseng was able to dispense with the cocoercivity of the operator B at the expense of evaluating it twice per iteration. Recently, Malitsky and Tam [21] introduced the forward–reflected–backward splitting method (FRBSM) generated iteratively by

$$\begin{aligned} x_{n+1} = J^{A}_{\lambda _{n}} \bigl(x_{n} - \lambda _{n} B(x_{n}) - \lambda _{n-1} \bigl(B(x_{n}) - B(x_{n-1}) \bigr) \bigr), \quad \forall n \in \mathbb{N}, \end{aligned}$$
(1.6)

with \(\lambda _{n}\in (\epsilon,\frac{1-2\epsilon}{2L})\) and \(\epsilon >0\). The forward–reflected–backward splitting method requires only one evaluation of the operator B per iteration, thus improving on the computational cost of the forward–backward–forward method, which requires two evaluations of the operator B per iteration. It is worth noting that the step sizes in the algorithms introduced by Tseng and by Malitsky and Tam depend heavily on prior knowledge of the Lipschitz constant of one of the operators, which may sometimes be difficult to compute.
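
For comparison, here is a minimal Hilbert-space sketch of one iteration of each scheme, written with an abstract resolvent (all names are illustrative); in both cases the step size λ must be admissible for the, possibly unknown, Lipschitz constant L.

```python
def tseng_step(resolvent_A, B, x, lam):
    # forward-backward-forward step (1.5): two evaluations of B per iteration
    y = resolvent_A(x - lam * B(x), lam)
    return y - lam * B(y) + lam * B(x)

def frb_step(resolvent_A, B, x, Bx, Bx_prev, lam, lam_prev):
    # forward-reflected-backward step (1.6): one new evaluation of B per iteration,
    # reusing B(x_{n-1}) computed at the previous step
    return resolvent_A(x - lam * Bx - lam_prev * (Bx - Bx_prev), lam)
```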

To overcome this difficulty, very recently, Hieu et al. [17] introduced the modified forward–reflected–backward splitting method (MFRBSM) generated iteratively by

$$\begin{aligned} x_{n+1} = J^{A}_{\lambda _{n}} \bigl(x_{n} - \lambda _{n} B(x_{n}) - \lambda _{n-1} \bigl(B(x_{n}) - B(x_{n-1}) \bigr) \bigr), \quad \forall n \in \mathbb{N}, \end{aligned}$$
(1.7)

with

$$\begin{aligned} \lambda _{n+1}:=\min \biggl\{ \lambda _{n}, \frac{\theta \Vert x_{n+1} - x_{n} \Vert }{ \Vert Bx_{n+1} - Bx_{n} \Vert } \biggr\} ,\quad \theta \in \biggl(0,\frac{1}{2}\biggr). \end{aligned}$$

They proved the weak convergence of Algorithm (1.7) to a solution of (1.1). It is worth noting that the variable step sizes here do not require prior knowledge of the Lipschitz constant.
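
One way to implement the step-size update is sketched below (an illustration; the convention of simply retaining \(\lambda _{n}\) when \(Bx_{n+1}=Bx_{n}\), where the quotient is undefined, is one natural reading of the rule):

```python
import numpy as np

def next_stepsize(lam, theta, x_new, x_old, Bx_new, Bx_old):
    # lambda_{n+1} = min{lambda_n, theta*||x_{n+1}-x_n|| / ||Bx_{n+1}-Bx_n||}
    denom = np.linalg.norm(Bx_new - Bx_old)
    return lam if denom == 0 else min(lam, theta * np.linalg.norm(x_new - x_old) / denom)
```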

All the results mentioned above are obtained in the setting of Hilbert spaces. There are few results regarding the forward–backward method and its variants in Banach spaces, see, e.g., [26, 29]. One of the difficulties, perhaps, is the fact that the operators A and B go from the Banach space \(\mathcal{E}\) to its dual \(\mathcal{E}^{*}\), so the tools available in Hilbert spaces are not readily available in general Banach spaces. Moreover, the Lipschitz constant is often unknown in practice; in nonlinear problems it may be difficult to approximate. In such cases an algorithm with a linesearch is often used (see, e.g., [26]). However, a linesearch algorithm needs an inner loop with a stopping criterion over the iterations, and this task may be time-consuming. In this paper, we prove the weak convergence of the forward–reflected–backward splitting method in 2-uniformly convex and uniformly smooth real Banach spaces with variable step sizes that do not depend on the Lipschitz constant and without any linesearch procedure. Our results extend, unify, and complement many existing results in the literature.

2 Preliminaries

In this section, we give some basic definitions and lemmas that will be used in the proof of our main results. Let \(\mathcal{E}\) be a real normed linear space. Let \(S_{\mathcal{E}}\) and \(B_{\mathcal{E}}\) denote the unit sphere and the closed unit ball of \(\mathcal{E}\), respectively. The modulus of smoothness of \(\mathcal{E}\), \(\rho _{\mathcal{E}}: [0,\infty ) \to [0,\infty )\) is defined by

$$\begin{aligned} \rho _{\mathcal{E}}(t):=\sup \biggl\{ \frac{ \Vert x + y \Vert + \Vert x- y \Vert }{2} - 1: x\in S_{\mathcal{E}}, \Vert y \Vert = t \biggr\} . \end{aligned}$$

The space \(\mathcal{E}\) is said to be smooth if

$$\begin{aligned} \lim_{t\to 0} \frac{ \Vert x + ty \Vert - \Vert x \Vert }{t} \end{aligned}$$
(2.1)

exists for all \(x,y\in S_{\mathcal{E}}\). The space \(\mathcal{E}\) is also said to be uniformly smooth if the limit in (2.1) converges uniformly for all \(x,y \in S_{\mathcal{E}}\); and \(\mathcal{E}\) is said to be 2-uniformly smooth, if there exists a fixed constant \(c>0\) such that \(\rho _{\mathcal{E}}(t) \leq ct^{2}\). It is well known that every 2-uniformly smooth space is uniformly smooth. A real normed space \(\mathcal{E}\) is said to be strictly convex if

$$\begin{aligned} \biggl\Vert \frac{(x+y)}{2} \biggr\Vert < 1 \quad\text{for all } x,y \in S_{ \mathcal{E}} \text{ and } x\neq y. \end{aligned}$$

\(\mathcal{E}\) is said to be uniformly convex if \(\delta _{\mathcal{E}}(\epsilon )>0\) for all \(\epsilon \in (0,2]\), where \(\delta _{\mathcal{E}}\) is the modulus of convexity of \(\mathcal{E}\) defined by

$$\begin{aligned} \delta _{\mathcal{E}}(\epsilon ): = \inf \biggl\{ 1- \biggl\Vert \frac{x+y}{2} \biggr\Vert : x,y \in B_{\mathcal{E}}, \Vert x -y \Vert \geq \epsilon \biggr\} , \end{aligned}$$
(2.2)

for all \(\epsilon \in (0,2]\). The space \(\mathcal{E}\) is said to be 2-uniformly convex if there exists \(c>0\) such that \(\delta _{\mathcal{E}}(\epsilon ) \geq c\epsilon ^{2}\) for all \(\epsilon \in (0,2]\). It is obvious that every 2-uniformly convex Banach space is uniformly convex. It is known that all Hilbert spaces are uniformly smooth and 2-uniformly convex. It is also known that all the Lebesgue spaces \(L_{p}\) are uniformly smooth for \(1< p< \infty \), and 2-uniformly convex whenever \(1< p\leq 2\) (see [8]).
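
For instance, every real Hilbert space \(\mathcal{H}\) is 2-uniformly convex: by the parallelogram law, for \(x,y \in B_{\mathcal{H}}\) with \(\Vert x-y \Vert \geq \epsilon \),

$$\begin{aligned} \biggl\Vert \frac{x+y}{2} \biggr\Vert ^{2} = \frac{1}{2} \Vert x \Vert ^{2} + \frac{1}{2} \Vert y \Vert ^{2} - \frac{1}{4} \Vert x-y \Vert ^{2} \leq 1 - \frac{\epsilon ^{2}}{4}, \end{aligned}$$

so that \(\delta _{\mathcal{H}}(\epsilon ) \geq 1 - \sqrt{1 - \epsilon ^{2}/4} \geq \epsilon ^{2}/8\), that is, the 2-uniform convexity condition holds with \(c = 1/8\).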

Let \(\mathcal{E}\) be a real normed space. The normalized duality mapping of \(\mathcal{E}\) into \(\mathcal{E}^{*}\) is defined by

$$\begin{aligned} Jx:= \bigl\{ x^{*}\in \mathcal{E}^{*}: \bigl\langle x^{*},x \bigr\rangle = \bigl\Vert x^{*} \bigr\Vert ^{2} = \Vert x \Vert ^{2} \bigr\} , \end{aligned}$$

for all \(x\in \mathcal{E}\). The normalized duality mapping J has the following properties (see, e.g., [27]):

  • if \(\mathcal{E}\) is reflexive and strictly convex with the strictly convex dual space \(\mathcal{E}^{*}\), then J is a single-valued, one-to-one, and onto mapping. In this case, we can define the single-valued mapping \(J^{-1}: \mathcal{E}^{*} \to \mathcal{E}\) and we have \(J^{-1} =J^{*}\), where \(J^{*}\) is the normalized duality mapping on \(\mathcal{E}^{*}\);

  • if \(\mathcal{E}\) is uniformly smooth, then J is norm-to-norm uniformly continuous on each bounded subset of \(\mathcal{E}\).
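
For instance, if \(\mathcal{E}\) is a real Hilbert space, then J is the identity map, while for \(\mathcal{E} = L_{p}\), \(1< p< \infty \), the normalized duality mapping is given pointwise by

$$\begin{aligned} Jx = \Vert x \Vert _{p}^{2-p} \vert x \vert ^{p-2}x \in L_{q}, \quad \frac{1}{p} + \frac{1}{q} = 1, \end{aligned}$$

with the convention \(Jx = 0\) when \(x = 0\).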

Definition 2.1

Let \(\mathcal{E}\) be a real normed space. A map \(A:\mathcal{E}\rightarrow 2^{\mathcal{E}^{*}}\) is called monotone if for each \(x,y\in \mathcal{E}\),

$$\begin{aligned} \langle \eta -\nu,x- y \rangle \geq 0,\quad \forall \eta \in Ax, \nu \in Ay. \end{aligned}$$
(2.3)

If A is single valued, the map \(A:\mathcal{E} \rightarrow \mathcal{E}^{*}\) is called monotone if

$$\begin{aligned} \langle Ax-Ay,x-y \rangle \geq 0, \quad\forall x,y\in \mathcal{E}. \end{aligned}$$
(2.4)

A multivalued monotone operator \(A: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) is said to be maximal monotone if \(A = B\) whenever \(B: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) is monotone and \(G(A) \subset G(B)\), where \(G(A) = \{(x,x^{*}): x^{*} \in Ax\}\) is the graph of A.

Let \(\mathcal{E}\) be a real reflexive, strictly convex, and smooth Banach space and let \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) be a maximal monotone operator. Then, for each \(r>0\) the resolvent of A, \(J_{r}^{A}:\mathcal{E}\to \mathcal{E}\) is defined by

$$\begin{aligned} J_{r}^{A}(x)=(J+rA)^{-1}Jx, \end{aligned}$$

where J is the normalized duality mapping on \(\mathcal{E}\). It is easy to show that \(A^{-1}(0) = F(J_{r}^{A})\) for all \(r>0\), where \(F(J_{r}^{A})\) denotes the set of fixed points of \(J_{r}^{A}\). Let \(\mathcal{E}\) be a smooth real Banach space with dual \(\mathcal{E}^{*}\). The functional \(\psi: \mathcal{E}\times \mathcal{E} \to \mathbb{R}\) defined by

$$\begin{aligned} \psi (x,y):= \Vert x \Vert ^{2} - 2\langle x,Jy \rangle + \Vert y \Vert ^{2}, \quad \forall x,y \in \mathcal{E}, \end{aligned}$$
(2.5)

where J is the normalized duality mapping on \(\mathcal{E}\), will play a central role in what follows. It was introduced by Alber and has been studied by Alber [2], Alber and Guerre-Delabriere [3], Kamimura and Takahashi [18], Reich [25], Chidume et al. [13, 14], and a host of other authors.
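
In particular, if \(\mathcal{E}\) is a real Hilbert space, then J is the identity map and

$$\begin{aligned} \psi (x,y) = \Vert x \Vert ^{2} - 2\langle x,y \rangle + \Vert y \Vert ^{2} = \Vert x-y \Vert ^{2}, \end{aligned}$$

so the estimates below reduce to the familiar norm identities of the Hilbert-space setting.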

Lemma 2.2

([2, 5])

Let \(\mathcal{E}\) be a real uniformly convex, smooth Banach space. Then, the following identities hold:

  (i)

    \(\psi (x,y) = \psi (x,z) + \psi (z,y) + 2\langle x-z, Jz - Jy\rangle, \forall x,y,z \in \mathcal{E}\).

  (ii)

    \(\psi (x,y) + \psi (y,x) = 2\langle x -y, Jx - Jy\rangle, \forall x,y \in \mathcal{E}\).

Lemma 2.3

([5])

Let \(\mathcal{E}\) be a real 2-uniformly convex Banach space. Then, there exists \(\mu \geq 1\) such that

$$\begin{aligned} \frac{1}{\mu} \Vert x -y \Vert ^{2} \leq \psi (x,y) \quad\forall x,y \in \mathcal{E}. \end{aligned}$$

Lemma 2.4

([7])

Let \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) be a maximal monotone mapping and \(B: \mathcal{E} \to \mathcal{E}^{*}\) be a Lipschitz continuous and monotone mapping. Then, the mapping \(A+B\) is maximal monotone.

Lemma 2.5

([4])

Let \(\mathcal{E}\) be a uniformly convex Banach space. Then, the normalized duality mapping, J, is uniformly monotone on every bounded set. That is, for every \(R>0\) and arbitrary \(x,y\in \mathcal{E}\) with \(\|x\|\leq R\) and \(\|y\| \leq R\) there exists a real nonnegative and continuous function \(\psi _{R}:[0,\infty )\to [0, \infty )\) such that \(\psi _{R}(t)>0\) for \(t>0\), \(\psi _{R}(0)=0\) and

$$\begin{aligned} \langle Jx-Jy,x-y\rangle \geq \psi _{R} \bigl( \Vert x-y \Vert \bigr). \end{aligned}$$

Lemma 2.6

([18])

Let \(\mathcal{E}\) be a uniformly convex and smooth Banach space, and \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences of \(\mathcal{E}\). If \(\lim_{n\to \infty}\psi (x_{n},y_{n}) = 0\) and either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded, then \(\lim_{n\to \infty}\|x_{n} - y_{n}\|=0\).

3 Main results

In this section, we state and prove a weak convergence result for the Modified Forward–Reflected–Backward Splitting Method in a 2-uniformly convex and uniformly smooth real Banach space. The method does not require prior knowledge or an estimate of the Lipschitz constant. In the following, we assume that the solution set \((A + B)^{-1}(0)\) of problem (1.1) is nonempty.

Theorem 3.1

Let \(\mathcal{E}\) be a real 2-uniformly convex uniformly smooth Banach space. Let \(A: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) be a maximal monotone operator and \(B: \mathcal{E} \to \mathcal{E}^{*}\) be monotone and Lipschitz. Let \(x_{-1}, x_{0}\in \mathcal{E}\) be arbitrary and \(\lambda _{-1},\lambda _{0} >0\). Define the sequence \(\{x_{n}\}\) iteratively by

$$\begin{aligned} x_{n+1}= J_{\lambda _{n}}^{A} \circ J^{-1} \bigl(Jx_{n} - \lambda _{n} Bx_{n} - \lambda _{n-1}(Bx_{n} - Bx_{n-1}) \bigr), \quad n\geq 0, \end{aligned}$$
(3.1)

with

$$\begin{aligned} \lambda _{n+1}:= \min \biggl\{ \lambda _{n}, \frac{\theta \Vert x_{n+1} - x_{n} \Vert }{ \Vert Bx_{n+1} - Bx_{n} \Vert } \biggr\} ,\quad \theta \in \biggl(0,\frac{1}{2\mu} \biggr), \mu \geq 1. \end{aligned}$$

Suppose that \((A + B)^{-1}(0) \neq \emptyset \) and that the duality mapping J is weakly sequentially continuous. Then, the sequence \(\{x_{n}\}\) generated by (3.1) converges weakly to a solution of (1.1).

Proof

We first show that the sequence \(\{x_{n}\}\) is bounded. Let \(x^{*}\in (A+B)^{-1}(0)\), so that

$$\begin{aligned} -Bx^{*} \in Ax^{*}. \end{aligned}$$
(3.2)

From (3.1), we have that

$$\begin{aligned} \frac{1}{\lambda _{n}} \bigl(Jx_{n} - \lambda _{n}Bx_{n} - \lambda _{n-1}(Bx_{n} - Bx_{n-1}) - Jx_{n+1} \bigr) \in Ax_{n+1}. \end{aligned}$$
(3.3)

Using (3.2) and (3.3) and the monotonicity of A, we obtain

$$\begin{aligned} \bigl\langle Jx_{n+1} - Jx_{n} + \lambda _{n} \bigl(Bx_{n} - Bx^{*} \bigr) + \lambda _{n-1}(Bx_{n} - Bx_{n-1}), x^{*} - x_{n+1} \bigr\rangle \geq 0. \end{aligned}$$
(3.4)

By Lemma 2.2(i), we have

$$\begin{aligned} 2 \bigl\langle Jx_{n+1} - Jx_{n}, x^{*} - x_{n+1} \bigr\rangle = \psi \bigl(x^{*}, x_{n} \bigr) - \psi \bigl(x^{*}, x_{n+1} \bigr) - \psi (x_{n+1}, x_{n}). \end{aligned}$$
(3.5)

Also,

$$\begin{aligned} \bigl\langle Bx_{n} - Bx^{*}, x^{*} - x_{n+1} \bigr\rangle = \bigl\langle Bx_{n+1} - Bx^{*}, x^{*} - x_{n+1} \bigr\rangle + \bigl\langle Bx_{n} - Bx_{n+1}, x^{*} - x_{n+1} \bigr\rangle \end{aligned}$$
(3.6)

and

$$\begin{aligned} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} -x_{n+1} \bigr\rangle = \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle + \langle Bx_{n} - Bx_{n-1}, x_{n} - x_{n+1} \rangle. \end{aligned}$$
(3.7)

Substituting (3.5), (3.6), and (3.7) into (3.4) we have:

$$\begin{aligned} &\psi \bigl(x^{*},x_{n+1} \bigr) + 2\lambda _{n} \bigl\langle Bx_{n+1} - Bx_{n}, x^{*} - x_{n+1} \bigr\rangle \\ &\quad\leq \psi \bigl(x^{*},x_{n} \bigr) + 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle \\ &\qquad{} +2\lambda _{n-1}\langle Bx_{n} - Bx_{n-1}, x_{n} - x_{n+1}\rangle - \psi (x_{n+1},x_{n}) \\ &\qquad{}+ 2\lambda _{n} \bigl\langle Bx_{n+1} - Bx^{*}, x^{*} - x_{n+1} \bigr\rangle . \end{aligned}$$
(3.8)

Using the monotonicity of B to drop the last term of (3.8) and rearranging, we have

$$\begin{aligned} &\psi \bigl(x^{*},x_{n+1} \bigr) + 2\lambda _{n} \bigl\langle Bx_{n+1} - Bx_{n}, x^{*} - x_{n+1} \bigr\rangle + \psi (x_{n+1}, x_{n}) \\ &\quad\leq \psi \bigl(x^{*},x_{n} \bigr)+ 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle \\ &\qquad{}+ 2\lambda _{n-1} \langle Bx_{n} - Bx_{n-1}, x_{n} - x_{n+1} \rangle. \end{aligned}$$
(3.9)

Using the definition of \(\lambda _{n} \) and Lemma 2.3, we have

$$\begin{aligned} 2\lambda _{n-1} \langle Bx_{n} - Bx_{n-1}, x_{n} - x_{n+1}\rangle & \leq 2\lambda _{n-1} \Vert Bx_{n} - Bx_{n-1} \Vert \Vert x_{n} - x_{n+1} \Vert \\ &\leq 2\theta \frac{\lambda _{n-1}}{\lambda _{n}} \Vert x_{n} - x_{n-1} \Vert \Vert x_{n} - x_{n+1} \Vert \\ &\leq \theta \frac{\lambda _{n-1}}{\lambda _{n}} \bigl( \Vert x_{n} - x_{n-1} \Vert ^{2} + \Vert x_{n} - x_{n+1} \Vert ^{2} \bigr) \\ &\leq \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}} \bigl(\psi (x_{n}, x_{n-1}) + \psi (x_{n+1}, x_{n}) \bigr). \end{aligned}$$
(3.10)

Substituting (3.10) into (3.9), we have

$$\begin{aligned} &\psi \bigl(x^{*},x_{n+1} \bigr) + 2\lambda _{n} \bigl\langle Bx_{n+1} - Bx_{n}, x^{*} - x_{n+1} \bigr\rangle + \psi (x_{n+1}, x_{n}) \\ &\quad\leq \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}} \bigl(\psi (x_{n},x_{n-1}) + \psi (x_{n+1}, x_{n}) \bigr) + \psi \bigl(x^{*},x_{n} \bigr) \\ &\qquad{} + 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle . \end{aligned}$$
(3.11)

Rearranging the above inequality we obtain,

$$\begin{aligned} &\psi \bigl(x^{*},x_{n+1} \bigr) + 2\lambda _{n} \bigl\langle Bx_{n+1} - Bx_{n}, x^{*} - x_{n+1} \bigr\rangle + \biggl(1 - \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}} \biggr)\psi (x_{n+1}, x_{n}) \\ &\quad\leq \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1})+ \psi \bigl(x^{*}, x_{n} \bigr) + 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle . \end{aligned}$$
(3.12)

Now, define

$$\begin{aligned} E_{n} \bigl(x^{*} \bigr) = \psi \bigl(x^{*}, x_{n} \bigr) + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) + 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle . \end{aligned}$$
(3.13)

Using definition of \(E_{n}(x^{*})\) in (3.12), we have

$$\begin{aligned} E_{n +1} \bigl(x^{*} \bigr) \leq E_{n} \bigl(x^{*} \bigr) - \biggl(1-\mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}-\mu \theta \frac{\lambda _{n}}{\lambda _{n + 1}} \biggr)\psi (x_{n+1}, x_{n}). \end{aligned}$$
(3.14)

Observe that the sequence \(\{\lambda _{n}\}\) is nonincreasing and, since B is L-Lipschitz, bounded below by \(\min \{\lambda _{0}, \frac{\theta}{L}\}\); hence, \(\lambda _{n} \to \lambda > 0\). Let \(\delta \in (0,1-2\mu \theta )\) be fixed. Then, we derive

$$\begin{aligned} \lim_{n \to \infty} \biggl(1-\mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}-\mu \theta \frac{\lambda _{n}}{\lambda _{n + 1}} \biggr) = 1- 2\mu \theta > \delta. \end{aligned}$$

Thus, there exists \(n_{1} \geq 1\) such that

$$\begin{aligned} 1-\mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}-\mu \theta \frac{\lambda _{n}}{\lambda _{n + 1}}\geq \delta, \quad \forall n \geq n_{1}. \end{aligned}$$
(3.15)

It follows from (3.14) and (3.15) that

$$\begin{aligned} E_{n +1} \bigl(x^{*} \bigr) \leq E_{n} \bigl(x^{*} \bigr) - \delta \psi (x_{n+1}, x_{n})\leq E_{n} \bigl(x^{*} \bigr) \quad \forall n \geq n_{1}. \end{aligned}$$
(3.16)

Therefore, the sequence \(\{E_{n}\}_{n\geq n_{1}}\) is nonincreasing.

Now, from the definition of \(E_{n}\) and \(\lambda _{n}\) for each \(n \geq n_{1}\) we see that

$$\begin{aligned} E_{n} \bigl(x^{*} \bigr) &= \psi \bigl(x^{*}, x_{n} \bigr) + 2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1}, x^{*} - x_{n} \bigr\rangle + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \\ &\geq \psi \bigl(x^{*}, x_{n} \bigr) - 2\lambda _{n-1} \Vert Bx_{n} - Bx_{n-1} \Vert \bigl\Vert x^{*} - x_{n} \bigr\Vert + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \\ &\geq \psi \bigl(x^{*}, x_{n} \bigr) - 2\theta \frac{\lambda _{n-1}}{\lambda _{n}} \Vert x_{n} - x_{n-1} \Vert \bigl\Vert x^{*} - x_{n} \bigr\Vert + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \\ &\geq \psi \bigl(x^{*}, x_{n} \bigr) - \theta \frac{\lambda _{n-1}}{\lambda _{n}} \bigl( \Vert x_{n} - x_{n-1} \Vert ^{2} + \bigl\Vert x^{*} - x_{n} \bigr\Vert ^{2} \bigr) + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \\ &\geq \psi \bigl(x^{*}, x_{n} \bigr) - \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}} \bigl(\psi (x_{n}, x_{n-1}) + \psi \bigl(x^{*}, x_{n} \bigr) \bigr) + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \\ & = \biggl(1- \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}} \biggr)\psi \bigl(x^{*}, x_{n} \bigr) \\ &\geq \biggl(1-\mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}- \mu \theta \frac{\lambda _{n}}{\lambda _{n+1}} \biggr)\psi \bigl(x^{*}, x_{n} \bigr) \\ &\geq \delta \psi \bigl(x^{*}, x_{n} \bigr) \geq 0. \end{aligned}$$

Thus, the limit \(\lim_{n\to \infty}E_{n}\) exists.

Also, since \(\delta \psi (x^{*}, x_{n}) \leq E_{n}(x^{*}) \leq E_{n_{1}}(x^{*})\) for all \(n\geq n_{1}\), the sequence \(\{\psi (x^{*}, x_{n})\}\) is bounded, and hence, by Lemma 2.3, \(\{x_{n}\}\) is bounded. Moreover, telescoping (3.16), we have that

$$\begin{aligned} E_{N +1} \bigl(x^{*} \bigr) \leq E_{n_{1}} \bigl(x^{*} \bigr) - \delta \sum _{n=n_{1}}^{N}\psi (x_{n+1}, x_{n}), \quad \forall N\geq n_{1}. \end{aligned}$$
(3.17)

That is,

$$\begin{aligned} \delta \sum_{n=n_{1}}^{\infty}\psi (x_{n+1}, x_{n}) \leq E_{n_{1}} \bigl(x^{*} \bigr) -\lim_{n\to \infty} E_{n +1} \bigl(x^{*} \bigr) < +\infty. \end{aligned}$$

Hence, \(\lim_{n\to \infty}\psi (x_{n+1}, x_{n})=0\) and, by Lemma 2.6, \(\lim_{n\to \infty} \Vert x_{n+1} - x_{n} \Vert =0\). Since B is Lipschitz continuous, \(\{x_{n}\}\) is bounded, and \(\lambda _{n} \to \lambda > 0\), it follows that

$$\begin{aligned} \lim_{n\to \infty} \biggl(2\lambda _{n-1} \bigl\langle Bx_{n} - Bx_{n-1},x^{*}-x_{n} \bigr\rangle + \mu \theta \frac{\lambda _{n-1}}{\lambda _{n}}\psi (x_{n},x_{n-1}) \biggr) = 0. \end{aligned}$$

Using the definition of \(E_{n}\), we have

$$\begin{aligned} \lim_{n\to \infty} E_{n} \bigl(x^{*} \bigr) = \lim_{n\to \infty}\psi \bigl(x^{*},x_{n} \bigr). \end{aligned}$$

That is, the limit of \(\psi (x^{*},x_{n})\) exists for each \(x^{*}\in (A+B)^{-1}(0)\).

We now prove that \(\{x_{n}\}\) converges weakly to an element of \((A + B)^{-1}(0)\). Let ρ be a weak cluster point of \(\{x_{n}\}\). Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup \rho \). We show that ρ \(\in (A + B)^{-1}(0)\).

From the definition of \(x_{n}\) in (3.1), we have

$$\begin{aligned} \frac{1}{\lambda _{n}}(Jx_{n} - Jx_{n+1}) + (Bx_{n+1} - Bx_{n})- \frac{\lambda _{n-1}}{\lambda _{n}}(Bx_{n} -Bx_{n-1}) \in (A + B)x_{n+1}. \end{aligned}$$
(3.18)

Since \(\Vert x_{n+1} - x_{n} \Vert \to 0\), the norm-to-norm uniform continuity of J on bounded sets, the Lipschitz continuity of B, and \(\lambda _{n}\to \lambda >0\) imply that the left-hand side of (3.18) converges strongly to 0, while \(x_{n_{k}+1}\rightharpoonup \rho \). Since, by Lemma 2.4, \(A + B\) is maximal monotone, its graph is demiclosed. Passing to the limit in (3.18) along the subsequence, we obtain that

$$\begin{aligned} 0\in (A + B) (\rho ). \end{aligned}$$

Next, we show that the whole sequence \(\{x_{n}\}\) converges weakly to ρ.

Suppose there exists \(\rho ^{\prime }\) such that \(x_{n_{j}}\rightharpoonup \rho ^{\prime }\) for some subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) with \(\rho ^{\prime } \neq \rho \). Then, we have

$$\begin{aligned} \psi (\rho,x_{n}) = \Vert \rho \Vert ^{2} - 2\langle \rho, Jx_{n}\rangle + \Vert x_{n} \Vert ^{2}, \end{aligned}$$

and

$$\begin{aligned} \psi \bigl(\rho ^{\prime },x_{n} \bigr) = \bigl\Vert \rho ^{\prime } \bigr\Vert ^{2} - 2 \bigl\langle \rho ^{\prime }, Jx_{n} \bigr\rangle + \Vert x_{n} \Vert ^{2}. \end{aligned}$$

Thus, we have

$$\begin{aligned} 2 \bigl\langle \rho ^{\prime } - \rho, Jx_{n} \bigr\rangle = \psi (\rho,x_{n}) - \psi \bigl(\rho ^{\prime },x_{n} \bigr) + \bigl\Vert \rho ^{\prime } \bigr\Vert ^{2} - \Vert \rho \Vert ^{2}. \end{aligned}$$

Hence, the limit \(\lim_{n\to \infty}\langle \rho ^{\prime } - \rho, Jx_{n}\rangle \) exists. Since J is weakly sequentially continuous, we have

$$\begin{aligned} \bigl\langle \rho ^{\prime } - \rho, J\rho \bigr\rangle = \lim _{k\to \infty} \bigl\langle \rho ^{\prime } - \rho, Jx_{n_{k}} \bigr\rangle = \lim_{j\to \infty} \bigl\langle \rho ^{\prime } - \rho, Jx_{n_{j}} \bigr\rangle = \bigl\langle \rho ^{\prime } - \rho, J\rho ^{\prime } \bigr\rangle . \end{aligned}$$

Using Lemma 2.5, we have that \(\rho ^{\prime } = \rho \). Hence \(\{x_{n}\}\) converges weakly to ρ. □
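
We note that in the model case \(\mathcal{E}=L_{p}\), \(1< p\leq 2\), both duality mappings appearing in (3.1) are available in closed form, namely \(Jx=\Vert x \Vert _{p}^{2-p} \vert x \vert ^{p-2}x\) and, since \(J^{-1}=J^{*}\), \(J^{-1}y=\Vert y \Vert _{q}^{2-q} \vert y \vert ^{q-2}y\) with \(\frac{1}{p}+\frac{1}{q}=1\); thus, besides the resolvent of A, each iteration of (3.1) requires only one evaluation of B and these explicit formulas.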

We now state Theorem 3.1 in Hilbert spaces.

Corollary 3.2

Let \(\mathcal{H}\) be a real Hilbert space. Let \(A: \mathcal{H}\to 2^{\mathcal{H}}\) be a maximal monotone operator and \(B:\mathcal{H} \to \mathcal{H}\) be monotone and Lipschitz. Choose \(x_{-1}, x_{0} \in \mathcal{H}, \lambda _{-1}, \lambda _{0} > 0 \). Let \(\{x_{n}\}\) be the sequence defined by

$$\begin{aligned} \textstyle\begin{cases} x_{n+1} = J_{\lambda _{n}}^{A}(x_{n} - \lambda _{n} Bx_{n} - \lambda _{n-1}(Bx_{n} - Bx_{n-1})), \quad n\geq 0, \\ \lambda _{n+1}:= \min \{\lambda _{n}, \frac{\theta \Vert x_{n+1} - x_{n} \Vert }{ \Vert Bx_{n+1} - Bx_{n} \Vert } \},\quad \theta \in (0,\frac{1}{2}).\end{cases}\displaystyle \end{aligned}$$

Suppose \((A+B)^{-1}(0) \neq \emptyset \). Then, the sequence \(\{x_{n}\}\) converges weakly to an element of \((A+B)^{-1}(0)\).
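
To fix ideas, a minimal numpy sketch of the iteration in Corollary 3.2 is given below; it assumes that the resolvent of A is available as a proximal-type map, and the function names, tolerance, and the usage hints in the trailing comments are illustrative rather than part of the corollary.

```python
import numpy as np

def adaptive_frb(resolvent_A, B, x_prev, x0, lam_prev, lam0, theta=0.4,
                 tol=1e-8, max_iter=10_000):
    """Self-adaptive forward-reflected-backward iteration of Corollary 3.2 (sketch).

    resolvent_A(z, lam) should return J_lam^A(z) = (I + lam*A)^{-1} z;
    B is the single-valued monotone Lipschitz operator; theta lies in (0, 1/2).
    """
    x_old, x = x_prev, x0
    lam_old, lam = lam_prev, lam0
    Bx_old, Bx = B(x_old), B(x)
    for _ in range(max_iter):
        z = x - lam * Bx - lam_old * (Bx - Bx_old)
        x_new = resolvent_A(z, lam)
        Bx_new = B(x_new)
        # lambda_{n+1} = min{lambda_n, theta*||x_{n+1}-x_n|| / ||Bx_{n+1}-Bx_n||}
        denom = np.linalg.norm(Bx_new - Bx)
        lam_new = lam if denom == 0 else min(lam, theta * np.linalg.norm(x_new - x) / denom)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x_old, x, Bx_old, Bx = x, x_new, Bx, Bx_new
        lam_old, lam = lam, lam_new
    return x

# Illustrative usage: A = normal cone of the nonnegative orthant, B affine and monotone
# resolvent_A = lambda z, lam: np.maximum(z, 0.0)
# M = ...  (with M + M.T positive semidefinite);  B = lambda x: M @ x
```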

4 Numerical examples in infinite-dimensional spaces

In this section, we compare Algorithm (3.1) with FBFSM and FRBSM introduced in [28] and [21], respectively. For easy referencing, we refer to FBFSM and FRBSM as TSENG and TAM, respectively. Numerical experiments were carried out in MATLAB R2015a. All programs were run on a 64-bit PC with an Intel(R) Core(TM) i7-3540M CPU @ 1.00 GHz, 1.19 GHz and 3 GB RAM. All figures were plotted using the loglog plot command.

Example 1

Let \(\mathcal{H} = L_{2}([0, 1])\), with the norm and inner product defined as

$$\begin{aligned} \Vert x \Vert _{2} = \biggl( \int ^{1}_{0} \bigl\vert x(t) \bigr\vert ^{2} \,dt \biggr)^{\frac{1}{2}} \quad\text{and}\quad \langle x, y\rangle = \int ^{1}_{0} x(t) y(t) \,dt,\quad \text{respectively.} \end{aligned}$$

Define the operator \(B: \mathcal{H} \to \mathcal{H}\) by

$$\begin{aligned} Bx(t) = \int ^{1}_{0} \biggl[ x(s) - \biggl( \frac{2tse^{t+s}}{e\sqrt{e^{2} - 1}} \biggr)\cos x(s) \biggr] \,ds + \frac{2te^{t}}{e\sqrt{e^{2} - 1}}, \quad x\in L_{2} \bigl([0, 1] \bigr), \end{aligned}$$

then, B is monotone and Lipschitz with Lipschitz constant \(L=2\). Let \(A: \mathcal{L}_{2}([0,1])\to \mathcal{L}_{2}([0,1])\) be defined by

$$\begin{aligned} Ax(t) = \max \bigl\{ x(t),0 \bigr\} , \end{aligned}$$

then, A is maximal monotone and for any \(r>0\), the resolvent, \(J_{r}^{A}: \mathcal{L}_{2}([0,1])\to \mathcal{L}_{2}([0,1])\), of A, is given by

$$\begin{aligned} J_{r}^{A}x(t)= \textstyle\begin{cases} x(t),& Ax(t)=0,\\ \frac {1}{1+r}x(t),& Ax(t)=x(t). \end{cases}\displaystyle \end{aligned}$$

Clearly,

$$\begin{aligned} 0\in (A+B)^{-1}(0). \end{aligned}$$

We show that \(x_{n}\rightharpoonup 0\). We recall that the sequence \(\{x_{n}\}\) converges weakly to 0 in \(\mathcal{L}_{2}([0,1])\) if and only if

$$\begin{aligned} \langle \varphi, x_{n}\rangle = \int ^{1}_{0}\varphi (t) x_{n}(t) \,dt \rightarrow 0 \quad\text{as } n\to \infty \end{aligned}$$

for any \(\varphi \in \mathcal{H}^{*}\). We conduct the experiment with various functions φ in \(\mathcal{L}_{2}([0,1])\). The integrals were approximated using the trapz and int commands in MATLAB over the interval \([0,1]\). The results of the experiment are displayed in Table 1 and Figs. 1, 2, 3, and 4.

Figure 1

Example 1 with \(\theta = 0.4\) and \(\lambda _{n} = 0.1250+(0.0156)n^{-1}\)

Figure 2

Example 1 with \(\theta = 0.4\) and \(\lambda _{n} = 0.01+(0.2)n^{-1}\)

Figure 3

Example 1 with \(\theta = 0.01\) and \(\lambda _{n} = 0.1250+(0.0156)n^{-1}\)

Figure 4

Example 1 with \(\theta = 0.01\) and \(\lambda _{n} = 0.01+(0.2)n^{-1}\)

Table 1 Computational Results for Example 1

Example 2

Let \(\mathcal{H} = L_{2}([0, 1])\), with the norm and inner product defined as

$$\begin{aligned} \Vert x \Vert _{2} = \biggl( \int ^{1}_{0} \bigl\vert x(t) \bigr\vert ^{2} \,dt \biggr)^{\frac{1}{2}} \quad\text{and}\quad \langle x, y\rangle = \int ^{1}_{0} x(t) y(t) \,dt, \quad\text{respectively. } \end{aligned}$$

We inherit the map A from Example 1 above, while the map B is defined by

$$\begin{aligned} Bx(t) = \frac{x(t)+ \vert x(t) \vert }{2}. \end{aligned}$$

Clearly, B is monotone and Lipschitz and

$$\begin{aligned} 0\in (A+B)^{-1}(0). \end{aligned}$$

We show that \(x_{n}\rightharpoonup 0\) just as in Example 1 above. The results of the experiment are displayed in Table 2 and Figs. 5, 6, 7, and 8.
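
For orientation, a possible numpy discretization of Example 2 is sketched below (the grid, starting points, initial step sizes, and the printed residual are illustrative choices, not the settings used to produce Table 2). Since \((A+B)x = 2\max \{x,0\}\), the quantity \(\Vert \max \{x_{n},0\} \Vert _{2}\) measures how far \(x_{n}\) is from solving the inclusion.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)                      # uniform grid on [0, 1]
dt = t[1] - t[0]
l2_norm = lambda f: np.sqrt(np.sum(f ** 2) * dt)    # discrete L_2([0,1]) norm

B = lambda x: np.maximum(x, 0.0)                                       # Bx(t) = (x(t) + |x(t)|)/2
resolvent_A = lambda x, lam: np.where(x <= 0.0, x, x / (1.0 + lam))    # J_lam^A from Example 1

theta = 0.4
lam_old, lam = 0.5, 0.5                                      # lambda_{-1}, lambda_0 (any positive values)
x_old, x = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)      # x_{-1}, x_0
Bx_old, Bx = B(x_old), B(x)

for _ in range(2000):
    z = x - lam * Bx - lam_old * (Bx - Bx_old)
    x_new = resolvent_A(z, lam)
    Bx_new = B(x_new)
    denom = l2_norm(Bx_new - Bx)
    lam_new = lam if denom == 0 else min(lam, theta * l2_norm(x_new - x) / denom)
    x_old, x, Bx_old, Bx = x, x_new, Bx, Bx_new
    lam_old, lam = lam, lam_new

print(l2_norm(np.maximum(x, 0.0)))   # residual ||max{x_n, 0}||_2; it should decrease toward 0
```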

Figure 5

Example 2 with \(\theta = 0.4\) and \(\lambda _{n} = 0.1250+(0.0156)n^{-1}\)

Figure 6

Example 2 with \(\theta = 0.4\) and \(\lambda _{n} = 0.01+(0.2)n^{-1}\)

Figure 7

Example 2 with \(\theta = 0.01\) and \(\lambda _{n} = 0.1250+(0.0156)n^{-1}\)

Figure 8

Example 2 with \(\theta = 0.01\) and \(\lambda _{n} = 0.01+(0.2)n^{-1}\)

Table 2 Computational Results for Example 2

Remark 1

From the results displayed in Tables 1 and 2, it is clear that the speed of convergence of Algorithm (3.1) depends heavily on the value of θ. For instance, Algorithm (3.1) converges faster as the value of θ moves closer to 0.5. Thus, if the value of θ is appropriately chosen, Algorithm (3.1) seems to have cheaper computations compared to its counterparts. On the other hand, Algorithm TAM depends on the step sizes \(\{\lambda _{n}\}\), while TSENG depends on λ. These algorithms converge faster when the step sizes are chosen very close to the upper bound of the admissible interval. Finally, we note that the run of TSENG in Table 1 was cut short due to the large number of iterations needed before the tolerance is reached.

5 Conclusion

In this work, we have proved the weak convergence of a one-step self-adaptive algorithm to a zero of the sum of two monotone operators in 2-uniformly convex and uniformly smooth Banach spaces. Numerical results were presented to illustrate how Algorithm (3.1) competes with some existing algorithms. Finally, our results generalize and complement some existing results in the literature.

Availability of data and materials

Data sharing is not applicable to this article.

References

  1. Abass, H.A., Aremu, K.O., Jolaoso, L.O., Mewomo, O.T.: An inertial forward–backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, Article ID 6 (2020)


  2. Alber, Y.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Lecture Notes in Pure and Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)


  3. Alber, Y., Guerre-Delabriere, S.: On the projections for fixed points problems. Analysis 21(1), 17–39 (2001)


  4. Alber, Y., Ryazantseva, I.: Nonlinear Ill Posed Problems of Monotone Type. Springer, London (2006)


  5. Aoyama, K., Kohsaka, F.: Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings. Fixed Point Theory Appl. 2014, Article ID 95 (2014)


  6. Attouch, H., Peypouquet, J., Redont, P.: Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 457, 1095–1117 (2018)


  7. Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Editura Academiei R.S.R, Bucharest (1976)


  8. Beauzamy, B.: Introduction to Banach Spaces and Their Geometry, 2nd edn. North-Holland Mathematics Studies, vol. 68, p. xv+338 North-Holland, Amsterdam (1985). ISBN 0-444-87878-5. Notas de Matematica [Mathematical Notes], 86


  9. Bredies, K.: A forward–backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Probl. 25(1), 015005 (2009)


  10. Bruck, R.: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 61, 159–164 (1977)


  11. Censor, Y., Elfving, T.: A multi-projection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)


  12. Chen, G.H.G., Rockafellar, R.T.: Convergence rates in forward–backward splitting. SIAM J. Optim. 7(2), 421–444 (1997)


  13. Chidume, C.E., Bello, A.U., Usman, B.: Iterative Algorithms for Zeros of Strongly Monotone Lipschitz Maps in Classical Banach Spaces p. 9. Springer, Berlin (2015). https://doi.org/10.1186/s40064-015-1044-1


  14. Chidume, C.E., Chidume, C.O., Bello, A.U.: An algorithm for zeros of generalized phi-strongly monotone and bounded maps in classical Banach spaces. Optimization (2015). https://doi.org/10.1080/02331934.2015.1074686

  15. Combettes, P., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)


  16. Davis, D., Yin, W.T.: A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 25, 829–858 (2017)


  17. Hieu, D.V., Anh, P.K., Muu, L.D.: Modified Forward Reflected Backward Splitting Method for Variational Inclusions. Springer, Berlin (2020). https://doi.org/10.1007/s10288-020-00440-3


  18. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2003)


  19. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)


  20. Liu, L.: Strong convergence of a modified inertial forward–backward splitting algorithm for an inclusion problem. J. Appl. Numer. Optim. 2, 373–385 (2020)


  21. Malitsky, Y., Tam, M.K.: A forward–backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 30(2), 1451–1472 (2020)


  22. Moudafi, A., Thera, M.: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94, 425–448 (1997)


  23. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383–390 (1979)


  24. Peaceman, D.H., Rachford, H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)


  25. Reich, S.: A weak convergence theorem for the alternating method with Bregman distances. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes Pure Appl. Math., vol. 178, pp. 313–318. Dekker, New York (1996)


  26. Shehu, Y.: Convergence results of forward–backward algorithm for sum of monotone operator in Banach spaces. Results Math. 74, 138 (2019)


  27. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)


  28. Tseng, P.: A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)


  29. Tuyen, T.M., Promkam, R., Sunthrayuth, P.: Strong convergence of a generalized forward–backward splitting method in reflexive Banach spaces. Optimization 71(6), 1483–1508 (2020). https://doi.org/10.1080/02331934.2020.1812607


  30. Wang, Y., Xu, H.K.: Strong convergence for the proximal-gradient method. J. Nonlinear Convex Anal. 15(3), 581–593 (2014)



Acknowledgements

The authors appreciate the support of their institution and AfDB.

Funding

This work is supported by AfDB Research Grant Funds to AUST.

Author information



Contributions

The problem was formulated by AUB, and the computations and proofs were carried out jointly by CEC and MA. All authors have read and agreed to the final manuscript.

Corresponding author

Correspondence to Abdulmalik U. Bello.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.




Cite this article

Bello, A.U., Chidume, C.E. & Alka, M. Self-adaptive forward–backward splitting algorithm for the sum of two monotone operators in Banach spaces. Fixed Point Theory Algorithms Sci Eng 2022, 25 (2022). https://doi.org/10.1186/s13663-022-00732-9

