 Research
 Open Access
Self-adaptive forward–backward splitting algorithm for the sum of two monotone operators in Banach spaces
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2022, Article number: 25 (2022)
Abstract
In this work, we prove the weak convergence of a one-step self-adaptive algorithm to a solution of the sum of two monotone operators in 2-uniformly convex and uniformly smooth real Banach spaces. We give numerical examples in infinite-dimensional spaces to compare our result with some existing algorithms. Finally, our results extend and complement several existing results in the literature.
1 Introduction
Let \(\mathcal{E}\) be a real Banach space and \(\mathcal{E}^{*}\) be its topological dual. A problem of significant interest in nonlinear analysis is to find
with \((A+B)^{-1}(0)\neq \emptyset \), where \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) is a maximal monotone operator and \(B: \mathcal{E} \to {\mathcal{E}^{*}}\) is a monotone and Lipschitz map. Interest in problem (1.1) stems from its diverse applications in different areas of nonlinear analysis such as optimization, variational inequalities, split feasibility problems, and saddle-point problems, with applications to signal and image processing and machine learning; see, for instance, Attouch et al. [6], Bruck [10], Censor and Elfving [11], Chen and Rockafellar [12], Combettes and Wajs [15], Davis and Yin [16], Lions and Mercier [19], Moudafi and Thera [22], Passty [23], Peaceman and Rachford [24] for more treatments of problem (1.1). Consider, for instance, the split-feasibility problem, introduced by Censor and Elfving [11], which is to find
where \(\mathcal{C}_{1}\subset {\mathcal{H}_{1}}\), \(\mathcal{C}_{2}\subset \mathcal{H}_{2}\) are nonempty, closed, and convex subsets of the Hilbert spaces \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, and \(T:\mathcal{H}_{1}\to \mathcal{H}_{2}\) is a bounded linear map. Then, (1.2) can be transformed into the monotone inclusion
By setting
where \(N_{\mathcal{C}_{1}}(x)\) is the normal cone of \(\mathcal{C}_{1}\) at x and \(T^{*}\) is the adjoint operator of T, (1.2) can be reformulated as (1.1).
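To make the reduction concrete, here is a small finite-dimensional sketch (the boxes standing in for \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\) and the matrix T below are our own illustrative choices, not from the paper): since the resolvent of the normal cone \(N_{\mathcal{C}_{1}}\) is the metric projection onto \(\mathcal{C}_{1}\), the resulting inclusion can be solved by projected forward–backward steps.

```python
import numpy as np

# Toy split-feasibility instance (illustrative data only):
# C1 = [0,1]^5, C2 = [-1,1]^4, T a random matrix.
# A = N_{C1}, whose resolvent is the projection onto C1;
# B = T^T (I - P_{C2}) T, which is monotone and ||T||^2-Lipschitz.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5))

P1 = lambda z: np.clip(z, 0.0, 1.0)            # projection onto C1
P2 = lambda z: np.clip(z, -1.0, 1.0)           # projection onto C2
B = lambda x: T.T @ (T @ x - P2(T @ x))        # forward (single-valued) operator

lam = 0.9 / np.linalg.norm(T, 2) ** 2          # step below 1/L, with L = ||T||^2
x = rng.standard_normal(5)
for _ in range(20000):
    x = P1(x - lam * B(x))                     # projected forward-backward step

residual = np.linalg.norm(T @ x - P2(T @ x))   # distance of Tx from C2
```

Here `residual` measures how far \(Tx\) is from \(\mathcal{C}_{2}\); it tends to 0 because \(x=0\) is feasible for this toy instance.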
There are several methods of approximating solutions of (1.1), see, e.g., [1, 6, 9, 10, 16, 19, 20, 23, 30]. One of the most efficient methods is the forward–backward splitting method introduced by Passty [23], and Lions and Mercier [19]. The method generates a sequence \(\{x_{n}\}\) iteratively defined by
They proved that if the operator B is μ-cocoercive, that is, there exists \(\mu >0\) such that
and \(\liminf \lambda _{n}>0\) with \(\limsup \lambda _{n}<2\mu \), then the sequence \(\{x_{n}\}\) generated by (1.3) converges weakly to a solution of (1.1). The cocoercivity requirement imposed on the operator B limits the class of operators for which the forward–backward splitting method is applicable. In fact, there are some important problems in applications where the forward–backward splitting method fails to converge due to the lack of cocoercivity of one of the operators. For instance, consider the first-order optimality condition for saddle-point problems of the form
where \(f_{1}:\mathcal{H}_{1}\to \mathbb{R}\cup \{+\infty \}\) and \(f_{2}:\mathcal{H}_{2}\to \mathbb{R}\cup \{+\infty \}\) are proper convex and lower semicontinuous functions and \(\Phi:\mathcal{H}_{1}\times \mathcal{H}_{2}\to \mathbb{R}\) is a smooth convex–concave function. Then, (1.4) can be expressed as
This can be seen as (1.1) with
Problem (1.4) arises naturally in different areas of application such as statistics, machine learning, and optimization to mention but a few. Although the operator B, in this case, is Lipschitz whenever ∇Φ is, B is never cocoercive even when Φ is bilinear. Thus, the development of an iterative method in which the cocoercivity of B is dispensed with is desirable.
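The failure of cocoercivity for a bilinear coupling can be seen numerically (a sketch with an arbitrary matrix M of our own choosing): for \(\Phi (x,y)=\langle Mx,y\rangle \), the operator \(B(x,y)=(M^{T}y,-Mx)\) is linear and skew, so \(\langle Bu-Bv, u-v\rangle =0\) while \(Bu\neq Bv\) in general, and no \(\mu >0\) can satisfy the cocoercivity inequality.

```python
import numpy as np

# B(x, y) = (M^T y, -M x) is the saddle operator of Phi(x,y) = <Mx, y>.
# It is monotone and Lipschitz but NOT cocoercive: by skew symmetry
# <Bu - Bv, u - v> = 0, yet ||Bu - Bv|| > 0, so the inequality
# <Bu - Bv, u - v> >= mu * ||Bu - Bv||^2 fails for every mu > 0.

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

def B(u):
    x, y = u[:3], u[3:]
    return np.concatenate([M.T @ y, -M @ x])

u, v = rng.standard_normal(6), rng.standard_normal(6)
inner = (B(u) - B(v)) @ (u - v)        # ~0 up to rounding (skew symmetry)
gap = np.linalg.norm(B(u) - B(v))      # strictly positive
```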
In [28], Tseng introduced the forward–backward–forward splitting method (FBFSM) for approximating solutions of (1.1). The method generates a sequence \(\{x_{n}\}\) iteratively defined by
with \(\lambda _{n}\in (0,\frac{1}{L})\), where L is the Lipschitz constant of B. Tseng was able to dispense with the cocoercivity of the operator B at the expense of evaluating B twice per iteration. Recently, Malitsky and Tam [21] introduced the forward–reflected–backward splitting method (FRBSM) generated iteratively by
with \(\lambda _{n}\in (\epsilon,\frac{1-2\epsilon}{2L})\) and \(\epsilon >0\). The forward–reflected–backward splitting method requires only one evaluation of the operator B per iteration, thus improving on the computational cost of the forward–backward–forward method, which requires two evaluations of the operator B per iteration. It is worth noting that the step sizes in each of the algorithms introduced by Tseng, and by Malitsky and Tam, heavily depend on prior knowledge of the Lipschitz constant of one of the operators, which may sometimes be difficult to compute.
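A minimal sketch of the forward–reflected–backward step in \(\mathbb{R}^{6}\) (the operators and constants are our own toy choices, not taken from [21]): A is the normal cone of a box, so its resolvent is a projection, and B is a skew matrix plus a small multiple of the identity, hence monotone and Lipschitz but not cocoercive; strong monotonicity of B makes \(x^{*}=0\) the unique zero of \(A+B\).

```python
import numpy as np

# Forward-reflected-backward (one B evaluation per iteration) on a toy
# problem: A = normal cone of C = [-1,1]^6 (resolvent = projection onto C),
# B = skew part + 0.1*I. Unique zero of A + B is 0.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
S = np.block([[np.zeros((3, 3)), M.T], [-M, np.zeros((3, 3))]])  # skew matrix
B = lambda w: S @ w + 0.1 * w
L = np.linalg.norm(S, 2) + 0.1                 # Lipschitz constant of B
lam = 0.4 / L                                  # fixed step in (0, 1/(2L))
proj = lambda w: np.clip(w, -1.0, 1.0)         # resolvent of lam * A

x_prev = rng.standard_normal(6)
x = rng.standard_normal(6)
for _ in range(3000):
    # x_{n+1} = J_{lam A}( x_n - lam*B(x_n) - lam*(B(x_n) - B(x_{n-1})) )
    x, x_prev = proj(x - lam * B(x) - lam * (B(x) - B(x_prev))), x
```

After the loop, `x` is close to the unique solution 0, even though B is not cocoercive.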
To overcome this difficulty, very recently, Hieu et al. [17] introduced the modified forward–reflected–backward splitting method (MFRBSM) generated iteratively by
with
They proved the weak convergence of Algorithm (1.7) to a solution of (1.1). It is worth noting that the variable step sizes here do not require prior knowledge of the Lipschitz constant.
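A step-size rule of this self-adaptive type (stated here only as an illustration of the idea; the authoritative formula of [17] is the one displayed above) caps the next step by a local ratio of iterate change to operator change, so no global Lipschitz constant is ever needed:

```python
import numpy as np

# Illustrative self-adaptive step-size rule: shrink lambda whenever the
# local estimate theta * ||x_new - x_old|| / ||B(x_new) - B(x_old)|| is
# smaller than the current step. For an L-Lipschitz B, the returned step
# is always >= min(lambda_0, theta / L) > 0, so the (nonincreasing)
# sequence of steps stays bounded away from zero.

def next_step(lam, theta, x_new, x_old, Bx_new, Bx_old):
    d = np.linalg.norm(Bx_new - Bx_old)
    if d == 0.0:                  # operator did not change: keep the step
        return lam
    return min(lam, theta * np.linalg.norm(x_new - x_old) / d)
```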
All the results mentioned above are obtained in the setting of Hilbert spaces. There are few results regarding the forward–backward method and its variants in Banach spaces, see, e.g., [26, 29]. One of the difficulties, perhaps, is the fact that the operators A and B go from the Banach space \(\mathcal{E}\) to its dual \(\mathcal{E}^{*}\), so the tools available in Hilbert spaces are not readily available in general Banach spaces. Moreover, the Lipschitz constant is often unknown in practice; in nonlinear problems it may be difficult to approximate. In such cases an algorithm with a line search is often used (see, e.g., [26]). However, a line-search algorithm needs an inner loop with some stopping criterion over the iterations, and this task may be time consuming. In this paper, we prove the weak convergence of the forward–reflected–backward splitting method in 2-uniformly convex and uniformly smooth real Banach spaces with variable step sizes that do not depend on the Lipschitz constant and without any line-search procedure. Our results extend, unify, and complement many existing results in the literature.
2 Preliminaries
In this section, we give some basic definitions and lemmas that will be used in the proof of our main results. Let \(\mathcal{E}\) be a real normed linear space. Let \(S_{\mathcal{E}}\) and \(B_{\mathcal{E}}\) denote the unit sphere and the closed unit ball of \(\mathcal{E}\), respectively. The modulus of smoothness of \(\mathcal{E}\), \(\rho _{\mathcal{E}}: [0,\infty ) \to [0,\infty )\) is defined by
The space \(\mathcal{E}\) is said to be smooth if
exists for all \(x,y\in S_{\mathcal{E}}\). The space \(\mathcal{E}\) is also said to be uniformly smooth if the limit in (2.1) converges uniformly for all \(x,y \in S_{\mathcal{E}}\); and \(\mathcal{E}\) is said to be 2-uniformly smooth if there exists a constant \(c>0\) such that \(\rho _{\mathcal{E}}(t) \leq ct^{2}\). It is well known that every 2-uniformly smooth space is uniformly smooth. A real normed space \(\mathcal{E}\) is said to be strictly convex if
\(\mathcal{E}\) is said to be uniformly convex if \(\delta _{\mathcal{E}}(\epsilon )>0\) for all \(\epsilon \in (0,2]\), where \(\delta _{\mathcal{E}}\) is the modulus of convexity of \(\mathcal{E}\) defined by
for all \(\epsilon \in (0,2]\). The space \(\mathcal{E}\) is said to be 2-uniformly convex if there exists \(c>0\) such that \(\delta _{\mathcal{E}}(\epsilon ) \geq c\epsilon ^{2}\) for all \(\epsilon \in (0,2]\). It is obvious that every 2-uniformly convex Banach space is uniformly convex. It is known that all Hilbert spaces are uniformly smooth and 2-uniformly convex. It is also known that the Lebesgue spaces \(L_{p}\) are uniformly smooth for \(1< p< \infty \), and 2-uniformly convex whenever \(1< p\leq 2\) (see [8]).
Let \(\mathcal{E}\) be a real normed space. The normalized duality mapping of \(\mathcal{E}\) into \(\mathcal{E}^{*}\) is defined by
for all \(x\in \mathcal{E}\). The normalized duality mapping J has the following properties (see, e.g., [27]):

if \(\mathcal{E}\) is reflexive and strictly convex with a strictly convex dual space \(\mathcal{E}^{*}\), then J is a single-valued, one-to-one, and onto mapping. In this case, we can define the single-valued mapping \(J^{-1}: \mathcal{E}^{*} \to \mathcal{E}\) and we have \(J^{-1} =J^{*}\), where \(J^{*}\) is the normalized duality mapping on \(\mathcal{E}^{*}\);

if \(\mathcal{E}\) is uniformly smooth, then J is norm-to-norm uniformly continuous on each bounded subset of \(\mathcal{E}\).
Definition 2.1
Let \(\mathcal{E}\) be a real normed space. A map \(A:\mathcal{E}\rightarrow 2^{\mathcal{E}^{*}}\) is called monotone if for each \(x,y\in \mathcal{E}\),
If A is single valued, the map \(A:\mathcal{E} \rightarrow \mathcal{E}^{*}\) is called monotone if
A multi-valued monotone operator \(A: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) is said to be maximal monotone if \(A = B\) whenever \(B: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) is monotone and \(G(A) \subset G(B)\), where \(G(A) = \{(x,x^{*}): x^{*} \in Ax\}\) is the graph of A.
Let \(\mathcal{E}\) be a real reflexive, strictly convex, and smooth Banach space and let \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) be a maximal monotone operator. Then, for each \(r>0\) the resolvent of A, \(J_{r}^{A}:\mathcal{E}\to \mathcal{E}\) is defined by
where J is the normalized duality mapping on \(\mathcal{E}\). It is easy to show that \(A^{-1}(0) = F(J_{r}^{A})\) for all \(r>0\), where \(F(J_{r}^{A})\) denotes the set of fixed points of \(J_{r}^{A}\). Let \(\mathcal{E}\) be a smooth real Banach space with dual \(\mathcal{E}^{*}\). The functional \(\psi: \mathcal{E}\times \mathcal{E} \to \mathbb{R}\) defined by
where J is the normalized duality mapping on \(\mathcal{E}\), will play a central role in what follows. It was introduced by Alber and has been studied by Alber [2], Alber and Guerre-Delabriere [3], Kamimura and Takahashi [18], Reich [25], Chidume et al. [13, 14], and a host of other authors.
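Recalling the standard expression \(\psi (x,y)=\|x\|^{2}-2\langle x,Jy\rangle +\|y\|^{2}\), the following sketch realizes J and ψ in \(\mathbb{R}^{n}\) equipped with the p-norm, a finite-dimensional stand-in for \(L_{p}\) (for \(1<p\leq 2\) this space is 2-uniformly convex, matching our setting), and checks the defining identities \(\langle Jx,x\rangle =\|x\|_{p}^{2}\) and \(\|Jx\|_{q}=\|x\|_{p}\).

```python
import numpy as np

# Normalized duality map of (R^n, ||.||_p):
# (Jx)_i = ||x||_p^{2-p} * |x_i|^{p-1} * sign(x_i);  J = identity for p = 2.
p = 1.5
q = p / (p - 1)                                   # conjugate exponent

def J(x):
    nx = np.linalg.norm(x, p)
    return nx ** (2 - p) * np.abs(x) ** (p - 1) * np.sign(x) if nx > 0 else 0 * x

def psi(x, y):                                    # Alber's Lyapunov functional
    return np.linalg.norm(x, p) ** 2 - 2 * (x @ J(y)) + np.linalg.norm(y, p) ** 2

x = np.array([1.0, -2.0, 0.5])
# defining identities: <Jx, x> = ||x||_p^2, ||Jx||_q = ||x||_p, psi(x, x) = 0
```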
Lemma 2.2
Let \(\mathcal{E}\) be a real uniformly convex, smooth Banach space. Then, the following identities hold:

(i)
\(\psi (x,y) = \psi (x,z) + \psi (z,y) + 2\langle x-z, Jz - Jy\rangle, \forall x,y,z \in \mathcal{E}\).

(ii)
\(\psi (x,y) + \psi (y,x) = 2\langle x-y, Jx - Jy\rangle, \forall x,y \in \mathcal{E}\).
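Both identities can be checked directly from the expression \(\psi (x,y)=\|x\|^{2}-2\langle x,Jy\rangle +\|y\|^{2}\) together with \(\langle z,Jz\rangle =\|z\|^{2}\); for instance, for (i):

```latex
\begin{aligned}
\psi (x,z) + \psi (z,y) + 2\langle x-z,\, Jz - Jy\rangle
 ={}& \|x\|^{2} - 2\langle x, Jz\rangle + \|z\|^{2}
    + \|z\|^{2} - 2\langle z, Jy\rangle + \|y\|^{2} \\
 &{}+ 2\langle x, Jz\rangle - 2\langle x, Jy\rangle
    - 2\langle z, Jz\rangle + 2\langle z, Jy\rangle \\
 ={}& \|x\|^{2} - 2\langle x, Jy\rangle + \|y\|^{2} = \psi (x,y),
\end{aligned}
```

since \(2\|z\|^{2} - 2\langle z, Jz\rangle = 0\). Identity (ii) follows in the same way.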
Lemma 2.3
([5])
Let \(\mathcal{E}\) be a real 2-uniformly convex Banach space. Then, there exists \(\mu \geq 1\) such that
Lemma 2.4
([7])
Let \(A: \mathcal{E}\to 2^{\mathcal{E}^{*}}\) be a maximal monotone mapping and \(B: \mathcal{E} \to \mathcal{E}^{*}\) be a Lipschitz continuous and monotone mapping. Then, the mapping \(A+B\) is maximal monotone.
Lemma 2.5
([4])
Let \(\mathcal{E}\) be a uniformly convex Banach space. Then, the normalized duality mapping, J, is uniformly monotone on every bounded set. That is, for every \(R>0\) and arbitrary \(x,y\in \mathcal{E}\) with \(\|x\|\leq R\) and \(\|y\| \leq R\) there exists a real nonnegative and continuous function \(\psi _{R}:[0,\infty )\to [0, \infty )\) such that \(\psi _{R}(t)>0\) for \(t>0\), \(\psi _{R}(0)=0\), and
Lemma 2.6
([18])
Let \(\mathcal{E}\) be a uniformly convex and smooth Banach space, and \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences of \(\mathcal{E}\). If \(\lim_{n\to \infty}\psi (x_{n},y_{n}) = 0\) and either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded, then \(\lim_{n\to \infty}\|x_{n} - y_{n}\|=0\).
3 Main results
In this section, we state and prove a weak convergence result for the modified forward–reflected–backward splitting method in a 2-uniformly convex and uniformly smooth real Banach space. The method does not require prior knowledge or an estimate of the Lipschitz constant. In the following, we assume that the solution set \((A + B)^{-1}(0)\) of problem (1.1) is nonempty.
Theorem 3.1
Let \(\mathcal{E}\) be a real 2-uniformly convex and uniformly smooth Banach space. Let \(A: \mathcal{E} \to 2^{\mathcal{E}^{*}}\) be a maximal monotone operator and \(B: \mathcal{E} \to \mathcal{E}^{*}\) be monotone and Lipschitz. Let \(x_{-1}, x_{0}\in \mathcal{E}\) be arbitrary and \(\lambda _{-1},\lambda _{0} >0\). Define the sequence \(\{x_{n}\}\) iteratively by
with
Suppose that \((A + B)^{-1}(0) \neq \emptyset \) and that the duality mapping is weakly sequentially continuous. Then, the sequence \(\{x_{n}\}\) generated by (3.1) converges weakly to a solution of (1.1).
Proof
We first show that the sequence \(\{x_{n}\}\) is bounded. Let \(x^{*}\in (A+B)^{-1}(0)\), so that
From (3.1), we have that
Using (3.2) and (3.3) and the monotonicity of A, we obtain
By Lemma 2.2(i), we have
Also,
and
Substituting (3.5), (3.6), and (3.7) into (3.4) we have:
Using the monotonicity of B on the last term of equation (3.8) and rearranging, we have
Using the definition of \(\lambda _{n} \) and Lemma 2.3, we have
Substituting (3.10) into (3.9), we have
Rearranging the above inequality, we obtain
Now, define
Using the definition of \(E_{n}(x^{*})\) in (3.12), we have
Let \(\delta \in (0,1-2\mu \theta )\) be fixed. Since \(\lambda _{n} \to \lambda > 0\), we derive
Thus, there exists \(n_{1} \geq 1\) such that
It follows from (3.14) and (3.15) that
Therefore, the sequence \(\{E_{n}\}_{n\geq n_{1}}\) is nonincreasing.
Now, from the definition of \(E_{n}\) and \(\lambda _{n}\), for each \(n \geq n_{1}\), we see that
Thus, the limit \(\lim_{n\to \infty}E_{n}\) exists.
Also, the boundedness of \(\{\psi (x^{*}, x_{n})\}\) implies that \(\{x_{n}\}\) is bounded. Moreover, from (3.16), we have, by telescoping, that
That is,
Hence, the limit \(\lim_{n\to \infty}\psi (x_{n+1}, x_{n})\) exists. Since B is Lipschitz continuous, \(\{x_{n}\}\) is bounded, and \(\lambda _{n} \to \lambda > 0\), it follows from (3.17) and Lemma 2.6 that
Using the definition of \(E_{n}\), we have
That is, the limit of \(\psi (x^{*},x_{n})\) exists for each \(x^{*}\in (A+B)^{1}(0)\).
We now prove that \(\{x_{n}\}\) converges weakly to an element of \((A + B)^{-1}(0)\). Let ρ be a weak cluster point of \(\{x_{n}\}\). Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup \rho \). We show that \(\rho \in (A + B)^{-1}(0)\).
From the definition of \(x_{n}\) in (3.1), we have
Since, by Lemma 2.4, \(A + B\) is maximal monotone, its graph is demiclosed. Now, passing to the limit in (3.18), we obtain that
Next, we show that the whole sequence \(\{x_{n}\}\) converges weakly to ρ.
Suppose there exists \(\rho ^{\prime }\) such that \(x_{n_{j}}\rightharpoonup \rho ^{\prime }\) for some subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) with \(\rho ^{\prime } \neq \rho \). Then, we have
and
Thus, we have
Hence, the limit \(\lim_{n\to \infty}\langle \rho ^{\prime } - \rho, Jx_{n}\rangle \) exists. Since J is weakly sequentially continuous, we have
Using Lemma 2.5, we have that \(\rho ^{\prime } = \rho \). Hence \(\{x_{n}\}\) converges weakly to ρ. □
We now state Theorem 3.1 in Hilbert spaces.
Corollary 3.2
Let \(\mathcal{H}\) be a real Hilbert space. Let \(A: \mathcal{H}\to 2^{\mathcal{H}}\) be a maximal monotone operator and \(B:\mathcal{H} \to \mathcal{H}\) be monotone and Lipschitz. Choose \(x_{-1}, x_{0} \in \mathcal{H}\) and \(\lambda _{-1}, \lambda _{0} > 0 \). Let \(\{x_{n}\}\) be the sequence defined by
Suppose \((A+B)^{-1}(0) \neq \emptyset \). Then, the sequence \(\{x_{n}\}\) converges weakly to an element of \((A+B)^{-1}(0)\).
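To illustrate the corollary, here is a hedged sketch of a self-adaptive forward–reflected–backward iteration in \(\mathbb{R}^{6}\). The reflected step and the step-size update follow the MFRBSM pattern recalled in the Introduction; the authoritative update is the displayed recursion (3.1), and all problem data below are our own toy choices.

```python
import numpy as np

# Toy problem: A = normal cone of the box [-1,1]^6 (resolvent = projection),
# B = skew + 0.1*I (monotone, Lipschitz, NOT cocoercive); the unique zero of
# A + B is 0. No Lipschitz constant of B is used anywhere below.
rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
S = np.block([[np.zeros((3, 3)), M.T], [-M, np.zeros((3, 3))]])
B = lambda w: S @ w + 0.1 * w
proj = lambda w: np.clip(w, -1.0, 1.0)

theta = 0.4                        # step-size parameter
lam_prev, lam = 1.0, 1.0           # lambda_{-1}, lambda_0 > 0, arbitrary
x_prev = rng.standard_normal(6)    # x_{-1}
x = rng.standard_normal(6)         # x_0

for _ in range(4000):
    # reflected forward step, then backward (resolvent) step
    x_new = proj(x - lam * B(x) - lam_prev * (B(x) - B(x_prev)))
    # self-adaptive step size: local ratio replaces the Lipschitz constant
    d = np.linalg.norm(B(x_new) - B(x))
    lam_next = lam if d == 0 else min(lam, theta * np.linalg.norm(x_new - x) / d)
    x_prev, x, lam_prev, lam = x, x_new, lam, lam_next
```

After the loop, `x` is close to the unique solution 0, and `lam` has settled at a positive value without any knowledge of the Lipschitz constant of B.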
4 Numerical examples in infinitedimensional spaces
In this section, we compare Algorithm (3.1) with FBFSM and FRBSM introduced in [28] and [21], respectively. For easy referencing, we term FBFSM and FRBSM as TSENG and TAM, respectively. Numerical experiments were carried out in MATLAB R2015a. All programs were run on a 64-bit OS PC with an Intel(R) Core(TM) i7-3540M CPU @ 1.00 GHz, 1.19 GHz and 3 GB RAM. All figures were plotted using the log-log plot command.
Example 1
Let \(\mathcal{H} = L_{2}([0, 1])\), with the norm and inner product defined as
Define the operator \(B: \mathcal{H} \to \mathcal{H}\) by
then, B is monotone and Lipschitz with Lipschitz constant \(L=2\). Let \(A: L_{2}([0,1])\to L_{2}([0,1])\) be defined by
then, A is maximal monotone and, for any \(r>0\), the resolvent \(J_{r}^{A}: L_{2}([0,1])\to L_{2}([0,1])\) of A is given by
Clearly,
We show that \(x_{n}\rightharpoonup 0\). We recall that the sequence \(\{x_{n}\}\) converges weakly to 0 in \(L_{2}([0,1])\) if and only if
for any \(\psi \in \mathcal{H}^{*}\). We conduct the experiment with various functions ψ in \(L_{2}([0,1])\). The integrals were approximated using the \(trapz\) and \(int\) commands in MATLAB over the interval \([0,1]\). The results of the experiment are displayed in Table 1 and Figs. 1, 2, 3, and 4.
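The check itself is easy to reproduce (a sketch: the grid, the test function, and the sample sequence \(x_{n}(t)=\sin (n\pi t)\) below are our own illustrative choices, not the iterates of Example 1): on a discretization of \([0,1]\), approximate \(\langle x_{n},\psi \rangle \) by the trapezoidal rule and watch it tend to 0 while \(\|x_{n}\|^{2}\) does not, which is exactly weak but not strong convergence.

```python
import numpy as np

# Weak-convergence check in (discretized) L_2([0,1]): x_n -> 0 weakly iff
# <x_n, psi> -> 0 for every test function psi. We use x_n(t) = sin(n*pi*t),
# which converges weakly (Riemann-Lebesgue) but not strongly: ||x_n||^2 = 1/2.
t = np.linspace(0.0, 1.0, 2001)
h = t[1] - t[0]
inner = lambda f, g: float(((f * g)[:-1] + (f * g)[1:]).sum()) * h / 2  # trapezoid rule

psi = np.exp(t)                    # one test function
for n in (1, 10, 100):
    x_n = np.sin(n * np.pi * t)
    # inner(x_n, psi) shrinks with n; inner(x_n, x_n) stays near 1/2
```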
Example 2
Let \(\mathcal{H} = L_{2}([0, 1])\), with the norm and inner product as in Example 1. We inherit the map A from Example 1 above, while the map B is defined by
Clearly, B is monotone and Lipschitz and
We show that \(x_{n}\rightharpoonup 0\) just as in Example 1 above. The results of the experiment are displayed in Table 2 and Figs. 5, 6, 7, and 8.
Remark 1
From the results displayed in Tables 1 and 2, it is clear that the speed of convergence of Algorithm (3.1) heavily depends on the value of θ. For instance, Algorithm (3.1) converges faster as the value of θ moves closer to 0.5. Thus, if the value of θ is appropriately chosen, Algorithm (3.1) seems to be computationally cheaper than its counterparts. On the other hand, the Algorithm TAM depends on the step sizes \(\{\lambda _{n}\}\), while that of TSENG depends on λ. These algorithms converge faster when the step sizes are chosen very close to the upper bound of the interval of choice. Finally, we note that the number of iterations for TSENG in Table 1 was cut short due to the large number of iterations needed before the tolerance is reached.
5 Conclusion
In this work, we have proved the weak convergence of a one-step self-adaptive algorithm to a solution of the sum of two monotone operators in 2-uniformly convex and uniformly smooth Banach spaces. Numerical results were presented to illustrate how Algorithm (3.1) competes with some existing algorithms. Finally, our results generalize and complement some existing results in the literature.
Availability of data and materials
Data sharing is not applicable to this article.
References
Abass, H.A., Aremu, K.O., Jolaoso, L.O., Mewomo, O.T.: An inertial forward–backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, Article ID 6 (2020)
Alber, Y.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Lecture Notes in Pure and Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)
Alber, Y., Guerre-Delabriere, S.: On the projections for fixed points problems. Analysis 21(1), 17–39 (2001)
Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, London (2006)
Aoyama, K., Kohsaka, F.: Strongly relatively nonexpansive sequences generated by firmly nonexpansivelike mappings. Fixed Point Theory Appl. 2014, Article ID 95 (2014)
Attouch, H., Peypouquet, J., Redont, P.: Backward–forward algorithms for structured monotone inclusions in Hilbert spaces. J. Math. Anal. Appl. 457, 1095–1117 (2018)
Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Editura Academiei R.S.R, Bucharest (1976)
Beauzamy, B.: Introduction to Banach Spaces and Their Geometry, 2nd edn. North-Holland Mathematics Studies, vol. 68. North-Holland, Amsterdam (1985)
Bredies, K.: A forward–backward splitting algorithm for the minimization of nonsmooth convex functionals in Banach space. Inverse Probl. 25(1), 015005 (2009)
Bruck, R.: On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 61, 159–164 (1977)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Chen, G.H.G., Rockafellar, R.T.: Convergence rates in forward–backward splitting. SIAM J. Optim. 7(2), 421–444 (1997)
Chidume, C.E., Bello, A.U., Usman, B.: Iterative algorithms for zeros of strongly monotone Lipschitz maps in classical Banach spaces. Springer, Berlin (2015). https://doi.org/10.1186/s40064-015-1044-1
Chidume, C.E., Chidume, C.O., Bello, A.U.: An algorithm for zeros of generalized phi-strongly monotone and bounded maps in classical Banach spaces. Optimization (2015). https://doi.org/10.1080/02331934.2015.1074686
Combettes, P., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Davis, D., Yin, W.T.: A threeoperator splitting scheme and its optimization applications. SetValued Var. Anal. 25, 829–858 (2017)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified forward–reflected–backward splitting method for variational inclusions. Springer, Berlin (2020). https://doi.org/10.1007/s10288-020-00440-3
Kamimura, S., Takahashi, W.: Strong convergence of a proximaltype algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2003)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Liu, L.: Strong convergence of a modified inertial forward–backward splitting algorithm for an inclusion problem. J. Appl. Numer. Optim. 2, 373–385 (2020)
Malitsky, Y., Tam, M.K.: A forward–backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 30(2), 1451–1472 (2020)
Moudafi, A., Thera, M.: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94, 425–448 (1997)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383–390 (1979)
Peaceman, D.H., Rachford, H.H.: The numerical solutions of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)
Reich, S.: A weak convergence theorem for the alternating method with Bregman distances. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes Pure Appl. Math., vol. 178, pp. 313–318. Dekker, New York (1996)
Shehu, Y.: Convergence results of forward–backward algorithms for sum of monotone operators in Banach spaces. Results Math. 74, 138 (2019)
Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Tseng, P.: A modified forward–backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Tuyen, T.M., Promkam, R., Sunthrayuth, P.: Strong convergence of a generalized forward–backward splitting method in reflexive Banach spaces. Optimization 71(6), 1483–1508 (2022). https://doi.org/10.1080/02331934.2020.1812607
Wang, Y., Xu, H.K.: Strong convergence for the proximalgradient method. J. Nonlinear Convex Anal. 15(3), 581–593 (2014)
Acknowledgements
The authors appreciate the support of their institution and AfDB.
Funding
This work is supported from AfDB Research Grant Funds to AUST.
Author information
Authors and Affiliations
Contributions
The problem was formulated by AUB, and the computations and proofs were carried out jointly by CEC and MA. All authors have read and agreed to the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Bello, A.U., Chidume, C.E. & Alka, M. Self-adaptive forward–backward splitting algorithm for the sum of two monotone operators in Banach spaces. Fixed Point Theory Algorithms Sci Eng 2022, 25 (2022). https://doi.org/10.1186/s13663-022-00732-9
DOI: https://doi.org/10.1186/s13663-022-00732-9
MSC
 47H09
 47H10
 49J20
 49J40
Keywords
 Maximal monotone operators
 Lipschitzcontinuous operator
 Forward–reflected–backward splitting method
 2uniformly convex spaces