A Tseng-type algorithm for approximating zeros of monotone inclusion and J-fixed-point problems with applications
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2023, Article number: 3 (2023)
Abstract
In this paper, a Halpern–Tseng-type algorithm for approximating zeros of the sum of two monotone operators whose zeros are J-fixed points of relatively J-nonexpansive mappings is introduced and studied. A strong convergence theorem is established in Banach spaces that are uniformly smooth and 2-uniformly convex. Furthermore, applications of the theorem to convex minimization and image-restoration problems are presented. In addition, the proposed algorithm is used in solving some classical image-recovery problems and a numerical example in a Banach space is presented to support the main theorem. Finally, the performance of the proposed algorithm is compared with that of some existing algorithms in the literature.
1 Introduction
Let E be a real Banach space with dual space \(E^{*}\). Let \(A: E \to E^{*}\) and \(B: E \to 2^{E^{*}}\) be single-valued and multivalued monotone operators, respectively. The monotone inclusion problem
find \(x\in E\) such that \(0\in Ax+Bx\),  (1)
has been of interest to several authors due to its numerous applications in image restoration, signal recovery, and machine learning. One of the early methods for approximating solutions of the inclusion problem (1) is the forward–backward algorithm (FBA), which was introduced by Passty [37] and studied extensively by many authors (see, e.g., [1, 2, 6, 15, 16, 20, 21, 28, 49]).
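To make the discussion concrete, the following is a minimal Euclidean (Hilbert-space) sketch of the FBA, in which one forward (explicit) step on A is followed by one backward (resolvent) step on B; the function names and the toy problem are illustrative assumptions, not taken from the papers cited above.

```python
import numpy as np

def forward_backward(A, resolvent_B, x0, lam=0.1, max_iter=1000, tol=1e-9):
    """FBA sketch: x_{n+1} = (I + lam*B)^{-1}(x_n - lam*A(x_n)).
    `A` is single-valued monotone; `resolvent_B(v, lam)` evaluates (I + lam*B)^{-1}v."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = resolvent_B(x - lam * A(x), lam)  # forward step on A, backward on B
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Toy zero of A + B with A(x) = x - b and B the subdifferential of ||.||_1,
# whose resolvent is soft-thresholding; the limit is soft(b, 1) = (2, 0, 0).
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
b = np.array([3.0, -0.2, 0.05])
print(forward_backward(lambda x: x - b, lambda v, lam: soft(v, lam), np.zeros(3)))
```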
Recently, there has been growing interest in the study of the monotone inclusion problem (1) whose solutions are also fixed points of some nonexpansive-type mapping. In general, the problem is stated as follows:
find \(x\in F(T)\) such that \(0\in Ax+Bx\),  (2)
where \(T: E \to E\) is a nonexpansive-type mapping and \(F(T)\) denotes its set of fixed points.
In 2010, Takahashi et al. [44] introduced and studied an iterative algorithm that approximates solutions of problem (2) in the setting of real Hilbert spaces. They proved the following strong convergence theorem:
Theorem 1.1
Let C be a closed and convex subset of a real Hilbert space H. Let A be an α-inverse strongly monotone mapping of C into H and let B be a maximal monotone operator on H such that the domain of B is included in C. Let \(J_{\lambda}= (I +\lambda B)^{-1}\) be the resolvent of B for \(\lambda >0\) and let T be a nonexpansive mapping of C into itself such that \(F(T) \cap (A + B)^{-1}0 \neq \emptyset \). Let \(x_{1} = x \in C\) and let \(\{x_{n}\} \subset C\) be a sequence generated by
\(x_{n+1}=\beta _{n}x_{n}+(1-\beta _{n})T\bigl(\alpha _{n}x+(1-\alpha _{n})J_{\lambda _{n}}(x_{n}-\lambda _{n}Ax_{n})\bigr)\)  (3)
for all \(n\in \mathbb{N}\), where \(\{\lambda _{n}\}\subset (0,2\alpha )\) and \(\{\alpha _{n}\}, \{\beta _{n} \}\subset (0,1)\) satisfy appropriate conditions.
Then, \(\{x_{n}\}\) converges strongly to a point of \(F(T)\cap (A+B)^{-1}0\).
In recent years, many authors have exploited the inertial technique to accelerate the convergence of sequences generated by existing algorithms in the literature. The inertial extrapolation technique was first introduced by Polyak [39] as an acceleration process for solving smooth convex minimization problems. An algorithm of inertial type is an iterative procedure in which each new iterate is computed from the preceding two iterates. Many authors have shown numerically that adding an inertial extrapolation term to existing algorithms improves their performance (see, e.g., [3, 12, 17, 18, 25, 30, 36, 38, 42, 43]).
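As an illustration, the sketch below grafts an inertial extrapolation step onto the FBA sketch above; the key line is \(w_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1})\), which uses the two preceding iterates. The fixed choice of theta is an assumption of this sketch only.

```python
import numpy as np

def inertial_forward_backward(A, resolvent_B, x0, x1, lam=0.1, theta=0.3,
                              max_iter=1000, tol=1e-9):
    """Inertial FBA sketch: extrapolate with the two preceding iterates, then
    apply the forward-backward step from the extrapolated point."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(max_iter):
        w = x + theta * (x - x_prev)              # inertial extrapolation term
        x_new = resolvent_B(w - lam * A(w), lam)  # forward-backward step from w
        if np.linalg.norm(x_new - x) < tol:
            break
        x_prev, x = x, x_new
    return x_new
```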
In 2021, Adamu et al. [4] introduced and studied the following inertial algorithm that approximates solutions of problem (2) in real Hilbert spaces. They proved the following strong convergence theorem:
Theorem 1.2
Let H be a real Hilbert space. Let \(A:H\to H\) be α-inverse strongly monotone, \(B: H \to 2^{H}\) be a set-valued maximal monotone operator, and \(T:H\to H\) be a nonexpansive mapping. Assume \(F(T) \cap (A + B)^{-1}0 \neq \emptyset \). Let \(x_{0},x_{1} , u \in H\) and let \(\{x_{n}\} \subset H\) be a sequence generated by:
where the control parameters satisfy some appropriate conditions. Then, \(\{x_{n}\}\) converges strongly to a point in \(F(T)\cap (A+B)^{-1}0\).
Remark 1
We recall that in Algorithms (3) and (4) the operator A is required to be α-inverse strongly monotone, i.e., A satisfies the following inequality:
\(\langle Ax-Ay, x-y\rangle \geq \alpha \Vert Ax-Ay\Vert ^{2}, \quad \forall x,y.\)
This requirement rules out some important applications (see, e.g., Sect. 4 of [45]).
To dispense with the α-inverse strong monotonicity assumption on A, using the idea of the extragradient method of Korpelevich [27] for monotone variational inequalities, Tseng [45] introduced the following algorithm in real Hilbert spaces:
\(y_{n}=(I+\lambda B)^{-1}(x_{n}-\lambda Ax_{n}), \qquad x_{n+1}=P_{C}\bigl(y_{n}-\lambda (Ay_{n}-Ax_{n})\bigr),\)  (5)
where \(C\subset H\) is nonempty, closed, and convex such that \(C\cap (A+B)^{-1}0\neq \emptyset \), A is maximal monotone and Lipschitz continuous with constant \(L>0\), \(\lambda \in (0,\frac{1}{L})\), and B is maximal monotone. He proved weak convergence of the sequence generated by his algorithm to a solution of problem (1).
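In Euclidean space with \(C=H\) (so that \(P_{C}\) is the identity), Tseng's method (5) admits the following sketch; note that, unlike the FBA, only Lipschitz continuity of A is used, at the price of a second evaluation of A per iteration. The step-size rule \(\lambda <\frac{1}{L}\) is an assumption of this sketch, reflecting the standard condition.

```python
import numpy as np

def tseng_fbf(A, resolvent_B, x0, lam, max_iter=1000, tol=1e-9):
    """Sketch of Tseng's forward-backward-forward method with C the whole space:
    y_n     = (I + lam*B)^{-1}(x_n - lam*A(x_n)),
    x_{n+1} = y_n - lam*(A(y_n) - A(x_n)),   with lam in (0, 1/L)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = resolvent_B(x - lam * A(x), lam)  # forward-backward step
        x_new = y - lam * (A(y) - A(x))       # second forward (correction) step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```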
Remark 2
We note here that the class of monotone operators that are Lipschitz continuous properly contains the class of monotone operators that are α-inverse strongly monotone, since every α-inverse strongly monotone operator is \(\frac{1}{\alpha}\)-Lipschitz continuous.
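The Lipschitz claim in Remark 2 follows in one line from the Cauchy–Schwarz inequality:
\(\alpha \Vert Ax-Ay\Vert ^{2}\leq \langle Ax-Ay, x-y\rangle \leq \Vert Ax-Ay\Vert \, \Vert x-y\Vert \quad \Longrightarrow \quad \Vert Ax-Ay\Vert \leq \tfrac{1}{\alpha}\Vert x-y\Vert .\)
The inclusion is proper: for example, the rotation \(A(x_{1},x_{2})=(-x_{2},x_{1})\) on \(\mathbb{R}^{2}\) is monotone (indeed, \(\langle Au-Av, u-v\rangle =0\)) and 1-Lipschitz, but it is not α-inverse strongly monotone for any \(\alpha >0\).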
Recently, in 2021, Padcharoen et al. [35] proposed an inertial version of Tseng’s Algorithm (5) in the setting of real Hilbert spaces. They proved the following theorem:
Theorem 1.3
Let H be a real Hilbert space. Let \(A: H \to H\) be an L-Lipschitz continuous and monotone mapping and \(B: H \to 2^{H}\) be a maximal monotone map. Assume that the solution set \((A+B)^{-1}0\) is nonempty. Given \(x_{0},x_{1}\in H\), let \(\{x_{n}\}\) be a sequence defined by:
where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (6) converges weakly to a solution of problem (1).
In 2019, Shehu [41] extended the inclusion problem (1) involving monotone operators to Banach spaces. He introduced and studied a modified version of Tseng’s algorithm and proved the following theorem:
Theorem 1.4
Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0\) is nonempty and the normalized duality mapping J on E is weakly sequentially continuous. Let \(\{x_{n}\}\) be a sequence in E generated by:
where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (7) converges weakly to a point \(x\in (A+B)^{-1}0\).
To obtain a strong convergence theorem and dispense with the weak sequential continuity assumption on the normalized duality mapping J in Theorem 1.4, in the same paper [41], Shehu introduced and studied a Halpern modification of Algorithm (7). He proved the following theorem:
Theorem 1.5
Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0\) is nonempty. Let \(\{x_{n}\}\) be a sequence in E generated by:
where the control parameters satisfy some appropriate conditions. Then, the sequence \(\{x_{n}\}\) generated by (8) converges strongly to a point \(x\in (A+B)^{-1}0\).
Recently, Cholamjiak et al. [23] introduced and studied a Halpern–Tseng-type algorithm for approximating solutions of the inclusion problem (2) in the setting of Banach spaces. They proved the following theorem:
Theorem 1.6
Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping and \(T: E \to E\) be relatively nonexpansive. Suppose the solution set \(\Omega = (A+B)^{-1}0\cap F(T)\neq \emptyset \). Let \(\{x_{n}\}\) be a sequence in E generated by:
where \(\{\lambda _{n} \}\subset (0, \frac{\sqrt{c}}{\sqrt{\kappa}L})\), for some \(c,\kappa >0\); \(\{\alpha _{n}\}, \{\beta _{n} \}\subset (0,1)\) with \(\lim_{n\to \infty} \alpha _{n}=0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=\infty \); \(0< a\leq \beta _{n}\leq b <1\). Then, the sequence \(\{x_{n}\}\) generated by (9) converges strongly to a solution of problem (2).
In 2016, Chidume and Idu [22] reintroduced a fixed-point notion for operators mapping a uniformly convex and uniformly smooth real Banach space E to its dual space \(E^{*}\). Given a map \(T: E \to E^{*}\), let J be the normalized duality mapping on E. Chidume and Idu [22] called a point \(u\in E\) a J-fixed point of T if \(Tu=Ju\) and denoted the set of J-fixed points of T by \(F_{J}(T):=\{x\in E: Tx=Jx \}\). An intriguing property of J-fixed points is their connection with optimization problems (see, e.g., [22]). Currently, there is growing interest in the study of J-fixed points (see, e.g., [11, 13, 33, 34], for some interesting results in the literature).
Remark 3
We note here that this notion has also been defined by Zegeye [50] who called it a semifixed point. Also, Liu [29] called it a duality fixed point.
In line with the current interest in the inclusion problems (1) and (2) involving monotone operators on Banach spaces, J-fixed points, and the inertial acceleration technique, our purpose in this paper is to propose an inertial Halpern–Tseng-type algorithm for approximating solutions of the inclusion problem (1) that are J-fixed points of a relatively J-nonexpansive mapping. Furthermore, we prove strong convergence of the sequence generated by our algorithm in the setting of real Banach spaces that are uniformly smooth and 2-uniformly convex. In addition, we present applications of our theorem to convex minimization and use our algorithm to solve some classical problems arising from image restoration. Finally, we present a numerical example on a real Banach space to support our main theorem.
2 Preliminaries
In this section, we define some notions and state some results that will be needed in our subsequent analysis.
Let E be a real normed space and let \(J: E \to 2^{E^{*}} \) be the normalized duality map (see, e.g., [8] for the explicit definition of J and its properties on certain Banach spaces). The functional \(\phi :E\times E \to \mathbb{R}\) defined on a smooth real Banach space by
\(\phi (x,y)=\Vert x\Vert ^{2}-2\langle x,Jy\rangle +\Vert y\Vert ^{2}, \quad \forall x,y\in E,\)
will be needed in our estimations in what follows. The functional ϕ was first introduced by Alber [8] and has been extensively studied by many authors (see, for example, [8, 14, 26, 32] and the references contained therein). Observe that in a real Hilbert space H, the definition of ϕ above reduces to \(\phi (x,y)=\Vert x-y\Vert ^{2}\), \(\forall x,y\in H\). Furthermore, given \(x,y,z\in E \) and \(\tau \in [0,1]\), using the definition of ϕ, one can easily deduce the following (see, e.g., [22, 32]):
- D1: \((\Vert x \Vert -\Vert y\Vert )^{2} \leq \phi (x,y)\leq (\Vert x\Vert + \Vert y\Vert )^{2}\),
- D2: \(\phi (x, J^{-1}(\tau Jy+(1-\tau )Jz))\leq \tau \phi (x,y) + (1-\tau ) \phi (x,z) \),
- D3: \(\phi (x,y)=\phi (x,z)+\phi (z,y)+2\langle z-x,Jy-Jz\rangle \),
where J and \(J^{-1}\) are the duality maps on E and \(E^{*}\), respectively.
We shall use interchangeably ϕ and \(V: E\times E^{*} \to \mathbb{R}\) defined by
\(V(x,x^{*})=\Vert x\Vert ^{2}-2\langle x,x^{*}\rangle +\Vert x^{*}\Vert ^{2}, \quad \forall x\in E, x^{*}\in E^{*},\)
since \(V(x,y^{*})=\phi (x,J^{-1}y^{*})\).
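As a quick sanity check, the snippet below evaluates ϕ in the Hilbert-space case (where J is the identity) and verifies the reduction \(\phi (x,y)=\Vert x-y\Vert ^{2}\) together with property D1; it is illustrative only.

```python
import numpy as np

def phi(x, y):
    """phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 with J = identity (Hilbert case)."""
    return np.dot(x, x) - 2.0 * np.dot(x, y) + np.dot(y, y)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
assert np.isclose(phi(x, y), np.linalg.norm(x - y) ** 2)  # reduces to ||x - y||^2
assert (nx - ny) ** 2 <= phi(x, y) <= (nx + ny) ** 2      # property D1
```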
The following ideas will be used in the subsequent discussion.
Definition 2.1
Let \(T:E\to E^{*}\) be a map. A point \(x^{*}\in E\) is called an asymptotic J-fixed point of T if there exists a sequence \(\{x_{n}\}\subset E\) such that \(x_{n}\rightharpoonup x^{*}\) and \(\|Jx_{n}-Tx_{n}\|\to 0 \), as \(n \to \infty \). Let \(\widehat{F}_{J}(T)\) be the set of asymptotic J-fixed points of T.
Definition 2.2
A map \(T:E\to E^{*}\) is said to be relatively J-nonexpansive if
- (i) \(\widehat{F}_{J}(T)=F_{J}(T) \neq \emptyset \),
- (ii) \(\phi (u,J^{-1}Tx)\leq \phi (u,x)\), \(\forall x\in E\), \(u\in F_{J}(T)\).
Remark 4
See Chidume et al. [19] for a nontrivial example of a relatively J-nonexpansive mapping. One can easily verify from the definition above that if an operator T is relatively J-nonexpansive, then the operator \(J^{-1}T\) is relatively nonexpansive in the usual sense and vice versa. Furthermore, \(x^{*}\in F_{J} (T ) \Leftrightarrow x^{*}\in F (J^{-1} T )\).
Definition 2.3
Let E be a smooth, strictly convex, and reflexive real Banach space and let C be a nonempty, closed, and convex subset of E. Following Alber [8], the generalized projection map \(\Pi _{C} : E\to C\) is defined by
\(\Pi _{C}x:=\bar{x}\), where \(\bar{x}\in C\) is the unique element satisfying \(\phi (\bar{x},x)=\min_{y\in C}\phi (y,x)\).
Clearly, in a real Hilbert space, the generalized projection \(\Pi _{C}\) coincides with the metric projection \(P_{C}\) from E onto C.
Definition 2.4
Let E be a reflexive, strictly convex, and smooth real Banach space and let \(B: E \to 2^{E^{*}}\) be a maximal monotone operator. Then, for any \(\lambda >0\) and \(u\in E\), there exists a unique element \(u_{\lambda }\in E\) such that \(Ju \in (Ju_{\lambda }+\lambda Bu_{\lambda})\). The element \(u_{\lambda}\) is called the resolvent of B and it is denoted by \(J_{\lambda}^{B}u\). Alternatively, \(J_{\lambda}^{B}= (J+\lambda B)^{-1}J\), for all \(\lambda >0\). It is easy to verify that \(B^{-1}0=F(J_{\lambda}^{B})\), \(\forall \lambda >0\), where \(F(J_{\lambda }^{B})\) denotes the set of fixed points of \(J_{\lambda}^{B}\).
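In a Hilbert space J is the identity, so \(J_{\lambda}^{B}\) reduces to the classical resolvent \((I+\lambda B)^{-1}\). The snippet below illustrates the defining inclusion and the identity \(B^{-1}0=F(J_{\lambda}^{B})\) for an assumed, illustrative monotone linear map \(Bx=Mx\) with M positive semidefinite.

```python
import numpy as np

lam = 0.5
M = np.array([[2.0, 0.0], [0.0, 1.0]])       # <Mx, x> >= 0, so B(x) = Mx is monotone
resolvent = lambda u: np.linalg.solve(np.eye(2) + lam * M, u)  # (I + lam*B)^{-1}u

u = np.array([3.0, -1.5])
u_lam = resolvent(u)
assert np.allclose(u, u_lam + lam * M @ u_lam)            # u in u_lam + lam*B(u_lam)
assert np.allclose(resolvent(np.zeros(2)), np.zeros(2))   # B^{-1}0 = F(J_lam^B) = {0}
```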
Now, we recall some fundamental and useful results that will be needed in the proof of our main theorem and its corollaries.
Lemma 2.5
([7])
Let C be a nonempty, closed, and convex subset of a smooth, strictly convex, and reflexive real Banach space E. For any \(x\in E\), \(\tilde{x} =\Pi _{C}x\) if and only if \(\langle \tilde{x}-y, Jx-J\tilde{x} \rangle \geq 0\) for all \(y\in C\).
Lemma 2.6
([8])
Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\) as its dual. Then,
\(V(u,u^{*})+2\langle J^{-1}u^{*}-u, v^{*}\rangle \leq V(u,u^{*}+v^{*})\)
for all \(u\in E\) and \(u^{*},v^{*}\in E^{*}\).
Lemma 2.7
([10])
Let E be a reflexive Banach space. Let \(A: E \to E^{*}\) be a monotone, hemicontinuous, and bounded mapping. Let \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Then, \(A+B\) is a maximal monotone mapping.
Lemma 2.8
Let \(\frac{1}{p}+\frac{1}{q}=1\), \(p,q>1\). The space E is q-uniformly smooth if and only if its dual space \(E^{*}\) is p-uniformly convex.
Lemma 2.9
([48])
Let E be a 2-uniformly smooth real Banach space. Then, there exists a constant \(\rho >0\) such that, \(\forall x,y\in E\),
\(\Vert x+y\Vert ^{2}\leq \Vert x\Vert ^{2}+2\langle y,Jx\rangle +\rho \Vert y\Vert ^{2}.\)
In a real Hilbert space, \(\rho =1\).
Lemma 2.10
([46])
Let E be a 2-uniformly convex and smooth real Banach space. Then, there exists a positive constant μ such that
\(\mu \Vert x-y\Vert ^{2}\leq \phi (x,y), \quad \forall x,y\in E.\)
Lemma 2.11
([26])
Let E be a uniformly convex and smooth real Banach space, and let \(\{u_{n}\}\) and \(\{v_{n}\}\) be two sequences of E. If either \(\{u_{n}\}\) or \(\{v_{n}\}\) is bounded and \(\phi (u_{n},v_{n} )\to 0 \) then \(\Vert u_{n}-v_{n}\Vert \to 0 \).
Lemma 2.12
([32])
Let E be a uniformly smooth Banach space and \(r > 0\). Then, there exists a continuous, strictly increasing, and convex function \(g : [0, 2r] \rightarrow [0, \infty )\) such that \(g(0) = 0\) and
\(\phi (u, J^{-1}(\beta Jx+(1-\beta )Jy))\leq \beta \phi (u,x)+(1-\beta )\phi (u,y)-\beta (1-\beta )g(\Vert Jx-Jy\Vert )\)
for all \(\beta \in [0, 1]\), \(u \in E\), and \(x, y \in B_{r}:=\{z\in E : \|z\|\leq r\}\).
Lemma 2.13
([47])
Let \(\{ a_{n} \}\) be a sequence of nonnegative numbers satisfying the condition
\(a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\beta _{n}+c_{n}, \quad n\geq 0,\)
where \(\{ \alpha _{n} \}\), \(\{ \beta _{n} \}\), and \(\{c_{n}\}\) are sequences of real numbers such that \(\{\alpha _{n}\}\subset [0,1]\) with \(\sum_{n=0}^{\infty}\alpha _{n}=\infty \), \(\limsup_{n\to \infty}\beta _{n}\leq 0\), and \(c_{n}\geq 0\) with \(\sum_{n=0}^{\infty}c_{n}<\infty \).
Then, \(\lim_{n \to \infty } a_{n}=0 \).
Lemma 2.14
([31])
Let \(\Gamma _{n}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\Gamma _{n_{j}}\}_{j\geq 0}\) of \(\{\Gamma _{n}\}\) that satisfies \(\Gamma _{n_{j}}<\Gamma _{n_{j}+1}\) for all \(j\geq 0\). Also, consider the sequence of integers \(\{ \tau (n)\}_{n\geq n_{0}}\) defined by
\(\tau (n):=\max \{k\leq n : \Gamma _{k}<\Gamma _{k+1}\}.\)
Then, \(\{\tau (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n\to \infty}\tau (n)=\infty \) and, for all \(n \geq n_{0}\), it holds that \(\Gamma _{\tau (n)}\leq \Gamma _{\tau (n)+1}\) and we have
\(\Gamma _{n}\leq \Gamma _{\tau (n)+1}.\)
Lemma 2.15
([9])
Let \(\{\Gamma _{n}\}\), \(\{\delta _{n}\}\), and \(\{\alpha _{n}\}\) be sequences in \([0,\infty ) \) such that
\(\Gamma _{n+1}\leq \Gamma _{n}+\alpha _{n}(\Gamma _{n}-\Gamma _{n-1})+\delta _{n}\)
for all \(n \geq 1\), \(\sum_{n=1}^{\infty} \delta _{n} < +\infty \) and there exists a real number α with \(0 \leq \alpha _{n} \leq \alpha <1\), for all \(n \in \mathbb{N}\). Then, the following hold:
- (i) \(\sum_{n \geq 1}[\Gamma _{n} - \Gamma _{n-1}]_{+} < + \infty \), where \([t]_{+}=\max \{t,0\} \);
- (ii) there exists \(\Gamma ^{*} \in [0, \infty )\) such that \(\lim_{n \rightarrow \infty} \Gamma _{n}= \Gamma ^{*}\).
Lemma 2.16
([5])
Let E be a 2-uniformly convex and uniformly smooth real Banach space and let \(x_{0},x_{1},w\in E\). Let \(\{v_{n}\}\subset E\) be a sequence defined by \(v_{n}:=J^{-1} (Jx_{n}+\mu _{n}(Jx_{n}-Jx_{n-1}) )\). Then,
where \(\{\mu _{n}\}\subset (0,1)\) and ρ is the constant appearing in Lemma 2.9. For completeness, we shall give the proof here.
Proof
Using property D3, we have
Also, by Lemma 2.9, one can estimate \(v_{n}\) as follows:
Putting together equation (13) and inequality (15), we obtain
From (14), this implies that
□
3 Main result
The Setting for Algorithm 3.1.
1. The space E is a 2-uniformly convex and uniformly smooth real Banach space with dual space \(E^{*}\).
2. The operator \(A: E \to E^{*}\) is monotone and L-Lipschitz continuous, \(B: E \to 2^{E^{*}}\) is maximal monotone, and \(T: E \to E^{*}\) is relatively J-nonexpansive.
3. The solution set \(\Omega = (A+B)^{-1}0\cap F_{J}(T)\) is nonempty.
4. The control parameters satisfy \(\{\beta _{n}\}\subset (0,1)\); \(\{\gamma _{n}\}\subset (0,1)\) with \(\lim_{n\to \infty}\gamma _{n}=0\) and \(\sum_{n=1}^{\infty }\gamma _{n}=\infty \); \(\{\epsilon _{n}\}\subset (0,1)\) with \(\sum_{n=1}^{\infty}\epsilon _{n}<\infty \); and \(\{\lambda _{n}\} \subset (\lambda , \frac{\sqrt{\mu}}{\sqrt{\rho}L} )\), where \(\lambda \in (0,\frac{\sqrt{\mu}}{\sqrt{\rho}L})\) and ρ and μ are the constants appearing in Lemmas 2.9 and 2.10, respectively.
Algorithm 3.1
Inertial Halpern–Tseng-type algorithm:
Step 0. (Initialization) Choose arbitrary points \(u,x_{0},x_{1}\in E\) and \(\theta \in (0,1)\), and set \(n=1\).
Step 1. Choose \(\theta _{n}\) such that \(0\leq \theta _{n} \leq \bar{\theta}_{n}\), where
Step 2. Compute
Step 3. Set \(n=n+1\) and go to Step 1.
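Since the displayed update formulas did not survive extraction here, the following is a hedged Hilbert-space sketch (J = I, so the duality-map machinery simplifies) of the inertial Halpern–Tseng template behind Algorithm 3.1, assembled from the quantities \(w_{n}, y_{n}, z_{n}, u_{n}\) used in the proofs below; the exact Banach-space updates involve J and \(J^{-1}\) and are as stated in the paper. The parameter choices mirror Sect. 4 and are assumptions of this sketch.

```python
import numpy as np

def inertial_halpern_tseng(A, resolvent_B, T, u, x0, x1, L,
                           theta=0.999, beta=0.999, max_iter=2000, tol=1e-6):
    """Hilbert-space sketch (J = I) of an inertial Halpern-Tseng-type iteration."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    lam = 0.9 / L                                       # step size below 1/L
    for n in range(1, max_iter + 1):
        gamma, eps = 1.0 / (n + 1) ** 2, 1.0 / (n + 5) ** 2
        dx = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eps / dx) if dx > 0 else theta   # Step 1
        w = x + theta_n * (x - x_prev)                  # inertial extrapolation
        y = resolvent_B(w - lam * A(w), lam)            # Tseng forward-backward step
        z = y - lam * (A(y) - A(w))                     # Tseng correction step
        u_n = beta * z + (1.0 - beta) * T(z)            # (J-)fixed-point averaging
        x_prev, x = x, gamma * u + (1.0 - gamma) * u_n  # Halpern anchor toward u
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```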
Lemma 3.2
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then, \(\{x_{n}\}\) is bounded.
Proof
Let \(x\in \Omega \). Using Lemma 2.9 and D3, we have
Claim.
Proof of claim. Observe that \(y_{n}=J_{\lambda _{n}}^{B}J^{-1} (Jw_{n}-\lambda _{n} Aw_{n} )\) implies \((Jw_{n}-\lambda _{n}Aw_{n})\in (Jy_{n}+\lambda _{n} By_{n})\). Since B is maximal monotone, there exists \(b_{n}\in By_{n}\) such that \(Jw_{n}-\lambda _{n}Aw_{n}=Jy_{n}+\lambda _{n} b_{n}\). Thus,
Furthermore, since \(0\in (A+B)x\) and \((Ay_{n}+b_{n})\in (A+B)y_{n}\), by the monotonicity of \((A+B)\), we have
Substituting equation (19) into this inequality, we obtain
which justifies our claim.
Now, substituting inequality (18) into inequality (17) and using Lemma 2.10, we deduce that
Since \(\lambda _{n} \in (0, \frac{\sqrt{\mu}}{\sqrt{\rho}L} )\), we have \(1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu}>0\). Thus,
Also, using D2 and the fact that T is relatively J-nonexpansive, we have
Next, using D2, inequalities (22) and (21), Lemma 2.16, and the fact that \(\{\theta _{n}\}\subset (0,1)\), we obtain
If the maximum is \(\phi (x,u)\), for all \(n\geq 1\), we are done. Else, there exists an \(n_{0}\geq 1\) such that for all \(n\geq n_{0}\), we have that
From Step 1 and the setting for Algorithm 3.1 (4), we obtain
Hence, by Lemma 2.15, \(\{\phi (x,x_{n})\}\) is convergent and thus, bounded. Furthermore, by D1, \(\{x_{n}\}\) is bounded. This implies that \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\), and \(\{u_{n}\}\) are bounded. □
Now, we are ready to state our main convergence theorem.
Theorem 3.3
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3.1. Then, \(\{x_{n}\}\) converges strongly to \(x\in \Omega \).
Proof
Let \(x\in \Omega \). First, we estimate \(\phi (x,u_{n})\) using Lemma 2.12, the fact that T is relatively J-nonexpansive, and inequalities (20) and (21). Now,
Next, we estimate \(\phi (x,x_{n+1})\) using Lemma 2.12 and inequality (24). Hence,
Set \(\eta _{n}=(1-\gamma _{n})(1-\beta _{n}) (1- \frac{\rho \lambda _{n}^{2}L^{2}}{\mu} )\) and \(\zeta _{n}=(1-\gamma _{n})\beta _{n}(1-\beta _{n})\). By rearranging the terms in inequality (25) and using Lemma 2.15, we obtain
To complete the proof, we consider the following two cases:
Case 1. Assume there exists an \(n_{0}\in \mathbb{N}\) such that for all \(n\geq n_{0}\),
Then, \(\{\phi (x,x_{n})\}\) is convergent.
From inequality (26), using the fact that \(\lim_{n\to \infty}\gamma _{n}=0\), the boundedness of \(\{w_{n}\}\), the existence of \(\lim_{n\to \infty}\phi (x,x_{n})\), and the fact that \(\lim_{n\to \infty} \rho \theta _{n}\|Jx_{n}-Jx_{n-1} \|^{2}=0= \lim_{n\to \infty} \theta _{n} \phi (x_{n},x_{n-1}) \), we obtain the following:
This implies by Lemma 2.11 and the properties of g that
Furthermore, since
Moreover, by the uniform continuity of \(J^{-1}\) on bounded sets, \(\lim_{n\to \infty}\|x_{n}-w_{n}\|=0\). This and equation (27) imply that \(\lim_{n\to \infty}\|x_{n}-y_{n}\|=0\). By the uniform continuity of J on bounded sets, this implies \(\lim_{n\to \infty} \|Jx_{n}-Jy_{n}\|=0\). Also, the Lipschitz continuity of A and equation (27) imply that \(\lim_{n\to \infty} \|Aw_{n}-Ay_{n}\|=0\). Therefore,
By the uniform continuity of \(J^{-1}\), equation (28) implies that \(\lim_{n\to \infty}\| z_{n}-y_{n}\|=0\). Thus,
Now, observe that
This implies that
Now, we prove that \(\Omega _{w}(x_{n}) \subset \Omega \), where \(\Omega _{w}(x_{n})\) denotes the set of weak subsequential limits of \(\{x_{n}\}\). Since \(\{x_{n}\}\) is bounded, \(\Omega _{w}(x_{n})\neq \emptyset \). Let \(x^{*}\in \Omega _{w}(x_{n})\). Then, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}}\rightharpoonup x^{*}\). From equation (29), we have \(z_{n_{k}}\rightharpoonup x^{*}\). This and (27) imply that \(x^{*}\in \widehat{F_{J}}(T)\). Since T is relatively J-nonexpansive, \(x^{*}\in F_{J}(T)\).
Next, we show that \(x^{*}\in (A+B)^{-1}0\). Let \((v,w)\in G(A+B):=\{ (x,y)\in E\times E^{*} : y\in (Ax+Bx)\} \). Then, \((w-Av)\in Bv\). By the definition of \(y_{n}\) in Algorithm 3.1, we have that \((Jw_{n_{k}}-\lambda _{n_{k}}Aw_{n_{k}})\in (Jy_{n_{k}}+\lambda _{n_{k}}By_{n_{k}})\). Thus, \(\frac{1}{\lambda _{n_{k}}}(Jw_{n_{k}}-Jy_{n_{k}}-\lambda _{n_{k}}Aw_{n_{k}}) \in By_{n_{k}}\). By the monotonicity of B, we have
Using the fact that A is monotone, we estimate this as follows
Since \(\lim_{n\to \infty} \|Aw_{n}-Ay_{n}\|= \lim_{n\to \infty} \|Jy_{n}-Jw_{n}\|=0\), \(\{\frac{1}{\lambda _{n}} \}\) is bounded and \(y_{n_{k}}\rightharpoonup x^{*}\), it follows that
By Lemma 2.7, \(A+B\) is maximal monotone. This implies that \(0\in (A+B)x^{*}\), i.e., \(x^{*}\in (A+B)^{-1}0\). Hence, \(x^{*}\in \Omega =F_{J}(T) \cap (A+B)^{-1} 0\).
Now, we show that \(\{x_{n}\}\) converges strongly to the point \(x=\Pi _{\Omega}u\). Observe that if \(x=x^{*}\), we are done. Suppose \(x\neq x^{*}\). Using the boundedness of \(\{x_{n}\}\), Lemma 2.5, and the fact that Ω is closed and convex (see, e.g., [23]), there exists a subsequence \(\{x_{n_{k}}\}\subset \{x_{n}\}\) such that
Using (30) and the uniform boundedness of \(J^{-1}\), we deduce that
Next, using Lemma 2.6, D2, inequalities (22), (21), and Lemma 2.15, we have
By Lemma 2.13, inequality (32) implies that \(\lim_{n\to \infty}\phi (x,x_{n})= 0\). Using Lemma 2.11, we obtain that \(\lim_{n\to \infty} x_{n} = x\).
Case 2. If Case 1 does not hold, then there exists a subsequence \(\{x_{m_{j}}\} \subset \{x_{n}\}\) such that
By Lemma 2.14, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\), such that \(\lim_{k \to \infty} m_{k}=\infty \) and the following inequalities hold
From inequality (26) we have
Following a similar argument as in Case 1, one can establish the following
From (31) we have
By Lemma 2.13, inequality (33) implies that \(\lim_{n\to \infty}\phi (x,x_{m_{k}})= 0\). Thus,
Therefore, \(\limsup_{k \to \infty}\phi (x,x_{k})=0\) and so, by Lemma 2.11, \(\lim_{k \to \infty} x_{k} = x\). This completes the proof. □
4 Applications and numerical illustrations
In this section, we give applications of Theorem 3.3 to a structured, nonsmooth, convex minimization problem and to image-denoising and deblurring problems, together with a numerical illustration in the classical Banach space \(l_{\frac{3}{2}}\). Finally, we compare the performance of Algorithm 3.1 with Algorithms (3) and (9).
4.1 Application to a convex minimization problem
In this subsection, we give an application of our theorem to the structured nonsmooth convex minimization problem
\(\min_{x\in E} f(x)+g(x),\)  (34)
where f is a real-valued function on E that is smooth and convex and g is an extended real-valued function that is convex and lower-semicontinuous (E is a real Banach space). Problem (34) can be recast as:
find \(x\in E\) such that \(0\in \nabla f(x)+\partial g(x),\)  (35)
where ∇f is the gradient of f and ∂g is the subdifferential of g. Suppose ∇f is monotone and Lipschitz continuous. Then, setting \(A=\nabla f\) and \(B=\partial g\) in Algorithm 3.1 and assuming that the solution set \(\Omega :=F_{J}(T) \cap ( \nabla f+\partial g)^{-1}0 \neq \emptyset \), it follows from Theorem 3.3 that \(\{x_{n}\}\) converges strongly to a point \(x\in \Omega \).
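For a concrete, assumed, illustrative instance, take \(f(x)=\frac{1}{2}\Vert Dx-y\Vert _{2}^{2}\), so that \(A=\nabla f\) is monotone and \(\Vert D\Vert ^{2}\)-Lipschitz, and let g be the indicator function of the box \([0,1]^{N}\), whose resolvent is a componentwise clip; reusing the inertial_halpern_tseng sketch above with T the identity gives:

```python
import numpy as np

rng = np.random.default_rng(0)
D, y = rng.standard_normal((20, 50)), rng.standard_normal(20)

A = lambda x: D.T @ (D @ x - y)                    # A = grad f, monotone and Lipschitz
resolvent_B = lambda v, lam: np.clip(v, 0.0, 1.0)  # B = subdifferential of indicator g
L = np.linalg.norm(D, 2) ** 2                      # Lipschitz constant of A

x_min = inertial_halpern_tseng(A, resolvent_B, T=lambda z: z, u=np.zeros(50),
                               x0=np.zeros(50), x1=rng.standard_normal(50), L=L)
```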
4.2 Application to image-restoration problems
The general image-recovery problem can be modeled as the following underdetermined linear system:
\(y=Dx+\varrho ,\)  (36)
where \(x\in \mathbb{R}^{N}\) is the original image, \(y\in \mathbb{R}^{M}\) is the observed image with noise ϱ, and \(D:\mathbb{R}^{N}\to \mathbb{R}^{M}\) (\(M< N\)) is a bounded linear operator. It is well known that solving (36) can be viewed as solving the LASSO problem:
\(\min_{x\in \mathbb{R}^{N}} \frac{1}{2}\Vert Dx-y\Vert _{2}^{2}+\lambda \Vert x\Vert _{1},\)  (37)
where \(\lambda >0\). Following [24], we define \(Ax:=\nabla (\frac {1}{2}\|Dx-y\|_{2}^{2})=D^{T}(Dx-y)\) and \(Bx:=\partial (\lambda \|x\|_{1})\). It is known that A is \(\|D\|^{2}\)-Lipschitz continuous and monotone. Moreover, B is maximal monotone (see [40]).
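A small synthetic sketch of this setup (with D and y standing in for the blur operator and degraded image; these data are assumptions, not the experiments of Sect. 4.2) can be run with the tseng_fbf sketch above, since the resolvent of \(B=\partial (\lambda \Vert \cdot \Vert _{1})\) is soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
y = D @ x_true + 0.01 * rng.standard_normal(30)      # observed data with noise

reg = 0.1                                            # the lambda of problem (37)
A = lambda x: D.T @ (D @ x - y)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
resolvent_B = lambda v, lam: soft(v, lam * reg)      # prox of lam*reg*||.||_1
L = np.linalg.norm(D, 2) ** 2                        # ||D||^2-Lipschitz constant of A

x_rec = tseng_fbf(A, resolvent_B, np.zeros(100), lam=0.9 / L)
```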
Remark 5
For the purpose of existence, one can take
In Algorithm (3) of Takahashi et al. [44], we set \(\alpha _{n}=\frac{1}{1000n}\), \(\beta _{n}=\frac{n}{2n+1}\), \(\lambda _{n}=0.001\), and \(Sx=\frac{nx}{n+1}\); in Algorithm (9) of Cholamjiak et al. [23], we set \(\lambda _{n} = 0.03\), \(\beta _{n} = 0.999\), \(\gamma _{n}= \frac{1}{(n+1)^{2}}\), \(\theta =0.999\), \(\varepsilon _{n}= \frac{1}{(n+5)^{2}}\), \(\theta _{n}=0.95\), and \(Tx:=\frac{nx}{n+1}\); and in our proposed Algorithm 3.1, we set \(\lambda _{n} = 0.03\), \(\beta _{n} = 0.999\), \(\alpha _{n}= \frac{1}{(n+1)^{2}}\), \(\theta =0.999\), \(\varepsilon _{n}= \frac{1}{(n+5)^{2}}\), \(\theta _{n}=0.95\), and \(Tx:=\frac{nx}{n+1}\). The test images were degraded using the MATLAB blur functions “fspecial('motion',9,15)” and “fspecial('gaussian',5,5)”, after which random noise was added. Finally, we used a tolerance of \(10^{-4}\) and a maximum of 300 iterations for all the algorithms. The results are presented in Figs. 1 and 2 and Table 1.
Looking at the restored images in Fig. 2, it is difficult to tell by eye which algorithm performs better in the restoration process. A standard tool for measuring the quality of restored images is the signal-to-noise ratio (SNR): the higher the SNR value of a restored image, the better the restoration process via the algorithm. The SNR (in dB) is defined as follows:
\(\mathrm{SNR}=20\log _{10} \frac{\Vert x\Vert _{2}}{\Vert x-x_{n}\Vert _{2}},\)
where x and \(x_{n}\) are the original image and the estimated image at iteration n, respectively. The SNR values of the restored images produced by Algorithms (3), (9), and 3.1 are presented in Table 1.
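In code, the SNR of a restored image is a one-liner (an illustrative sketch; `x` and `x_n` are flattened image arrays):

```python
import numpy as np

def snr_db(x, x_n):
    """Signal-to-noise ratio in dB; higher values indicate a better restoration."""
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_n))
```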
Discussion of the numerical results. For the restored images in Figs. 1 and 2, with regard to the number of iterations and the quality of the restored images (SNR values), our proposed Algorithm 3.1 outperforms Algorithm (3) of Takahashi et al. [44] and Algorithm (9) of Cholamjiak et al. [23]. In particular, for the brain image, Algorithm (3) failed to restore the image before the maximum number of iterations was exhausted; by contrast, our proposed Algorithm 3.1 took just 131 iterations to restore the brain image degraded by motion blur and 105 iterations to restore the brain image degraded by Gaussian blur. From the above experiment, our proposed method appears to be competitive and promising.
4.3 An example in \(l_{\frac{3}{2}}\)
In this subsection, we present a numerical implementation of our proposed Algorithm 3.1 on the Banach space
\(l_{\frac{3}{2}}:= \{x=(x_{1},x_{2},x_{3},\ldots ), x_{i}\in \mathbb{R} : \sum_{i=1}^{\infty}\vert x_{i}\vert ^{\frac{3}{2}}<\infty \}.\)
It is well known that for \(1< p\leq 2\), the \(l_{p}\) spaces are uniformly smooth and 2-uniformly convex. Now, let \(p=\frac{3}{2}\). Since we cannot sum to infinity on a computer, for the purpose of numerical illustration, we considered the subspace of \(l_{\frac{3}{2}}\) consisting of sequences with only finitely many nonzero terms,
\(S^{3}_{\frac{3}{2}}:= \{x=(x_{1},x_{2},x_{3},0,0,\ldots ) : x_{i}\in \mathbb{R} \}.\)
Example 1
Consider the space \(S^{3}_{\frac{3}{2}}\) with dual space \(S^{3}_{3}\). Let \(A,B,T: S^{3}_{\frac{3}{2}} \to S_{3}^{3}\) be defined by
It is not difficult to verify that the map A is 3-Lipschitz, B is maximal monotone, and T is nonexpansive; at the same time, T is relatively nonexpansive and relatively J-nonexpansive. Furthermore, the point \(x^{*}= (0.2, 0.1, 0.125, 0, 0,\ldots )\) is the only point in the solution set \(\Omega =(A+B)^{-1}0\cap F_{J}(T)\). In the numerical experiment, we compared the performance of Algorithms (3), (9), and 3.1. For a fair comparison, since these algorithms have similar control parameters, we used the same values for each parameter appearing in all the algorithms. For the step size \(\lambda _{n}\), we used 0.02 in all the algorithms. For \(\alpha _{n}\) in Algorithms (3) and (9), which is required to satisfy the same conditions as \(\gamma _{n}\) in our Algorithm 3.1, we used \(\frac{1}{(50{,}000\times n)+1}\) in all three algorithms. Next, for \(\beta _{n}\), which appears in all the algorithms with the same condition, we used 0.999 in Algorithms (9) and 3.1; however, for Algorithm (3), the choice \(\beta _{n}=0.5\) gave a better approximation, so we used it for that algorithm. Finally, to obtain the inertial parameter in our Algorithm 3.1, we chose \(\theta =0.999\) and \(\varepsilon _{n}=\frac{1}{(n+5)^{2}}\). We set the Halpern vector (x or u) to zero in all the algorithms. The iteration process was started with the initial point \(x_{0}=(2,1,3,0,0,0,\ldots )\), and we observed the behavior of the algorithms as we varied \(x_{1}\): First \(x_{1}= (1,1,3, 0,0,0, \ldots ) \) and Second \(x_{1}= (2,0,1,0,0,0,\ldots )\). The iteration process was terminated when \(\|x_{n}-x^{*}\|<10^{-6}\) or \(n>1999\). The results of the experiment are presented in Table 2 and Fig. 3.
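Implementing any of the algorithms on \(S^{3}_{\frac{3}{2}}\) requires the normalized duality map and its inverse. On \(l_{p}\) (\(1<p<\infty \)) the duality map has the known closed form \((Jx)_{i}=\Vert x\Vert _{p}^{2-p}\vert x_{i}\vert ^{p-1}\operatorname{sign}(x_{i})\in l_{q}\), \(\frac{1}{p}+\frac{1}{q}=1\), and \(J^{-1}\) is the corresponding map on \(l_{q}\); the snippet below (an illustrative sketch) checks the defining properties for \(p=\frac{3}{2}\), \(q=3\):

```python
import numpy as np

def J_p(x, p):
    """Normalized duality map on l_p: (Jx)_i = ||x||^{2-p} |x_i|^{p-1} sign(x_i)."""
    nrm = np.linalg.norm(x, p)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (2.0 - p) * np.sign(x) * np.abs(x) ** (p - 1.0)

p, q = 1.5, 3.0
x = np.array([0.2, 0.1, 0.125])                       # the solution point x* above
jx = J_p(x, p)
assert np.isclose(np.dot(jx, x), np.linalg.norm(x, p) ** 2)     # <Jx, x> = ||x||^2
assert np.isclose(np.linalg.norm(jx, q), np.linalg.norm(x, p))  # ||Jx||_q = ||x||_p
assert np.allclose(J_p(jx, q), x)                               # J_q inverts J_{3/2}
```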
Discussion of the numerical results. From the numerical illustrations presented in Example 1, we observe that the iterates generated by Algorithm (3) of Takahashi et al. [44] failed to satisfy the stopping criterion before the prescribed maximum number of iterations was exhausted. While Algorithm (9) of Cholamjiak et al. [23] took 1275 iterations to satisfy the tolerance for the First initial point \(x_{1}\) and 1244 for the Second \(x_{1}\), our proposed Algorithm 3.1 took just 422 iterations for the First initial point \(x_{1}\) and 423 for the Second \(x_{1}\). Thus, in this example our proposed algorithm outperforms the algorithms of Takahashi et al. [44] and Cholamjiak et al. [23].
4.4 Conclusion
This paper presents an inertial extension of the theorem of Cholamjiak et al. [23] to the setting where the sought zeros are J-fixed points of relatively J-nonexpansive mappings. Applications of the theorem to convex minimization and image restoration are presented, together with numerical implementations of our proposed algorithm on classical image-recovery problems and an example in \(l_{\frac{3}{2}}\). Finally, the performance of our proposed method is compared with that of the methods of Takahashi et al. [44] and Cholamjiak et al. [23]; from the numerical illustrations, our proposed Algorithm 3.1 appears to be competitive and promising.
Availability of data and materials
Not applicable.
References
Abass, H.A., Aremu, K.O., Jolaoso, L.O., Mewomo, O.T.: An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, 6, 1–20 (2020)
Abubakar, A.B., Kumam, P., Awwal, A.M.: A modified self-adaptive conjugate gradient method for solving convex constrained monotone nonlinear equations with applications to signal recovery problems. Bangmod Int. J. Math. Comput. Sci. 5(2), 1–26 (2019)
Adamu, A., Adam, A.A.: Approximation of solutions of split equality fixed point problems with applications. Carpath. J. Math. 37(3), 23–34 (2021)
Adamu, A., Deepho, J., Ibrahim, A.H., Abubakar, A.B.: Approximation of zeros of sum of monotone mappings with applications to variational inequality and image restoration problems. Nonlinear Funct. Anal. Appl. 26(2), 411–432 (2021)
Adamu, A., Kitkuan, D., Kumam, P., Padcharoen, A., Seangwattana, T.: Approximation method for monotone inclusion problems in real Banach spaces with applications. J. Inequal. Appl. 2022(1), 70, 1–20 (2022). https://doi.org/10.1186/s13660-022-02805-0
Adamu, A., Kitkuan, D., Padcharoen, A., Chidume, C.E., Kumam, P.: Inertial viscosity-type iterative method for solving inclusion problems with applications. Math. Comput. Simul. 194, 445–459 (2022)
Alber, Y.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Lecture Notes in Pure and Applied Mathematics, pp. 15–50 (1996)
Alber, Y., Ryazantseva, I.: Nonlinear Ill Posed Problems of Monotone Type. Springer, Netherlands (2006)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)
Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Springer, New York (1976)
Cheng, Q., Su, Y., Zhang, J.: Duality fixed point and zero point theorems and applications. Abstr. Appl. Anal. 2012, Article ID 391301, 1–11 (2012). https://doi.org/10.1155/2012/391301
Chidume, C., Ikechukwu, S., Adamu, A.: Inertial algorithm for approximating a common fixed point for a countable family of relatively nonexpansive maps. Fixed Point Theory Appl. 2018, 9, 1–9 (2018). https://doi.org/10.1186/s13663-018-0634-3
Chidume, C., Kumam, P., Adamu, A.: A hybrid inertial algorithm for approximating solution of convex feasibility problems with applications. Fixed Point Theory Appl. 2020, 12, 1–17 (2020). https://doi.org/10.1186/s13663-020-00678-w
Chidume, C.E., Adamu, A., Chinwendu, L.O.: Approximation of solutions of Hammerstein equations with monotone mappings in real Banach spaces. Carpath. J. Math. 35(3), 305–316 (2019)
Chidume, C.E., Adamu, A., Kumam, P., Kitkuan, D.: Generalized hybrid viscosity-type forward-backward splitting method with application to convex minimization and image restoration problems. Numer. Funct. Anal. Optim. 42, 1586–1607 (2021)
Chidume, C.E., Adamu, A., Minjibir, M., Nnyaba, U.: On the strong convergence of the proximal point algorithm with an application to Hammerstein equations. J. Fixed Point Theory Appl. 22(3), 1–21 (2020)
Chidume, C.E., Adamu, A., Nnakwe, M.O.: Strong convergence of an inertial algorithm for maximal monotone inclusions with applications. Fixed Point Theory Appl. 2020(1), 13 (2020)
Chidume, C.E., Adamu, A., Nnakwe, M.O.: An inertial algorithm for solving Hammerstein equations. Symmetry 13(3), 376 (2021)
Chidume, C.E., Adamu, A., Okereke, L.C.: Strong convergence theorem for some nonexpansive-type mappings in certain Banach spaces. Thai J. Math. 18(3), 1537–1548 (2020)
Chidume, C.E., Adamu, A., Okereke, L.C.: Iterative algorithms for solutions of Hammerstein equations in real Banach spaces. Fixed Point Theory Appl. 2020, 4, 1–23 (2020)
Chidume, C.E., De Souza, G.S., Nnyaba, U.V., Romanus, O.M., Adamu, A.: Approximation of zeros of m-accretive mappings, with applications to Hammerstein integral equations. Carpath. J. Math. 36, 45–55 (2020)
Chidume, C.E., Idu, K.: Approximation of zeros of bounded maximal monotone maps, solutions of Hammerstein integral equations and convex minimization problems. Fixed Point Theory Appl. 97, 1–28 (2016). https://doi.org/10.1186/s13663-016-0582-8
Cholamjiak, P., Sunthrayuth, P., Singta, A., Muangchoo, K.: Iterative methods for solving the monotone inclusion problem and the fixed point problem in Banach spaces. Thai J. Math. 18(3), 1225–1246 (2020)
Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55(4), 1–22 (2018)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Adamu, A.: Accelerated derivative-free method for nonlinear monotone equations with application. Numer. Linear Algebra Appl. 29(3), e2424 (2022). https://doi.org/10.1002/nla.2424
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2002)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Matecon 12, 747–756 (1976)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
Liu, B.: Fixed point of strong duality pseudocontractive mappings and applications. Abstr. Appl. Anal. 2012, Article ID 623625, 1–7 (2012). https://doi.org/10.1155/2012/623625
Lorenz, D.A., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)
Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 59(1), 74–79 (2010)
Nilsrakoo, W., Saejung, S.: On the fixed-point set of a family of relatively nonexpansive and generalized nonexpansive mappings. Fixed Point Theory Appl. 2010, 414232, 1–14 (2010)
Nnakwe, M.O.: An algorithm for approximating a common solution of variational inequality and convex minimization problems. Optimization 70, 2227–2246 (2021). https://doi.org/10.1080/02331934.2020.1777995
Nnakwe, M.O., Okeke, C.C.: A common solution of generalized equilibrium problems and fixed points of pseudo-contractive-type maps. J. Appl. Math. Comput. 66(1), 701–716 (2021)
Padcharoen, A., Kitkuan, D., Kumam, W., Kumam, P.: Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems. Comput. Math. Methods 3, 1–14 (2021)
Pan, C., Wang, Y.: Convergence theorems for modified inertial viscosity splitting methods in Banach spaces. Mathematics 7(2), 1–12 (2019)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
Phairatchatniyom, P., ur Rehman, H., Abubakar, J., Kumam, P., Martinez-Moreno, J.: An inertial iterative scheme for solving split variational inclusion problems in real Hilbert spaces. Bangmod Int. J. Math. Comput. Sci. 7(2), 35–52 (2021)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Rockafellar, R.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Shehu, Y.: Convergence results of forward-backward algorithms for sum of monotone operators in Banach spaces. Results Math. 74(4), 1–24 (2019)
Taddele, G.H., Gebrie, A.G., Abubakar, J.: An iterative method with inertial effect for solving multiple-set split feasibility problem. Bangmod Int. J. Math. Comput. Sci. 7(2), 53–73 (2021)
Taiwo, A., Mewomo, O.T.: Inertial viscosity with alternative regularization for certain optimization and fixed point problems. J. Appl. Numer. Optim. 4(3), 405–423 (2022)
Takahashi, S., Takahashi, W., Toyoda, M.: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27–41 (2010)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal., Theory Methods Appl. 16(12), 1127–1138 (1991)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)
Xu, Z.B., Roach, G.F.: Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 157, 189–210 (1991)
Yodjai, P., Kumam, P., Kitkuan, D., Jirakitpuwapat, W., Plubtieng, W.: The Halpern approximation of three operators splitting method for convex minimization problems with an application to image inpainting. Bangmod Int. J. Math. Comput. Sci. 5(2), 58–75 (2019)
Zegeye, H.: Strong convergence theorems for maximal monotone mappings in Banach spaces. J. Math. Anal. Appl. 343(2), 663–671 (2008)
Acknowledgements
The authors would like to thank the referees for their esteemed comments and suggestions. The authors would like to dedicate this manuscript to the memory of the late Professor Charles Ejike Chidume, who was part of the original draft of the manuscript; he passed away before we compiled the final version submitted to this journal. The first author acknowledges with thanks the King Mongkut's University of Technology Thonburi's Postdoctoral Fellowship and the Center of Excellence in Theoretical and Computational Science (TaCS-CoE) for their financial support.
Funding
This research was supported by the King Mongkut’s University of Technology Thonburi’s Postdoctoral Fellowship and the National Research Council of Thailand (NRCT) under Research Grants for Talented Mid-Career Researchers (Contract no. N41A640089).
Author information
Authors and Affiliations
Contributions
AA and PK formulated the problem and discussed the formulation with DK and AP. Analysis, proof of the main theorem, and the draft manuscript were written jointly by AA, PK, DK, and AP. Proofreading and writing of the original manuscript were done jointly by AA and PK. Software and numerical simulations were done jointly by DK and AP. Finally, PK secured the grant for the research.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
The authors gave their consent for the publication of the personal images used in this article.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Adamu, A., Kumam, P., Kitkuan, D. et al. A Tseng-type algorithm for approximating zeros of monotone inclusion and J-fixed-point problems with applications. Fixed Point Theory Algorithms Sci Eng 2023, 3 (2023). https://doi.org/10.1186/s13663-023-00741-2