Split monotone variational inclusion with errors for image-feature extraction with multiple-image blends problem
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2023, Article number: 5 (2023)
Abstract
In this paper, we introduce a new iterative forward–backward splitting algorithm with errors for solving the split monotone variational inclusion problem for the sum of two monotone operators in real Hilbert spaces. We analyze this method under some mild appropriate conditions on the parameters and obtain a strong convergence theorem for this problem. We also apply our main result to the image-feature extraction with multiple-image blends problem, the split minimization problem, and the convex minimization problem, and we provide numerical experiments that illustrate the convergence behavior and show the effectiveness of the sequence constructed by the inertial technique.
1 Introduction
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1} \rightarrow H_{2}\) be a bounded linear operator. Let \(f_{1}: H_{1} \rightarrow H_{1}\) and \(f_{2}: H_{2} \rightarrow H_{2}\) be \(\psi _{1}\)- and \(\psi _{2}\)-inverse strongly monotone mappings, respectively, and let \(B_{1}: H_{1} \rightarrow 2^{H_{1}}\) and \(B_{2}: H_{2} \rightarrow 2^{H_{2}}\) be two multivalued maximal monotone operators. The split monotone variational inclusion problem (SMVIP) is a fundamental problem in optimization theory; it can be applied to problems in many areas of science and applied science, engineering, economics, and medicine [1–9], such as image processing, machine learning, and intensity-modulated radiation therapy treatment planning [10–15]. The SMVIP is to find \(x^{*} \in H_{1}\) such that
and such that
and we will denote by Ω the solution set of (1.1) and (1.2). That is,
To solve SMVIP via the fixed-point theory, for \(\lambda >0\), we define the mappings \(J_{\lambda}^{f_{1},B_{1}}:H_{1} \rightarrow H_{1}\) and \(J_{\lambda}^{f_{2},B_{2}}:H_{2} \rightarrow H_{2}\) as follows:
and
where \(J_{\lambda}^{B_{1}} = (I+\lambda B_{1})^{-1}\) and \(J_{\lambda}^{B_{2}}=(I+\lambda B_{2})^{-1}\) are two resolvent operators of \(B_{1}\) and \(B_{2}\) for \(\lambda >0\), respectively. For \(x\in H_{1}\) and \(y=Ax\in H_{2}\), we see that
and in the same way, we have
This suggests the following iterative process for solving SMVIP, called the forward–backward splitting algorithm (FBSA): \(x_{1} \in H_{1}\) and
for all \(n\in \mathbb{N}\), where \(\lambda ,\gamma > 0\). Moudafi [16] proved that the sequence \(\{x_{n}\}\) generated by FBSA converges weakly to a solution of SMVIP under the conditions \(\gamma \in (0,\frac{1}{L})\) and \(\lambda \in (0,2\psi )\), where \(L=\|A\|^{2}\) and \(\psi =\min \{\psi _{1},\psi _{2}\}\).
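To make this structure concrete, the following is a minimal finite-dimensional sketch of one FBSA step of the form recalled above, with \(H_{1}=\mathbb{R}^{n}\) and \(H_{2}=\mathbb{R}^{m}\); the function handles resolventU and resolventV, standing in for \(J_{\lambda}^{f_{1},B_{1}}\) and \(J_{\lambda}^{f_{2},B_{2}}\), and the name fbsa_step are our illustrative assumptions, not part of [16].

```matlab
% Minimal sketch of one FBSA step (H1 = R^n, H2 = R^m). The handles
% resolventU and resolventV stand in for the forward-backward splitting
% mappings J_lambda^{f1,B1} and J_lambda^{f2,B2} defined above.
function x_next = fbsa_step(x, A, resolventU, resolventV, gamma)
    Ax = A * x;                                     % forward transfer to H2
    u  = x + gamma * (A' * (resolventV(Ax) - Ax));  % correction through the adjoint of A
    x_next = resolventU(u);                         % backward (resolvent) step in H1
end
```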
Let \(F_{1}:H_{1} \rightarrow \mathbb{R}\) and \(F_{2}:H_{2} \rightarrow \mathbb{R}\) be two convex and differentiable functions and \(G_{1}: H_{1} \rightarrow \mathbb{R} \cup \{ \infty \}\) and \(G_{2}: H_{2} \rightarrow \mathbb{R}\cup \{ \infty \}\) be two convex and lower semicontinuous functions. The SMVIP can be reduced as follows.
If \(f_{1}=\nabla F_{1}\), \(f_{2}=\nabla F_{2}\) and \(B_{1} = \partial G_{1}\), \(B_{2} = \partial G_{2}\), where \(\nabla F_{1}\), \(\nabla F_{2}\) are two gradients of \(F_{1}\), \(F_{2}\), respectively, and \(\partial G_{1}\), \(\partial G_{2}\) are two subdifferentials of \(G_{1}\), \(G_{2}\), respectively, defined by
and
then, SMVIP is reduced to a split convex minimization problem (SCMP), which is to find \(x^{*} \in H_{1}\) such that
and such that \(y^{*} = Ax^{*} \in H_{2}\) solves
and we will denote by Γ the solution set of (1.3) and (1.4). That is,
If \(B_{1} = \partial G_{1} = \partial i_{C}\) and \(B_{2}= \partial G_{2} =\partial i_{Q}\) are two subdifferentials of an indicator function of nonempty, closed, and convex subsets \(C\subset H_{1}\) and \(Q\subset H_{2}\), respectively, defined by
then SMVIP is reduced to a split variational inequality problem (SVIP), which is to find \(x^{*}\in C\) such that
and \(y^{*}=Ax^{*} \in Q\) such that
If \(f_{1} = \nabla F_{1}\), \(f_{2} = \nabla F_{2} \), and \(B_{1} = B_{2} = 0\) then SMVIP is reduced to a split feasibility problem (SFP), which is to find \(x^{*}\in H_{1}\) such that
and \(y^{*}=Ax^{*} \in H_{2}\) such that
If \(f_{2} = B_{2} = 0\) then SMVIP is reduced to a monotone variational inclusion problem (MVIP), which is to find \(x^{*}\in H_{1}\) such that
and when \(f_{1}=\nabla F_{1}\) and \(B_{1} = \partial G_{1}\), it can be reduced to a convex minimization problem (CMP), which is to find \(x^{*}\in H_{1}\) such that
Recall that the proximity operators \(\operatorname{prox}_{\lambda G_{1}}\) of \(\lambda G_{1}\) and \(\operatorname{prox}_{\eta G_{2}}\) of \(\eta G_{2}\) for \(\lambda ,\eta > 0\), respectively, are defined as follows:
and
For \(x \in H_{1}\), we see that
and in the same way, for \(y \in H_{2}\), we have
Therefore, SCMP is reduced to finding \(x^{*} \in H_{1}\) such that
and such that \(y^{*} = Ax^{*} \in H_{2}\) solves
Many researchers have proposed, analyzed, and modified FBSA for solving SMVIP, as well as other problems such as the variational inclusion problem and related optimization problems (see also [17–28]). The forward–backward splitting mapping with errors was introduced by Combettes and Wajs (see [12] for details). Recently, Tianchai introduced a new iterative shrinkage thresholding algorithm (NISTA) with an error, based on the single forward–backward splitting mapping with an error, for solving MVIP together with the fixed-point problem for a nonexpansive mapping S (see [29]), as follows: \(x_{0},x_{1} \in H_{1}\) and
for all \(n\in \mathbb{N}\), and also introduced an improved fast iterative shrinkage thresholding algorithm (IFISTA) with an error for solving the MVIP of the image-deblurring problem (see [30]) as follows: \(x_{0},x_{1} \in H_{1}\) and
for all \(n\in \mathbb{N}\), where \(J_{\lambda _{n}}^{B_{1}} = (I+\lambda _{n} B_{1})^{-1}\) is a resolvent operator of \(B_{1}\) for \(\lambda _{n} > 0\), f is a contraction mapping, and \(\{\alpha _{n}\} \subset (0,1)\), \(\{\theta _{n}\} \subset [0,1)\), \(\{ \lambda _{n}\} \subset (0,2\psi _{1})\) and \(\{\varepsilon _{n}\} \subset H_{1}\).
We introduce two forward–backward splitting mappings with errors \(J_{\lambda _{n},\varepsilon _{n}}^{f_{1},B_{1}}:H_{1} \rightarrow H_{1}\) and \(J_{\eta _{n},\xi _{n}}^{f_{2},B_{2}}:H_{2} \rightarrow H_{2}\) as follows:
and
for all \(n\in \mathbb{N}\), where \(J_{\lambda _{n}}^{B_{1}} = (I+\lambda _{n} B_{1})^{-1}\) and \(J_{\eta _{n}}^{B_{2}}=(I+\eta _{n} B_{2})^{-1}\) are two resolvent operators of \(B_{1}\) and \(B_{2}\) for \(\lambda _{n},\eta _{n}>0\), respectively, and \(\{\varepsilon _{n}\} \subset H_{1}\), \(\{\xi _{n}\} \subset H_{2}\). In this paper, we introduce the forward–backward splitting algorithm with errors (FBSA_Err) for solving SMVIP under some mild appropriate conditions on their parameters as follows: \(x_{0},x_{1} \in H_{1}\) and
for all \(n\in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1)\), \(\{\theta _{n}\} \subset [0,1)\), \(\{ \lambda _{n}\} \subset (0,2\psi _{1}]\), \(\{\eta _{n}\} \subset (0,2 \psi _{2}]\), and \(\{\gamma _{n}\} \subset (0,\frac{1}{L}) \), where \(L=\|A\|^{2}\). Moreover, it can be applied to solve SCMP under some mild appropriate conditions on the parameters by letting \(f_{1}=\nabla F_{1}\), \(f_{2}=\nabla F_{2}\) and \(B_{1} = \partial G_{1}\), \(B_{2} = \partial G_{2}\) as follows: \(x_{0},x_{1} \in H_{1}\) and
for all \(n\in \mathbb{N}\).
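Since the displayed scheme is the authoritative definition of FBSA_Err, we record here only a hedged finite-dimensional sketch of one iteration, assuming the inertial viscosity structure indicated by the parameters above; the handles JU and JV, which evaluate the forward–backward splitting mappings with errors \(J_{\lambda _{n},\varepsilon _{n}}^{f_{1},B_{1}}\) and \(J_{\eta _{n},\xi _{n}}^{f_{2},B_{2}}\), and the name fbsa_err_step are our illustrative assumptions.

```matlab
% Hedged sketch of one FBSA_Err iteration (H1 = R^n, H2 = R^m).
% JU(y, e) ~ J_{lambda_n}^{B1}((I - lambda_n f1)y + e)  (resolvent step with error e)
% JV(v, e) ~ J_{eta_n}^{B2}((I - eta_n f2)v + e)        (resolvent step with error e)
function x_next = fbsa_err_step(x, x_prev, A, f, JU, JV, alpha_n, theta_n, gamma_n, eps_n, xi_n)
    z  = x + theta_n * (x - x_prev);                % inertial extrapolation
    Az = A * z;
    y  = z + gamma_n * (A' * (JV(Az, xi_n) - Az));  % transfer step via the adjoint of A
    x_next = alpha_n * f(x) + (1 - alpha_n) * JU(y, eps_n);  % viscosity + backward step
end
```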
Our work is divided into several sections. In Sect. 2, some basic definitions and concepts are provided. In Sect. 3, the strong convergence theorem for FBSA_Err is proved. In Sect. 4, we apply our result to image restoration, namely to the image-feature extraction with multiple-image blends problem, as well as to the split minimization problem and the convex minimization problem, and we demonstrate the effectiveness of the sequence constructed by the inertial technique.
2 Preliminaries
Let C be a nonempty closed convex subset of a real Hilbert space H. We will use the notation: → to denote the strong convergence, ⇀ to denote the weak convergence, and \(\operatorname{Fix}(T) = \{x:Tx=x \}\) to denote the fixed-point set of the mapping T.
Recall that the metric projection \(P_{C}: H \rightarrow C\) is defined as follows: for each \(x \in H\), \(P_{C} x\) is the unique point in C satisfying
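As a concrete illustration (our own example, not one used later), the projection onto a closed ball \(C = \{x : \|x-c\| \leq r\}\) has the closed form sketched below.

```matlab
% Metric projection onto the closed ball C = {x : norm(x - c) <= r},
% an illustrative closed-form instance of P_C.
function p = project_ball(x, c, r)
    d = norm(x - c);
    if d <= r
        p = x;                      % x already lies in C
    else
        p = c + (r / d) * (x - c);  % radially scale back to the boundary of C
    end
end
```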
The operator \(T:H\rightarrow H\) is called:
(i) monotone if
$$ \langle x-y,Tx-Ty \rangle \geq 0, \quad \forall x,y \in H, $$
(ii) L-Lipschitzian with \(L>0\) if
$$ \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert , \quad \forall x,y \in H, $$
(iii) k-contraction if it is k-Lipschitzian with \(k \in (0,1)\),
(iv) nonexpansive if it is 1-Lipschitzian,
(v) firmly nonexpansive if
$$ \Vert Tx-Ty \Vert ^{2} \leq \Vert x-y \Vert ^{2} - \bigl\Vert (I-T)x-(I-T)y \bigr\Vert ^{2}, \quad \forall x,y \in H, $$
(vi) α-strongly monotone with \(\alpha > 0\) if
$$ \langle Tx-Ty,x-y \rangle \geq \alpha \Vert x-y \Vert ^{2}, \quad \forall x,y \in H, $$
(vii) α-inverse strongly monotone with \(\alpha > 0\) if
$$ \langle Tx-Ty,x-y \rangle \geq \alpha \Vert Tx-Ty \Vert ^{2}, \quad \forall x,y \in H. $$
Let B be a mapping of H into \(2^{H}\). The domain and the range of B are denoted by \(D(B) = \{x\in H : Bx \neq \emptyset \}\) and \(R(B) = \bigcup \{Bx:x \in D(B) \}\), respectively. The inverse of B, denoted by \(B^{-1}\), is defined by \(x\in B^{-1}y\) if and only if \(y\in Bx\). A multivalued mapping B is said to be a monotone operator on H if \(\langle x-y,u-v\rangle \geq 0\) for all \(x,y \in D(B)\), \(u \in Bx\) and \(v \in By\). A monotone operator B on H is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on H. For a maximal monotone operator B on H and \(r>0\), we define the single-valued resolvent operator \(J_{r}^{B}:H\rightarrow D(B)\) by \(J_{r}^{B}=(I+rB)^{-1}\). It is well known that \(J_{r}^{B}\) is firmly nonexpansive and \(\operatorname{Fix}(J_{r}^{B})=B^{-1}(0)\).
We collect together some known lemmas that are the main tools in proving our result.
Lemma 2.1
([31])
Let C be a nonempty closed convex subset of a real Hilbert space H. Then,
(i) \(\|x \pm y\|^{2} = \|x\|^{2} \pm 2 \langle x,y \rangle + \|y\|^{2}\), \(\forall x,y\in H\),
(ii) \(\|\lambda x+(1-\lambda )y\|^{2} = \lambda \|x\|^{2}+(1-\lambda )\|y \|^{2}-\lambda (1-\lambda )\|x-y\|^{2}\), \(\forall x,y\in H, \lambda \in \mathbb{R}\),
(iii) \(\langle x-P_{C} x,P_{C} x - y \rangle \geq 0\), \(\forall x \in H\), \(y \in C\),
(iv) \(\| P_{C} x - P_{C} y\|^{2} \leq \langle x-y,P_{C} x - P_{C} y \rangle \), \(\forall x,y\in H\).
Lemma 2.2
([32])
Let H and K be two real Hilbert spaces and let \(T:K \rightarrow K\) be a firmly nonexpansive mapping such that \(\|(I-T)x\|\) is a convex function from K to \(\overline{\mathbb{R}}=[-\infty ,+\infty ]\). Let \(A:H\rightarrow K\) be a bounded linear operator and \(f(x) = \frac{1}{2}\|(I-T)Ax\|^{2} \) for all \(x\in H\). Then,
(i) f is convex and differentiable,
(ii) \(\nabla f(x) = A^{*}(I-T)Ax \) for all \(x\in H\), where \(A^{*}\) denotes the adjoint of A,
(iii) f is weakly lower semicontinuous on H,
(iv) ∇f is \(\|A\|^{2}\)-Lipschitzian.
Lemma 2.3
([32])
Let H be a real Hilbert space and \(T: H\rightarrow H\) be an operator. The following statements are equivalent:
(i) T is firmly nonexpansive,
(ii) \(\|Tx-Ty\|^{2} \leq \langle x-y,Tx-Ty \rangle \), \(\forall x,y \in H\),
(iii) \(I-T\) is firmly nonexpansive.
Lemma 2.4
([33])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let the mapping \(A:C\rightarrow H\) be α-inverse strongly monotone and let \(r>0\) be a constant. Then, we have
for all \(x,y \in C\). In particular, if \(0< r\leq 2\alpha \) then \(I-rA\) is nonexpansive.
Lemma 2.5
([34] (Demiclosedness principle))
Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let \(S:C \rightarrow C\) be a nonexpansive mapping with \(\operatorname{Fix}(S)\neq \emptyset \). If the sequence \(\{x_{n}\}\subset C\) converges weakly to x and the sequence \(\{(I-S)x_{n}\}\) converges strongly to y, then \((I-S)x = y\); in particular, if \(y=0\) then \(x\in \operatorname{Fix}(S)\).
Lemma 2.6
([35, 36])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(\{T_{n}\}\) and φ be two classes of nonexpansive mappings of C into itself such that
Then, for any bounded sequence \(\{z_{n}\} \subset C\), we have:
(i) if \(\lim_{n \rightarrow \infty} \|z_{n}-T_{n}z_{n}\|=0\) then \(\lim_{n \rightarrow \infty} \|z_{n}-Tz_{n}\|=0\) for all \(T \in \varphi \), which is called the NST-condition (I),
(ii) if \(\lim_{n \rightarrow \infty} \|z_{n+1}-T_{n}z_{n}\|=0\) then \(\lim_{n \rightarrow \infty} \|z_{n}-T_{m}z_{n}\|=0\) for all \(m \in \mathbb{N}\cup \{0\}\), which is called the NST-condition (II).
Lemma 2.7
([37])
Let \(\{a_{n}\}\) and \(\{c_{n}\}\) be sequences of nonnegative real numbers such that
where \(\{\delta _{n} \}\) is a sequence in \((0,1)\) and \(\{b_{n}\}\) is a real sequence. Assume that \(\sum_{n=0}^{\infty }c_{n} < \infty \). Then, the following results hold:
(i) if \(b_{n} \leq \delta _{n} M\) for some \(M\geq 0\) then \(\{a_{n}\}\) is a bounded sequence,
(ii) if \(\sum_{n=0}^{\infty }\delta _{n} = \infty \) and \(\limsup_{n\rightarrow \infty} b_{n}/\delta _{n} \leq 0\) then \(\lim_{n\rightarrow \infty}a_{n}=0\).
Lemma 2.8
([38])
Assume that \(\{s_{n}\}\) is a sequence of nonnegative real numbers such that
and
where \(\{\mu _{n}\}\) is a sequence in \((0,1)\), \(\{\sigma _{n}\}\) is a sequence of nonnegative real numbers, and \(\{\delta _{n}\}\), \(\{\rho _{n}\}\) are real sequences such that
(i) \(\sum_{n=0}^{\infty }\mu _{n} = \infty \),
(ii) \(\lim_{n\rightarrow \infty} \rho _{n} = 0\),
(iii) if \(\lim_{k\rightarrow \infty}\sigma _{n_{k}} = 0\) then \(\limsup_{k\rightarrow \infty} \delta _{n_{k}} \leq 0\) for any subsequence \(\{n_{k}\}\) of \(\{n\}\).
Then, \(\lim_{n\rightarrow \infty} s_{n} = 0\).
3 Main result
For solving the split monotone variational inclusion problem using the forward–backward splitting algorithm (with errors), we assume an initial condition (A), as follows:
where \(\mathcal{U} = J_{\lambda}^{B_{1}} (I-\lambda f_{1})\), \(\mathcal{V} = J_{ \eta}^{B_{2}} (I-\eta f_{2})\) and \(\mathcal{U}_{n} = J_{\lambda _{n}}^{B_{1}} (I-\lambda _{n} f_{1}) \), \(\mathcal{V}_{n} = J_{\eta _{n}}^{B_{2}} (I-\eta _{n} f_{2})\) with \(\lambda _{n} \rightarrow \lambda \) and \(\eta _{n} \rightarrow \eta \) as \(n \rightarrow \infty \), where \(f_{1}\), \(f_{2}\), \(B_{1}\), \(B_{2}\), and \(\lambda _{n}\), \(\eta _{n}\), λ, η are defined below.
Theorem 3.1
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1} \rightarrow H_{2}\) be a bounded linear operator. Let \(f_{1}: H_{1} \rightarrow H_{1}\) and \(f_{2}: H_{2} \rightarrow H_{2}\) be two \(\psi _{1}\) and \(\psi _{2}\) inverse strongly monotone mappings, respectively, and \(B_{1}: H_{1} \rightarrow 2^{H_{1}}\) and \(B_{2}: H_{2} \rightarrow 2^{H_{2}}\) be two multivalued maximal monotone operators. Let \(f: H_{1} \rightarrow H_{1}\) be a k-contraction mapping, and assume that Ω is nonempty and satisfies the condition (A). Let \(x_{0},x_{1} \in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1)\), \(\{\gamma _{n}\} \subset [a_{1},b_{1}] \subset (0,\frac{1}{L})\) such that \(L = \|A\|^{2}\), and \(\{\lambda _{n}\} \subset [a_{2},b_{2}] \subset (0,2\psi _{1}]\), \(\{ \eta _{n}\} \subset [a_{3},b_{3}] \subset (0,2\psi _{2}]\) such that \(\lambda _{n} \rightarrow \lambda \), \(\eta _{n} \rightarrow \eta \) as \(n \rightarrow \infty \), and \(\{\varepsilon _{n}\} \subset H_{1}\), \(\{\xi _{n}\} \subset H_{2}\), \(\{ \theta _{n}\} \subset [0,1)\) satisfy the following conditions:
(C1) \(\lim_{n\rightarrow \infty} \alpha _{n} = 0\) and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
(C2) \(\lim_{n\rightarrow \infty}\frac{\|\varepsilon _{n}\|}{\alpha _{n}} = \lim_{n\rightarrow \infty}\frac{\|\xi _{n}\|}{\alpha _{n}} =0\),
(C3) \(\sum_{n=1}^{\infty }\|\varepsilon _{n}\| < \infty \) and \(\sum_{n=1}^{\infty }\|\xi _{n}\| < \infty \),
(C4) \(\lim_{n\rightarrow \infty} \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\),
then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Omega \), where \(x^{*} = P_{\Omega }f(x^{*})\).
Proof
Selecting \(p \in \Omega \) and fixing \(n \in \mathbb{N}\), it follows that \(p = J_{\lambda _{n}}^{B_{1}}(I-\lambda _{n} f_{1})p\) and \(Ap = J_{\eta _{n}}^{B_{2}}(I-\eta _{n} f_{2})Ap\). First, we will show that \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) are bounded. Since,
and on the other hand, we have
and
Therefore, it follows by nonexpansiveness of \(J_{\eta _{n}}^{B_{2}}\) and \(I-\eta _{n} f_{2}\) that
This implies that
Hence, by (3.1) and (3.3), and the nonexpansiveness of \(J_{\lambda _{n}}^{B_{1}}\) and \(I-\lambda _{n} f_{1}\), we have
Hence, by conditions (C3) and (C4), and putting \(M = \frac{1}{1-k} ( \|f(p)-p\|+ \sup_{n\in \mathbb{N}} \frac{\theta _{n}}{\alpha _{n}} \|x_{n}-x_{n-1}\| ) \geq 0\) in Lemma 2.7 (i), we conclude that the sequence \(\{\|x_{n}-p\|\}\) is bounded; that is, the sequence \(\{x_{n}\}\) is bounded, and so is \(\{z_{n}\}\). Moreover, by condition (C3), we obtain \(\lim_{n \rightarrow \infty} \varepsilon _{n} = \lim_{n \rightarrow \infty} \xi _{n} =0\); it follows that the sequence \(\{y_{n}\}\) is also bounded.
Since \(P_{\Omega }f\) is a k-contraction on \(H_{1}\), by Banach’s contraction principle there exists a unique element \(x^{*} \in H_{1}\) such that \(x^{*} = P_{\Omega }f(x^{*})\), that is, \(x^{*} \in \Omega \); it follows that \(x^{*} = J_{\lambda _{n}}^{B_{1}}(I-\lambda _{n} f_{1})x^{*}\) and \(Ax^{*} = J_{\eta _{n}}^{B_{2}}(I-\eta _{n} f_{2})Ax^{*}\). Now, we will show that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). On the other hand, we have
Therefore,
Since,
it follows by (3.2) and (3.4), and the nonexpansiveness of \(J_{\lambda _{n}}^{B_{1}}\) and \(I-\lambda _{n} f_{1}\) that
Therefore,
and
which are of the forms
and
respectively, where \(s_{n}=\|x_{n}-x^{*}\|^{2}\), \(\mu _{n} = \alpha _{n} (1-k) \), \(\delta _{n} = \frac{2}{1-k} \frac{\theta _{n}}{\alpha _{n}} \| x_{n}-x_{n-1}\| \|z_{n}-x^{*} \|+\frac{2}{\sqrt{L}(1-k)}\frac{\|\xi _{n}\|}{\alpha _{n}}\|z_{n}-x^{*} \| +\frac{1}{L(1-k)}\frac{\|\xi _{n}\|}{\alpha _{n}} \|\xi _{n}\| + \frac{2}{1-k}\frac{1}{1+\alpha _{n} (1-k)} \langle f(x^{*})-x^{*},x_{n+1}-x^{*} \rangle +\frac{2}{1-k}\frac{\|\varepsilon _{n}\|}{\alpha _{n}} \|y_{n}-x^{*} \|+\frac{1}{1-k} \frac{\|\varepsilon _{n}\|}{\alpha _{n}} \| \varepsilon _{n}\|\), \(\sigma _{n} = (1-\alpha _{n} (1-k))\gamma _{n} (1-L \gamma _{n}) \|J_{\eta _{n}}^{B_{2}}((I-\eta _{n} f_{2})Az_{n}+\xi _{n})-Az_{n} \|^{2}\) and \(\rho _{n} = 2\alpha _{n} \frac{\theta _{n}}{\alpha _{n}} \| x_{n}-x_{n-1} \| \|z_{n}-x^{*} \| +\frac{2}{\sqrt{L}}\|z_{n}-x^{*}\|\|\xi _{n}\|+ \frac{1}{L}\|\xi _{n}\|^{2}+2\alpha _{n} \|f(x^{*})-x^{*}\| \|x_{n+1}-x^{*} \| +2\|y_{n}-x^{*}\|\|\varepsilon _{n}\|+\|\varepsilon _{n}\|^{2}\). Therefore, using conditions (C1), (C3), and (C4), we can check that all those sequences satisfy conditions (i) and (ii) in Lemma 2.8. To complete the proof, we verify that condition (iii) in Lemma 2.8 is satisfied. Let \(\lim_{i\rightarrow \infty}\sigma _{n_{i}} = 0\). Then, by conditions (C1) and (C3), we have
Consider a subsequence \(\{z_{n_{i}}\}\) of \(\{z_{n}\}\). As \(\{z_{n}\}\) is bounded, so is \(\{z_{n_{i}}\}\), and there exists a subsequence \(\{z_{n_{i_{j}}}\}\) of \(\{z_{n_{i}}\}\) that converges weakly to \(x \in H_{1}\). Without loss of generality, we can assume that \(z_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \). It follows that \(Az_{n_{i}} \rightharpoonup Ax\) as \(i\rightarrow \infty \). Hence, by (3.5) and the demiclosedness at zero in Lemma 2.5, we obtain \(y = Ax \in \operatorname{Fix}(J_{\eta}^{B_{2}} (I-\eta f_{2}))\), indeed also, \(y = Ax \in \operatorname{Fix}(J_{\eta _{n_{i}}}^{B_{2}} (I-\eta _{n_{i}}f_{2}))\), that is \(y = Ax\) solves \(0 \in f_{2}(y)+B_{2}(y)\). Since,
and
by conditions (C1), (C3), (C4), and (3.5), we obtain \(y_{n_{i}} \rightharpoonup x\) and \(x_{n_{i}} \rightharpoonup x\) as \(i\rightarrow \infty \); it follows that \(y_{n_{i}} - x_{n_{i}} \rightharpoonup 0\) as \(i \rightarrow \infty \). Hence, by the nonexpansiveness of \(J_{\lambda _{n_{i}}}^{B_{1}}\) and \(I-\lambda _{n_{i}} f_{1}\), we obtain
Hence, by conditions (C1) and (C3), we have
Therefore, by NST-condition (II) in Lemma 2.6, we obtain
Hence, by the demiclosedness at zero in Lemma 2.5 again, we obtain \(x \in \operatorname{Fix}(J_{\lambda}^{B_{1}} (I-\lambda f_{1}))\), indeed also, \(x \in \operatorname{Fix}(J_{\lambda _{n_{i}}}^{B_{1}} (I-\lambda _{n_{i}} f_{1}))\), that is x solves \(0 \in f_{1}(x)+B_{1}(x)\). It follows that \(x = J_{\lambda _{n_{i}}}^{B_{1}} (I-\lambda _{n_{i}} f_{1})x \in \Omega \). Since, by the nonexpansiveness of \(J_{\lambda _{n_{i}}}^{B_{1}}\) and \(I-\lambda _{n_{i}} f_{1}\), we have
it follows by conditions (C1) and (C3) that \(x_{n_{i}+1} - x_{n_{i}} \rightharpoonup 0\) as \(i \rightarrow \infty \). Hence, by Lemma 2.1(iii) we obtain
It follows by conditions (C1), (C2), (C3), and (C4) that \(\limsup_{i\rightarrow \infty} \delta _{n_{i}} \leq 0\). Hence, by Lemma 2.8, we conclude that \(x_{n} \rightarrow x^{*}\) as \(n\rightarrow \infty \). This completes the proof. □
Remark 3.2
Indeed, the parameter \(\theta _{n}\) can be chosen as follows:
and for the speed up of convergence, the parameter \(\theta _{n}\) is often chosen as follows:
where \(N \in \mathbb{N}\) and \(\{\omega _{n}\}\) is a positive sequence such that \(\omega _{n} = o(\alpha _{n})\).
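In implementations, a common way to realize such a choice, consistent with condition (C4), is to cap the inertial parameter by \(\omega _{n}/\|x_{n}-x_{n-1}\|\); the sketch below assumes this usual form (the displayed formulas of Remark 3.2 are the authoritative ones).

```matlab
% A common implementable choice of theta_n consistent with condition (C4):
% capping by omega_n/||x_n - x_{n-1}|| gives theta_n*||x_n - x_{n-1}|| <= omega_n,
% so (theta_n/alpha_n)*||x_n - x_{n-1}|| -> 0 whenever omega_n = o(alpha_n).
function theta = choose_theta(x, x_prev, omega_n, theta_bar)
    gap = norm(x - x_prev);
    if gap > 0
        theta = min(theta_bar, omega_n / gap);  % theta_bar in [0,1) is the nominal value
    else
        theta = theta_bar;                      % any value in [0,1) works when x_n = x_{n-1}
    end
end
```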
4 Applications and numerical examples
In this section, we give some applications of our result using FBSA_Err to the image-feature extraction with multiple-image blends problem, the split minimization problem, and the convex minimization problem.
4.1 Image-feature extraction with multiple-image blends problem
Let \(F_{1}:H_{1} \rightarrow \mathbb{R}\) and \(F_{2}:H_{2} \rightarrow \mathbb{R}\) be two convex and differentiable functions, and let \(G_{1}:H_{1} \rightarrow \mathbb{R}\cup \{ \infty \}\) and \(G_{2}:H_{2} \rightarrow \mathbb{R}\cup \{ \infty \}\) be two convex and lower semicontinuous functions such that the gradients \(\nabla F_{1}\) and \(\nabla F_{2}\) are \(\frac{1}{\psi _{1}}\)- and \(\frac{1}{\psi _{2}}\)-Lipschitz continuous and \(\partial G_{1}\) and \(\partial G_{2}\) are the subdifferentials of \(G_{1}\) and \(G_{2}\), respectively. It is well known from [39] that the gradient of a convex differentiable function that is \(\frac{1}{\alpha}\)-Lipschitz continuous is also α-inverse strongly monotone; that is, since \(\nabla F_{1}\) and \(\nabla F_{2}\) are \(\frac{1}{\psi _{1}}\)- and \(\frac{1}{\psi _{2}}\)-Lipschitz continuous, they are also \(\psi _{1}\)- and \(\psi _{2}\)-inverse strongly monotone, respectively. Moreover, \(\partial G_{1}\) and \(\partial G_{2}\) are maximal monotone [40].
Putting \(f_{1}=\nabla F_{1}\), \(f_{2}=\nabla F_{2}\) and \(B_{1}=\partial G_{1}\), \(B_{2}=\partial G_{2}\) into Theorem 3.1, and assuming an initial condition (\(A^{*}\)) as follows:
where \(\mathcal{U} = \operatorname{prox}_{\lambda G_{1}} (I-\lambda \nabla F_{1})\), \(\mathcal{V} = \operatorname{prox}_{\eta G_{2}} (I-\eta \nabla F_{2})\) and \(\mathcal{U}_{n} = \operatorname{prox}_{\lambda _{n} G_{1}} (I-\lambda _{n} \nabla F_{1})\), \(\mathcal{V}_{n} = \operatorname{prox}_{\eta _{n} G_{2}} (I- \eta _{n} \nabla F_{2})\) with \(\lambda _{n} \rightarrow \lambda \) and \(\eta _{n} \rightarrow \eta \) as \(n \rightarrow \infty \), where \(F_{1}\), \(F_{2}\), \(G_{1}\), \(G_{2}\) and \(\lambda _{n}\), \(\eta _{n}\), λ, η are defined below, we obtain the following result.
Theorem 4.1
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1} \rightarrow H_{2}\) be a bounded linear operator. Let \(F_{1}:H_{1} \rightarrow \mathbb{R}\) and \(F_{2}:H_{2} \rightarrow \mathbb{R}\) be two convex and differentiable functions with \(\frac{1}{\psi _{1}}\)- and \(\frac{1}{\psi _{2}}\)-Lipschitz continuous gradients \(\nabla F_{1}\) and \(\nabla F_{2}\), respectively, and \(G_{1}: H_{1} \rightarrow \mathbb{R}\cup \{ \infty \}\) and \(G_{2}: H_{2} \rightarrow \mathbb{R}\cup \{ \infty \}\) be two convex and lower semicontinuous functions. Let \(f: H_{1} \rightarrow H_{1}\) be a k-contraction mapping, and assume that Γ is nonempty and satisfies the condition (\(A^{*}\)). Let \(x_{0},x_{1} \in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1)\), \(\{\gamma _{n}\} \subset [a_{1},b_{1}] \subset (0,\frac{1}{\|A\|^{2}})\), and \(\{\lambda _{n}\} \subset [a_{2},b_{2}] \subset (0,2\psi _{1}]\), \(\{ \eta _{n}\} \subset [a_{3},b_{3}] \subset (0,2\psi _{2}]\) such that \(\lambda _{n} \rightarrow \lambda \), \(\eta _{n} \rightarrow \eta \) as \(n \rightarrow \infty \), and \(\{\varepsilon _{n}\} \subset H_{1}\), \(\{\xi _{n}\} \subset H_{2}\), \(\{ \theta _{n}\} \subset [0,1)\) satisfy the following conditions:
(C1) \(\lim_{n\rightarrow \infty} \alpha _{n} = 0\) and \(\sum_{n=1}^{\infty }\alpha _{n} = \infty \),
(C2) \(\lim_{n\rightarrow \infty}\frac{\|\varepsilon _{n}\|}{\alpha _{n}} = \lim_{n\rightarrow \infty}\frac{\|\xi _{n}\|}{\alpha _{n}} =0\),
(C3) \(\sum_{n=1}^{\infty }\|\varepsilon _{n}\| < \infty \) and \(\sum_{n=1}^{\infty }\|\xi _{n}\| < \infty \),
(C4) \(\lim_{n\rightarrow \infty} \frac{\theta _{n}}{\alpha _{n}}\|x_{n}-x_{n-1} \| = 0\),
then the sequence \(\{x_{n}\}\) converges strongly to a point \(x^{*} \in \Gamma \), where \(x^{*} = P_{\Gamma }f(x^{*})\).
We now apply the fixed-point optimization algorithm of Theorem 4.1 to image-feature extraction with multiple-image blends. The image information, i.e., the hidden image (message) H, goes through a hybrid image-encryption system to the encrypted image L using the linear chaos-based method, and then through a digital image-watermarking system that uses a linear combination of the superposition carrier images \(C_{1},C_{2},\ldots,C_{N}\) and the encrypted image with additive noise \(L^{\dagger}\) to form the mixed image M; see Fig. 1.
The discrete logistic chaotic map is defined as follows: \(x_{1}\in (0,1)\) and
where \(0 < \mu \leq 4\); for \(3.57 \leq \mu \leq 4\), the sequence \(\{x_{n}\}\) generated by the logistic chaotic map is unpredictable (chaotic). We introduce the linear method for image encryption using logistic chaotic maps as follows:
where \(A*x\) is the Hadamard product (element-wise multiplication) of A and x, \(A\in \mathbb{R}^{m \times m}\) represents a known image-encryption operator (called the point-spread function, PSF) whose stacked columns correspond to a discrete logistic chaotic map \(\{x_{n}\}_{n=1}^{m^{2}}\), \(L\in \mathbb{R}^{m\times m}\) is a known encrypted image, \(\varepsilon \in \mathbb{R}^{m\times m}\) is an unknown white Gaussian noise, and \(x \in \mathbb{R}^{m\times m}\) is the unknown image to be decrypted (the estimated image).
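The encryption step can be sketched in MATLAB as follows (the file name 'hidden.png' is a placeholder for a monochrome hidden image):

```matlab
% Sketch of the linear chaos-based encryption: generate the logistic chaotic
% sequence, reshape it into the PSF matrix A, and encrypt element-wise.
m  = 256;
mu = 3.57;                                  % chaotic regime: 3.57 <= mu <= 4
xt = zeros(m^2, 1);
xt(1) = 0.25;                               % initial value x_1 in (0,1)
for n = 1:m^2-1
    xt(n+1) = mu * xt(n) * (1 - xt(n));     % x_{n+1} = mu*x_n*(1 - x_n)
end
A = reshape(xt, [m, m]);                    % stacked columns of A follow the chaotic map
H = imresize(im2double(imread('hidden.png')), [m, m]);  % placeholder hidden image
L = A .* H;                                 % Hadamard (element-wise) encryption L = A*H
Ldag = L + 1e-3 * randn(size(L));           % encrypted image with additive Gaussian noise
```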
Let \(C_{1},C_{2},\ldots,C_{N} \in \mathbb{R}^{m \times m}\) be N-carrier images, and \(\{\mu _{n}\}_{n=1}^{N} \subset (0,1)\). We introduce the linear combination method for image mixing of superposition carrier images \(C_{1},C_{2},\ldots, C_{N}\), and the encrypted image with additive white Gaussian noise \(L^{\dagger }= L+\varepsilon \) as follows:
where \(M = M_{N}\), which is called the superimposed mixed image. For each \(N \in \mathbb{N}\), it is clear that
where \(\rho = \prod_{i=1}^{N} (1-\mu _{i})\) and
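In code, the blend can be formed recursively, each carrier \(C_{i}\) being mixed in with weight \(\mu _{i}\); the sketch below is consistent with the expansion \(M = \rho L^{\dagger }+ \sum_{i=1}^{N} \beta _{i} C_{i}\) with \(\rho = \prod_{i=1}^{N}(1-\mu _{i})\) and \(\beta _{i} = \mu _{i} \prod_{j=i+1}^{N}(1-\mu _{j})\) (the latter formula is our reading of the displayed weights, matching the values computed in Example 4.2).

```matlab
% Sketch of the superposition mixing: M_i = (1 - mu_i)*M_{i-1} + mu_i*C_i
% starting from M_0 = Ldag, so that M = rho*Ldag + sum_i beta_i*C_i.
function [M, rho, beta] = mix_images(Ldag, C, mu)
    N = numel(C);                   % C is a cell array of carrier images
    M = Ldag;
    for i = 1:N
        M = (1 - mu(i)) * M + mu(i) * C{i};    % blend in the i-th carrier
    end
    rho  = prod(1 - mu);                       % residual weight of Ldag in M
    beta = zeros(1, N);
    for i = 1:N
        beta(i) = mu(i) * prod(1 - mu(i+1:N)); % weight of C_i in the expansion
    end
end
```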
For brevity, we use the notation Ax instead of \(A*x\). In order to solve (4.1) and (4.2) for a point in the solution set Γ using Theorem 4.1, we let \(F_{1}(x) = \|Ax-L^{\dagger} \|_{2}^{2}\), \(F_{2}(y) = \|\rho (y) - (M - \sum_{i=1}^{N} \beta _{i} C_{i} ) \|_{2}^{2} \) and \(G_{1}(x) =\kappa _{1} \|x\|_{1}\), \(G_{2}(y) =\kappa _{2} \|y\|_{1}\) with \(y=Ax \in \mathbb{R}^{m \times m}\) for all \(x \in \mathbb{R}^{m \times m}\), where \(\kappa _{1},\kappa _{2} > 0\), and for \((x_{1},x_{2},\ldots,x_{m^{2}})^{T} \in \mathbb{R}^{m^{2}}\) corresponding to the stacked columns of \(x\in \mathbb{R}^{m \times m}\), \(\|x\|_{1} = \sum_{i=1}^{m^{2}} |x_{i}|\) and \(\|x\|_{2} = \sqrt{\sum_{i=1}^{m^{2}} |x_{i}|^{2}}\). That is, we find the decrypted (hidden) image \(x^{*} \in \mathbb{R}^{m \times m}\) that solves
and such that the watermark (superposition images) extracted image \(y^{*} = Ax^{*} \in \mathbb{R}^{m \times m}\) solves
It is well known from Lemma 2.2 by putting \(T(Ax) = P_{\mathbb{R}^{m\times m}} Ax = L^{\dagger}\) that \(\nabla F_{1}(x) = 2A^{T} (Ax-L^{\dagger})\) and \(\nabla F_{1}\) is \(\frac{1}{\psi _{1}}\)-Lipschitzian such that \(\psi _{1} = \frac{1}{2\|A\|^{2}}\), and putting \(T(\rho (Ax)) = P_{\mathbb{R}^{m\times m}} \rho (Ax) = M - \sum_{i=1}^{N} \beta _{i} C_{i}\) that \(\nabla F_{2}(x) = 2\rho A^{T} (\rho (Ax)-(M - \sum_{i=1}^{N} \beta _{i} C_{i} ) )\) and \(\nabla F_{2}\) is \(\frac{1}{\psi _{2}}\)-Lipschitzian such that \(\psi _{2} =\frac{1}{2\rho ^{2} \|A\|^{2}}\), and \(A^{T}\) stands for the transpose of A, and \(\|A\|\) is the largest singular value of A (i.e., the square root of the largest eigenvalue of the matrix \(A^{T} A\)) or the spectral norm \(\|A\|_{2}\).
By [29] and the references therein, for \((u_{1},u_{2},\ldots,u_{m^{2}})^{T},(\tilde{u} _{1},\tilde{u}_{2},\ldots, \tilde{u}_{m^{2}})^{T} \in \mathbb{R}^{m^{2}}\) corresponding with stacking the columns of \(u,\tilde{u}\in \mathbb{R}^{m \times m}\), respectively, and for each \(n \in \mathbb{N}\), we have
where \(v_{i} = \operatorname{sign}(u_{i})\max \{|u_{i}|-\lambda _{n} \kappa _{1},0\}\) and \(\tilde{v}_{i} = \operatorname{sign}(\tilde{u}_{i})\max \{|(A\tilde{u})_{i}|- \eta _{n} \kappa _{2},0\}\) for all \(i=1,2,\ldots,m^{2}\) such that \((v_{1},v_{2},\ldots,v_{m^{2}})^{T}, (\tilde{v}_{1},\tilde{v}_{2},\ldots, \tilde{v}_{m^{2}})^{T} \in \mathbb{R}^{m^{2}}\) corresponding to stacking the columns of \(v,\tilde{v}\in \mathbb{R}^{m \times m}\), respectively.
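The entrywise formulas above are the classical soft-thresholding (shrinkage) operator, i.e., the proximity operator of a scaled \(\ell _{1}\)-norm; a one-line sketch:

```matlab
% Entrywise soft thresholding: the proximity operator of t*||.||_1 applied
% to a (stacked) image u, as in v_i = sign(u_i)*max(|u_i| - t, 0).
function v = soft_threshold(u, t)
    % t = lambda_n*kappa_1 (or eta_n*kappa_2) is the threshold level
    v = sign(u) .* max(abs(u) - t, 0);
end
```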
Example 4.2
We illustrate the performance of FBSA_Err in Algorithm 1 for solving the image-feature extraction with multiple-image blends problem through (4.3) and (4.4) with \(\kappa _{1} = \kappa _{2} = 10^{-4}\). We implemented the algorithm in MATLAB R2019a and ran it on a personal laptop: Intel(R) Core(TM) i5-8250U CPU @1.80 GHz, 8 GB RAM.
Let \(x=(a_{ij})\), \(x_{n}=(b_{ij}) \in \mathbb{R}^{m \times m}\) represent the hidden image and the estimated image after the first n iteration(s), respectively. We use the normalized cross-correlation (NCC) as the digital image-matching measure (better when near 1) of the images x and \(x_{n}\), which is defined by
where \(\bar{a} =\frac{1}{m^{2}} \sum_{i=1}^{m} \sum_{j=1}^{m} a_{ij}\) and \(\bar{b} = \frac{1}{m^{2}} \sum_{i=1}^{m} \sum_{j=1}^{m} b_{ij}\). We also use the signal-to-noise ratio (SNR) measure (better when large) of the images x and \(x_{n}\), and the improvement in signal-to-noise ratio (ISNR) measure (better when large) of the images x, \(x_{n}\), and \(L^{\dagger}\), which are defined (measured in decibels, dB) by
where \(L^{\dagger }\in \mathbb{R}^{m \times m}\) represents the observed encrypted image with additive noise.
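Assuming the standard definitions of these measures (NCC as the normalized cross-correlation, and SNR/ISNR as the usual decibel ratios), they can be computed as follows:

```matlab
% Quality measures of the decrypted image xn against the hidden image x,
% assuming the standard definitions; Ldag is the noisy encrypted image.
function [ncc, snr_db, isnr_db] = quality_measures(x, xn, Ldag)
    a = x(:) - mean(x(:));
    b = xn(:) - mean(xn(:));
    ncc = (a' * b) / sqrt((a' * a) * (b' * b));                 % better near 1
    snr_db  = 10 * log10(norm(x(:))^2 / norm(x(:) - xn(:))^2);  % better if large
    isnr_db = 10 * log10(norm(x(:) - Ldag(:))^2 / norm(x(:) - xn(:))^2);
end
```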
For illustration, we consider the standard test images downloaded from [41] for Woman, Pirate, and Cameraman, and image information downloaded from [42] with all images converted to the double class type of a monochrome image and resized to \(256 \times 256\) pixels by img = im2double(imread(‘image_name’)) and imresize(img,[256,256]) in MATLAB, respectively, which represent the carrier images \(C_{1},C_{2},C_{3} \in \mathbb{R}^{256 \times 256}\) and the hidden image \(H \in \mathbb{R}^{256 \times 256}\), respectively, see Fig. 2.
The hidden image H goes through chaos-based image encryption to the encrypted image \(L = A*H\), where the stacked columns of \(A \in \mathbb{R}^{256 \times 256}\) correspond to the discrete logistic chaotic map \(\tilde{x} = \{x_{n}\}_{n=1}^{256^{2}}\) with \(\mu = 3.57\) and initial value \(x_{1} = 0.25\), computed by \(A=\text{reshape}(\tilde{x},[256,256])\) and \(L=A.*H\) in MATLAB; this is followed by adding zero-mean white Gaussian noise ε with standard deviation \(10^{-3}\), giving the image \(L^{\dagger }= L+\varepsilon = L+10^{-3}*\text{randn}(\text{size}(L))\) in MATLAB, see Fig. 3.
The encrypted image with additive noise \(L^{\dagger}\) then goes through image mixing with the superposition carrier images \(C_{1} = \text{Woman}\), \(C_{2} = \text{Pirate}\), and \(C_{3} = \text{Cameraman}\), producing the superimposed mixed image M with \(\mu _{1} = 0.999\), \(\mu _{2} = 0.25\), and \(\mu _{3} = 0.5\) as follows:
or
where \(M=M_{3}\). That is,
such that \(\rho = (1-\mu _{1})(1-\mu _{2})(1-\mu _{3}) = 0.000375\), \(\beta _{1} =(1- \mu _{2})(1-\mu _{3})\mu _{1} = 0.3746\), \(\beta _{2} = (1-\mu _{3})\mu _{2} = 0.125\) and \(\beta _{3} = \mu _{3} = 0.5\), and \(L^{\dagger}\), \(M_{1}\), \(M_{2}\), and \(M_{3}\) are as in Fig. 4. We now find the decrypted (hidden) image \(x^{*} \in \mathbb{R}^{256 \times 256}\) that solves
and such that the watermark (superposition images) extracted image \(y^{*} = Ax^{*} \in \mathbb{R}^{256 \times 256}\) solves
Let \(L = \|A\|^{2}\), \(\psi _{1} = \frac{1}{2\|A\|^{2}} = \frac{1}{2L}\) and \(\psi _{2} = \frac{1}{2\rho ^{2}\|A\|^{2}} = \frac{1}{2\rho ^{2} L}\). We test several choices of the parameters \(\gamma _{n}\), \(\lambda _{n}\), and \(\eta _{n}\) for fast convergence, with \(L_{0} = \frac{1}{2L}\), \(L_{1} = \psi _{1}\), and \(L_{2} = \psi _{2}\), as in Table 1. For each \(n \in \mathbb{N}\), we take the parameters \(\lambda _{n}\) and \(\eta _{n}\) of A6 type, which is the best choice for all cases of the parameter \(\gamma _{n}\), and set \(\alpha _{n} = \frac{10^{-6}}{n+1}\) and
and the errors \(\varepsilon _{n}=\xi _{n} = \frac{M}{(n+1)^{3}}\); we also set \(f(x) = \frac{x}{5}\) for all \(x \in \mathbb{R}^{256\times 256}\) and choose the initial points \(x_{0} = x_{1} = M\).
We use NCC, SNR, and ISNR to measure the quality of the decrypted image over the first 10,000 iterations; the results are shown in Table 2. Moreover, we also show the relative error, defined by
where tol denotes a prescribed tolerance value of the algorithm, and their convergence behaviors are shown in Fig. 5.
From the quality measures of the feature-extracted image in Table 2, we see that the NCC values in cases 1 and 2 exceed those of the other cases, while their SNR values are lower and their ISNR values higher than the others. We therefore conclude that, by the NCC measure alone, case 1 is the best choice, whereas by the NCC, SNR, and ISNR measures together, case 2 is the best choice for image-feature extraction with multiple-image blends using FBSA_Err; the results are shown in Figs. 6, 7, and 8, and the increase of NCC, SNR, and ISNR over the first 10,000 to 100,000 iterations is shown in Table 3.
We next consider seven different choices of the parameter \(\theta _{n}\) for testing fast convergence over the first 10,000 iterations, for case 2 only, as follows: \(\sigma _{n} = \frac{1}{2^{n}}\) (choice 1), \(\sigma _{n} = \frac{1}{n+1}\) (choice 2), \(\sigma _{n} = 0\) (choice 3), \(\sigma _{n} = 0.5\) (choice 4), \(\sigma _{n} = \frac{n}{n+1}\) (choice 5), \(\sigma _{n} = \frac{t_{n}-1}{t_{n+1}} \text{ such that } t_{1} = 1 \text{ and } t_{n+1} = \frac{1+\sqrt{1+4t_{n}^{2}}}{2}\) (choice 6) of
and choice 7 is
and the others constant.
From the results for the seven different choices of the parameter \(\theta _{n}\) in Table 4, we see that all quality measures for choice 6 exceed those of the other choices, so we conclude that choice 6 of the parameter \(\theta _{n}\), as in (4.5), is the accelerated choice that best speeds up convergence for this complex example.
Remark 4.3
The architecture of the chaos-based image cryptosystem mainly consists of two stages: the confusion (pixel-permutation) stage and the diffusion (sequential pixel-value modification) stage, both generated by applying the point-spread function A to the pixels of the hidden image H, giving the encrypted image \(L = A \diamondsuit H\) (pixel permutation or sequential pixel-value modification of H by A). In this paper, the hidden image H is encrypted in the confusion stage to the encrypted image \(L = A*H\), generated by the linear method of element-wise multiplication of A and H, where the stacked columns of \(A \in \mathbb{R}^{m \times m}\) correspond to the discrete logistic chaotic map \(\{x_{n}\}_{n=1}^{m^{2}}\). For the diffusion stage using this linear method, the encrypted image L can be generated by \(L = A\cdot H\) (regular matrix multiplication of A and H), as in Fig. 9.
Open problem
How can a programming technique for FBSA_Err be devised to solve the image-feature extraction with multiple-image blends problem for the encrypted image \(L = A\cdot H\)?
4.2 Split minimization problem
Let \(G_{1}: H_{1} \rightarrow \mathbb{R} \cup \{ \infty \}\) and \(G_{2}: H_{2} \rightarrow \mathbb{R}\cup \{ \infty \}\) be two convex and lower semicontinuous functions. If \(f_{1}=f_{2}=0\), \(B_{1}=\partial G_{1}\), and \(B_{2}=\partial G_{2}\), then the SMVIP is reduced to the split variational inclusion problem (SVIP) or the split minimization problem (SMP), which is to find \(x^{*} \in H_{1}\) such that
and such that \(y^{*} = Ax^{*} \in H_{2}\) solves
and we will denote by Φ the solution set of (4.8) and (4.9). That is,
Many researchers have proposed, analyzed, and modified iteration methods for solving the SMVIP and the SVIP using self-adaptive step-size methods. Recently, Yao et al. [43] introduced the YSLD method in Algorithm 2.1 for solving SMVIP; Tan et al. [44] introduced the TQY methods in Algorithms 3.3 and 3.4; and Thong et al. [45] introduced the TDC method in Algorithm 3.3 for solving SVIP, as follows.
Let \(H_{1}\), \(H_{2}\), A, \(f_{1}\), \(f_{2}\), \(B_{1}\), \(B_{2}\), and f be defined as in the statement of Theorem 3.1, and assume that Ω and Φ are nonempty and satisfy the condition (A).
YSLD method in Algorithm 2.1
Let \(x_{0},x_{1}\in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\theta _{n} \in [0,\bar{\theta _{n}}]\), \(\lambda \in (0,2\psi )\) such that \(\psi = \min \{\psi _{1},\psi _{2}\}\) and
and
such that \(\rho _{n} \in [a,b] \subset (0,1)\), \(\{\omega _{n}\} \in \ell _{1}\), \(\theta \in [0,1)\) and \(\gamma > 0\).
TQY method in Algorithm 3.3
Let \(x_{0},x_{1}\in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\}, \{\beta _{n}\} \subset (0,1)\) such that \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), \(\sum_{n=1}^{\infty } \alpha _{n} = \infty \), \(\{\beta _{n}\} \subset (a,b) \subset (0,1- \alpha _{n})\), and
and
such that \(\theta >0\), \(\gamma >0\), \(\lambda > 0\), \(\rho _{n} \in (0,2)\), \(\omega _{n}=o( \alpha _{n})\) and \(\lim_{n\rightarrow \infty }\frac{\omega _{n}}{\alpha _{n}}= 0\).
TQY method in Algorithm 3.4
Let \(x_{0},x_{1}\in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\{\alpha _{n}\} \subset (0,1)\) such that \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), \(\sum_{n=1}^{\infty } \alpha _{n} = \infty \), \(\theta _{n}\) defined as (4.10) and \(\gamma _{n}\) defined as (4.11) such that \(\theta >0\), \(\gamma >0\), \(\lambda > 0\), \(\rho _{n} \in (0,2)\), \(\omega _{n}=o( \alpha _{n})\) and \(\lim_{n\rightarrow \infty }\frac{\omega _{n}}{\alpha _{n}}= 0\).
TDC method in Algorithm 3.3
Let \(x_{0},x_{1}\in H_{1}\) and \(\{x_{n}\} \subset H_{1}\) be a sequence generated by
for all \(n \in \mathbb{N}\), where \(\theta _{n} \in [0,\bar{\theta _{n}}] \), \(\{\alpha _{n}\} \subset (0,1)\), \(\{\gamma _{n}\} \subset [a,b]\subset (0,\frac{1}{L})\) such that \(L = \|A\|^{2}\), \(\lim_{n\rightarrow \infty } \alpha _{n} = 0\), \(\sum_{n=1}^{ \infty }\alpha _{n} = \infty \) and
and
such that
and \(\theta >0\), \(\lambda > 0\), \(\omega _{n}=o(\alpha _{n}) \in [0,\theta )\), and \(\lim_{n\rightarrow \infty }\frac{\omega _{n}}{\alpha _{n}}= 0\).
Example 4.4
We illustrate the performance of our Algorithm 2 in Theorem 3.1 compared with the YSLD Algorithm 2.1, TQY Algorithms 3.3 and 3.4, and the TDC Algorithm 3.3. We implemented all the algorithms in MATHEMATICA 5.0 and ran them on a personal laptop: Intel(R) Core(TM) i5-8250U CPU @1.80 GHz, 8 GB RAM.
Let \(H_{1}=H_{2}=L^{2}([0,1])\) equipped with the inner product \(\langle x(t),y(t) \rangle = \int _{0}^{1} x(t)y(t)\,dt\) and the induced norm \(\|x(t)\| = (\int _{0}^{1} |x(t)|^{2}\,dt)^{1/2}\) for all \(x(t),y(t) \in L^{2}([0,1])\). Let \(A:L^{2} ([0,1])\rightarrow L^{2}([0,1])\) be the Volterra integration operator, which is given by \((Ax)(t) = \int _{0}^{t} x(s)\,ds\) for all \(t\in [0,1]\) and \(x(t) \in L^{2}([0,1])\). It is well known that the adjoint \(A^{*}\) of A, which is defined by \((A^{*}x)(t) = \int _{t}^{1} x(s)\,ds\) for all \(t\in [0,1]\) and \(x(t) \in L^{2}([0,1])\), is a bounded linear operator and \(\|A\| = \frac{2}{\pi}\) (see [46]).
Let \(f_{1}=f_{2}=0\) and \(B_{1}=\partial \|x(t)\|\), \(B_{2}=\partial \|y(t)\|\) with \(y(t)=Ax\) for all \(x(t),y(t) \in L^{2}([0,1])\). Then, the SMVIP and the SVIP are reduced to finding \(x^{*}(t) \in L^{2}([0,1])\) such that
and such that \(y^{*}(t) = Ax^{*} \in L^{2}([0,1])\) solves
Note that \((x(t),y(t))=(0,0) \in \Omega = \Phi \). For \(\lambda > 0\) and \(x(t) \in L^{2}([0,1])\), by [43] we have
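The resolvent recalled from [43] shrinks its argument toward zero; the following is a discretized sketch, assuming the block soft-thresholding form \(J_{\lambda }(x) = (1-\frac{\lambda}{\max \{\|x\|,\lambda \}})x\) of the proximity operator of the norm (our reading of the displayed formula).

```matlab
% Discretized sketch of the resolvent of lambda*d(||.||) on L^2([0,1]),
% assuming the block soft-thresholding form: shrink x(t) toward zero.
% x holds samples of x(t) on a uniform grid of step h over [0,1].
function y = prox_norm(x, lambda, h)
    nx = sqrt(h * sum(x.^2));                 % discretized L^2([0,1]) norm of x(t)
    y  = (1 - lambda / max(nx, lambda)) * x;  % y = 0 whenever nx <= lambda
end
```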
In the compared algorithms, all parameters were tuned for high performance. Since \(f_{1}\) and \(f_{2}\) are \(\psi _{1}\)- and \(\psi _{2}\)-inverse strongly monotone for all \(\psi _{1},\psi _{2} > 0\), respectively, we fix \(\psi _{1} =\psi _{2} \) and let \(L=\|A\|^{2}\), \(L_{0}=\frac{1}{2L}\), \(L_{1}=\psi _{1}\) and \(L_{2}=\psi _{2}\).
For each \(n\in \mathbb{N}\), we set \(\alpha _{n} = \frac{10^{-10}}{n+1}\), \(\beta _{n}=(1-10^{-10})(1- \alpha _{n})\), \(\gamma _{n} = L_{0}\) (A3 type in Table 2 for our Alg. 2 and TDC Alg. 3.3), \(\lambda _{n}=2L_{1}\) (A6 type in Table 2), \(\lambda = 2L_{1}-0.01\) (for YSLD Alg. 2.1), \(\lambda =2L_{1}\) (for TQY Algs. 3.3 and 3.4, and TDC Alg. 3.3), \(\eta _{n} = 2L_{2}\) (A6 type in Table 2), \(\theta _{n}\) (for our Alg. 2) as in (4.5), \(\theta _{n} = \bar{\theta _{n}}\) (for YSLD Alg. 2.1 and TDC Alg. 3.3), \(\omega _{n}=\frac{1}{(n+1)^{2}}\), \(\rho _{n} = \theta =\gamma = 0.5\), the errors \(\varepsilon _{n}(t) = \xi _{n}(t) = \frac{1}{(n+1)^{3}}\), and \(f(x(t)) = \frac{x}{5}\) for all \(x(t) \in L^{2}([0,1])\).
We use \(\|x_{n+1}-x_{n}\|< \epsilon \) with tolerance \(\epsilon = 10^{-10}\) as the stopping criterion for all the compared algorithms. The numerical results are shown in Table 5 (for \(\psi _{1} = \psi _{2} = 1\)), Table 6 (for \(\psi _{1} = \psi _{2} = 10\)), and Table 7 (for \(\psi _{1} = \psi _{2} = 20\)), and the convergence behaviors of the error sequences \(\{\|x_{n+1}-x_{n}\|\}\) are shown in Fig. 10 (for \(\psi _{1} = \psi _{2} = 1\) only) with four different initial functions \(x_{0}(t) = x_{1}(t)\) (except for the TDC Algorithm 3.3, whose convergence is slow). Moreover, we also show the approximate solution functions of some case studies via the speed up of convergence from the increased values \(\psi _{1}=\psi _{2} = 1,10,20\) with initial functions \(x_{0}(t) =x_{1}(t) = 15\); see Fig. 11.
Remark 4.5
From the results in Tables 5, 6, and 7, we see that the number of iterations n and the CPU time of all the compared algorithms depend on \(\psi _{1}\) and \(\psi _{2}\): they are better for large \(\psi _{1}\) and \(\psi _{2}\) and worse for small \(\psi _{1}\) and \(\psi _{2}\), and the speed of convergence behaves similarly. Moreover, the speed of convergence to the solution of the YSLD Algorithm 2.1 is better than that of the others; its approximate solution function is \((x^{*}(t),y^{*}(t))=(0.,0.)\), where “0.” means a value in the interval \((-p,p)\) with \(p=2.22507 \times 10^{-308}\) (the smallest positive machine-precision number in MATHEMATICA); see Fig. 11.
4.3 Convex minimization problem
Let \(F_{1}:H_{1} \rightarrow \mathbb{R}\) be a convex and differentiable function and \(G_{1}:H_{1} \rightarrow \mathbb{R}\cup \{ \infty \}\) be a convex and lower semicontinuous function such that the gradient \(\nabla F_{1}\) is a \(\frac{1}{\psi _{1}}\)-Lipschitz continuous function and \(\partial G_{1}\) is a subdifferential of \(G_{1}\). If \(F_{2} = G_{2} = 0\) then the SCMP is reduced to a convex minimization problem (CMP), which is to find \(x^{*} \in H_{1}\) such that
Example 4.6
We illustrate the performance of our Algorithm 3 in Theorem 4.1 for solving a convex minimization problem. We implemented it in MATHEMATICA 5.0 and ran it on a personal laptop: Intel(R) Core(TM) i5-8250U CPU @1.80 GHz, 8 GB RAM.
Find the minimization of the following \(\ell _{1}\)-least-square problem:
where \(x =(u,v,w)^{T} \in \mathbb{R}^{3}\).
Let \(H_{1} = H_{2} = (\mathbb{R}^{3},\|\cdot \|_{2})\), \(F_{1}(x)=\frac{1}{2}\|x\|_{2}^{2}-(2,3,4)x+3\), \(F_{2}(x) = 0\) and \(G_{1}(x)=\|x\|_{1}\), \(G_{2}(x) = 0\) for all \(x \in \mathbb{R}^{3}\), and \(A=I\). Then, \(\nabla F_{1}(x) = (u-2,v-3,w-4)^{T}\) and \(\nabla F_{2}(x) = (0,0,0)^{T}\) for all \(x \in \mathbb{R}^{3}\). It follows that \(F_{1}\) is convex and differentiable on \(\mathbb{R}^{3}\) with a \(\frac{1}{\psi _{1}}\)-Lipschitz continuous gradient \(\nabla F_{1}\), where \(\psi _{1}=1\). Moreover, \(G_{1}\) is convex and lower semicontinuous but not differentiable on \(\mathbb{R}^{3}\).
We set \(f(x) = \frac{x}{5}\) for all \(x \in \mathbb{R}^{3}\). Then, f is a contraction. For each \(n \in \mathbb{N}\), we choose \(\alpha _{n} = \frac{10^{-6}}{n+1}\), \(\varepsilon _{n}= \frac{1}{(n+1)^{3}}(1,1,1)^{T}\), \(\xi _{n} = (0,0,0)^{T}\), and we define \(\theta _{n}\) as (4.5), and for all \(x \in \mathbb{R}^{3}\) we have
We choose the initial points \(x_{0}=(-1,2,1)^{T}\) and \(x_{1} = (2,-1,-2)^{T}\) for computing the sequence \(\{x_{n}\}\) recursively by Algorithm 3 in Theorem 4.1 with tolerance \(\epsilon =10^{-6}\) for each of the chosen types of the sequences \(\{\lambda _{n} \}\) with \(L_{1}= \psi _{1} = 1\) as in Table 8 (except for the A5, A6, and B3 types, whose convergence is slow). As \(n \rightarrow \infty \), we obtain \(x_{n} \rightarrow x^{*}\), where the approximate minimizer of \(F_{1}+G_{1}\) is \((1,2,3)^{T}\) and its approximate minimum value is −4, as in Table 8; we also show the convergence behavior of the error sequences \(\{ \|x_{n+1}-x_{n}\|_{2} \}\), which converge to zero, for each of the best choices A4, B2, and C2 of the sequences \(\{ \lambda _{n} \}\); see Fig. 12.
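For an independent check of this minimizer (a plain proximal-gradient sketch, not Algorithm 3 itself), note that one forward–backward step with \(\lambda = 1\) already lands on the solution:

```matlab
% Plain proximal-gradient check: minimize F1(x) + G1(x) with
% F1(x) = 0.5*||x||_2^2 - (2,3,4)*x + 3 and G1(x) = ||x||_1.
c = [2; 3; 4];
x = [2; -1; -2];                          % initial point x_1 from the text
lam = 1;                                  % step size in (0, 2*psi_1] with psi_1 = 1
for n = 1:10
    u = x - lam * (x - c);                % forward step: gradient of F1 is x - c
    x = sign(u) .* max(abs(u) - lam, 0);  % backward step: prox of lam*||.||_1
end
Fval = 0.5 * norm(x)^2 - c' * x + 3 + norm(x, 1);  % x = (1,2,3)', Fval = -4
```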
We next consider the seven different choices of the parameter \(\theta _{n}\) for testing fast convergence, as in (4.6) and (4.7), with the C2 type of the parameter \(\lambda _{n}\) only, keeping the other parameters unchanged.
From the results for the seven different choices of the parameter \(\theta _{n}\) in Table 9, we see that the number of iterations n and the CPU time for choices 1, 2, and 3 are less than for the others, and we conclude that choices 1, 2, and 3 of the parameter \(\theta _{n}\) are the accelerated choices that best speed up convergence for this simple example.
5 Conclusion
A new iterative forward–backward splitting algorithm with errors (FBSA_Err) for solving SMVIP is obtained in our main result. It can be applied to solving the image-feature extraction with multiple-image blends problem. For the encrypted image \(L = A*H\), generated by the linear method of element-wise multiplication of A and H, where the stacked columns of \(A \in \mathbb{R}^{m \times m}\) correspond to the discrete logistic chaotic map \(\{x_{n}\}_{n=1}^{m^{2}}\), and with all parameters set for fast convergence, we obtain the following results:
1. For the quality measure of the decrypted image by the NCC measure only, the A1, A6, and A6 types of the parameters \(\gamma _{n}\), \(\lambda _{n}\), and \(\eta _{n}\), respectively, and choice 6 of the parameter \(\theta _{n}\) as in (4.5) are the best choices for solving the image-feature extraction with multiple-image blends problem using FBSA_Err.
2. For the quality measure of the decrypted image by the NCC, SNR, and ISNR measures together, the A2, A6, and A6 types of the parameters \(\gamma _{n}\), \(\lambda _{n}\), and \(\eta _{n}\), respectively, and choice 6 of the parameter \(\theta _{n}\) as in (4.5) are the best choices for solving the image-feature extraction with multiple-image blends problem using FBSA_Err.
For the application of our main result to the split variational inclusion problem or the split minimization problem, compared with the YSLD Algorithm 2.1 [43], TQY Algorithms 3.3 and 3.4 [44], and the TDC Algorithm 3.3 [45], the speed of convergence to the solution of the YSLD Algorithm 2.1 is better than that of the others, except for complex problems (e.g., image/signal-recovery problems).
For the application of our main result to the convex minimization problem, the C2 type of the parameter \(\lambda _{n}\) and choices 1, 2, and 3 of the parameter \(\theta _{n}\) are the best choices for solving the convex minimization problem.
Availability of data and materials
Not applicable.
References
Bauschke, H.H.: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 202, 150–159 (1996)
Chidume, C.E., Bashir, A.: Convergence of path and iterative method for families of nonexpansive mappings. Appl. Anal. 67, 117–129 (2008)
Halpern, B.: Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 73, 957–961 (1967)
Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)
Klen, R., Manojlovic, V., Simic, S., Vuorinen, M.: Bernoulli inequality and hypergeometric functions. Proc. Am. Math. Soc. 142, 559–573 (2014)
Kunze, H., La Torre, D., Mendivil, F., Vrscay, E.R.: Generalized fractal transforms and self-similar objects in cone metric spaces. Comput. Math. Appl. 64, 1761–1769 (2012)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Radenovic, S., Rhoades, B.E.: Fixed point theorem for two non-self mappings in cone metric spaces. Comput. Math. Appl. 57, 1701–1707 (2009)
Todorcevic, V.: Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics. Springer, Basel (2019)
Byrne, C.: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Combettes, P.L., Wajs, V.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple set split feasibility problem and its applications. Inverse Probl. 21, 2071–2084 (2005)
Censor, Y., Motova, A., Segal, A.: Perturbed projections and subgradient projections for the multiple-sets feasibility problem. J. Math. Anal. Appl. 327, 1244–1256 (2007)
Moudafi, A.: Split monotone variational inclusions. Adv. Differ. Equ. 150, 275–283 (2011)
Nimana, N., Petrot, N.: Viscosity approximation methods for split variational inclusion and fixed point problems in Hilbert spaces. In: Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 2 (2014)
Che, H., Li, M.: Solving split variational inclusion problem and fixed point problem for nonexpansive semigroup without prior knowledge of operator norms. Math. Probl. Eng. 2015, Article ID 408165 (2015)
Shehu, Y., Ogbusi, F.U.: An iterative method for solving split monotone variational inclusion and fixed point problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 110, 503–518 (2016)
Thong, D.V., Cholamjiak, P.: Strong convergence of a forward-backward splitting method with a new step size for solving monotone inclusions. Comput. Appl. Math. 38, 94 (2019)
Alansari, M., Farid, M., Ali, R.: An iterative scheme for split monotone variational inclusion, variational inequality and fixed point problems. Adv. Differ. Equ. 2020, 485 (2020)
Ogbuisi, F.U., Mewomo, O.T.: Solving split monotone variational inclusion problem and fixed point problem for certain multivalued maps in Hilbert spaces. Thai J. Math. 19(2), 503–520 (2021)
Alakoya, T.O., Mewomo, O.T.: Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 41(1), 39 (2022)
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces. Demonstr. Math. 55(1), 193–216 (2022)
Godwin, E.C., Alakoya, A., Mewomo, O.T., Yao, J.C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2107913
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. (2022). https://doi.org/10.1002/mma.8849
Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.C.: On a system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022, 47 (2022)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. (2022). https://doi.org/10.1007/s40314-022-02138-0
Tianchai, P.: The zeros of monotone operators for the variational inclusion problem in Hilbert spaces. J. Inequal. Appl. 2021, 126 (2021)
Tianchai, P.: An improved fast iterative shrinkage thresholding algorithm with an error for image deblurring problem. Fixed Point Theory Algorithms Sci. Eng. 2021, 18 (2021)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publ., Yokohama (2009)
Tang, J.F., Chang, S.S., Yuan, F.: A strong convergence theorem for equilibrium problems and split feasibility problems in Hilbert spaces. Fixed Point Theory Appl. 2014, 36 (2014)
Nadezhkina, N., Takahashi, W.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Nakajo, K., Shimoji, K., Takahashi, W.: Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 8(1), 11–34 (2007)
Takahashi, W., Takeuchi, Y., Kubota, R.: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276–286 (2008)
Takahashi, W., Xu, H.-K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, Article ID 942315 (2013)
Baillon, J.B., Haddad, G.: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 26, 137–150 (1977)
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Image Databases: Available online: http://www.imageprocessingplace.com/downloads_V3/root_downloads/image_databases/standard_test_images.zip. Accessed 1 Sept 2021
Can Stock Photo: Available online: https://www.canstockphoto.com/help-message-in-a-bottle-25858836.html. Accessed 1 Sept 2021
Yao, Y., Shehu, Y., Li, X.H., Dong, Q.L.: A method with inertial extrapolation step for split monotone inclusion problems. Optimization 70(4), 741–761 (2021)
Tan, B., Qin, X., Yao, J.C.: Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications. J. Sci. Comput. 87, 20 (2021)
Thong, D.V., Dung, V.T., Cho, Y.J.: A new strong convergence for solving split variational inclusion problems. Numer. Algorithms 86, 565–591 (2021)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York (2017)
Acknowledgements
The author would like to thank the Editor and anonymous referees for comments and remarks that improved the quality and presentation of the paper, and the Faculty of Science, Maejo University for its financial support.
Funding
This research was supported by the Faculty of Science, Maejo University.
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.