
Relaxed inertial self-adaptive algorithm for the split feasibility problem with multiple output sets and fixed-point problem in the class of demicontractive mappings

Abstract

The purpose of this paper is to study the split feasibility problem with multiple output sets and the fixed point problem in the class of demicontractive mappings and to propose a relaxed inertial self-adaptive algorithm that does not use the least squares method. Under appropriate assumptions, we establish a strong convergence result for the sequence generated by the proposed algorithm. Our result generalizes and extends several results existing in the literature. Finally, we illustrate the convergence of the proposed algorithm with a numerical example.

1 Introduction

Let C be a nonempty, closed, and convex subset of a real Hilbert space H, \(S:C \rightarrow C\) be a mapping, and \(\text{ Fix}(S) := \{x \in C : Sx=x\}\). Then S is said to be

  1. (a)

    nonexpansive if

    $$ \|Sx-Sy\|\leq \|x-y\|, $$

    \(\forall x, y \in C\).

  2. (b)

    quasi-nonexpansive if \(\text{Fix(S)} \ne \emptyset \) and

    $$ \|Sx-y\|\leq \|x-y\|, $$

    \(\forall x \in C\) and \(y \in \text{Fix(S)}\).

  3. (c)

    k-strictly pseudo-contractive if there exists a constant \(k \in [0, 1)\) such that

    $$ \|Sx-Sy\|^{2}\leq \|x-y\|^{2}+k\|(I-S)x-(I-S)y\|^{2}, $$

    \(\forall x, y \in C\).

  4. (d)

    demicontractive if \(\text{Fix(S)} \ne \emptyset \) and there exists a constant \(k \in [0, 1)\) such that

    $$ \|Sx-y\|^{2}\leq \|x-y\|^{2}+k\|(I-S)x\|^{2}, $$

    \(\forall x \in C\) and \(y \in \text{Fix(S)}\).

Observe that the class of demicontractive mappings includes various types of nonlinear mappings, such as nonexpansive mappings, quasi-nonexpansive mappings, and strictly pseudo-contractive mappings.

The fixed point problem for a demicontractive mapping \(S: C \rightarrow C\) is to find \(x \in C\) such that

$$ Sx = x. $$
(1)

The split feasibility problem (SFP) is to find a point

$$ x^{*}\in C \text{ such that } Ax^{*}\in Q, $$
(2)

where C and Q are nonempty, closed, and convex subsets of real Hilbert spaces H and \(H_{1}\), respectively, and \(A:H\to H_{1}\) is a bounded linear operator. The solution set of the SFP (2) is denoted by \(SFP(C,Q) := \{x \in C : Ax \in Q\}\).

The SFP in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [7] for modeling various inverse problems with many real-life applications, such as medical image reconstruction and signal processing (see [5, 6]). Owing to its wide range of applications, the SFP has attracted the attention of many authors, and several generalizations of it have been studied; see, for instance, the multiple-sets SFP [8, 20, 21], the SFP with multiple output sets (SFPMOS) [12, 15, 17, 18, 23, 24], the split variational inequality problem [9, 10, 15], the multiple-sets split variational inequality problem [25], the split variational inequality problem with multiple output sets [1], and the multiple-sets split feasibility problem with multiple output sets [13, 22].

In 2024, Berinde [4] introduced the following inertial self-adaptive viscosity algorithm for solving split feasibility and fixed point problems in the class of demicontractive mappings:

$$\begin{aligned} v_{n} :=& z_{n}+\mu _{n}(z_{n}-z_{n-1}), \\ w_{n} :=&P_{C}\left ((1-\beta _{n})(v_{n}-\zeta _{n}A^{*}(I-P_{Q})Av_{n})+ \beta _{n}S_{\lambda}v_{n}\right ), \\ z_{n+1} :=&\sigma _{n}g(z_{n})+\theta _{n}v_{n}+\alpha _{n}w_{n}, \end{aligned}$$

with \(S_{\lambda}:=(1-\lambda )I+\lambda S, \lambda \in (0, 1)\),

$$ \mu _{n}= \textstyle\begin{cases} \min \left \{\mu , \dfrac{\tau _{n}}{\|z_{n}-z_{n-1}\|}\right \} & \text{if } z_{n} \neq z_{n-1}, \\ \mu & \text{otherwise}, \end{cases} $$
(3)

\(\mu \geq 0\) is a given number, \(\zeta _{n}:=\frac{\delta _{n}f(v_{n})}{\|\nabla f(v_{n})\|^{2}}\), where \(f(v_{n}):=\frac{1}{2}\|(I-P_{Q})Av_{n}\|^{2}\), \(\delta _{n} \in (0, 4)\), \(\{\sigma _{n}\}, \{\theta _{n}\}, \{\alpha _{n}\}\), and \(\{\beta _{n}\}\) are sequences in \((0, 1)\), \(\{\tau_{n}\}\) is a sequence of positive numbers satisfying suitable conditions, and \(S: C \rightarrow C\) is a k-demicontractive mapping. He proved the strong convergence of the sequence generated by his algorithm to some \(x^{*} \in \text{ Fix}(S) \cap SFP(C, Q)\).
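To make the parameter choices above concrete, the following Python snippet (our own illustration with assumed toy inputs, not code from [4]) computes the inertial parameter \(\mu_{n}\) in (3) and the self-adaptive step size \(\zeta_{n}=\delta_{n}f(v_{n})/\|\nabla f(v_{n})\|^{2}\) for a given matrix A and a user-supplied projection onto Q.

```python
import numpy as np

def inertial_parameter(mu, tau_n, z_n, z_prev):
    # mu_n = min{ mu, tau_n / ||z_n - z_{n-1}|| } if z_n != z_{n-1}, and mu otherwise, cf. (3)
    gap = np.linalg.norm(z_n - z_prev)
    return min(mu, tau_n / gap) if gap > 0 else mu

def self_adaptive_step(A, v_n, proj_Q, delta_n):
    # zeta_n = delta_n * f(v_n) / ||grad f(v_n)||^2, where f(v) = 0.5 * ||(I - P_Q) A v||^2
    residual = A @ v_n - proj_Q(A @ v_n)      # (I - P_Q) A v_n
    grad = A.T @ residual                     # nabla f(v_n) = A^*(I - P_Q) A v_n
    denom = np.linalg.norm(grad) ** 2
    return delta_n * 0.5 * np.linalg.norm(residual) ** 2 / denom if denom > 0 else 0.0
```

If \(\nabla f(v_{n})=0\), then \(Av_{n}\in Q\) and no CQ-type step is needed; the sketch simply returns a zero step size in that case.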

The SFPMOS, which was introduced by Kim et al. [13] in general Hilbert spaces, is to find a point \(x^{*}\) such that

$$ x^{*}\in C\cap \Big(\cap _{j=1}^{p} T^{-1}_{j}\Big(\cap _{k=1}^{r_{j}}Q_{jk} \Big)\Big)\ne \emptyset , $$
(4)

where C and \(Q_{jk},\, j=1, 2, \dots , p, \, k=1,\ldots , r_{j}\) are nonempty, closed and convex subsets of real Hilbert spaces H and \(H_{j}\), respectively, and \(T_{j}: H\to H_{j}\) are bounded linear operators.

In order to approximate a solution of the SFP (2), many algorithms first transform the problem into an equivalent unconstrained convex minimization problem and obtain a minimizing element using the least squares method. In 2023, Reich and Tuyen [19] extended the Fermat-Torricelli problem and showed that the SFPMOS can be considered a special case of this extension. Moreover, they provided a new approach for solving the SFPMOS in Hilbert spaces. The generalized Fermat-Torricelli problem is stated as follows:

$$ \begin{aligned} &f(x) \rightarrow \min,\\ &\text{subject to } x \in C, \end{aligned} $$

where \(f(x)=\sum_{j=1}^{p}\sum_{k=1}^{r_{j}}\beta_{jk} f_{jk}(T_{j} x)\), the \(\beta_{jk}\), \(j = 1, 2, \dots, p\), \(k = 1, 2, \dots, r_{j}\), are given positive real numbers, and \(f_{jk}(y) = \|(I^{H_{j}} - P^{H_{j}}_{Q_{jk}})y\|\) for all \(y \in H_{j}\) and \(j = 1, 2, \dots, p\), \(k = 1, 2, \dots, r_{j}\).

As a continuation of the aforementioned work, Reich and Tuyen [16] developed self-adaptive algorithms for solving the split feasibility problem with multiple output sets that do not use the least squares method and proved strong and weak convergence theorems.

Motivated by the above works, especially those of Berinde [4], Reich and Tuyen [16], and Kim et al. [13], we propose a relaxed inertial self-adaptive algorithm for solving the SFPMOS and the fixed-point problem in the class of demicontractive mappings. The main contributions of our paper are summarized as follows.

  • The problem considered is general, as it combines the SFPMOS and the fixed point problem.

  • The proposed algorithm incorporates the relaxation method in order to speed up its convergence.

  • The proposed method does not use the least squares method.

  • Our result generalizes and extends several related results existing in the literature as demicontractive mappings include various types of nonlinear mappings.

This work is structured as follows. In Sect. 2, we state some notations, basic definitions, and lemmas that we will need in the proof of our main result. In Sect. 3, we give convergence analysis of our proposed algorithm. In Sect. 4, we provide a numerical experiment to validate our proposed algorithm. Finally, in Sect. 5, we give a concluding remark.

2 Preliminaries

The weak ω-limit set of \(\{t_{n}\}\) is given by \(\omega _{\omega}(t_{n})=\big\{t\in H:\exists \{t_{n_{k}}\}\subseteq \{t_{n}\} \text{ such that } t_{n_{k}}\rightharpoonup t\big\}\).

It is well known that for every element \(x\in H\), there exists a unique nearest point in C, denoted by \(P_{C}(x)\) such that

$$ \|x-P_{C}(x)\|=\min \{\|x-z\|:z\in{C}\}. $$

The operator \(P_{C} : H\rightarrow {C}\) is called the metric projection of H onto C. It has the following important characterization:

$$ \langle x-P_{C}x,z-P_{C}x \rangle \leq 0, $$
(5)

for all \(x \in H\) and \(z \in C\). We can deduce from (5) that the operator \(P_{C}\) is a nonexpansive mapping.
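For instance (a toy check of our own, not taken from the paper), when C is a closed ball the metric projection is explicit, and inequality (5) can be verified numerically:

```python
import numpy as np

def proj_ball(x, center, radius):
    # Metric projection onto the closed ball C = { z : ||z - center|| <= radius }.
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + (radius / dist) * d

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)                 # an arbitrary point of H = R^5
Px = proj_ball(x, np.zeros(5), 1.0)          # P_C(x) for the unit ball C
for _ in range(5):
    z = rng.normal(size=5)
    z = z / max(1.0, np.linalg.norm(z))      # a point of C
    assert np.dot(x - Px, z - Px) <= 1e-12   # characterization (5)
```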

Lemma 1

(see [2]) For all \(x,y \in H \) and \(z \in C\), the following inequalities hold.

  1. (i)

    \(\|P_{C}x-P_{C}y\|^{2}\leq \langle P_{C}x-P_{C}y, x-y \rangle \);

  2. (ii)

    \(\langle x-y, (I-P_{C})x-(I-P_{C})y \rangle \geq \|(I-P_{C})x-(I-P_{C})y \|^{2}\);

  3. (iii)

    \(\|P_{C}x-z\|^{2}\leq \|x-z\|^{2}-\|P_{C}x -x\|^{2}\).

Definition 1

Let \(f: H\rightarrow (-\infty, +\infty]\) be a given function. Then,

  1. (1)

    The function f is proper if

    $$\{x\in H : f(x)< +\infty\}\ne\emptyset.$$
  2. (2)

    A proper function f is convex if for each \(\sigma \in (0, 1)\),

    $$f(\sigma x+(1-\sigma)y)\leq \sigma f(x)+(1-\sigma)f(y), \forall x,y\in H.$$
  3. (3)

    f is σ-strongly convex, where \(\sigma>0\), if

    $$f(\delta x+(1-\delta)y)+\frac{\sigma}{2}\delta(1-\delta)\|x-y\|^{2}\leq \delta f(x)+(1-\delta)f(y), \forall \delta \in (0, 1) \mbox{ and }\forall x,y\in H.$$

    Moreover, f is σ-strongly convex if and only if \(f(x)-(\sigma/2)\|x\|^{2}\) is convex.

Definition 2

Let \(f: H\rightarrow (-\infty, +\infty]\) be a proper function.

  1. (1)

    A vector \(\xi\in H\) is a subgradient of f at a point x if

    $$f(y)\geq f(x)+\langle \xi, ~y-x\rangle,~\forall y\in H.$$
  2. (2)

    The set of all subgradients of f at \(x\in H\), denoted by \(\partial f(x)\), is called the subdifferential of f, and is defined by

    $$\partial f(x)=\{\xi\in H:f(y)\geq f(x)+\langle \xi, ~y-x \rangle, ~\text{for each } y\in H\}.$$
  3. (3)

    If \(\partial f(x)\neq\emptyset\), f is said to be subdifferentiable at x. If the function f is continuously differentiable then \(\partial f(x)=\{\nabla f(x)\}\).

Definition 3

Let \(f: H\rightarrow (-\infty, +\infty]\) be a proper function. Then, f is lower semi-continuous (lsc) at x if \(x_{n} \rightarrow x\) implies

$$f(x)\leq \liminf\limits_{n\rightarrow\infty}f(x_{n}).$$

Definition 4

Let C be a closed and convex subset of a real Hilbert space H and let \(S:C \rightarrow C\) be a mapping. If, for any sequence \(\{x_{k}\}\) in C such that \(x_{k} \rightharpoonup x\) and \(Sx_{k} \rightarrow 0\), we have \(Sx=0\), then S is said to be demiclosed at 0 in C.

Lemma 2

[3] Let H be a real Hilbert space and \(C \subset H\) be a closed and convex set. If \(T:C \rightarrow C\) is k-demicontractive, then for any \(\lambda \in (0, 1-k)\), \(T_{\lambda}:=(1-\lambda )I+\lambda T\) is quasi-nonexpansive.

Lemma 3

(see [11]) Let \(\{s_{n}\}\) be a non-negative real sequence, such that

$$ \textstyle\begin{array}{l} s_{n+1}\leq (1-\sigma _{n})s_{n}+\sigma _{n}\mu _{n},~n\geq 1, \\ s_{n+1}\leq s_{n}-\phi _{n}+\varphi _{n},~n\geq 1, \end{array} $$

where \(\{\sigma _{n}\}\subset (0, 1)\), \(\{\phi _{n}\} \subset [0,\infty )\), and \(\{\mu _{n}\}\), \(\{\varphi _{n}\} \subset (-\infty ,\infty )\). In addition, suppose the following conditions hold.

(i):

\(\sum _{n=1}^{\infty}\sigma _{n}=\infty \);

(ii):

\(\lim \limits _{n\rightarrow \infty}\varphi _{n}=0\);

(iii):

\(\lim \limits _{k\rightarrow \infty}\phi _{n_{k}}=0\) implies \(\limsup \limits _{k\rightarrow \infty}\mu _{n_{k}}\leq 0\) for every subsequence \(\{n_{k}\}\) of \(\{n\}\).

Then, \(\lim \limits _{n\rightarrow \infty}s_{n}=0\).

3 Main results

Let C and \(Q_{jk},\, j=1, 2, \dots , p, \, k=1,\ldots , r_{j}\), be nonempty, closed, and convex subsets of real Hilbert spaces H and \(H_{j}\), respectively, and let \(T_{j}: H\to H_{j}\) be bounded linear operators. Let \(S:C \rightarrow C\) be a demicontractive mapping. Assuming that

$$ x^{*}\in \Omega :=C\cap \Big(\cap _{j=1}^{p} T^{-1}_{j}\Big(\cap _{k=1}^{r_{j}}Q_{jk} \Big)\Big) \cap \text{ Fix}(S) \ne \emptyset , $$
(6)

we consider the problem of finding an element \(x^{*} \in \Omega\).

In this section, we state our algorithms and analyze their convergence.

For simplicity, let \(\Gamma :=C\cap \Big(\cap _{j=1}^{p} T^{-1}_{j}\Big(\cap _{k=1}^{r_{j}}Q_{jk} \Big)\Big), J_{1}: = \{1, 2, \dots , p\}\), and \(J_{2}: = \{1, 2, \dots , r_{j}\}\).

We make the following assumptions for the analysis.

  1. (C1)

    The nonempty level sets C and \(Q_{jk}\) in Problem (6) are defined as follows:

    $$ C= \{x\in H:c(x)\leq 0\} ~~\text{ and } ~~Q_{jk}=\{y\in H_{j}:q_{jk}(y) \leq 0\}, $$
    (7)

    where \(c: H\rightarrow (- \infty , + \infty ]\) and \(q_{jk}: H_{j}\rightarrow (-\infty , +\infty ]\), \(j\in J_{1}, k \in J_{2}\), are ϖ-strongly convex and \(\omega _{j}\)-strongly convex subdifferentiable functions, respectively. Then c and \(q_{jk}\) are also lower semicontinuous (see [2], Theorem 9.1).

    The projections onto C and \(Q_{jk}\) are not easily implemented in general. To avoid this difficulty, we adopt the technique of projecting onto relaxed sets and construct the sets \(C^{n}\) and \(Q_{jk}^{n}\) (\(j \in J_{1}, k \in J_{2}\)) below (see [14] for more details).

  2. (C2)

    Let c and \(q_{jk}\) be as defined in (7). Assume that at least one subgradient \(\xi_{n} \in \partial c(x)\) and \(\eta_{jk,n} \in \partial q_{jk}(y)\) can be computed for any \(x\in H\) and \(y\in H_{j}\). Moreover, both ∂c and \(\partial q_{jk}(j\in J_{1},k\in J_{2})\) are bounded operators (bounded on bounded sets). The sets \(C^{n}\) and \(Q_{jk}^{n}\) (\(j\in J_{1}, k \in J_{2}\)) are constructed as follows:

    $$ C^{n} = \Big\{ x \in H : c(x_{n})+\langle \xi_{n}, ~x-x_{n} \rangle + \frac{\varpi}{2}\|x-x_{n}\|^{2} \leq 0 \Big\} , $$
    (8)

    where \(\xi_{n}\in \partial c(x_{n})\) and

    $$ Q_{jk}^{n} = \Big\{ y \in H_{j} : q_{jk}(T_{j}x_{n})+\langle \eta _{jk,n}, ~y-T_{j}x_{n} \rangle + \frac{\omega _{j}}{2}\|y-T_{j}x_{n}\|^{2} \leq 0 \Big\} , $$
    (9)

    where \(\eta _{jk,n}\in \partial q_{jk}(T_{j}x_{n})\). It is not difficult to show that \(C \subset C^{n}\) and \(Q_{jk} \subset Q_{jk}^{n}\); a concrete computation of the projection \(P_{C^{n}}\) is sketched after this list.

  3. (C3)

    Let the sequence \(\{\rho _{n}\} \subset (0, 2)\), the sequences \(\{\delta _{n}\}, \{\sigma _{n}\}, \{\gamma _{n}\}\), and \(\{\alpha _{n}\}\) in \((0,1)\), and the sequence \(\{\epsilon _{n}\} \subset (0, \infty )\) satisfy the following conditions.

    1. (i)

      \(\liminf _{ n \to \infty} \gamma _{n} > 0\);

    2. (ii)

      \(\lim \limits _{n \to \infty}\frac{\epsilon _{n}}{\sigma _{n}}=0\);

    3. (iii)

      \(\lim _{n\to \infty}\sigma _{n}=0\) and \(\sum _{n=1}^{\infty}\sigma _{n} = \infty \);

    4. (iv)

      \(0 <\liminf _{n \to \infty}\delta _{n} \leq \limsup _{n \to \infty} \delta _{n} <1\);

    5. (v)

      \(\sigma _{n}+\alpha _{n}+\gamma _{n} =1,\forall n \geq 1\).
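As noted in (C2), the quadratic term in (8) makes \(C^{n}\) a closed ball: completing the square shows that \(C^{n}=\{x\in H:\|x-(x_{n}-\xi_{n}/\varpi)\|^{2}\leq \|\xi_{n}\|^{2}/\varpi ^{2}-2c(x_{n})/\varpi \}\), and similarly for \(Q_{jk}^{n}\) in (9) with \(\omega _{j}\) in place of ϖ. Hence \(P_{C^{n}}\) and \(P_{Q_{jk}^{n}}\) have closed forms. The following Python sketch (our own illustration; the function name and the vector-based data layout are assumptions, not the authors' code) computes \(P_{C^{n}}\):

```python
import numpy as np

def project_onto_relaxed_set(w, x_n, c_at_xn, xi_n, varpi):
    # C^n from (8) is the ball with center x_n - xi_n/varpi and squared radius
    # ||xi_n||^2 / varpi^2 - 2 c(x_n) / varpi (nonnegative, since the nonempty set C lies in C^n).
    center = x_n - xi_n / varpi
    radius = np.sqrt(max(np.dot(xi_n, xi_n) / varpi**2 - 2.0 * c_at_xn / varpi, 0.0))
    d = w - center
    dist = np.linalg.norm(d)
    return w if dist <= radius else center + (radius / dist) * d
```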

Now, we delve into the details of the convergence analysis of Algorithm 1.

Algorithm 1
A strongly convergent algorithm for solving Problem (6)
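Algorithm 1 itself appears only as a figure in the original article and is not reproduced in this text; in the proofs below, references such as (10), (14), and (15) point to its steps as numbered there, and the displayed equations of this section accordingly continue from (17). For the reader's orientation, the following display collects the update formulas for \(y_{n}, w_{n}, z_{n}\), and \(t_{n+1}\) exactly as they are used in the proofs of Lemma 4 and Theorem 6; the concrete choices of the inertial parameter \(\theta _{n}\), the directions \(d^{n}_{jk}\), and the self-adaptive step size \(\tau _{n}\) belong to the statement of Algorithm 1 and are not restated here.

$$\begin{aligned} y_{n} &= t_{n}+\theta _{n}(t_{n}-t_{n-1}), \\ w_{n} &= (1-\delta _{n})\Bigg(y_{n}-\tau _{n}\sum _{j=1}^{p}\sum _{k=1}^{r_{j}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\Bigg)+\delta _{n}S_{\lambda}y_{n}, \\ z_{n} &= P_{C^{n}}w_{n}, \\ t_{n+1} &= \sigma _{n}v(t_{n})+\alpha _{n}y_{n}+\gamma _{n}z_{n}. \end{aligned}$$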

Proposition 1

In Algorithm 1, if \(\sum_{j=1}^{p}\sum_{k=1}^{r_{j}}\beta_{jk}T^{*}_{j}d^{n}_{jk} = 0\) and \(w_{n}=y_{n}\), then \(y_{n}\) is a solution of Problem (6).

Proof

Pick \(t^{*} \in \Omega\). For each n, let \(\Delta_{n}=\{(j,k):d_{jk}^{n} \neq 0\}\).

By using an argument similar to the one used in the proof of Proposition 6 of [16], we get \(y_{n} \in \Gamma\). Moreover, since \(\sum_{j=1}^{p}\sum_{k=1}^{r_{j}}\beta_{jk}T^{*}_{j}d^{n}_{jk} = 0\) and \(w_{n}=y_{n}\), and using (14), we have \(y_{n} \in \text{ Fix}(S)\). Consequently, \(y_{n} \in \Omega=\Gamma \cap \text{ Fix}(S)\). □

Lemma 4

Let \(\Omega \ne \emptyset \) and \(\{t_{n}\}\) be a sequence generated by Algorithm 1 such that Assumption \(C(3)\) holds. Then the sequence \(\{t_{n}\}\) is bounded.

Proof

Let \(t^{*}\in \Omega \), then we have

$$\begin{aligned} \|w_{n}-t^{*}\|^{2} =&\left \|(1-\delta _{n})\Bigg(y_{n}-\tau _{n} \sum _{j=1}^{p}\sum _{k=1}^{r_{j}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \Bigg)+\delta _{n}S_{\lambda}y_{n}-t^{*}\right \|^{2} \\ =&\left \|(1-\delta _{n})\Bigg((y_{n}-t^{*})-\tau _{n} \sum _{j=1}^{p} \sum _{k=1}^{r_{j}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\Bigg)+\delta _{n}(S_{ \lambda}y_{n}-t^{*})\right \|^{2} \\ =&(1-\delta _{n})\left \|(y_{n}-t^{*})-\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2}+\delta _{n}\|S_{ \lambda}y_{n}-t^{*}\|^{2}- \\ {} &\delta _{n}(1-\delta _{n})\left \|S_{\lambda}y_{n}-y_{n}+\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2} . \end{aligned}$$
(17)

Estimating \(\left \|(y_{n}-t^{*})-\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \right \|^{2}\), we get

$$\begin{aligned} &\left \|(y_{n}-t^{*})-\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \right \|^{2} \\ \quad&= \|y_{n}-t^{*}\|^{2}+\tau _{n}^{2} \left\|\sum _{(j,k)\in \Delta _{n}} \beta _{jk}T^{*}_{j}d^{n}_{jk}\right\|^{2}-2\tau _{n}\left \langle \sum _{(j,k) \in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk}, y_{n}-t^{*}\right \rangle \\ &\quad =\|y_{n}-t^{*}\|^{2}+\tau _{n}^{2}\left \|\sum _{j=1}^{p}\sum _{k=1}^{r_{j}} \beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2}-2\tau _{n}\sum _{(j,k) \in \Delta _{n}}\beta _{jk}\left \langle d^{n}_{jk}, T_{j}y_{n}-T_{j}t^{*} \right \rangle \\ &\quad=\|y_{n}-t^{*}\|^{2}+\tau _{n}^{2}\left \|\sum _{j=1}^{p}\sum _{k=1}^{r_{j}} \beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2} \\ &\qquad{}-2\tau _{n}\sum _{(j,k) \in \Delta _{n}}\beta _{jk}\Biggl\langle \dfrac{\left (I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}y_{n}}{\left \|\left (I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}y_{n}\right \|}, T_{j}y_{n}-T_{j}t^{*}\Biggr\rangle \\ &\quad=\|y_{n}-t^{*}\|^{2}+\tau _{n}^{2}\left \|\sum _{j=1}^{p}\sum _{k=1}^{r_{j}} \beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2} \\ &\qquad{}-2\tau _{n}\sum _{(j,k) \in \Delta _{n}}\beta _{jk}\Biggl\langle \dfrac{\left (I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}y_{n}-\left (I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}t^{*}}{\left \| \left (I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}y_{n}\right \|}, T_{j}y_{n}-T_{j}t^{*}\Biggr\rangle \\ &\quad \leq\|y_{n}-t^{*}\|^{2}+\tau _{n}^{2}\left \|\sum _{j=1}^{p} \sum _{k=1}^{r_{j}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2}-2\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}\left \|\left (I^{\mathcal{H}_{j}}-P^{ \mathcal{H}_{j}}_{Q_{jk}^{n}}\right )T_{j}y_{n}\right \|. \end{aligned}$$
(18)

Substituting (15) into (18) and simplifying, we get

$$\begin{aligned} \left \|(y_{n}-t^{*})-\tau _{n}\sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \right \|^{2} {}\leq &\|y_{n}-t^{*}\|^{2}-\rho _{n}(2-\rho _{n})g_{jk}(y_{n}) \end{aligned}$$
(19)
$$\begin{aligned} {}\leq &\|y_{n}-t^{*}\|^{2}, \end{aligned}$$
(20)

where

$$ g_{jk}(y_{n}):=\left ( \dfrac{\sum _{(j,k)\in \Delta _{n}}\beta _{jk}f_{jk}(T_{j}y_{n})}{\Xi_{n}}\right)^{2}. $$
(21)

Again, substituting (19) into (17) and simplifying, we get

$$\begin{aligned} \|w_{n}-t^{*}\|^{2} \leq &\|y_{n}-t^{*}\|^{2}-(1-\delta _{n})\rho _{n}(2- \rho _{n})g_{jk}(y_{n})- \\ {} &\delta _{n}(1-\delta _{n})\left \|S_{\lambda}y_{n}-y_{n}+\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2} . \end{aligned}$$
(22)

Using the definition of \(z_{n}\), we have

$$\begin{aligned} \|z_{n}-t^{*}\|^{2} =&\left \|P_{C^{n}}w_{n}-t^{*}\right \|^{2} \\ \leq &\|w_{n}-t^{*}\|^{2}-\left \|(I-P_{C^{n}})w_{n} \right \|^{2}. \end{aligned}$$
(23)

Substituting (22) into (23), we get

$$\begin{aligned} \|z_{n}-t^{*}\|^{2} \leq &\|y_{n}-t^{*}\|^{2}-(1-\delta _{n})\rho _{n}(2- \rho _{n})g_{jk}(y_{n})- \\ {} &\delta _{n}(1-\delta _{n})\left \|S_{\lambda}y_{n}-y_{n}+\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk}\right \|^{2}- \\ {} & \left \|(I-P_{C^{n}})w_{n} \right \|^{2} \end{aligned}$$
(24)
$$\begin{aligned} \leq &\|y_{n}-t^{*}\|^{2}. \end{aligned}$$
(25)

Now, denote

$$ u_{n}:=\frac{1}{1-\sigma _{n}}\left (\alpha _{n}y_{n}+\gamma _{n}z_{n} \right ). $$
(26)

It follows that

$$\begin{aligned} \|u_{n}-t^{*}\|^{2} =&\left \|\frac{\alpha _{n}}{1-\sigma _{n}}y_{n}+ \frac{\gamma _{n}}{1-\sigma _{n}}z_{n}-t^{*}\right \|^{2} \\ =&\left \|\frac{\alpha _{n}}{1-\sigma _{n}}(y_{n}-t^{*})+ \frac{\gamma _{n}}{1-\sigma _{n}}(z_{n}-t^{*})\right \|^{2} \\ \leq&\frac{\alpha _{n}}{1-\sigma _{n}}\|y_{n}-t^{*}\|^{2}+ \frac{\gamma _{n}}{1-\sigma _{n}}\|z_{n}-t^{*}\|^{2}. \end{aligned}$$
(27)

Substituting (24) into (27) and simplifying, we get

$$\begin{aligned} \|u_{n}-t^{*}\|^{2} \leq &\|y_{n}-t^{*}\|^{2}-(1-\delta _{n})\rho _{n}(2- \rho _{n})\frac{\gamma _{n}}{1-\sigma _{n}}g_{jk}(y_{n})- \\ {} &\delta _{n}(1-\delta _{n})\frac{\gamma _{n}}{1-\sigma _{n}}\left \|S_{\lambda}y_{n}-y_{n}+\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \right \|^{2}- \\ {} & \frac{\gamma _{n}}{1-\sigma _{n}}\left \|(I-P_{C^{n}})w_{n} \right \|^{2} \end{aligned}$$
(28)
$$\begin{aligned} \leq &\|y_{n}-t^{*}\|^{2}. \end{aligned}$$
(29)

Using the definition of \(y_{n}\), we have

$$\begin{aligned} \|y_{n}-t^{*}\| {}=& \|t_{n}+\theta _{n}(t_{n}-t_{n-1})-t^{*}\| \\ {}\leq & \|t_{n}-t^{*}\|+ \theta _{n}\|t_{n}-t_{n-1}\| \\ {}=& \|t_{n}-t^{*}\|+ \sigma _{n}\left [ \dfrac{\theta _{n}}{\sigma _{n}}\|t_{n}-t_{n-1}\|\right ] \end{aligned}$$
(30)

Using the definition of \(t_{n}\) and (29), we obtain

$$\begin{aligned} \|t_{n+1}-t^{*}\|=&\|\sigma _{n}(v(t_{n})-t^{*})+(1-\sigma _{n})(u_{n}-t^{*}) \| \\ \leq &\sigma _{n}\|v(t_{n})-t^{*}\|+(1-\sigma _{n})\|u_{n}-t^{*}\| \\ \leq &\sigma _{n}\|v(t_{n})-v(t^{*})\|+\sigma _{n}\|v(t^{*})-t^{*}\|+(1- \sigma _{n})\|u_{n}-t^{*}\| \\ \leq &\mu \sigma _{n}\|t_{n}-t^{*}\|+\sigma _{n}\|v(t^{*})-t^{*} \|+(1-\sigma _{n})\|u_{n}-t^{*}\| \\ \leq &\mu \sigma _{n}\|t_{n}-t^{*}\|+\sigma _{n}\|v(t^{*})-t^{*} \|+(1-\sigma _{n})\|y_{n}-t^{*}\|. \end{aligned}$$
(31)

Now, combining (30) and (31), we have

$$ \|t_{n+1}-t^{*}\|\leq [1-(1-\mu )\sigma _{n}]\|t_{n}-t^{*}\|+ \sigma _{n}\biggl[\dfrac{\theta _{n}}{\sigma _{n}}\|t_{n}-t_{n-1}\|+ \|v(t^{*})-t^{*}\|\biggr]. $$
(32)

Using \(C(3) (ii)\) and (10), we have \(\lim _{n\to \infty}\frac{\theta _{n}}{\sigma _{n}}\|t_{n}-t_{n-1}\|=0\). Hence, we can find a constant \(M \geq 0\) such that

$$ \dfrac{\theta _{n}}{\sigma _{n}}\|t_{n}-t_{n-1}\| \leq M. $$

Now, (32) becomes

$$ \begin{aligned} \|t_{n+1}-t^{*}\|\leq &[1-(1-\mu )\sigma _{n}]\|t_{n}-t^{*}\|+ \sigma _{n}\bigl[M+\|v(t^{*})-t^{*}\|\bigr] \\ =&[1-(1-\mu )\sigma _{n}]\|t_{n}-t^{*}\|+\sigma _{n}(1-\mu )\left [ \dfrac{M+\|v(t^{*})-t^{*}\|}{1-\mu}\right ]. \end{aligned} $$

Proceeding inductively, we arrive at

$$ \|t_{n+1}-t^{*}\| \leq \max \left \{\|t_{1}-t^{*}\|, \dfrac{M+\|v(t^{*})-t^{*}\|}{1-\mu}\right \}, $$

for all \(n \geq 1\), which proves that \(\{t_{n}\}\) is bounded. □

Lemma 5

Let \(\Omega \ne \emptyset \), \(S: C \rightarrow C\) be a demicontractive mapping such that \(I-S\) is demiclosed at zero, \(v: C \rightarrow C\) be a μ-contraction, and suppose that \(\{\sigma _{n}\}, \{\rho _{n}\}, \{\alpha _{n}\}, \{\gamma _{n}\}, \{ \delta _{n}\}\), and \(\{\epsilon _{n}\}\) are sequences satisfying Assumption \(C(3)\).

Let \(t^{*} \in \Omega \) with \(t^{*}=P_{\Omega}v(t^{*})\), let \(\{t_{n}\}\) be the sequence generated by Algorithm 1, and let the function \(g_{jk}(y_{n})\) and the sequence \(u_{n}\) be given as in (21) and (26), respectively.

For \(n \geq 1\), let us denote

$$\begin{aligned} \Theta _{n} :=&2(1-\mu )\sigma _{n}; \\ \Phi_{n} :=& 2\sigma_{n}\langle t_{n+1}-t^{*}, v(t_{n})-t^{*}\rangle ;\\ \Delta_{n} :=&\dfrac{1}{2(1-\mu)}\biggl(2((1-\sigma_{n})^{2}+\mu\sigma_{n}) \dfrac{\epsilon_{n}}{\sigma_{n}}\|y_{n}-t^{*}\|+\sigma_{n}\|v(t_{n})-t^{*}\|^{2}+ \\ {} &2\sigma_{n}\|v(t_{n})-t^{*}\|\|u_{n}-t^{*}\|+\sigma_{n}\|t_{n}-t^{*}\|^{2}+2\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle+\sigma_{n}\|t_{n}-t^{*}\|^{2}\biggr); \end{aligned}$$

and

$$\begin{aligned} \Psi _{n}: =&(1-\delta _{n})\rho _{n}(2-\rho _{n}) \frac{\gamma _{n}}{1-\sigma _{n}}g_{jk}(y_{n})+\\ {} & \delta _{n}(1-\delta _{n})\frac{\gamma _{n}}{1-\sigma _{n}}\left \|S_{ \lambda}y_{n}-y_{n}+\tau _{n} \sum _{(j,k)\in \Delta _{n}}\beta _{jk}T^{*}_{j}d^{n}_{jk} \right \|^{2}+ \\ {} &\frac{\gamma _{n}}{1-\sigma _{n}}\left \|(I-P_{C^{n}})w_{n} \right \|^{2}. \end{aligned}$$

Then, for any subsequence \(\{n_{l}\}\) of \(\{n\}\), we have

$$ \limsup _{ l \to \infty}\Delta _{{n_{l}}} \leq 0, $$
(33)

whenever,

$$ \lim \limits _{ l \to \infty}\Psi _{{n_{l}}} = 0. $$
(34)

Proof

Suppose (34) holds. It follows that

$$ \lim \limits _{l \to \infty}g_{jk}(y_{n_{l}})=0, $$
(35)

and based on the assumptions listed under \(C(3)\), it follows that

$$ \lim \limits _{l \to \infty} \dfrac{\sum _{(j,k)\in \Delta _{n_{l}}}\beta _{jk}f_{jk}(T_{j}y_{n_{l}})}{\Xi_{n_{l}}}=0, $$
(36)

for all \((j,k)\in \Delta _{n_{l}}\).

Let \(\Xi:=\max\{\tau, \sum_{j=1}^{p}\sum_{k=1}^{r_{j}}\beta_{jk}\|T^{*}_{j}\|\}\). By using \(\|d^{n_{l}}_{jk}\|=1\) for all \((j,k)\in \Delta_{n_{l}}\), we get

$$ 0\leq \dfrac{\sum _{(j,k)\in \Delta _{n_{l}}}\beta _{jk}f_{jk}(T_{j}y_{n_{l}})}{\Xi} \leq \dfrac{\sum _{(j,k)\in \Delta _{n_{l}}}\beta _{jk}f_{jk}(T_{j}y_{n_{l}})}{\Xi_{n_{l}}}. $$
(37)

Combining (36) and (37), we have

$$ \lim\limits_{l \to \infty}\sum _{(j,k)\in \Delta _{n_{l}}}\beta _{jk}f_{jk}(T_{j}y_{n_{l}})=0, $$
(38)

or equivalently

$$ \lim \limits _{l \to \infty}\|(I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n_{l}}})T_{j}y_{n_{l}} \|=0, $$

for all \((j,k)\in \Delta _{n_{l}}\).

Note that from the definition of \(\Delta _{n_{l}}\) and \(d_{jk}^{n_{l}}\), we have \(T_{j}y_{n_{l}} \in Q_{jk}^{n_{l}}\) when \((j,k)\notin \Delta _{n_{l}}\) and hence \(\|(I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n_{l}}})T_{j}y_{n_{l}} \|=0\). As a result, we get

$$ \lim \limits _{l \to \infty}\|(I^{\mathcal{H}_{j}}-P^{\mathcal{H}_{j}}_{Q_{jk}^{n_{l}}})T_{j}y_{n_{l}} \|=0, $$
(39)

for all \(j \in J_{1}\) and \(k \in J_{2}\).

By (34), we also get

$$ \lim \limits _{l \to \infty}\left \|S_{\lambda}y_{n_{l}}-y_{{n_{l}}}+ \tau _{n_{l}} \sum _{(j,k)\in \Delta _{n_{l}}}\beta _{jk}T^{*}_{j}d^{n_{l}}_{jk} \right \|^{2}=0, $$

and due to (38), we obtain

$$ \lim \limits _{l \to \infty}\left \|S_{\lambda}y_{n_{l}}-y_{n_{l}} \right \|=0. $$
(40)

On the other hand, by (34) and using the definition of \(z_{n}\), we also obtain

$$\begin{aligned} &\lim \limits _{l \to \infty}\left \|(I-P_{C^{n_{l}}})w_{{n_{l}}} \right \| \\ &\quad=\lim \limits _{l \to \infty}\left \|w_{{n_{l}}}-P_{C^{n}}w_{{n_{l}}} \right \| \\ &\quad=\lim \limits _{l \to \infty}\left \|\left ((1-\delta _{n_{l}}) \Bigg(y_{{n_{l}}}-\tau _{{n_{l}}} \sum _{(j,k)\in \Delta_{n_{l}}} \beta _{jk}T^{*}_{j}d^{n_{l}}_{jk}\Bigg)+\delta _{n_{l}}S_{\lambda}y_{n_{l}} \right )-z_{{n_{l}}}\right \| \\ &\quad= \lim \limits _{l \to \infty}\left \|(1-\delta _{n_{l}})y_{{n_{l}}}+ \delta _{n_{l}}S_{\lambda}y_{n_{l}}-z_{{n_{l}}}-(1-\delta _{n_{l}}) \tau _{{n_{l}}} \sum _{(j,k)\in\Delta_{n_{l}}}\beta _{jk}T^{*}_{j}d^{n_{l}}_{jk} \right \| \\ &\quad =0. \end{aligned}$$
(41)

Using (38) and (41), we get

$$ \lim \limits _{l \to \infty}\left \|(1-\delta _{n_{l}})y_{{n_{l}}}+ \delta _{n_{l}}S_{\lambda}y_{n_{l}}-z_{n_{l}}\right \|=\lim \limits _{l \to \infty}\left \|y_{{n_{l}}}-z_{n_{l}}+\delta _{n_{l}} \left (S_{\lambda}y_{n_{l}}-y_{{n_{l}}}\right )\right \|=0. $$
(42)

Similarly, using (40) and (42), we get

$$ \lim \limits _{l \to \infty}\|y_{n_{l}}-z_{{n_{l}}}\|=0. $$
(43)

By using the definition of \(u_{n}\), we have

$$\begin{aligned} \|u_{n_{l}}-y_{n_{l}}\| =&\left \| \frac{\alpha _{n_{l}}}{1-\sigma _{n_{l}}}y_{n_{l}}+ \frac{\gamma _{{n_{l}}}}{1-\sigma _{{n_{l}}}}z_{n_{l}}-y_{n_{l}} \right \| \\ \leq &\frac{\gamma _{{n_{l}}}}{1-\sigma _{{n_{l}}}}\left \|z_{n_{l}}-y_{n_{l}} \right \|, \end{aligned}$$

which, by (43), yields

$$ \lim \limits _{l \to \infty}\|u_{n_{l}}-y_{n_{l}}\|=0. $$
(44)

Next, we need to show that \(\omega _{w}(y_{n})\subset \Omega \). Since \(\{y_{n}\}\) is bounded, \(\omega _{w}(y_{n})\ne \emptyset \). Let \(\bar{y}\in \omega _{w}(y_{n})\). It follows that there exists a subsequence \(\{y_{n_{l}}\}\) of \(\{y_{n}\}\) such that \(y_{n_{l}}\rightharpoonup \bar{y}\).

Now, due to the linearity and boundedness of \(T_{j}\), we have \(T_{j}y_{n_{l}} \rightharpoonup T_{j}\bar{y}\).

We claim that \(\bar{y} \in \Omega \). To show this, it suffices to show that \(\bar{y} \in C\) and \(T_{j}\bar{y} \in Q_{jk}\) for all \(j \in J_{1}, k \in J_{2}\).

From the assumption (C2), we can see that \(\partial q_{jk}\) is bounded on bounded sets for each \(j \in J_{1}, k \in J_{2}\). It follows that we can find a constant \(\eta >0\) such that \(\|\eta _{jk,n_{l}}\|\leq \eta \), where \(\eta _{jk,n_{l}}\in \partial q_{jk}(T_{j}y_{n_{l}})\) for each \(j\in J_{1}, k \in J_{2}\).

Now, using (9), (39), and the fact that \(P_{Q_{jk}^{n_{l}}}\Big(T_{j}y_{n_{l}}\Big)\in Q_{jk}^{n_{l}}\), we get

$$\begin{aligned} q_{jk}\Big(T_{j}y_{n_{l}}\Big) {}\leq &\Big\langle \eta _{jk,n_{l}}, ~T_{j}y_{n_{l}}-P_{Q_{jk}^{n_{l}}}\Big(T_{j}y_{n_{l}}\Big) \Big\rangle -\frac{\omega _{j}}{2}\Big\| T_{j}y_{n_{l}}-P_{Q_{jk}^{n_{l}}} \Big(T_{j}y_{n_{l}}\Big)\Big\| ^{2} \\ {}\leq &\Big\langle \eta _{jk,n_{l}}, ~T_{j}y_{n_{l}}-P_{Q_{jk}^{n_{l}}} \Big(T_{j}y_{n_{l}}\Big) \Big\rangle \\ {}\leq &\Big\| \eta _{jk,n_{l}}\Big\| \Big\| \Big(I-P_{Q_{jk}^{n_{l}}} \Big)T_{j}y_{n_{l}}\Big\| \\ {}\leq &\eta \Big\| \Big(I-P_{Q_{jk}^{n_{l}}}\Big)T_{j}y_{n_{l}} \Big\| \to 0. \end{aligned}$$
(45)

Noting \(q_{jk}\) is weakly lower semi-continuous, it follows that

$$ q_{jk}(T_{j}\bar{y})\leq \liminf \limits _{l\rightarrow \infty}q_{jk} \Big(T_{j}y_{n_{l}}\Big)\leq \lim \limits _{l\rightarrow \infty}\eta \Big\| \Big(I-P_{Q_{jk}^{n_{l}}}\Big)T_{j}y_{n_{l}}\Big\| = 0, $$

for all \(j\in J_{1}, k \in J_{2}\). It turns out that \(T_{j}\bar{y}\in Q_{jk}\) for all \(j \in J_{1}, k \in J_{2}\).

Again, from the assumption (C2), we can see that ∂c is bounded on bounded sets. It follows that there is a constant \(\xi >0\) such that \(\|\xi_{n_{l}}\|\leq \xi \), where \(\xi_{n_{l}}\in \partial c(y_{n_{l}})\).

By using (8) and (44), we have as \(l\to \infty \) that

$$\begin{aligned} c(y_{n_{l}}) {}\leq &\Big\langle \xi_{n_{l}}, ~u_{n_{l}}-y_{n_{l}} \rangle -\frac{\varpi}{2}\|u_{n_{l}}-y_{n_{l}}\|^{2} \\ {}\leq &\|\xi_{n_{l}}\|\|u_{n_{l}}-y_{n_{l}}\| \\ {}\leq &\xi \|u_{n_{l}}-y_{n_{l}}\|\to 0. \end{aligned}$$
(46)

Noting c is weakly lower semi-continuous, it follows that

$$ c(\bar{y})\leq \liminf \limits _{l\rightarrow \infty}c(y_{n_{l}}) \leq \lim \limits _{l\rightarrow \infty}\xi \Big\| y_{n_{l}}-u_{n_{l}} \Big\| =0. $$

Thus, \(\bar{y}\in C\). Consequently, \(\omega _{\omega}(y_{n_{l}})\subset \Gamma \).

Since \(S_{\lambda}=(1-\lambda )I+\lambda S\) and \(I-S\) is demiclosed at zero, we see that \(I-S_{\lambda}\) is demiclosed at zero. Now, taking \(y_{n_{l}}\rightharpoonup \bar{y}\) and (40) into account, we deduce that \(\omega _{\omega}(y_{n_{l}})\subset \text{ Fix}(S)\). Putting the above results together, we see that \(\omega _{\omega}(y_{n_{l}})\subset \Omega =\text{ Fix}(S) \cap \Gamma \).

Since the mapping \(P_{\Omega }v\) is a strict contraction on H, there exists a unique point \(t^{*} \in H\) such that \(t^{*}=P_{\Omega }v(t^{*})\). It then follows from (5) that

$$ \langle v(t^{*})-t^{*},z-t^{*}\rangle \leq 0, $$
(47)

for all \(z \in \Omega \).

Next, we choose a subsequence \(\{y_{n_{l_{m}}}\}\) of \(\{y_{n_{l}}\}\) such that

$$ \limsup _{l \to \infty}\langle v(t^{*})-t^{*}, y_{n_{l}}-t^{*} \rangle =\lim _{m\to \infty}\langle v(t^{*})-t^{*}, y_{n_{l_{m}}}-t^{*} \rangle . $$
(48)

We may assume, without any loss of generality, that \(y_{n_{l_{m}}} \rightharpoonup \bar{y}\) as \(m \to \infty \).

Now, using (44), (47), and (48), we get

$$\begin{aligned} &\limsup _{l \to \infty}\langle v(t^{*})-t^{*}, u_{n_{l}}-t^{*} \rangle \\ &\quad=\limsup _{l \to \infty}\langle v(t^{*})-t^{*}, u_{n_{l}}-y_{n_{l}}+y_{n_{l}}-t^{*} \rangle \\ &\quad=\limsup _{l \to \infty}\langle v(t^{*})-t^{*}, u_{n_{l}}-y_{n_{l}} \rangle +\limsup _{l \to \infty}\langle v(t^{*})-t^{*}, y_{n_{l}}-t^{*} \rangle \\ &\quad\leq\limsup _{l \to \infty}\|v(t^{*})-t^{*}\|\|u_{n_{l}}-y_{n_{l}} \|+\limsup _{l \to \infty}\langle v(t^{*})-t^{*}, y_{n_{l}}-t^{*} \rangle \\ &\quad=\lim _{m\to \infty}\langle v(t^{*})-t^{*}, y_{n_{l_{m}}}-t^{*} \rangle \\ &\quad= \langle v(t^{*})-t^{*}, \bar{y}-t^{*} \rangle \\ &\quad\leq 0, \end{aligned}$$
(49)

which shows that (33) holds. □

Theorem 6

Let \(\Omega \ne \emptyset \), \(S: C \rightarrow C\) be a demicontractive mapping such that \(I-S\) is demiclosed at zero, \(v: C \rightarrow C\) be a μ-contraction, and suppose that \(\{\rho_{n}\}, \{\epsilon_{n}\}, \{\sigma _{n}\}, \{\alpha _{n}\}, \{\gamma _{n}\}\), and \(\{\delta _{n}\}\) are sequences satisfying Assumption \(C(3)\). Then, the sequence generated by Algorithm 1 converges strongly to \(t^{*}=P_{\Omega }v(t^{*})\).

Proof

Using the definition of \(y_{n}\), we have

$$\begin{aligned} \|y_{n}-t^{*}\|^{2} {} = & \|t_{n}+\theta _{n}(t_{n}-t_{n-1})-t^{*} \|^{2} \\ {} = & \|(t_{n}-t^{*})+\theta _{n}(t_{n}-t_{n-1})\|^{2} \\ {} \leq & \|t_{n}-t^{*}\|^{2}+2\theta _{n}\langle y_{n}-t^{*}, t_{n}-t_{n-1} \rangle \\ {} \leq & \|t_{n}-t^{*}\|^{2}+2\theta _{n}\|t_{n}-t_{n-1}\|\|y_{n}-t^{*} \| \\ \leq & \|t_{n}-t^{*}\|^{2}+2\epsilon _{n}\|y_{n}-t^{*}\|. \end{aligned}$$
(50)

Using (29) and (50), we get

$$ \|u_{n}-t^{*}\|^{2} \leq \|t_{n}-t^{*}\|^{2}+2\epsilon_{n}\|y_{n}-t^{*}\|. $$
(51)

Using the definition of \(t_{n}\), we have

$$\begin{aligned} &\|t_{n+1}-t^{*}\|^{2} \\ &\quad=\|\sigma_{n}(v(t_{n})-t^{*})+(1-\sigma_{n})(u_{n}-t^{*})\|^{2} \\ &\quad=\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}(1-\sigma_{n})\langle v(t_{n})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad=\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}\langle v(t_{n})-t^{*}, u_{n}-t^{*}\rangle- \\ &\qquad 2\sigma_{n}^{2}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad\leq\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+ \\ &\qquad 2\sigma_{n}\langle v(t_{n})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad=\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+\\ &\qquad 2\sigma_{n}\langle v(t_{n})-v(t^{*}), u_{n}-t^{*}\rangle+2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad\leq\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+ \\ &\qquad 2\sigma_{n}\|v(t_{n})-v(t^{*})\|\| u_{n}-t^{*}\|+2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad\leq\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+ \\ &\qquad 2\mu\sigma_{n}\|t_{n}-t^{*}\|\| u_{n}-t^{*}\|+2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle \\ &\quad\leq\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+(1-\sigma_{n})^{2}\|u_{n}-t^{*}\|^{2}+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+ \\ &\qquad \mu\sigma_{n}(\|t_{n}-t^{*}\|^{2}+\| u_{n}-t^{*}\|^{2})+2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle. \end{aligned}$$
(52)

Substituting (51) into (52), we get

$$ \begin{aligned} \|t_{n+1}-t^{*}\|^{2} \leq&\sigma_{n}^{2}\|v(t_{n})-t^{*}\|^{2}+[(1-\sigma_{n})^{2}+2\mu\sigma_{n}]\|t_{n}-t^{*}\|^{2}+\\&[2\epsilon_{n}(1-\sigma_{n})^{2}+2\mu\sigma_{n}\epsilon_{n}]\|y_{n}-t^{*}\|+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+\\&2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle\\ \leq&[1-2\sigma_{n}(1-\mu)]\|t_{n}-t^{*}\|^{2}+\sigma_{n}^{2}\|v(t_{n})-t^{*}\|+\\&2\mu\sigma_{n}\epsilon_{n}\|y_{n}-t^{*}\|+2\sigma_{n}^{2}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+\\&2\sigma_{n}\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle+\sigma_{n}^{2}\|t_{n}-t^{*}\|^{2}\\ \leq&[1-2\sigma_{n}(1-\mu)]\|t_{n}-t^{*}\|^{2}+2\sigma_{n}(1-\mu)\frac{1}{2(1-\mu)}\Bigl[\sigma_{n}\|v(t_{n})-t^{*}\|+\\&2[(1-\sigma_{n})^{2}+\mu\sigma_{n}]\frac{\epsilon_{n}}{\sigma_{n}}\|y_{n}-t^{*}\|+2\sigma_{n}\|v(t_{n})-t^{*}\|\| u_{n}-t^{*}\|+\\&2\langle v(t^{*})-t^{*}, u_{n}-t^{*}\rangle+\sigma_{n}\|t_{n}-t^{*}\|^{2}\Bigr]. \end{aligned} $$
(53)

Again, using the definition of \(t_{n}\) and (28), we get

$$\begin{aligned} \|t_{n+1}-t^{*}\|^{2} =&\|\sigma_{n}(v(t_{n})-t^{*})+(1-\sigma_{n})(u_{n}-t^{*})\|^{2} \\ \leq&\|u_{n}-t^{*}\|^{2}+2\sigma_{n}\langle v(t_{n})-t^{*},t_{n+1}-t^{*} \rangle \\ \leq&\|y_{n}-t^{*}\|^{2}-(1-\delta_{n})\rho_{n}(2-\rho_{n})\frac{\gamma_{n}}{1-\sigma_{n}}g_{jk}(y_{n})- \\ {} &\delta_{n}(1-\delta_{n})\frac{\gamma_{n}}{1-\sigma_{n}}\left\|S_{\lambda}y_{n}-y_{n}+\tau_{n} \sum_{(j,k)\in \Delta_{n}}\beta_{jk}T^{*}_{j}d^{n}_{jk}\right\|^{2}- \\ {} & \frac{\gamma_{n}}{1-\sigma_{n}}\left\|(I-P_{C^{n}})w_{n} \right\|^{2}+2\sigma_{n}\langle v(t_{n})-t^{*}, t_{n+1}-t^{*} \rangle. \end{aligned}$$
(54)

According to the notation introduced in Lemma 5, inequalities (53) and (54) can be briefly expressed as

$$\begin{aligned}& \|t_{n+1}-t^{*}\|^{2} \leq (1-\Theta _{n})\|t_{n}-t^{*}\|^{2}+\Theta _{n} \Delta _{n}, \forall n\geq 1,\\& \|t_{n+1}-t^{*}\|^{2} \leq \|t_{n}-t^{*}\|^{2}-\Psi _{n}+\Phi _{n}, \forall n\geq 1, \end{aligned}$$

respectively.

Applying the conditions listed under \(C(3)\), we immediately obtain

$$ \sum _{n=1}^{\infty}\Theta _{n}=\infty \text{ and } \lim \limits _{n \to \infty}\Phi _{n}=0. $$

We are now in a position to establish the strong convergence of \(\{t_{n}\}\). From the results obtained above and in Lemma 5, we can see that all the hypotheses of Lemma 3 are satisfied. Hence

$$ \lim \limits _{n \to \infty}\|t_{n}-t^{*}\|=0, $$

which shows that the sequence \(\{t_{n}\}\) converges strongly to \(t^{*}=P_{\Omega }v(t^{*})\). □

4 Numerical experiment

In this section, we illustrate the convergence of Algorithm 1 using a numerical example.

Example 1

Let \(H=\mathbb{R}^{S}\), \(H_{1}=\mathbb{R}^{R}\), \(H_{2}=\mathbb{R}^{N}\), \(H_{3}=\mathbb{R}^{M}\), \(H_{4}=\mathbb{R}^{L}\).

Let \(C=\{x\in \mathbb{R}^{S}:\|x-\textbf{o}\|^{2}\leq \textbf{r}^{2}\}\), where \(\textbf{o} \in \mathbb{R}^{S}\) and \(\textbf{r} \in \mathbb{R}\). Clearly, C is a nonempty, closed, and convex subset of H.

Let \(Q_{11}=\{y\in \mathbb{R}^{R}:\|y-\textbf{a}_{1}\|^{2}\leq { \varrho}^{2}_{1}\}\), \(Q_{21}=\{y\in \mathbb{R}^{N}:\|y-\textbf{a}_{2}\|^{2}\leq { \varrho}^{2}_{2}\}\), \(Q_{31}=\{y\in \mathbb{R}^{M}:\|y-\textbf{a}_{3}\|^{2}\leq { \varrho}^{2}_{3}\}\), and \(Q_{41}=\{y\in \mathbb{R}^{L}:\|y-\textbf{a}_{4}\|^{2}\leq { \varrho}^{2}_{4}\}\), where \(\textbf{a}_{1}\in \mathbb{R}^{R}\), \(\textbf{a}_{2}\in \mathbb{R}^{N}\), \(\textbf{a}_{3}\in \mathbb{R}^{M}\), \(\textbf{a}_{4}\in \mathbb{R}^{L}\), and \(\varrho _{1}, \varrho _{2}, \varrho _{3}, \varrho _{4}\in \mathbb{R}\).

Let \(T_{1}:\mathbb{R}^{S}\to \mathbb{R}^{R}\), \(T_{2}:\mathbb{R}^{S}\to \mathbb{R}^{N}\), \(T_{3}:\mathbb{R}^{S}\to \mathbb{R}^{M}\), and \(T_{4}:\mathbb{R}^{S}\to \mathbb{R}^{L}\) be bounded linear operators represented by matrices whose entries are randomly generated in the closed interval \([-5, 5]\).

Now, we construct the balls \(C^{n}\) and \(Q_{j1}^{n}\) (\(j=1, 2, 3, 4\)) defined in (8) and (9) for the sets C and \(Q_{j1}\), respectively, as follows.

For any \(x\in \mathbb{R}^{S}\), we have \(c(x)=\|x-\textbf{o}\|^{2}-\textbf{r}^{2}\) and \(q_{j1}(T_{j}x)=\|T_{j}x-\textbf{a}_{j}\|^{2}-{\varrho}^{2}_{j}\) for \(j=1, 2, 3, 4\). The subgradients \(\xi_{n}\in \partial c(y_{n})\) and \(\eta _{j1,n}\in \partial q_{j1}(T_{j}y_{n})\) can be computed at the points \(y_{n}\) and \(T_{j}y_{n}\) as \(\xi_{n}=2(y_{n}-\textbf{o})\) and \(\eta _{j1,n}=2(T_{j}y_{n}-\textbf{a}_{j})\), respectively. The metric projections onto the balls \(C^{n}\) and \(Q_{j1}^{n}\) (\(j=1,2,3,4\)) can be easily calculated.

We randomly generate the coordinates of o and \(\textbf{a}_{j}\) in \([-1, 1]\), and r and \({\varrho}_{j}\) in \([S, 2S]\), \([R, 2R]\), \([N, 2N]\), \([M, 2M]\), and \([L, 2L]\), respectively. We take the initial points \(t_{0} = 100(1, 1, \dots , 1)^{T} \in \mathbb{R}^{S}\) and \(t_{1} = -10(1, 1, \dots , 1)^{T}\in \mathbb{R}^{S}\). We take the \(\frac{1}{6}\)-demicontractive mapping \(S(x)=-\frac{7}{5}x\), \(x \in \mathbb{R}^{S}\), and the contraction \(v(x)=0.95x\), \(x \in \mathbb{R}^{S}\). Now, using Lemma 2 with \(\lambda =\frac{1}{3}\), we get \(S_{\lambda}(x)=\frac{1}{5}x\), \(x \in \mathbb{R}^{S}\), which is a quasi-nonexpansive mapping.
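As a quick check of these claims (our computation, not stated in the original text): \(\text{Fix}(S)=\{0\}\), \(\|(I-S)x\|^{2}=\|x+\frac{7}{5}x\|^{2}=\frac{144}{25}\|x\|^{2}\), and

$$ \|Sx-0\|^{2}=\frac{49}{25}\|x\|^{2}=\|x\|^{2}+\frac{1}{6}\cdot \frac{144}{25}\|x\|^{2}=\|x\|^{2}+\frac{1}{6}\|(I-S)x\|^{2}, $$

so S is indeed \(\frac{1}{6}\)-demicontractive; moreover, \(\lambda =\frac{1}{3}\in (0, 1-\frac{1}{6})\) and \(S_{\lambda}(x)=\frac{2}{3}x+\frac{1}{3}\big(-\frac{7}{5}x\big)=\frac{1}{5}x\), in agreement with Lemma 2.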

We take \(\varpi = 0.5\). For \(j=1,2,3,4\), we take \(\beta _{j} = \frac{j}{10}\) and \(\omega _{j} = 1.5\), \(\theta = 0.3\), \(\epsilon _{n} = \frac{1}{20n^{30}+1}\), \(\delta _{n}=0.4\), \(\alpha _{n}=0.5\), \(\rho _{n}=\frac{n}{40n+1}\), \(\sigma _{n}=\frac{1}{n+1}\), \(\alpha _{n}=0.6, \tau=2\), and \(\gamma _{n}=1-\alpha _{n}-\sigma _{n}\). We use \(Error_{n}=\|t_{n+1}-t_{n}\|^{2}<10^{-8}\) as a stopping criterion in this example. The algorithms are coded in MATLAB 2023b on a personal computer (13th Gen Intel(R) Core(TM) i7-1355U 1.70 GHz, and a 16.0 GB RAM). All results are reported in Table 1 and Fig. 1.
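The following Python sketch (ours; the original experiments were coded in MATLAB, and the particular dimensions and variable names below are assumptions) assembles the problem data of Example 1. The iteration itself is Algorithm 1 and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2024)
S_dim, R, N, M, L = 50, 40, 30, 20, 10                       # one assumed choice of dimensions

o = rng.uniform(-1, 1, S_dim)                                # center of C
r = rng.uniform(S_dim, 2 * S_dim)                            # radius of C
centers = [rng.uniform(-1, 1, d) for d in (R, N, M, L)]      # a_1, ..., a_4
radii = [rng.uniform(d, 2 * d) for d in (R, N, M, L)]        # varrho_1, ..., varrho_4
T = [rng.uniform(-5, 5, (d, S_dim)) for d in (R, N, M, L)]   # T_1, ..., T_4

S_lam = lambda x: x / 5.0                                    # S_lambda with lambda = 1/3
v = lambda x: 0.95 * x                                       # the contraction v
c = lambda x: np.dot(x - o, x - o) - r ** 2                  # level function of C
q = [lambda y, a=a, rho=rho: np.dot(y - a, y - a) - rho ** 2
     for a, rho in zip(centers, radii)]                      # level functions of Q_{j1}

t0 = 100.0 * np.ones(S_dim)                                  # initial points
t1 = -10.0 * np.ones(S_dim)
```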

Figure 1
Iter. (n) vs \(Error_{n}\): experimental results of Algorithm 1 (when \(k=1\)) for different choices of \(S, R, N, M, L\)

Table 1 Numerical results of Algorithm 1 (when \(k=1\)) for different choices of \(S, R, N, M, L\)

5 Conclusion

In this paper, we study the split feasibility problem with multiple output sets and the fixed point problem in the class of demicontractive mappings. We propose a relaxed inertial self-adaptive algorithm and prove a strong convergence result for the sequence it generates. The proposed method combines the SFPMOS and the fixed point problem for demicontractive mappings; it therefore generalizes a number of related works, since both problems cover broad classes of problems. The proposed algorithm also incorporates the relaxation method in order to speed up its convergence, and it adopts a new approach for solving the SFPMOS that does not use the least squares method. Finally, we illustrate the convergence of the proposed algorithm using a numerical example.

Data Availability

No datasets were generated or analysed during the current study.

References

  1. Alakoya, T.O., Mewomo, O.T.: Mann-type inertial projection and contraction method for solving split pseudomonotone variational inequality problem with multiple output sets. Mediterr. J. Math. 20(6), 336 (2023)

  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  3. Berinde, V.: Approximating fixed points of demicontractive mappings via the quasi-nonexpansive case. Carpath. J. Math. 39(1), 73–85 (2023)

  4. Berinde, V.: An inertial self-adaptive algorithm for solving split feasibility problems and fixed point problems in the class of demicontractive mappings. J. Inequal. Appl. 2024(1), 82 (2024)

  5. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18(2), 441 (2002)

  6. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20(1), 103 (2003)

  7. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)

  8. Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071 (2005)

  9. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)

  10. Censor, Y., Motova, A., Segal, A.: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327(2), 1244–1256 (2007)

  11. He, S., Yang, C.: Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 942315 (2013)

  12. Jia, H., Liu, S., Dang, Y.: An inertial accelerated algorithm for solving split feasibility problem with multiple output sets. J. Math. 2021, 1–12 (2021)

  13. Kim, J.K., Tuyen, T.M., Ha, M.T.N.: Two projection methods for solving the split common fixed point problem with multiple output sets in Hilbert spaces. Numer. Funct. Anal. Optim. 42(8), 973–988 (2021)

  14. Li, H., Wu, Y., Wang, F.: New inertial relaxed CQ algorithms for solving the split feasibility problems in Hilbert spaces. J. Math. 2021, 1–13 (2021)

  15. Okeke, C.C.: An improved inertial extragradient subgradient method for solving split variational inequality problems. Bol. Soc. Mat. Mexicana 28(1), 16 (2022)

  16. Reich, S., Tuyen, T.M.: Two new self-adaptive algorithms for solving the split feasibility problem in Hilbert space. Numer. Algorithms 1(22) (2023)

  17. Reich, S., Truong, M.T., Mai, T.N.H.: The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 14, 2335–2353 (2020)

  18. Reich, S., Tuyen, T.M.: Projection algorithms for solving the split feasibility problem with multiple output sets. J. Optim. Theory Appl. 190, 861–878 (2021)

  19. Reich, S., Tuyen, T.M.: The generalized Fermat-Torricelli problem in Hilbert spaces. J. Optim. Theory Appl. 196(1), 78–97 (2023)

  20. Suantai, S., Pholasa, N., Cholamjiak, P.: Relaxed CQ algorithms involving the inertial technique for multiple-sets split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113, 1081–1099 (2019)

  21. Taddele, G.H., Kumam, P., Gebrie, A.G., Sitthithakerngkiet, K.: Half-space relaxation projection method for solving multiple-set split feasibility problem. Math. Comput. Appl. 25(3), 47 (2020)

  22. Taddele, G.H., Kumam, P., Gibali, A., Kumam, W.: An outer quadratic approximation method for solving split feasibility problems. J. Appl. Numer. Optim. 5(3) (2023)

  23. Taddele, G.H., Kumam, P., Sunthrayuth, P., Gebrie, A.G.: Self-adaptive algorithms for solving split feasibility problem with multiple output sets. Numer. Algorithms 92(2), 1335–1366 (2023)

  24. Taddele, G.H., Kumam, P., ur Rehman, H., Gebrie, A.G.: Self adaptive inertial relaxed CQ algorithms for solving split feasibility problem with multiple output sets. J. Ind. Manag. Optim. 19(1), 1–29 (2022)

  25. Thuy, N.T.T., Nghia, N.T.: A new iterative method for solving the multiple-set split variational inequality problem in Hilbert spaces. Optimization 72(6), 1549–1575 (2023)

Acknowledgements

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. The first author was supported by Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi with Grant No. 51/2565.

Funding

This research budget was allocated by National Science, Research and Innovation Fund (NSRF), and King Mongkut’s University of Technology North Bangkok (Project no. KMUTNB-FF-67-B-05).

Author information


Contributions

All authors contributed equally to this research manuscript.

Corresponding author

Correspondence to Poom Kumam.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
