
A self-adaptive extragradient method for fixed-point and pseudomonotone equilibrium problems in Hadamard spaces

Abstract

In this work, we study a self-adaptive extragradient algorithm for approximating a common solution of a pseudomonotone equilibrium problem and a fixed-point problem for a multivalued nonexpansive mapping in Hadamard spaces. Our proposed algorithm is designed so that its step size does not require knowledge of the Lipschitz-like constants of the bifunction. Under appropriate conditions, we establish the strong convergence of the algorithm. Furthermore, we provide a numerical experiment to demonstrate the efficiency of our algorithm. This result extends and complements recent results in the literature.

1 Introduction

In this article, we consider the problem of approximating a common solution of an equilibrium problem (EP) and a fixed-point problem for a multivalued mapping in a nonlinear space. Let C be a nonempty, closed, and convex subset of a metric space X. A multivalued nonlinear mapping \(T:X\to 2^{X}\) is said to have a fixed point \(x\in X\) if \(x\in Tx\). We denote the fixed-point set of T by \(F(T)\), i.e., \(F(T)=\{x\in X:x\in Tx\}\). Let \(f : C \times C \to \mathbb{R}\) be a bifunction. The EP is to find a point \(x^{*}\in C\) such that

$$\begin{aligned} f\bigl(x^{*}, y\bigr) \geq 0,\quad \forall y\in C, \end{aligned}$$
(1.1)

where \(f(x,x)=0\) and \(f(x,\cdot )\) is convex on C. The set of solutions of (1.1) is denoted by \(\operatorname{EP}(f,C)\). The EP was first introduced in the linear setting by Ky Fan [13] and was later developed by Muu and Oettli [35] (see also Blum and Oettli [3]). The problem (1.1) is known to cover other important mathematical problems such as the minimization problem, the variational inequality problem, the saddle-point problem, the Nash equilibrium problem of noncooperative games, the fixed-point problem, etc. (see [3, 35]). It is well known that many real-life problems are characterized by phenomena that can be modeled as nonlinear, nonconvex, continuous optimization problems. Approximating the solution of such problems may be complicated in the framework of linear spaces. Dedieu et al. [7] (see also Li and Wang [32]) noted that these difficulties can be overcome in nonlinear space settings. For this reason, Colao et al. [6] extended problem (1.1) to nonlinear spaces (in particular, a Riemannian manifold), and later, in 2019, Khatibzadeh and Mohebbi [27] also studied the EP in Hadamard spaces. These pioneering works have increased the interest of researchers in solving the EP (1.1) in nonlinear spaces; see [1, 19, 28, 31]. We remark here that the aforementioned works concern the case when the bifunction f in (1.1) is monotone. Usually, the iterative methods employed in this instance adopt the popular regularization technique associated with a strongly monotone subproblem. However, in the case when f in (1.1) is nonmonotone, the regularization subproblem ceases to be strongly monotone and the iterative method becomes complex.

It is well known that pseudomonotone operators are natural generalizations of monotone operators and that many real-life EPs can be modeled with pseudomonotone bifunctions. One of the notable methods used to solve the nonmonotone (in particular, the pseudomonotone) EP is the extragradient algorithm (EA). The EA was introduced by Korpelevich [30] and later extended to EPs by Tran et al. [40]. Recently, Khammahawong et al. [26] employed an EA to approximate the solution of a strongly pseudomonotone EP on a Hadamard manifold. Their proposed algorithm is as follows: Given \(x_{0}, y_{0}\in C\), the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are generated as:

$$\begin{aligned} \textstyle\begin{cases} x_{n+1} \in \arg \min_{y\in C} \{f(y_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y) \}, \\ y_{n+1} \in \arg \min_{y\in C} \{f(y_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n+1},y) \}, \end{cases}\displaystyle \end{aligned}$$
(1.2)

where \(\{\lambda _{n}\}\) is a positive sequence and f is a strongly pseudomonotone bifunction. They obtained the convergence of (1.2) under some mild conditions. We note that algorithm (1.2) has been extensively studied in both Hilbert and Banach spaces; see [37–39] and other references therein. To obtain the convergence of the sequence generated by (1.2) (and of the method of Khatibzadeh and Mohebbi [27] in Hadamard spaces), one needs to know in advance estimates of the Lipschitz-like constants of the bifunction. These are usually not easy to derive when the structure of the bifunction is not simple, and this may affect the efficiency of the algorithm.

In 2021, Iusem and Mohebbi [18] introduced two extragradient algorithms with a linesearch technique to solve the EP (1.1) with a pseudomonotone bifunction in Hadamard spaces. Their proposed algorithms are as follows:

Algorithm 1.1

(Extragradient Algorithm with Linesearch (EAL))

Initialization: Choose \(x_{0} \in C\subset X\). Take \(\delta \in (0,1)\) and \(\hat{\lambda}, \bar{\lambda}\) satisfying \(0<\hat{\lambda}\leq \bar{\lambda}\), and \(\{\lambda _{n}\}\subseteq [\hat{\lambda},\bar{\lambda}]\).

Iterative Step: Given \(x_{n}\), define

$$ z_{n} \in \arg \min \biggl\{ f(x_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$
(1.3)

If \(x_{n}=z_{n}\) stop. Otherwise, let

$$ l(n)=\min \biggl\{ l\geq 0:\lambda _{n}f(y_{l},x_{n})- \lambda _{n}f(y_{l},z_{n}) \geq \frac{\delta}{2}d^{2}(z_{n},x_{n}) \biggr\} , $$

where

$$ y_{l}=2^{-l}z_{n}\oplus \bigl(1-2^{-l}\bigr)x_{n}. $$

Take

$$\begin{aligned}& \alpha _{n}:=2^{-l(n)}, \\& y_{n}=\alpha _{n}z_{n}\oplus (1-\alpha _{n})x_{n}, \\& w_{n}=P_{H_{n}}(x_{n}), \end{aligned}$$

where

$$ H_{n}=\bigl\{ y\in X:f(y_{n},y)\leq 0\bigr\} . $$

Then,

$$ x_{n+1} =P_{C}(w_{n}). $$

Iusem and Mohebbi [18] obtained the Δ-convergence of EAL to an element of \(\operatorname{EP}(f,C)\). In addition, they proposed another algorithm that ensures strong convergence, which is more desirable than the Δ-convergence of EAL. Precisely, the algorithm is as follows:

Algorithm 1.2

(Halpern Extragradient Algorithm with Linesearch (HEAL))

Initialization: Choose \(x_{0}\in C\), \(u \in X\). Consider a sequence \(\{\gamma _{n}\}\subset (0,1)\) such that \(\lim_{n\to \infty}\gamma _{n}=0\) and \(\sum_{n=0}^{\infty}\gamma _{n}=\infty \).

Iterative Step: Given \(x_{n}\), define

$$ z_{n} \in \arg \min \biggl\{ f(x_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$
(1.4)

If \(x_{n}=z_{n}\) stop. Otherwise, let

$$ l(n)=\min \biggl\{ l\geq 0:\lambda _{n}f(y_{l},x_{n})- \lambda _{n}f(y_{l},z_{n}) \geq \frac{\delta}{2}d^{2}(z_{n},x_{n}) \biggr\} , $$

where

$$ y_{l}=2^{-l}z_{n}\oplus \bigl(1-2^{-l}\bigr)x_{n}. $$

Take

$$\begin{aligned}& \alpha _{n}:=2^{-l(n)}, \\& y_{n}=\alpha _{n}z_{n}\oplus (1-\alpha _{n})x_{n}, \\& w_{n}=P_{H_{n}}(x_{n}), \end{aligned}$$

where

$$ H_{n}=\bigl\{ y\in X:f(y_{n},y)\leq 0\bigr\} . $$

Then,

$$\begin{aligned}& v_{n}=\gamma _{n}u\oplus (1-\gamma _{n})w_{n}, \\& x_{n+1} =P_{C}(v_{n}). \end{aligned}$$

It is worth noting that Algorithm 1.1 and Algorithm 1.2 eliminate the challenges observed in (1.2) and [27] by introducing a linesearch technique. However, selecting the step sizes via a linesearch requires additional nested iterations at every step, which can be time consuming.
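
To make this extra cost concrete, the following minimal sketch implements the backtracking step \(l(n)\) of Algorithm 1.1, assuming the flat Hadamard space \(X=\mathbb{R}\) (where ⊕ is the usual convex combination) and the illustrative monotone bifunction \(f(x,y)=x(y-x)\); these concrete choices and numbers are ours, not from [18].

```python
def f(x, y):
    return x * (y - x)   # illustrative monotone bifunction on X = R

def linesearch(xn, zn, lam, delta):
    # Find the smallest l >= 0 with
    #   lam * [f(y_l, x_n) - f(y_l, z_n)] >= (delta / 2) * d(z_n, x_n)^2,
    # where y_l = 2^{-l} z_n + (1 - 2^{-l}) x_n is a geodesic point in R.
    l = 0
    while True:
        y_l = 2.0**(-l) * zn + (1.0 - 2.0**(-l)) * xn
        if lam * (f(y_l, xn) - f(y_l, zn)) >= 0.5 * delta * (zn - xn)**2:
            return l, y_l
        l += 1   # each trial costs two evaluations of f: the nested work
                 # that a self-adaptive step-size rule avoids

xn, lam, delta = 2.0, 0.9, 0.4
zn = xn - lam * xn                     # the prox step (1.3) for this f, in closed form
print(linesearch(xn, zn, lam, delta))  # -> (1, 1.1): one backtracking halving
```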

Motivated by the work of Iusem and Mohebbi [18] and other works in this direction, we introduce a self-adaptive extragradient algorithm in Hadamard spaces that requires neither prior knowledge of the Lipschitz-like constants nor a linesearch technique for its execution. Our algorithm converges strongly to a common solution of the EP and the fixed-point problem for a multivalued nonexpansive mapping. Furthermore, we give a numerical example in a Hadamard space to demonstrate the efficiency of our method. Our result extends and complements other recent results in this direction. Our contributions in this work are briefly highlighted as follows:

  1. (i)

    Our work extends step-adaptive algorithms for pseudomonotone EPs from linear spaces (see, for example, [4, 14–16, 21, 23]) to nonlinear spaces.

  2. (ii)

    Our work improves the works of Khatibzadeh and Mohebbi [27] and of Iusem and Mohebbi [18]; that is, our algorithm neither depends on the Lipschitz-like constants nor uses a linesearch technique.

2 Preliminaries

In this section, we present some notations, known definitions, and useful results that will be needed in the proof of our main result. Throughout this work, we use the notations "⇀" and "→" to denote Δ-convergence and strong convergence, respectively.

Let X be a metric space and \(x, y \in X\). A geodesic from x to y is a map (or a curve) c from the closed interval \([0, d(x, y)] \subset \mathbb{R}\) to X such that \(c(0) = x\), \(c(d(x, y)) = y\), and \(d(c(t), c(t')) = |t - t'|\) for all \(t, t' \in [0, d(x, y)]\). The image of c is called a geodesic segment joining x to y. When it is unique, this geodesic segment is denoted by \([x, y]\). The space \((X, d)\) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each \(x, y \in X\). A subset C of a geodesic space X is said to be convex if, for any two points \(x, y \in C\), the geodesic joining x and y is contained in C; that is, if \(c : [0, d(x, y)] \to X\) is a geodesic such that \(x = c(0)\) and \(y = c(d(x, y))\), then \(c(t) \in C\ \forall t \in [0, d(x, y)]\). A geodesic triangle \(\Delta (x_{1},x_{2},x_{3})\) in a geodesic metric space \((X,d)\) consists of three vertices (points in X) with unparameterized geodesic segments between each pair of vertices. For any geodesic triangle there is a comparison (Alexandrov) triangle \(\bar{\Delta}\subset \mathbb{R}^{2}\) such that \(d(x_{i},x_{j})=d_{\mathbb{R}^{2}}(\bar{x}_{i},\bar{x}_{j})\) for \(i,j\in \{1,2,3\}\). A geodesic space X is a CAT(0) space if the distance between an arbitrary pair of points on a geodesic triangle Δ does not exceed the distance between the corresponding pair of points on its comparison triangle Δ̄. That is, if Δ and Δ̄ are a geodesic triangle in X and its comparison triangle in \(\mathbb{R}^{2}\), respectively, then Δ is said to satisfy the CAT(0) inequality for all points x, y of Δ and their comparison points x̄, ȳ of Δ̄ if

$$\begin{aligned} d(x,y)\leq d_{\mathbb{R}^{2}}(\bar{x},\bar{y}). \end{aligned}$$
(2.1)

Let x, y, z be points in X and \(y_{0}\) be the midpoint of the segment \([y,z]\), then the CAT(0) inequality implies

$$\begin{aligned} d^{2}(x,y_{0})\leq \frac{1}{2}d^{2}(x,y)+ \frac{1}{2}d^{2}(x,z)- \frac{1}{4}d^{2}(y,z). \end{aligned}$$
(2.2)

Berg and Nikolaev [2] introduced the notion of quasilinearization in a CAT(0) space as follows: A pair \((a, b) \in X \times X\) is denoted by \(\overrightarrow{ab}\) and called a vector. The quasilinearization map \(\langle \cdot ,\cdot \rangle :(X\times X)\times (X\times X) \rightarrow \mathbb{R}\) is defined by

$$\begin{aligned} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2} \bigl(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d) \bigr),\quad \text{for all } a,b,c,d \in X. \end{aligned}$$
(2.3)

It is easy to see that \(\langle \overrightarrow{ab}, \overrightarrow{ab}\rangle =d^{2}(a, b)\), \(\langle \overrightarrow{ba}, \overrightarrow{cd}\rangle =-\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle \), \(\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{ae}, \overrightarrow{cd}\rangle +\langle \overrightarrow{eb}, \overrightarrow{cd}\rangle \), and \(\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{cd}, \overrightarrow{ab}\rangle \), for all \(a,b,c,d,e\in X\). Furthermore, a geodesic space X is said to satisfy the Cauchy–Schwarz inequality if

$$ \langle \overrightarrow{ab},\overrightarrow{cd}\rangle \leq d(a,b)d(c,d) $$

for all \(a,b,c,d \in X\). It is well known that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality [12]. A complete CAT(0) space is called a Hadamard space.
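
As a quick sanity check, the sketch below verifies some of these quasilinearization properties numerically, assuming the one-dimensional Hadamard model \(X=(0,\infty )\) with \(d(a,b)=|\ln a-\ln b|\) that reappears in Sect. 4; the helper names are illustrative, not from the paper.

```python
import math

def dist(a, b):
    return abs(math.log(a) - math.log(b))

def quasi(a, b, c, d):
    # <ab, cd> = (d(a,d)^2 + d(b,c)^2 - d(a,c)^2 - d(b,d)^2) / 2, as in (2.3)
    return 0.5 * (dist(a, d)**2 + dist(b, c)**2 - dist(a, c)**2 - dist(b, d)**2)

a, b, c, d, e = 2.0, 5.0, 0.5, 3.0, 1.7
assert abs(quasi(a, b, a, b) - dist(a, b)**2) < 1e-12        # <ab, ab> = d(a, b)^2
assert abs(quasi(b, a, c, d) + quasi(a, b, c, d)) < 1e-12    # <ba, cd> = -<ab, cd>
assert abs(quasi(a, b, c, d)
           - quasi(a, e, c, d) - quasi(e, b, c, d)) < 1e-12  # splitting through e
assert quasi(a, b, c, d) <= dist(a, b) * dist(c, d) + 1e-12  # Cauchy-Schwarz
print("quasilinearization properties verified")
```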

Let \(\{x_{n}\}\) be a bounded sequence in a metric space X and \(r(\cdot ,\{x_{n}\}) : X \to [0, \infty )\) be a continuous functional defined by \(r(x,\{x_{n}\}) = \limsup_{n \to \infty} d(x, x_{n})\). The asymptotic radius of \(\{x_{n}\}\) is given by \(r(\{x_{n}\}) := \inf \{r(x, \{x_{n}\}) : x \in X\}\), while the asymptotic center of \(\{x_{n}\}\) is the set \(A(\{x_{n}\}) = \{x \in X : r(x, \{x_{n}\}) = r(\{x_{n}\})\}\). A sequence \(\{x_{n}\}\) in X is said to be Δ-convergent to a point \(x \in X\) if \(A(\{x_{n_{k}}\}) = \{x\}\) for every subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\). In this case, we say that x is the Δ-limit of \(\{x_{n}\}\) (see [11, 29]). The notion of Δ-convergence in metric spaces was introduced and studied by Lim [33], and it is known as the analog of the notion of weak convergence in Banach spaces.

Let X be a geodesic convex metric space and A be a nonempty subset of X. The subset A is called proximinal (see [9]) if for each \(x\in X\) there exists \(a\in A\) such that

$$ d(x,a)=\operatorname{dist}(x, A):=\inf \bigl\{ d(x,b):b \in A\bigr\} . $$

It is well known that if A is proximinal, then A is closed. We denote the family of all nonempty proximinal subsets of X by \(P(X)\) and the family of closed and bounded subsets of X by \(CB(X)\), respectively. If A and B are nonempty subsets of X, then the Hausdorff metric H on \(P(X)\) is defined by

$$ H(A, B) = \max \Bigl\{ \sup_{a \in A} \operatorname{dist}(a,B), \sup_{b\in B} \operatorname{dist}(b,A) \Bigr\} ,\quad \forall A,B \in P(X). $$
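
For finite sets, the Hausdorff metric is straightforward to compute; the sketch below does so in the model space \(X=(0,\infty )\) with \(d(a,b)=|\ln a-\ln b|\), an illustrative choice of ours.

```python
import math

def d(a, b):
    return abs(math.log(a) - math.log(b))

def dist(x, A):
    # dist(x, A) = inf{ d(x, a) : a in A }
    return min(d(x, a) for a in A)

def hausdorff(A, B):
    # H(A, B) = max{ sup_{a in A} dist(a, B), sup_{b in B} dist(b, A) }
    return max(max(dist(a, B) for a in A),
               max(dist(b, A) for b in B))

print(hausdorff({1.0, 2.0}, {2.0, 8.0}))   # ln 4: realized by dist(8, {1, 2})
```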

Let \(T:X\rightarrow CB(X)\) be a multivalued mapping. A point \(x\in X\) is called a strict fixed point (or an end point) of T if \(Tx=\{x\}\). In this case, T is said to satisfy the end point condition, and we denote the set of end points of T by \(E(T)\). The mapping T is said to be multivalued nonexpansive if

$$ H(Tx, Ty) \leq d(x, y)\quad \text{for all } x,y\in X. $$

Remark 2.1

Let \(T:C\to CB(C)\) be a multivalued mapping. If \(p\in C\) is an end point of the mapping T, then p is also a fixed point of T; that is, \(E(T)\subseteq F(T)\). In fact, equality holds when T is single-valued. However, the converse inclusion may fail in general, as the following example shows.

Example 2.2

Let \(X=\mathbb{R}\) and \(C=\{x:0\leq x \leq 1\}\) with the usual metric. For each \(x\in C\), let \(T:C\to CB(X)\) be defined as \(Tx=[0,x^{n}]\), \(1\leq n <\infty \). It is obvious that T is nonexpansive with \(E(T)=\{0\}\) and \(F(T)=\{0,1\}\).

Definition 2.3

Let X be a Hadamard space. A multivalued nonlinear mapping \(T : X \to 2^{X}\) is said to be demiclosed if for any bounded sequence \(\{x_{n}\}\) in X such that \(\Delta -\lim_{n \to \infty}x_{n} = x^{*}\) and \(\lim_{n \to \infty} d(x_{n}, z_{n}) = 0\), (where \(z_{n}\in Tx_{n}\)) we have that \(x^{*} \in F(T)\).

Lemma 2.4

([17])

Let X be a metric space and let A and B be nonempty sets in \(P(X)\). Then, for all \(a\in A\), there exists \(b\in B\) such that \(d(a,b)\leq H(A, B)\).

Definition 2.5

Let X be a Hadamard space. A function \(h:X\to (-\infty , \infty ]\) is said to be

  1. (i)

    convex, if

    $$ h\bigl(\lambda x\oplus (1-\lambda ) y\bigr)\leq \lambda h(x)+(1-\lambda )h(y),\quad \forall x, y\in X, \lambda \in (0, 1), $$
  2. (ii)

    lower semicontinuous (or upper semicontinuous) at a point \(x\in C\), if

    $$\begin{aligned} h(x)\leq \liminf_{n\to \infty} h(x_{n})\qquad \Bigl(\text{or } h(x)\geq \limsup_{n\to \infty} h(x_{n}) \Bigr), \end{aligned}$$
    (2.4)

    for each sequence \(\{x_{n}\}\) in C such that \(\underset{n\to \infty}{\lim}x_{n}=x\),

  3. (iii)

    lower semicontinuous (or upper semicontinuous) on C, if it is lower semicontinuous (or upper semicontinuous) at every point in C.

The convex programming subproblem associated with a proper, convex, and lower semicontinuous function h is given, for all \(\lambda >0\), by:

$$\begin{aligned} \arg \min_{x\in X} \biggl(h(x)+ \frac{1}{2\lambda}d^{2}(x,y) \biggr) \end{aligned}$$
(2.5)

for all \(y\in X\).

Remark 2.6

([24])

The subproblem (2.5) is well defined for all \(\lambda >0\).

Definition 2.7

Let C be a nonempty, closed, and convex subset of a Hadamard space X. A bifunction \(f:C\times C\to \mathbb{R}\) is said to be

  1. (i)

    monotone, if

    $$ f(x, y) + f(y, x) \leq 0, \quad \forall x,y\in C; $$
  2. (ii)

    pseudomonotone if

    $$ f(x, y)\geq 0 \quad \Rightarrow \quad f(y, x) \leq 0, \quad \forall x,y\in C. $$

It is well known that monotone bifunctions are pseudomonotone, but the converse is not true in general (see, for instance [20, 22]).

Definition 2.8

Let X be a Hadamard space. A bifunction f is said to satisfy the Lipschitz-like continuity if there exist two constants \(c_{1},c_{2}>0\) such that

$$ f(x,y)+f(y,z)\geq f(x,z) - c_{1}d^{2}(x,y) - c_{2}d^{2}(y,z),\quad \forall x,y,z\in X. $$

To solve the EP, the following assumptions are important and necessary on the bifunction f in establishing our result:

  1. (A1)

    \(f(x,\cdot ):X\to \mathbb{R}\) is convex and lower semicontinuous for all \(x\in X\),

  2. (A2)

    \(f(\cdot ,y)\) is Δ-upper semicontinuous for all \(y\in X\),

  3. (A3)

    f satisfies the Lipschitz-like continuity condition of Definition 2.8,

  4. (A4)

    f is pseudomonotone.

Definition 2.9

Let C be a nonempty, closed, and convex subset of a Hadamard space X. The metric projection is a mapping \(P_{C}:X\rightarrow C\) that assigns to each \(x\in X\), the unique point \(P_{C}x\in C\) such that

$$ d(x,P_{C}x)=\inf \bigl\{ d(x,y):y\in C\bigr\} . $$

Lemma 2.10

([12])

Every bounded sequence in a Hadamard space has a Δ-convergent subsequence.

Lemma 2.11

Let X be a Hadamard space, \(x,y,z\in X\), and \(t, s\in [0,1]\). Then,

  1. (i)

    \(d(t x\oplus (1-t)y,z)\leq t d(x,z)+(1-t)d(y,z)\) (see [12]).

  2. (ii)

    \(d^{2}(t x\oplus (1-t)y,z)\leq t d^{2}(x,z)+(1-t)d^{2}(y,z)-t(1-t)d^{2}(x,y)\) (see [12]).

  3. (iii)

    \(d^{2}(t x\oplus (1-t)y,z)\leq t^{2} d^{2}(x,z)+(1-t)^{2}d^{2}(y,z) +2t(1-t) \langle \overrightarrow{xz},\overrightarrow{yz}\rangle \) (see [8]).

Lemma 2.12

([25])

Let X be a Hadamard space, \(\{x_{n}\}\) be a sequence in X, and \(x \in X\). Then, \(\{x_{n}\}\) Δ-converges to x if and only if

$$ \limsup_{n\to \infty}\langle \overrightarrow{x_{n}x}, \overrightarrow{yx}\rangle \leq 0, \quad \forall y\in X. $$

Lemma 2.13

([8])

Let C be a nonempty, convex subset of a Hadamard space X, \(x\in X\) and \(u\in C\). Then, \(u=P_{C}x\) if and only if \(\langle \overrightarrow{ux},\overrightarrow{yu}\rangle \leq 0\), for all \(y\in C\).

Lemma 2.14

([10])

Let X be a Hadamard space and \(T:X\to X\) be a nonexpansive mapping. Then, T is Δ-demiclosed.

Lemma 2.15

([36])

Let X be a Hadamard space and \(\{x_{n}\}\) be a sequence in X. If there exists a nonempty subset C in which

  1. (i)

    \(\lim_{n\to \infty}d(x_{n},z)\) exists for every \(z \in C\), and

  2. (ii)

    if \(\{x_{n_{k}}\}\) is a subsequence of \(\{x_{n}\}\) that is Δ-convergent to x, then \(x \in C\),

then, there is a \(p \in C\) such that \(\{x_{n}\}\) is Δ-convergent to p in X.

Lemma 2.16

([41])

Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying

$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n}+\gamma _{n},\quad n \geq 0, $$

where \(\{\alpha _{n}\}\), \(\{\delta _{n}\}\), and \(\{\gamma _{n}\}\) satisfy the following conditions:

  1. (i)

    \(\{\alpha _{n}\}\subset [0,1]\), \(\Sigma _{n=0}^{\infty}\alpha _{n}= \infty \),

  2. (ii)

    \(\limsup_{n\rightarrow \infty}\delta _{n}\leq 0\),

  3. (iii)

    \(\gamma _{n}\geq 0 (n\geq 0)\), \(\Sigma _{n=0}^{\infty}\gamma _{n}< \infty \).

Then, \(\lim_{n\rightarrow \infty}a_{n}=0\).

Lemma 2.17

([34])

Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{j}\}\) of \(\{n\}\) with \(a_{n_{j}}< a_{n_{j}+1}\) \(\forall j\in \mathbb{N}\). Then, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\) such that \(m_{k}\to \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):

$$ a_{m_{k}}\leq a_{m_{k}+1}\quad \textit{and} \quad a_{k} \leq a_{m_{k}+1}. $$

In fact, \(m_{k}=\max \{i\leq k :a_{i}< a_{i+1}\}\).

3 Main results

In this section, we present our algorithm and its convergence analysis for approximating a common solution of the EP and the fixed-point problem for a multivalued nonexpansive mapping in Hadamard spaces. In what follows, let C be a nonempty, closed, and convex subset of a Hadamard space X, let \(f : X \times X \to \mathbb{R}\) be a bifunction satisfying (A1)–(A4), and let \(T : C \to CB(X)\) be a multivalued nonexpansive mapping such that \(Tx^{*}=\{x^{*}\}\). Suppose the solution set of the aforementioned problems is

$$\begin{aligned} \Gamma := \operatorname{EP}(f,C)\cap F(T)\neq \emptyset . \end{aligned}$$

Now, we present our algorithm as follows:

Algorithm 3.1

(Self-Adaptive Extragradient Algorithm (SAEA))

Initialization: Choose \(u,x_{0} \in C\), \(\lambda _{0}>0\), \(\mu \in (0,1)\), and set \(n=0\).

Iterative steps: Take \(\{\alpha _{n}\}\subseteq (0,1)\) such that \(\lim_{n\to \infty}\alpha _{n}=0\), \(\sum_{n=0}^{ \infty}\alpha _{n}=\infty \) and \(\{\beta _{n}\}\subseteq (0,1)\) such that \(\liminf_{n \to \infty}(1-\alpha _{n})\beta _{n}>0\). Given the nth iterate, compute the \((n+1)\)th iterate via the following procedure:

Step 1::

Compute

$$ y_{n} = \arg \min \biggl\{ f(x_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$
(3.1)

If \(x_{n} = y_{n}\), then stop. Otherwise go to Step 2.

Step 2::

Compute

$$ w_{n} = \arg \min \biggl\{ f(y_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$
(3.2)
Step 3::

Compute

$$\begin{aligned} x_{n+1} = \alpha _{n}u\oplus (1-\alpha _{n}) \bigl[\beta _{n}h_{n} \oplus (1-\beta _{n})w_{n} \bigr], \end{aligned}$$

where \(h_{n}\in Tw_{n}\).

Step 4::

Evaluate

$$\begin{aligned} \lambda _{n+1}= \textstyle\begin{cases} \min \{\lambda _{n}, \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})]} \}, \\ \quad \text{if } f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})>0, \\ \lambda _{n}, \quad \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(3.3)

Set \(n:=n+1\) and go back to Step 1.
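
For orientation, the following minimal sketch runs Algorithm 3.1 in the simplest Hadamard space \(X=\mathbb{R}\) (so ⊕ is the usual convex combination), with the illustrative monotone bifunction \(f(x,y)=x(y-x)\), for which \(c_{1}=c_{2}=\frac{1}{2}\) and \(\operatorname{EP}(f,\mathbb{R})=\{0\}\), and the hypothetical nonexpansive map \(Tx=\{x/2\}\), so that \(\Gamma =\{0\}\); these concrete choices and parameter values are ours, not from the paper. For this f the subproblems (3.1) and (3.2) have the closed forms \(y_{n}=x_{n}-\lambda _{n}x_{n}\) and \(w_{n}=x_{n}-\lambda _{n}y_{n}\).

```python
def f(x, y):
    return x * (y - x)   # illustrative monotone bifunction on X = R

def saea(x0, u, lam0, mu, n_iters=500):
    xn, lam = x0, lam0
    for n in range(n_iters):
        alpha = 1.0 / (n + 2)               # alpha_n -> 0, sum alpha_n = inf
        beta = 0.5                          # liminf (1 - alpha_n) beta_n > 0
        yn = xn - lam * xn                  # Step 1: closed form of (3.1)
        wn = xn - lam * yn                  # Step 2: closed form of (3.2)
        hn = wn / 2.0                       # Step 3: h_n in T w_n = {w_n / 2}
        sn = beta * hn + (1.0 - beta) * wn  # geodesic convex combination in R
        x_next = alpha * u + (1.0 - alpha) * sn
        gap = f(xn, wn) - f(xn, yn) - f(yn, wn)
        if gap > 0:                         # Step 4: self-adaptive rule (3.3)
            lam = min(lam, mu * ((xn - yn)**2 + (wn - yn)**2) / (2.0 * gap))
        xn = x_next
    return xn

print(saea(x0=5.0, u=1.0, lam0=0.9, mu=0.6))   # tends to the common solution 0
```

Note that no Lipschitz-like constants enter the loop: Step 4 decreases \(\lambda _{n}\) only when the computed gap is positive.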

We begin with the following lemma, which is crucial to the nonincreasing monotonicity of the step-size sequence (3.3). The lemma has been proved by many authors in the framework of Hilbert and Banach spaces (see [22, 42] and other references therein). We state it here in the Hadamard-space setting and give the proof for completeness.

Lemma 3.2

The sequence \(\{\lambda _{n}\}\) defined by (3.3) is monotonically nonincreasing and bounded below, and

$$ \lim_{n\to \infty}\lambda _{n}=\lambda \geq \min \biggl\{ \frac{\mu}{2\max \{c_{1},c_{2}\}},\lambda _{0} \biggr\} . $$

Proof

It is obvious from (3.3) that the sequence \(\{\lambda _{n}\}\) is monotonically nonincreasing. Moreover, whenever \(f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})>0\), the Lipschitz-like property of f gives

$$\begin{aligned} \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})]}& \geq \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[c_{1}d^{2}(x_{n},y_{n})+c_{2}d^{2}(w_{n},y_{n})]} \geq \frac{\mu}{2\max \{c_{1},c_{2}\}}. \end{aligned}$$

Hence, the sequence \(\{\lambda _{n}\}\) is nonincreasing and bounded below by \(\min \{\frac{\mu}{2\max \{c_{1},c_{2}\}},\lambda _{0} \}\). Therefore, the limit \(\lim_{n\to \infty}\lambda _{n}=\lambda >0\) exists. □

The following lemma is vital in proving the convergence of our proposed Algorithm 3.1.

Lemma 3.3

Assume that the bifunction f satisfies Assumption (A1). Suppose \(\{w_{n}\}\), \(\{x_{n}\}\), and \(\{y_{n}\}\) are generated as in Algorithm 3.1 and \(y\in C\). Then,

$$\begin{aligned} \frac{1}{2} \bigl[ d^{2}(w_{n},x_{n})-d^{2}(x_{n},y)+d^{2}(w_{n},y) \bigr]\leq \lambda _{n} \bigl[ f(y_{n},y)-f(y_{n},w_{n}) \bigr]. \end{aligned}$$
(3.4)

Proof

Take \(y\in C\) and let \(v_{t}=t y\oplus (1-t)w_{n}\) with \(t\in (0,1]\); note that \(v_{t}\in C\) by the convexity of C. Since \(w_{n}\) solves (3.2), using the convexity of \(f(y_{n},\cdot )\) and Lemma 2.11(ii), we have

$$\begin{aligned} &f(y_{n},w_{n})+\frac{1}{2\lambda _{n}}d^{2}(w_{n},x_{n}) \\ &\quad \leq f(y_{n},v_{t})+ \frac{1}{2\lambda _{n}}d^{2}(v_{t},x_{n}) \\ &\quad \leq t f(y_{n}, y) +(1-t)f(y_{n},w_{n}) \\ &\quad \quad{} + \frac{1}{2\lambda _{n}} \bigl\{ t d^{2}(y,x_{n}) + (1-t)d^{2}(w_{n},x_{n})- t(1-t)d^{2}(y,w_{n}) \bigr\} . \end{aligned}$$
(3.5)

Rearranging (3.5) and dividing through by t, we obtain

$$\begin{aligned} \frac{1}{2\lambda _{n}} \bigl\{ d^{2}(w_{n},x_{n}) - d^{2}(y,x_{n}) +(1-t)d^{2}(y,w_{n}) \bigr\} & \leq f(y_{n},y)-f(y_{n},w_{n}). \end{aligned}$$
(3.6)

Letting \(t\to 0^{+}\) in (3.6), we have

$$\begin{aligned} \frac{1}{2\lambda _{n}} \bigl\{ d^{2}(w_{n},x_{n}) - d^{2}(y,x_{n}) + d^{2}(y,w_{n}) \bigr\} \leq f(y_{n},y)-f(y_{n},w_{n}), \end{aligned}$$

which implies that

$$\begin{aligned} \frac{1}{2} \bigl\{ d^{2}(w_{n},x_{n}) - d^{2}(y,x_{n}) + d^{2}(y,w_{n}) \bigr\} & \leq \lambda _{n} \bigl[ f(y_{n},y)-f(y_{n},w_{n}) \bigr]. \end{aligned}$$
(3.7)

 □

Lemma 3.4

Suppose \(\{w_{n}\}\), \(\{x_{n}\}\), and \(\{y_{n}\}\) are sequences generated by Algorithm 3.1 and let \(x^{*}\in \Gamma \). Then,

$$ d^{2}\bigl(w_{n},x^{*}\bigr)\leq d^{2}\bigl(x_{n},x^{*}\bigr)- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(x_{n},y_{n})- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(w_{n},y_{n}). $$

Proof

From the definition of \(\lambda _{n+1}\) in (3.3), we have

$$\begin{aligned} f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n}) & \leq \frac{\mu}{2\lambda _{n+1}} \bigl[d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr]. \end{aligned}$$
(3.8)

Since \(\lambda _{n}>0\), we obtain from (3.8) that

$$\begin{aligned} \lambda _{n} f(x_{n},w_{n}) \leq \lambda _{n} \bigl[f(x_{n},y_{n})+f(y_{n},w_{n}) \bigr] + \frac{\mu \lambda _{n}}{2\lambda _{n+1}} \bigl[d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr]. \end{aligned}$$
(3.9)

From Lemma 3.3 and the quasilinearization properties, we have

$$\begin{aligned} \langle \overrightarrow{w_{n}y}, \overrightarrow{x_{n}w_{n}}\rangle \geq \lambda _{n} \bigl[f(y_{n},w_{n})-f(y_{n},y) \bigr]. \end{aligned}$$
(3.10)

Thus, from (3.9) and (3.10), we have

$$\begin{aligned} \langle \overrightarrow{w_{n}y}, \overrightarrow{x_{n}w_{n}}\rangle &\geq \lambda _{n} \bigl[f(x_{n},w_{n})-f(x_{n},y_{n}) \bigr] - \lambda _{n}f(y_{n},y) \\ &\quad {}- \frac{\mu \lambda _{n}}{2\lambda _{n+1}} \bigl[d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr]. \end{aligned}$$
(3.11)

Applying the argument of Lemma 3.3 to the subproblem (3.1) (with \(w_{n}\) in place of y) and using Remark 2.6, we can obtain that

$$\begin{aligned} \lambda _{n} \bigl[f(x_{n},w_{n})-f(x_{n},y_{n}) \bigr]\geq \langle \overrightarrow{y_{n}x_{n}}, \overrightarrow{y_{n}w_{n}}\rangle . \end{aligned}$$
(3.12)

Hence, from (3.11) and (3.12), we have

$$\begin{aligned} \langle \overrightarrow{w_{n}y},\overrightarrow{x_{n}w_{n}} \rangle \geq \langle \overrightarrow{y_{n}x_{n}}, \overrightarrow{y_{n}w_{n}} \rangle - \lambda _{n}f(y_{n},y) - \frac{\mu \lambda _{n}}{2\lambda _{n+1}} \bigl[d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr], \end{aligned}$$

which implies that

$$\begin{aligned} 2\langle \overrightarrow{w_{n}y},\overrightarrow{x_{n}w_{n}} \rangle \geq 2\langle \overrightarrow{y_{n}x_{n}}, \overrightarrow{y_{n}w_{n}} \rangle - 2\lambda _{n}f(y_{n},y) - \frac{\mu \lambda _{n}}{\lambda _{n+1}} \bigl[d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr]. \end{aligned}$$
(3.13)

The following facts are obvious from the quasilinearization properties

$$\begin{aligned} \textstyle\begin{cases} 2\langle \overrightarrow{y_{n}x_{n}},\overrightarrow{y_{n}w_{n}} \rangle = \{d^{2}(y_{n},w_{n}) + d^{2}(x_{n},y_{n}) - d^{2}(x_{n},w_{n}) \}, \\ 2\langle \overrightarrow{x_{n}w_{n}},\overrightarrow{w_{n}y}\rangle = \{d^{2}(x_{n},y) - d^{2}(x_{n},w_{n}) - d^{2}(w_{n},y) \}. \end{cases}\displaystyle \end{aligned}$$
(3.14)

Hence, from (3.13) and (3.14), we obtain

$$\begin{aligned} d^{2}(w_{n},y)\leq d^{2}(x_{n},y)+2\lambda _{n}f(y_{n},y)- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(x_{n},y_{n})- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(w_{n},y_{n}). \end{aligned}$$
(3.15)

For each \(x^{*}\in \Gamma \), we have \(f(x^{*},y_{n})\geq 0\), so Assumption (A4) implies that \(f(y_{n},x^{*})\leq 0\). Hence, setting \(y=x^{*}\) in (3.15), we obtain

$$\begin{aligned} d^{2}\bigl(w_{n},x^{*} \bigr)\leq d^{2}\bigl(x_{n},x^{*}\bigr)- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(x_{n},y_{n})- \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(w_{n},y_{n}). \end{aligned}$$
(3.16)

 □

Theorem 3.5

Let C be a nonempty, closed, and convex subset of a Hadamard space X. Suppose that \(f:C\times C\to \mathbb{R}\) is a bifunction satisfying conditions (A1)–(A4) and \(T:X\to CB(X)\) is a multivalued nonexpansive mapping such that \(Tx^{*}=\{x^{*}\}\). Suppose that the solution set \(\Gamma \neq \emptyset \). Then, the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges strongly to \(\hat{u}=P_{\Gamma}u\).

Proof

We first show that the sequence \(\{x_{n}\}\) is bounded. Let \(\kappa \in (0,1-\mu )\) be some fixed number. From Lemma 3.2,

$$ \lim_{n\to \infty} \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr)=1-\mu > \kappa >0. $$

Thus, there exists \(n_{0}\in \mathbb{N}\) such that

$$ \biggl(1-\frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr)>\kappa >0,\quad \forall n\geq n_{0}. $$

Without loss of generality, we may assume that this holds for all \(n\in \mathbb{N}\).

Combining this with Lemma 3.4, we obtain

$$\begin{aligned} d^{2}\bigl(w_{n},x^{*} \bigr)\leq d^{2}\bigl(x_{n},x^{*}\bigr)-\kappa \bigl(d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n}) \bigr). \end{aligned}$$
(3.17)

Let \(x^{*}\in \Gamma \). From Algorithm 3.1, Lemma 2.11(i), Lemma 2.4, (3.17), and the multivalued nonexpansivity of T, we obtain

$$\begin{aligned} d\bigl(x_{n+1},x^{*}\bigr)& = d\bigl( \alpha _{n}u\oplus (1-\alpha _{n}) \bigl(\beta _{n}h_{n} \oplus (1-\beta _{n})w_{n} \bigr),x^{*}\bigr) \\ & \leq \alpha _{n}d\bigl(u,x^{*}\bigr)+(1-\alpha _{n})d\bigl(\beta _{n}h_{n}\oplus (1- \beta _{n})w_{n},x^{*}\bigr) \\ & \leq \alpha _{n}d\bigl(u,x^{*}\bigr)+(1-\alpha _{n}) \bigl[\beta _{n}d\bigl(h_{n},x^{*} \bigr)+(1- \beta _{n})d\bigl(w_{n},x^{*}\bigr) \bigr] \\ & \leq \alpha _{n}d\bigl(u,x^{*}\bigr)+(1-\alpha _{n}) \bigl[\beta _{n}H\bigl(Tw_{n},Tx^{*} \bigr)+(1- \beta _{n})d\bigl(w_{n},x^{*}\bigr) \bigr] \\ & = \alpha _{n}d\bigl(u,x^{*}\bigr)+(1-\alpha _{n})d\bigl(w_{n},x^{*}\bigr) \\ & \leq \alpha _{n}d\bigl(u,x^{*}\bigr)+(1-\alpha _{n})d\bigl(x_{n},x^{*}\bigr) \\ & \leq \max \bigl\{ d\bigl(u,x^{*}\bigr),d\bigl(x_{n},x^{*} \bigr)\bigr\} \\ &\vdots \\ & \leq \max \bigl\{ d\bigl(u,x^{*}\bigr),d\bigl(x_{0}, x^{*}\bigr) \bigr\} . \end{aligned}$$
(3.18)

Therefore, \(\{x_{n}\}\) is bounded. It follows also that \(\{w_{n}\}\) and \(\{y_{n}\}\) are bounded.

From Algorithm 3.1, Lemma 2.11(ii), Lemma 3.4, and (3.17), we obtain that

$$\begin{aligned} d^{2}\bigl(x_{n+1},x^{*}\bigr) & = d^{2}\bigl(\alpha _{n}u\oplus (1-\alpha _{n}) \bigl(\beta _{n}h_{n}\oplus (1-\beta _{n})w_{n}\bigr),x^{*}\bigr) \\ &\leq \alpha _{n}d^{2}\bigl(u,x^{*}\bigr) + (1-\alpha _{n})d^{2}\bigl(\beta _{n}h_{n}\oplus (1-\beta _{n})w_{n},x^{*}\bigr) \\ &\quad {}- \alpha _{n}(1-\alpha _{n})d^{2}\bigl(u, \beta _{n}h_{n}\oplus (1-\beta _{n})w_{n}\bigr) \\ &\leq \alpha _{n}d^{2}\bigl(u,x^{*}\bigr) + (1-\alpha _{n}) \bigl[\beta _{n}d^{2}\bigl(h_{n},x^{*}\bigr)+(1-\beta _{n})d^{2}\bigl(w_{n},x^{*}\bigr) -\beta _{n}(1-\beta _{n})d^{2}(h_{n},w_{n}) \bigr] \\ &\leq \alpha _{n}d^{2}\bigl(u,x^{*}\bigr) + (1-\alpha _{n}) \bigl[\beta _{n}H^{2}\bigl(Tw_{n},Tx^{*}\bigr)+(1-\beta _{n})d^{2}\bigl(w_{n},x^{*}\bigr) \bigr] \\ &\quad {}- (1-\alpha _{n})\beta _{n}(1-\beta _{n})d^{2}(h_{n},w_{n}) \\ &\leq \alpha _{n}d^{2}\bigl(u,x^{*}\bigr) +(1-\alpha _{n})d^{2}\bigl(w_{n},x^{*}\bigr) - (1-\alpha _{n})\beta _{n}(1-\beta _{n})d^{2}(h_{n},w_{n}) \\ &\leq \alpha _{n}d^{2}\bigl(u,x^{*}\bigr) +(1-\alpha _{n}) \biggl[d^{2}\bigl(x_{n},x^{*}\bigr)- \biggl(1-\frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(x_{n},y_{n})- \biggl(1-\frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(w_{n},y_{n}) \biggr] \\ &\quad {}- (1-\alpha _{n})\beta _{n}(1-\beta _{n})d^{2}(h_{n},w_{n}). \end{aligned}$$
(3.19)

This implies from (3.19) that

$$\begin{aligned} d^{2}(h_{n},w_{n})\leq \frac{\alpha _{n}d^{2}(u,x^{*})}{(1-\alpha _{n})\beta _{n}(1-\beta _{n})} + \frac{[d^{2}(x_{n},x^{*})-d^{2}(x_{n+1},x^{*})]}{(1-\alpha _{n})\beta _{n}(1-\beta _{n})}. \end{aligned}$$
(3.20)

We next divide the rest of the proof into two cases:

Case 1: Assume that \(\{d^{2}(x_{n},x^{*})\}\) is monotonically nonincreasing. Then, since it is bounded, \(\{d^{2}(x_{n},x^{*})\}\) is convergent and

$$\begin{aligned} \lim_{n\to \infty} \bigl(d^{2} \bigl(x_{n},x^{*}\bigr)-d^{2} \bigl(x_{n+1},x^{*}\bigr) \bigr)=0. \end{aligned}$$
(3.21)

Then, by this fact and the condition on \(\alpha _{n}\), we obtain that

$$\begin{aligned} \lim_{n\to \infty}d^{2}(h_{n},w_{n})=0. \end{aligned}$$
(3.22)

From (3.19) we have

$$\begin{aligned}& (1-\alpha _{n}) \biggl[ \biggl(1-\frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(x_{n},y_{n})+ \biggl(1- \frac{\mu \lambda _{n}}{\lambda _{n+1}} \biggr) d^{2}(w_{n},y_{n}) \biggr] \\& \quad \leq \alpha _{n}d^{2}\bigl(u,x^{*} \bigr) + d^{2}\bigl(x_{n},x^{*} \bigr)-d^{2}\bigl(x_{n+1},x^{*}\bigr) . \end{aligned}$$

This implies that

$$\begin{aligned} \lim_{n\to \infty} d^{2}(x_{n},y_{n})=\lim_{n\to \infty}d^{2}(w_{n},y_{n})=0. \end{aligned}$$
(3.23)

We obtain from (3.23) that

$$\begin{aligned} d(x_{n},w_{n})\leq d(x_{n},y_{n})+d(y_{n},w_{n}) \longrightarrow 0. \end{aligned}$$
(3.24)

Also, from (3.22) and (3.23) we obtain that

$$\begin{aligned} d(h_{n},x_{n})\leq d(h_{n},w_{n})+d(w_{n},x_{n}) \longrightarrow 0. \end{aligned}$$
(3.25)

Again, from (3.22), (3.24), and (3.25), we have

$$\begin{aligned} \operatorname{dist}(w_{n},Tw_{n})& \leq d(w_{n},x_{n})+d(x_{n},h_{n})+ \operatorname{dist}(h_{n},Tw_{n}) \\ & \leq d(w_{n},x_{n})+d(x_{n},h_{n})+d(h_{n},w_{n}) \longrightarrow 0. \end{aligned}$$
(3.26)

By the nonexpansivity of T, (3.24), and (3.26), we obtain

$$\begin{aligned} \operatorname{dist}(x_{n},Tx_{n})& \leq d(x_{n},w_{n})+\operatorname{dist}(w_{n},Tw_{n})+H(Tw_{n},Tx_{n}) \\ & \leq 2d(x_{n},w_{n})+\operatorname{dist}(w_{n},Tw_{n}) \longrightarrow 0. \end{aligned}$$
(3.27)

From Algorithm 3.1 and Lemma 2.11(i), we obtain

$$\begin{aligned} d(x_{n+1},x_{n})& \leq \alpha _{n}d(u,x_{n})+(1- \alpha _{n})\beta _{n}d(h_{n},x_{n})+(1- \alpha _{n}) (1-\beta _{n})d(w_{n},x_{n}). \end{aligned}$$

Hence, from (3.24), (3.25), and the condition on \(\alpha _{n}\), we have

$$\begin{aligned} \lim_{n\to \infty}d(x_{n+1},x_{n})=0. \end{aligned}$$
(3.28)

Since \(\{x_{n}\}\) is bounded, by Lemma 2.10 there exists a subsequence \(\{x_{n_{k}}\}\) of the sequence \(\{x_{n}\}\) such that Δ-\(\lim_{k\to \infty}x_{n_{k}}=z\) for some \(z\in C\). Then, it follows from (3.27) and the demiclosedness property of T that \(z\in F(T)\).

From Algorithm 3.1, \(w_{n}\) solves the subproblem (3.2). By letting \(v=t w_{n}\oplus (1-t)y\) such that \(t\in [0,1)\) and \(y\in C\), we have

$$\begin{aligned} &f(y_{n},w_{n})+\frac{1}{2\lambda _{n}}d^{2}(x_{n},w_{n}) \\ &\quad \leq f(y_{n},v)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},v) \\ &\quad \leq f\bigl(y_{n},t w_{n}\oplus (1-t)y\bigr)+ \frac{1}{2\lambda _{n}}d^{2}\bigl(x_{n},t w_{n} \oplus (1-t)y\bigr) \\ &\quad \leq t f(y_{n}, w_{n}) +(1-t)f(y_{n},y) + \frac{1}{2\lambda _{n}} \bigl\{ t d^{2}(x_{n},w_{n}) + (1-t)d^{2}(x_{n},y) \\ &\quad \quad{} - t(1-t)d^{2}(w_{n},y) \bigr\} . \end{aligned}$$
(3.29)

By a similar argument to that in the proof of Lemma 3.3, we have from (3.29) that

$$\begin{aligned} f(y_{n},w_{n})-f(y_{n},y)\leq \frac{1}{2\lambda _{n}} \bigl\{ d^{2}(x_{n},y)-d^{2}(x_{n},w_{n})-t d^{2}(w_{n},y) \bigr\} . \end{aligned}$$
(3.30)

If \(t\to 1^{-}\), we obtain that

$$\begin{aligned} f(y_{n},w_{n})-f(y_{n},y) \leq \frac{1}{2\lambda _{n}} \bigl\{ d^{2}(x_{n},y)-d^{2}(x_{n},w_{n})-d^{2}(w_{n},y) \bigr\} . \end{aligned}$$
(3.31)

This implies from (3.31) that

$$\begin{aligned} f(y_{n},y)\geq \frac{1}{2\lambda _{n}} \bigl\{ d^{2}(x_{n},w_{n})+d^{2}(w_{n},y)-d^{2}(x_{n},y) \bigr\} +f(y_{n},w_{n}), \end{aligned}$$
(3.32)

which by quasilinearization properties is equivalent to

$$\begin{aligned} f(y_{n},y)\geq \frac{1}{\lambda _{n}}\langle \overrightarrow{w_{n}x_{n}}, \overrightarrow{w_{n}y} \rangle +f(y_{n},w_{n}). \end{aligned}$$
(3.33)

Thus, since \(\langle \overrightarrow{w_{n}x_{n}},\overrightarrow{w_{n}y}\rangle \leq d(w_{n},x_{n})d(w_{n},y)\to 0\) by the Cauchy–Schwarz inequality and (3.24), \(f(y_{n},w_{n})\to 0\) by (3.23), and \(\lambda _{n}\geq \lambda >0\), letting \(n\to \infty \) in (3.33) gives \(\liminf_{n\to \infty}f(y_{n},y)\geq 0\). Since \(\{y_{n_{k}}\}\) is Δ-convergent to z, Assumption (A2) implies that \(f(z,y)\geq 0\) for all \(y\in C\). Thus, \(z\in \operatorname{EP}(f,C)\), and hence \(z\in \Gamma \). Now, let \(s_{n}=\beta _{n}h_{n}\oplus (1-\beta _{n})w_{n}\); then, by Lemma 2.11(i), (3.24), and (3.25), we have that

$$\begin{aligned} d(s_{n},x_{n})& \leq \beta _{n}d(h_{n},x_{n})+(1-\beta _{n})d(w_{n},x_{n}) \longrightarrow 0. \end{aligned}$$
(3.34)

Since \(\{x_{n}\}\) is bounded, we can choose a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) that is Δ-convergent to \({z}\in \Gamma \) such that

$$ \limsup_{n\to \infty}\langle \overrightarrow{u\hat{u}}, \overrightarrow{x_{n}\hat{u}}\rangle =\lim_{k\to \infty} \langle \overrightarrow{u\hat{u}},\overrightarrow{x_{n_{k}}\hat{u}} \rangle =\langle \overrightarrow{u\hat{u}},\overrightarrow{z\hat{u}} \rangle . $$

Then, by Lemma 2.13, we obtain

$$\begin{aligned} \limsup_{n\to \infty}\langle \overrightarrow{u \hat{u}}, \overrightarrow{x_{n}\hat{u}}\rangle =\langle \overrightarrow{u\hat{u}},\overrightarrow{z\hat{u}}\rangle \leq 0. \end{aligned}$$
(3.35)

Furthermore, by quasilinearization properties, we have

$$\begin{aligned} \langle \overrightarrow{u\hat{u}},\overrightarrow{s_{n} \hat{u}} \rangle & = \langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{n}x_{n}} \rangle +\langle \overrightarrow{u\hat{u}}, \overrightarrow{x_{n} \hat{u}}\rangle \\ & \leq d(u,\hat{u})d(s_{n},x_{n})+\langle \overrightarrow{u\hat{u}}, \overrightarrow{x_{n}\hat{u}}\rangle . \end{aligned}$$
(3.36)

Thus, from (3.34) and (3.35), we obtain that

$$\begin{aligned} \limsup_{n\to \infty}\langle \overrightarrow{u \hat{u}}, \overrightarrow{s_{n}\hat{u}}\rangle \leq 0. \end{aligned}$$
(3.37)

Also, by the condition on \(\alpha _{n}\) and inequality (3.37), we obtain

$$\begin{aligned} \limsup_{n\to \infty} \bigl[\alpha _{n}d^{2}(u,\hat{u}) + 2(1 - \alpha _{n}) \langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{n}\hat{u}} \rangle \bigr]\leq 0. \end{aligned}$$
(3.38)

Furthermore, we obtain from Algorithm 3.1, Lemma 2.11(ii) and (iii), the nonexpansivity of T, and (3.17) that

$$\begin{aligned} d^{2}(x_{n+1},\hat{u})& \leq \alpha _{n}^{2}d^{2}(u,\hat{u})+(1- \alpha _{n})^{2}d^{2}(s_{n},\hat{u}) + 2 \alpha _{n}(1-\alpha _{n}) \langle \overrightarrow{u \hat{u}},\overrightarrow{s_{n}\hat{u}} \rangle \\ & \leq \alpha _{n}^{2}d^{2}(u,\hat{u})+(1- \alpha _{n})^{2} \bigl[ \beta _{n}d^{2}(h_{n}, \hat{u})+(1-\beta _{n})d^{2}(w_{n},\hat{u}) \bigr] \\ &\quad {} + 2\alpha _{n}(1-\alpha _{n})\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{n}\hat{u}}\rangle \\ & \leq \alpha _{n}^{2}d^{2}(u,\hat{u})+(1- \alpha _{n})^{2} \bigl[\beta _{n}H^{2}(Tw_{n},T \hat{u})+(1-\beta _{n})d^{2}(w_{n},\hat{u}) \bigr] \\ &\quad {} + 2\alpha _{n}(1- \alpha _{n})\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{n}\hat{u}}\rangle \\ & \leq \alpha _{n}^{2}d^{2}(u,\hat{u})+(1- \alpha _{n})^{2}d^{2}(w_{n}, \hat{u}) + 2\alpha _{n}(1-\alpha _{n})\langle \overrightarrow{u\hat{u}},\overrightarrow{s_{n}\hat{u}}\rangle \\ & \leq \alpha _{n}^{2}d^{2}(u,\hat{u})+(1- \alpha _{n})^{2}d^{2}(x_{n}, \hat{u}) + 2\alpha _{n}(1-\alpha _{n})\langle \overrightarrow{u\hat{u}},\overrightarrow{s_{n}\hat{u}}\rangle \\ & \leq (1-\alpha _{n})d^{2}(x_{n},\hat{u})+ \alpha _{n} \bigl(\alpha _{n}d^{2}(u, \hat{u}) + 2(1 - \alpha _{n})\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{n}\hat{u}}\rangle \bigr). \end{aligned}$$
(3.39)

Therefore, from (3.38) and (3.39), we conclude by Lemma 2.16 that \(\{x_{n}\}\) converges strongly to \(\hat{u}=P_{\Gamma}u\).

Case 2: Suppose that \(\{d^{2}(x_{n},\hat{u})\}\) is not eventually nonincreasing. Then, there exists a subsequence \(\{d^{2}(x_{n_{j}},\hat{u})\}\) of \(\{d^{2}(x_{n},\hat{u})\}\) such that \(d^{2}(x_{n_{j}},\hat{u})< d^{2}(x_{n_{j}+1},\hat{u})\) for all \(j \in \mathbb{N}\). Then, by Lemma 2.17, there exists a nondecreasing sequence \(\{m_{k}\}\subseteq \mathbb{N}\) such that \(m_{k}\to \infty \) and

$$\begin{aligned} d^{2}(x_{m_{k}},\hat{u})\leq d^{2}(x_{m_{k}+1}, \hat{u})\quad \text{and} \quad d^{2}(x_{k}, \hat{u})\leq d^{2}(x_{m_{k}+1},\hat{u}),\quad k\in \mathbb{N}. \end{aligned}$$
(3.40)

Thus, from Algorithm 3.1, Lemma 2.11(ii), the nonexpansivity of T, and (3.17), we have

$$\begin{aligned} 0 & \leq \lim_{k\to \infty} \bigl(d^{2}(x_{m_{k}+1}, \hat{u}) - d^{2}(x_{m_{k}},\hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty} \bigl(d^{2}(x_{n+1}, \hat{u}) - d^{2}(x_{n}, \hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty} \bigl(\alpha _{n}d^{2}(u, \hat{u})+(1- \alpha _{n})d^{2}(s_{n},\hat{u}) - d^{2}(x_{n},\hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty} \bigl(\alpha _{n}d^{2}(u, \hat{u})+(1- \alpha _{n}) \bigl[\beta _{n}d^{2}(h_{n}, \hat{u})+(1-\beta _{n})d^{2}(w_{n}, \hat{u}) \bigr] - d^{2}(x_{n},\hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty} \bigl(\alpha _{n}d^{2}(u, \hat{u})+(1- \alpha _{n}) \bigl[\beta _{n}H^{2}(Tw_{n},T \hat{u})+(1-\beta _{n})d^{2}(w_{n}, \hat{u}) \bigr] - d^{2}(x_{n},\hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty} \bigl(\alpha _{n}d^{2}(u, \hat{u})+(1- \alpha _{n})d^{2}(w_{n},\hat{u}) - d^{2}(x_{n},\hat{u}) \bigr) \\ & \leq \limsup_{n\to \infty}\alpha _{n} \bigl(d^{2}(u,\hat{u}) - d^{2}(x_{n},\hat{u}) \bigr)=0. \end{aligned}$$

This implies that

$$ \lim_{k\to \infty} \bigl(d^{2}(x_{m_{k}+1},\hat{u}) - d^{2}(x_{m_{k}},\hat{u}) \bigr)=0. $$

Repeating the arguments of Case 1 along the subsequence \(\{m_{k}\}\), we obtain \(\lim_{k\to \infty}d(s_{m_{k}},x_{m_{k}})=0\). Hence, by this fact and \(\alpha _{m_{k}}\to 0\), we have

$$\begin{aligned} d(x_{m_{k}+1},x_{m_{k}})\leq \alpha _{m_{k}}d(u,x_{m_{k}})+(1- \alpha _{m_{k}})d(s_{m_{k}},x_{m_{k}}) \longrightarrow 0. \end{aligned}$$
(3.41)

Following the same argument as in Case 1, we obtain

$$ \limsup_{k\to \infty}\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{m_{k}}\hat{u}}\rangle \leq 0 $$

and

$$\begin{aligned} \limsup_{k\to \infty} \bigl[\alpha _{m_{k}}d^{2}(u,\hat{u}) + 2(1 - \alpha _{m_{k}}) \langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{m_{k}}\hat{u}} \rangle \bigr]\leq 0. \end{aligned}$$
(3.42)

Hence, from (3.39), we obtain

$$ d^{2}(x_{m_{k}+1},\hat{u})\leq (1-\alpha _{m_{k}})d^{2}(x_{m_{k}}, \hat{u})+\alpha _{m_{k}} \bigl(\alpha _{m_{k}}d^{2}(u, \hat{u}) + 2(1 - \alpha _{m_{k}})\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{m_{k}}\hat{u}}\rangle \bigr). $$

In addition, from (3.40) we have \(d^{2}(x_{m_{k}},\hat{u})\leq d^{2}(x_{m_{k}+1},\hat{u})\), so the last inequality yields

$$ d^{2}(x_{m_{k}},\hat{u})\leq \alpha _{m_{k}}d^{2}(u, \hat{u}) + 2(1 - \alpha _{m_{k}})\langle \overrightarrow{u\hat{u}}, \overrightarrow{s_{m_{k}}\hat{u}}\rangle . $$

Hence, by (3.42),

$$ \lim_{k\to \infty}d^{2}(x_{m_{k}},\hat{u})=0, $$

and consequently \(\lim_{k\to \infty}d^{2}(x_{m_{k}+1},\hat{u})=0\). Since \(d^{2}(x_{k},\hat{u})\leq d^{2}(x_{m_{k}+1},\hat{u})\) by (3.40), it follows that \(\lim_{k\to \infty}d^{2}(x_{k},\hat{u})=0\).

Thus, from Cases 1 and 2, we conclude that \(\{x_{n}\}\) converges strongly to \(\hat{u}=P_{\Gamma}u\). This completes the proof. □

4 Numerical example

In this section, we present a numerical experiment to demonstrate the performance of our method. All codes were written in MATLAB 2020 and run on a Dell Core i5 PC.

Example 4.1

Let \(Y:=\{(x, e^{x}):x\in \mathbb{R}\}\) and \(X_{n}:=\{(n, y):y\geq e^{n}\}\) for each \(n \in \mathbb{Z}\). Set \(X:=Y\cup \bigcup_{n\in \mathbb{Z}}X_{n}\) equipped with a metric \(d:X\times X \to [0, \infty )\), defined for all \(x=(x_{1},x_{2})\), \(y=(y_{1},y_{2})\in X\) by

$$\begin{aligned} d(x, y)= \textstyle\begin{cases} \int _{x_{1}}^{y_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt + \vert x_{2} - e^{x_{1}} \vert + \vert y_{2} - e^{y_{1}} \vert &\text{if } x_{1}\neq y_{1}, \\ \vert x_{2} - y_{2} \vert &\text{if } x_{1}=y_{1}, \end{cases}\displaystyle \end{aligned}$$
(4.1)

where γ̇ is the derivative of the curve \(\gamma :\mathbb{R}\to X\) given as \(\gamma (t):=(t, e^{t})\) for each \(t\in \mathbb{R}\) (see [5]). Then, \((X, d)\) is a Hadamard space.

Now, let \(T:X\to CB(X)\) be defined by \(Tx = \{(-x_{1}, e^{-x_{1}}), (0, 0)\}\) for all \(x=(x_{1}, x_{2}) \in X\). Clearly, \(F(T) = \{(0,0)\}\), and T satisfies the end-point condition. We now check that T is nonexpansive. Indeed, for each \((x_{1}, x_{2}), (y_{1}, y_{2})\in X\), we have

$$\begin{aligned} \operatorname{dist}\bigl(\bigl(x_{1}, e^{x_{1}}\bigr), Ty \bigr) = \inf \bigl\{ d\bigl(\bigl(x_{1}, e^{x_{1}}\bigr), \bigl(-y_{1}, e^{-y_{1}}\bigr)\bigr), d\bigl((0, 0), \bigl(x_{1}, e^{x_{1}}\bigr)\bigr)\bigr\} . \end{aligned}$$

However,

$$\begin{aligned} d \bigl(\bigl(x_{1}, e^{x_{1}}\bigr), \bigl(-y_{1}, e^{-y_{1}}\bigr) \bigr) &= \textstyle\begin{cases} \int _{-y_{1}}^{x_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt + \vert e^{-y_{1}} - e^{-y_{1}} \vert + \vert e^{x_{1}} - e^{x_{1}} \vert & \text{if } x_{1}\neq -y_{1}, \\ \vert e^{-y_{1}} - e^{x_{1}} \vert & \text{if } x_{1}=-y_{1} \end{cases}\displaystyle \\ &= \textstyle\begin{cases} \int _{-y_{1}}^{x_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt & \text{if } x_{1}\neq -y_{1}, \\ \vert e^{-y_{1}} - e^{x_{1}} \vert & \text{if } x_{1}=-y_{1}, \end{cases}\displaystyle \end{aligned}$$

and

$$\begin{aligned} d \bigl((0, 0), \bigl(x_{1}, e^{x_{1}}\bigr) \bigr) &= \textstyle\begin{cases} \int _{0}^{x_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt + \vert 0 - e^{0} \vert + \vert e^{x_{1}} - e^{x_{1}} \vert & \text{if } x_{1}\neq 0, \\ e^{x_{1}} & \text{if } x_{1}=0 \end{cases}\displaystyle \\ &= \textstyle\begin{cases} \int _{0}^{x_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt + 1 & \text{if } x_{1}\neq 0, \\ 1 & \text{if } x_{1}=0. \end{cases}\displaystyle \end{aligned}$$

Therefore,

$$\begin{aligned} \operatorname{dist}\bigl(\bigl(x_{1}, e^{x_{1}}\bigr), Ty \bigr) = d\bigl(\bigl(x_{1}, e^{x_{1}}\bigr), \bigl(-y_{1}, e^{-y_{1}}\bigr)\bigr). \end{aligned}$$

Also,

$$\begin{aligned} \operatorname{dist}\bigl((0, 0), Ty\bigr) &= \inf \bigl\{ d\bigl((0, 0), \bigl(-y_{1}, e^{-y_{1}}\bigr)\bigr), d\bigl((0, 0), (0, 0) \bigr)\bigr\} \\ &= d\bigl((0, 0), (0, 0)\bigr). \end{aligned}$$

Similarly,

$$\begin{aligned} \operatorname{dist}\bigl(\bigl(y_{1}, e^{y_{1}}\bigr), Tx \bigr) &= \inf \bigl\{ d\bigl(\bigl(y_{1}, e^{y_{1}}\bigr), \bigl(-x_{1}, e^{-x_{1}}\bigr)\bigr), d\bigl((0, 0), \bigl(y_{1}, e^{y_{1}}\bigr)\bigr)\bigr\} \\ &=d\bigl(\bigl(y_{1}, e^{y_{1}}\bigr), \bigl(-x_{1}, e^{-x_{1}}\bigr)\bigr) \end{aligned}$$

and

$$\begin{aligned} \operatorname{dist}\bigl((0, 0), Tx\bigr) &= \inf \bigl\{ d\bigl((0, 0), \bigl(-x_{1}, e^{-x_{1}}\bigr)\bigr), d\bigl((0, 0), (0, 0) \bigr)\bigr\} \\ &= d\bigl((0, 0), (0, 0)\bigr). \end{aligned}$$

Hence,

$$\begin{aligned} H(Tx, Ty) &=\max \Bigl\{ \sup_{a \in Tx}\operatorname{dist}(a, Ty), \sup_{b \in Ty}\operatorname{dist}(b, Tx) \Bigr\} \\ &= \max \bigl\{ \sup \bigl\{ d\bigl(\bigl(-x_{1}, e^{-x_{1}} \bigr), \bigl(-y_{1}, e^{-y_{1}}\bigr)\bigr),d\bigl((0, 0), (0, 0)\bigr)\bigr\} , \\ &\quad \sup \bigl\{ d\bigl(\bigl(-y_{1}, e^{-y_{1}}\bigr), \bigl(-x_{1}, e^{-x_{1}}\bigr) \bigr),d\bigl((0, 0), (0, 0)\bigr)\bigr\} \bigr\} \\ &=d\bigl(\bigl(-x_{1}, e^{-x_{1}}\bigr), \bigl(-y_{1}, e^{-y_{1}}\bigr)\bigr) \\ &= \textstyle\begin{cases} \bigl\vert \int _{-x_{1}}^{-y_{1}} \Vert \dot{\gamma}(t) \Vert _{2}\,dt \bigr\vert & \text{if } x_{1}\neq y_{1}, \\ \vert e^{-x_{1}} - e^{-y_{1}} \vert & \text{if } x_{1}=y_{1} \end{cases}\displaystyle \\ &\leq d(x, y). \end{aligned}$$

Therefore, T is a multivalued nonexpansive mapping.

Furthermore, let \(P(n,\mathbb{R})\) be the space of \(n\times n\) symmetric positive definite matrices endowed with the Riemannian metric

$$ \langle A,B\rangle _{E}:=Tr\bigl(E^{-1}AE^{-1}B \bigr), $$

for all \(A,B\in T_{E}(P(n,\mathbb{R}))\) and every \(E\in P(n,\mathbb{R})\). The pair \((P(n,\mathbb{R}), \langle \cdot ,\cdot \rangle _{E} )\) is a Hadamard space (see [27]). Let \(\mathbb{R}^{+}\) be the set of positive real numbers. Now, consider the space \(P(n,\mathbb{R})\) with \(n=1\), equipped with the inner product \(\langle a,b\rangle _{\lambda}= \frac{1}{\lambda ^{2}}ab\) for \(\lambda >0\) and \(a,b\in T_{\lambda}\mathbb{R}^{+}=\mathbb{R}\). Let \((X,d)\) be the metric space with \(X=\mathbb{R}^{+}\) and \(d:X\times X \to \mathbb{R}\) defined by

$$ d(a,b)= \vert \ln{a} - \ln{b} \vert , $$

with the geodesic between \(a,b\in X\) given by \(\gamma (\kappa )=a (\frac{b}{a} )^{\kappa}\), \(\kappa \in [0,1]\). Therefore, the pair \((X, d)\) is a CAT(0) space, and the geodesic between a and b satisfies

$$\begin{aligned} \ln \gamma (\kappa )& = \ln \biggl(a \biggl(\frac{b}{a} \biggr)^{\kappa}\biggr)=\ln{a}+ \kappa (\ln{b}-\ln{a})=(1-\kappa )\ln{a}+\kappa \ln{b}. \end{aligned}$$
(4.2)

Now, let \(f:X\times X\to \mathbb{R}\) be a bifunction defined by \(f(x,y)=\ln{x} (\ln \frac{y}{x} )\). From (4.2), we have that

$$\begin{aligned} f\bigl(x,\gamma (\kappa )\bigr)&=\ln{x} \biggl(\ln \frac{\gamma (\kappa )}{x} \biggr) = (1- \kappa )\ln{x} \biggl(\ln \frac{a}{x} \biggr) + \kappa \ln{x} \biggl(\ln \frac{b}{x} \biggr) \\ & = (1-\kappa )f(x,a)+\kappa f(x,b). \end{aligned}$$

Clearly, the bifunction f satisfies assumptions (A1) and (A2). Next, we show that f satisfies assumption (A3). Let \(x,y,z\in X\), then

$$\begin{aligned} f(x,y)+f(y,z)-f(x,z)& = \ln{x} \biggl(\ln \frac{y}{x} \biggr)+\ln{y} \biggl( \ln \frac{z}{y} \biggr)-\ln{x} \biggl(\ln \frac{z}{x} \biggr) \end{aligned}$$
(4.3)
$$\begin{aligned} & = \ln{x} \biggl(\ln \frac{y}{x}-\ln \frac{z}{x} \biggr)+ \ln{y} \biggl(\ln \frac{z}{y} \biggr) \end{aligned}$$
(4.4)
$$\begin{aligned} & = \ln{x} \biggl(\ln \frac{y}{z} \biggr)-\ln{y} \biggl(\ln \frac{y}{z} \biggr) \end{aligned}$$
(4.5)
$$\begin{aligned} & = (\ln{x}-\ln{y}) (\ln{y}-\ln{z}) \end{aligned}$$
(4.6)
$$\begin{aligned} & \geq - \vert \ln{x}-\ln{y} \vert \vert \ln{y}-\ln{z} \vert = -d(x,y)d(y,z) \end{aligned}$$
(4.7)
$$\begin{aligned} & \geq -\frac{1}{2}d^{2}(x,y)-\frac{1}{2}d^{2}(y,z). \end{aligned}$$
(4.8)

Hence, f satisfies the Lipschitz-like condition of Definition 2.8 with constants \(c_{1}=c_{2}=\frac{1}{2}\). Moreover,

$$\begin{aligned} f(x,y)+f(y,x)& = \ln{x} \biggl(\ln \frac{y}{x} \biggr)+\ln{y} \biggl( \ln \frac{x}{y} \biggr) \end{aligned}$$
(4.9)
$$\begin{aligned} & = \ln{x} \biggl(\ln \frac{y}{x} \biggr)-\ln{y} \biggl(\ln \frac{y}{x} \biggr) \end{aligned}$$
(4.10)
$$\begin{aligned} & = \biggl(\ln \frac{x}{y} \biggr) \biggl(\ln \frac{y}{x} \biggr) \\ & = - \biggl(\ln \frac{x}{y} \biggr)^{2}\leq 0. \end{aligned}$$
(4.11)

Hence, f is monotone (and thus pseudomonotone).
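
For later use in the experiment, we record that, taking for illustration \(C=X\), the subproblems (3.1) and (3.2) admit closed forms for this bifunction. Writing \(u=\ln y\), the objective of (3.1) becomes \(\ln x_{n}(u-\ln x_{n})+\frac{1}{2\lambda _{n}}(u-\ln x_{n})^{2}\), whose minimizer satisfies \(\ln x_{n}+\frac{1}{\lambda _{n}}(u-\ln x_{n})=0\); the objective of (3.2) leads in the same way to \(\ln y_{n}+\frac{1}{\lambda _{n}}(u-\ln x_{n})=0\). Hence,

$$ y_{n}=\exp \bigl((1-\lambda _{n})\ln x_{n} \bigr) \quad \text{and}\quad w_{n}=\exp (\ln x_{n}-\lambda _{n}\ln y_{n}). $$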

For the sake of numerical computation, we choose \(\alpha _{n} = \frac{1}{n+1}\), \(\beta _{n} = \frac{2n}{5n+3}\), \(\lambda _{0} = 0.9\), \(\mu = 0.6\), \(u = \frac{\sqrt{3}}{3}\); for Algorithm EA, we choose \(y_{0} = \frac{1}{33}\), \(\lambda _{n} = \frac{1}{2c_{1}}\); for Algorithm EAL, we take \(\delta = 0.4\), \(\lambda _{n} = \frac{1}{2c_{1}}\), \(\alpha _{n} = \frac{1}{n+1}\); and, in addition, for Algorithm HEAL, we take \(\beta _{n} = \frac{2n}{5n+5}\). We run the algorithms from three different starting points (Cases I–III). The stopping criterion used for the algorithms is \(\mathrm{Err} = d(x_{n}, y_{n}) < 10^{-6}\). The numerical results are shown in Table 1 and Figs. 1–3. The numerical computation shows that our proposed algorithm successfully approximates the common solution of the pseudomonotone equilibrium problem and the fixed point of a nonexpansive mapping. Furthermore, it performs better than the other methods in the literature in terms of the number of iterations and the CPU time taken for the computation.
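
The following sketch reproduces the flavour of this computation, assuming the closed-form subproblems recorded above, the quoted parameters, and the hypothetical nonexpansive choice \(Tx=\{\sqrt{x}\}\) (a contraction in the metric \(d(a,b)=|\ln a-\ln b|\) with \(F(T)=\{1\}\), so that \(\Gamma =\{1\}\)); since the paper does not spell out every detail of the experimental setup, this block is illustrative and does not reproduce Table 1.

```python
import math

def f(x, y):
    return math.log(x) * math.log(y / x)

def d(a, b):
    return abs(math.log(a) - math.log(b))

def saea_log(x0=3.0, u=math.sqrt(3) / 3, lam0=0.9, mu=0.6,
             tol=1e-6, max_iter=5000):
    xn, lam = x0, lam0
    for n in range(1, max_iter + 1):
        alpha = 1.0 / (n + 1)
        beta = 2.0 * n / (5.0 * n + 3.0)
        a = math.log(xn)
        yn = math.exp((1.0 - lam) * a)             # closed form of (3.1)
        wn = math.exp(a - lam * math.log(yn))      # closed form of (3.2)
        if d(xn, yn) < tol:                        # Err = d(x_n, y_n)
            break
        hn = math.sqrt(wn)                         # h_n in T w_n = {sqrt(w_n)}
        sn = beta * math.log(hn) + (1.0 - beta) * math.log(wn)
        xn_next = math.exp(alpha * math.log(u) + (1.0 - alpha) * sn)  # via (4.2)
        gap = f(xn, wn) - f(xn, yn) - f(yn, wn)
        if gap > 0:                                # step-size rule (3.3)
            lam = min(lam, mu * (d(xn, yn)**2 + d(wn, yn)**2) / (2.0 * gap))
        xn = xn_next
    return xn, n

x_final, n_used = saea_log()
print(x_final, n_used)   # x_final approaches the common solution x* = 1
```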

Figure 1. Example 4.1, Case I

Figure 2. Example 4.1, Case II

Figure 3. Example 4.1, Case III

Table 1. Computational results for Example 4.1

5 Conclusion

In this paper, we studied a self-adaptive extragradient algorithm for approximating a common solution of a pseudomonotone equilibrium problem and a fixed-point problem for a multivalued nonexpansive mapping in Hadamard spaces. We proposed an algorithm and obtained strong convergence without prior knowledge of the Lipschitz-like constants of the pseudomonotone bifunction. Furthermore, we provided a numerical experiment to demonstrate the efficiency of our algorithm. Our result extends and complements recent results in the literature.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Aremu, K.O., Izuchukwu, C., Mebawondu, A.A., Mewomo, O.T.: A viscosity type proximal point algorithm for monotone equilibrium problem and fixed point problem in a Hadamard space. Asian-Eur. J. Math. 14, 2150058 (2021). https://doi.org/10.1142/S1793557121500583


  2. Berg, I.D., Nikolaev, I.G.: Quasilinearization and curvature of Alexandrov spaces. Geom. Dedic. 133, 195–218 (2008)


  3. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)


  4. Bokodisa, A.T., Jolaoso, L.O., Aphane, M.: A parallel hybrid Bregman subgradient extragradient method for a system of pseudomonotone equilibrium and fixed point problems. Symmetry 3, 216 (2021). https://doi.org/10.3390/sym13020216


  5. Chaipunya, P., Kumam, P.: On the proximal point method in Hadamard spaces. Optimization 66, 1647–1665 (2017)


  6. Colao, V., Lopez, G., Marino, G., Martín-Márquez, V.: Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 388, 61–77 (2012)


  7. Dedieu, J.P., Priouret, P., Malajovich, G.: Newton’s method on Riemannian manifolds: covariant alpha theory. IMA J. Numer. Anal. 23(3), 395–419 (2003)


  8. Dehghan, H., Rooin, J.: Metric projection and convergence theorems for nonexpansive mapping in Hadamard spaces (2014). arXiv:1410.1137v1 [math.FA]

  9. Dhompongsa, S., Kaewkhao, A., Panyanak, B.: Lim’s theorem for multivalued mappings in CAT(0) spaces. J. Math. Anal. Appl. 312(2), 478–487 (2005)


  10. Dhompongsa, S., Kirk, W.A., Panyanak, B.: Nonexpansive set-valued mappings in metric and Banach spaces. J. Nonlinear Convex Anal. 8, 35–45 (2007)


  11. Dhompongsa, S., Kirk, W.A., Sims, B.: Fixed points of uniformly Lipschitzian mappings. Nonlinear Anal. 65(4), 762–772 (2006)


  12. Dhompongsa, S., Panyanak, B.: On Δ-convergence theorems in CAT(0) spaces. Comput. Math. Appl. 56, 2572–2579 (2008)


  13. Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequalities, III, pp. 103–113. Academic Press, New York (1972)


  14. Hieu, D.V.: Common solutions to pseudomonotone equilibrium problems. Bull. Iranian Math. Soc. 42(5), 1207–1219 (2016)


  15. Hieu, D.V., Muu, L.D., Ahn, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016). https://doi.org/10.1007/s11075-015-0092-5


  16. Hieu, D.V., Thai, B.H., Kumam, P.: Parallel modified methods for pseudomonotone equilibrium problems and fixed point problems for quasi-nonexpansive mappings. Adv. Oper. Theory 5, 1684–1717 (2020)


  17. Isiogugu, F.O.: Demiclosedness principle and approximation theorems for certain classes of multivalued mappings in Hilbert spaces. Fixed Point Theory Appl. 2013, 61 (2013)


  18. Iusem, A.N., Mohebbi, V.: Convergence analysis of the extragradient method for equilibrium problems in Hadamard spaces. Comput. Appl. Math. 39(2), 1–22 (2020). https://doi.org/10.1007/s40314-020-1076-1


  19. Izuchukwu, C., Aremu, K.O., Mebawondu, A.A., Mewomo, O.T.: A viscosity iterative technique for equilibrium and fixed point problems in a Hadamard space. Appl. Gen. Topol. 20(1), 193–210 (2019)

  20. Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization (2020). https://doi.org/10.1080/02331934.2020.1716752

  21. Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: A parallel combination extragradient method with Armijo line searching for finding common solutions of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo, II Ser. 69, 711–735 (2020)

  22. Jolaoso, L.O., Aphane, M.: A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl. 2020, 9 (2020). https://doi.org/10.1186/s13663-020-00676-y

  23. Jolaoso, L.O., Lukumon, G.A., Aphane, M.: Convergence theorem for system of pseudomonotone equilibrium and split common fixed point problems in Hilbert spaces. Boll. Unione Mat. Ital. (2021). https://doi.org/10.1007/s40574-020-00271-4

  24. Jost, J.: Convex functionals and generalized harmonic maps into spaces of nonpositive curvature. Comment. Math. Helv. 70, 659–673 (1995)

  25. Kakavandi, B.A., Amini, M.: Duality and subdifferential for convex functions on complete CAT(0) metric spaces. Nonlinear Anal. 73, 3450–3455 (2010)

  26. Khammahawong, K., Kumam, P., Chaipunya, P.: An extragradient algorithm for strongly pseudomonotone equilibrium problems on Hadamard manifolds. Thai J. Math. 18(1), 350–371 (2020)

  27. Khatibzadeh, H., Mohebbi, V.: Approximating solutions of equilibrium problems in Hadamard spaces. Miskolc Math. Notes 20, 281–297 (2019)

  28. Kimura, Y., Kishi, Y.: Equilibrium problems and their resolvents in Hadamard spaces. J. Nonlinear Convex Anal. 19(9), 1503–1513 (2018)

  29. Kirk, W.A., Panyanak, B.: A concept of convergence in geodesic spaces. Nonlinear Anal. 68(12), 3689–3696 (2008)

  30. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)

  31. Kumam, P., Chaipunya, P.: Equilibrium problems and proximal algorithms in Hadamard spaces (2018). arXiv:1807.10900v1 [math.OC]

  32. Li, C., Wang, J.H.: Newton’s method on Riemannian manifolds: Smale’s point estimation theory under the γ-condition. IMA J. Numer. Anal. 26(2), 228–251 (2006)

  33. Lim, T.C.: Remarks on some fixed point theorems. Proc. Am. Math. Soc. 60, 179–182 (1976)

  34. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)

  35. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)

  36. Ranjbar, S., Khatibzadeh, H.: Δ-convergence and w-convergence of modified Mann iteration for a family of asymptotically nonexpansive type mappings in complete CAT(0) spaces. Fixed Point Theory 17, 151–158 (2016)

  37. Rehman, H., Kumam, P., Cho, Y.J., Yordsorn, P.: Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 282 (2019)

  38. Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 36(1), 82–113 (2021)

  39. Rehman, H.U., Kumam, P., Dong, Q.L., Peng, Y., Deebani, W.: A new Popov’s subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 70(12), 2675–2710 (2021)

  40. Tran, D.Q., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)

  41. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)

  42. Yang, J., Liu, H.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179, 197–211 (2018)


Acknowledgements

The authors appreciate the support of their institutions.

Funding

Not applicable.

Author information

Contributions

K.O. wrote the main manuscript, L.O. prepared the figures, and O.K. read through the proofs. All authors reviewed the manuscript.

Corresponding author

Correspondence to Olawale Kazeem Oyewole.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Aremu, K.O., Jolaoso, L.O. & Oyewole, O.K. A self-adaptive extragradient method for fixed-point and pseudomonotone equilibrium problems in Hadamard spaces. Fixed Point Theory Algorithms Sci Eng 2023, 4 (2023). https://doi.org/10.1186/s13663-023-00742-1

