A self-adaptive extragradient method for fixed-point and pseudomonotone equilibrium problems in Hadamard spaces
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2023, Article number: 4 (2023)
Abstract
In this work, we study a self-adaptive extragradient algorithm for approximating a common solution of a pseudomonotone equilibrium problem and a fixed-point problem for a multivalued nonexpansive mapping in Hadamard spaces. Our proposed algorithm is designed in such a way that its step size does not require knowledge of the Lipschitz-like constants of the bifunction. Under some appropriate conditions, we establish the strong convergence of the algorithm without prior knowledge of the Lipschitz constants. Furthermore, we provide a numerical experiment to demonstrate the efficiency of our algorithm. This result extends and complements recent results in the literature.
1 Introduction
In this article, we consider the problem of approximating a common solution of an equilibrium problem (EP) and a fixed-point problem for a multivalued mapping in a nonlinear space. Let C be a nonempty, closed, and convex subset of a metric space X. A multivalued nonlinear mapping \(T:X\to 2^{X}\) is said to have a fixed point \(x\in X\) if \(x\in Tx\). We denote the fixed-point set of T by \(F(T)\), i.e., \(F(T)=\{x\in X:x\in Tx\}\). Let \(f : C \times C \to \mathbb{R}\) be a bifunction. The EP is to find a point \(x^{*}\in C\) such that
$$ f\bigl(x^{*},y\bigr)\geq 0, \quad \forall y\in C, $$(1.1)
where \(f(x,x)=0\) and \(f(x,\cdot )\) is convex on C. The set of solutions of (1.1) is denoted by \(\operatorname{EP}(f,C)\). The EP was first introduced in the linear setting by Ky Fan [13] and was later developed by Muu and Oettli [35] (see also Blum and Oettli [3]). The problem (1.1) is known to cover other important mathematical problems such as the minimization problem, the variational inequality problem, the saddle-point problem, Nash equilibrium involving noncooperative games, the fixed-point problem, etc. (see [3, 35]). It is well known that many real-life problems are characterized by phenomena that can be modeled as nonlinear, nonconvex, continuous optimization problems. Approximating the solution of such problems may actually be complicated in the framework of linear spaces. Dedieu et al. [7] (see also Li and Wang [32]) noted that these problems can be overcome in the nonlinear space settings. For this reason, Colao et al. [6] extended problem (1.1) to nonlinear spaces (in particular, a Riemannian manifold) and later in 2019 Khatibzadeh and Mohebbi [27] also studied the EP in Hadamard spaces. These pioneering works have increased the interest of researchers in solving the EP (1.1) in nonlinear spaces; see [1, 19, 28, 31]. We remark here that the aforementioned works treat the case when the bifunction f in (1.1) is monotone. Usually, the iterative methods employed in this instance adopt the popular regularization technique associated with a strongly monotone subproblem. However, in the case when f in (1.1) is nonmonotone, the regularization subproblem ceases to be strongly monotone and the iterative method becomes complex.
It is well known that pseudomonotone operators are natural generalizations of monotone operators, and many real-life EPs can be modeled with pseudomonotone bifunctions. One of the notable methods used to solve the nonmonotone (in particular, the pseudomonotone) EP is the extragradient algorithm (EA). The EA was introduced by Korpelevich [30] and later refined by Tran et al. [40]. Recently, Khammahawong et al. [26] employed an EA to approximate the solution of a strongly pseudomonotone EP on a Hadamard (complete Riemannian) manifold. Their proposed algorithm is as follows: Given \(x_{0}, y_{0}\in C\), the sequence \(\{y_{n}\}\) is generated as:
where \(\{\lambda _{n}\}\) is a positive sequence and f is a strongly pseudomonotone bifunction. They obtained the convergence of (1.2) under some mild conditions. We note that algorithm (1.2) has been extensively studied in both Hilbert and Banach spaces; see [37–39] and other references therein. To obtain the convergence of the sequence generated by (1.2) (and in the work of Khatibzadeh and Mohebbi [27] in Hadamard spaces), one needs to know in advance estimates of the Lipschitz-like constants of the bifunction. These are usually not easy to derive when the structure of the bifunction is not simple, and this may affect the efficiency of the algorithm.
In 2021, Iusem and Mohebbi [18] introduced two EAs with a linesearch technique for solving the EP (1.1) with a pseudomonotone bifunction in Hadamard spaces. Their proposed algorithms are as follows:
Algorithm 1.1
(Extragradient Algorithm with Linesearch (EAL))
Initialization: Choose \(x_{0} \in C\subset X\). Take \(\delta \in (0,1), \hat{\lambda}, \bar{\lambda}\) satisfying \(0<\hat{\lambda}\leq \bar{\lambda}\), and \(\lambda _{n}\subseteq [\hat{\lambda},\bar{\lambda}]\).
Iterative Step: Given \(x_{n}\), define
If \(x_{n}=z_{n}\) stop. Otherwise, let
where
Take
where
Then,
Iusem and Mohebbi [18] obtained the Δ-convergence of EAL to an element in \(\operatorname{EP}(f,C)\). In addition, they proposed another algorithm to ensure strong convergence that was more desirable than the Δ-convergence in EAL. Precisely, the algorithm is as follows:
Algorithm 1.2
(Halpern Extragradient Algorithm with Linesearch (HEAL))
Initialization: Choose \(x_{0}\in C\), \(u \in X\). Consider a sequence \(\{\gamma _{n}\}\subset (0,1)\) such that \(\lim_{n\to \infty}\gamma _{n}=0\) and \(\sum_{n=0}^{\infty}\gamma _{n}=\infty \).
Iterative Step: Given \(x_{n}\), define
If \(x_{n}=z_{n}\) stop. Otherwise, let
where
Take
where
Then,
It is worth noting that Algorithm 1.1 and Algorithm 1.2 eliminate the challenges observed in (1.2) and [27] by introducing a linesearch technique. However, selecting step sizes by a linesearch requires additional nested inner iterations at every step, which is time consuming.
Motivated by the work of Iusem and Mohebbi [18] and other works in this direction, we introduce a self-adaptive extragradient algorithm in Hadamard spaces that does not need to have prior knowledge of Lipschitz-like constants or require a linesearch technique for execution. Our algorithm converges strongly to a common solution of the EP and the fixed point of a multivalued nonexpansive mapping. Furthermore, we give a numerical example in a Hadamard space to demonstrate the efficiency of our method. Our result extends and complements other recent results in this direction. Our contributions in this work are briefly highlighted as follows:
-
(i)
Our work extends self-adaptive algorithms for pseudomonotone EPs from linear spaces (see, for example, [4, 14–16, 21, 23]) to nonlinear spaces.
-
(ii)
Our work improves the works of Khatibzadeh and Mohebbi [27], also that of Iusem and Mohebbi [18]. That is, our algorithm neither depends on the Lipschitz constants nor uses a linesearch technique.
2 Preliminaries
In this section, we present some notations, known definitions, and useful results that will be needed in the proof of our main result. Throughout this work, we use the notations “⇀” and “→” to denote Δ-convergence and strong convergence, respectively.
Let X be a metric space and \(x, y \in X\). A geodesic from x to y is a map (or a curve) c from the closed interval \([0, d(x, y)] \subset \mathbb{R}\) to X such that \(c(0) = x\), \(c(d(x, y)) = y\) and \(d(c(t), c(t')) = |t - t'|\) for all \(t, t' \in [0, d(x, y)]\). The image of c is called a geodesic segment joining x to y. When it is unique, this geodesic segment is denoted by \([x, y]\). The space \((X, d)\) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each \(x, y \in X\). A subset C of a geodesic space X is said to be convex if, for any two points \(x, y \in C\), the geodesic joining x and y is contained in C; that is, if \(c : [0, d(x, y)] \to X\) is a geodesic such that \(x = c(0)\) and \(y = c(d(x, y))\), then \(c(t) \in C\ \forall t \in [0, d(x, y)]\). A geodesic triangle \(\Delta (x_{1},x_{2},x_{3})\) in a geodesic metric space \((X,d)\) consists of three vertices (points in X) with unparameterized geodesic segments between each pair of vertices. For any geodesic triangle there is a comparison (Alexandrov) triangle \(\bar{\Delta}\subset \mathbb{R}^{2}\), such that \(d(x_{i},x_{j})=d_{\mathbb{R}^{2}}(\bar{x}_{i},\bar{x}_{j})\), for \(i,j\in \{1,2,3\}\). A geodesic space X is a CAT(0) space if the distance between an arbitrary pair of points on a geodesic triangle Δ does not exceed the distance between its corresponding pair of points on its comparison triangle Δ̄. If Δ is a geodesic triangle in X and Δ̄ is its comparison triangle in \(\mathbb{R}^{2}\), then Δ is said to satisfy the CAT(0) inequality for all points x, y of Δ and x̄, ȳ of Δ̄ if
$$ d(x,y)\leq d_{\mathbb{R}^{2}}(\bar{x},\bar{y}). $$
Let x, y, z be points in X and \(y_{0}\) be the midpoint of the segment \([y,z]\); then the CAT(0) inequality implies
$$ d^{2}(x,y_{0})\leq \frac{1}{2}d^{2}(x,y)+\frac{1}{2}d^{2}(x,z)-\frac{1}{4}d^{2}(y,z). $$
Berg and Nikolaev [2] introduced the notion of quasilinearization in a CAT(0) space as follows: A pair \((a, b) \in X \times X\) is denoted by \(\overrightarrow{ab}\) and called a vector. The quasilinearization map \(\langle \cdot ,\cdot \rangle :(X\times X)\times (X\times X) \rightarrow \mathbb{R}\) is defined by
$$ \langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2} \bigl(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d) \bigr), \quad \forall a,b,c,d\in X. $$
It is easy to see that \(\langle \overrightarrow{ab}, \overrightarrow{ab}\rangle =d^{2}(a, b)\), \(\langle \overrightarrow{ba}, \overrightarrow{cd}\rangle =-\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle \), \(\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{ae}, \overrightarrow{cd}\rangle +\langle \overrightarrow{eb}, \overrightarrow{cd}\rangle \), and \(\langle \overrightarrow{ab}, \overrightarrow{cd}\rangle =\langle \overrightarrow{cd}, \overrightarrow{ab}\rangle \), for all \(a,b,c,d,e\in X\). Furthermore, a geodesic space X is said to satisfy the Cauchy–Schwarz inequality if
$$ \langle \overrightarrow{ab},\overrightarrow{cd}\rangle \leq d(a,b)d(c,d) $$
for all \(a,b,c,d \in X\). It is well known that a geodesically connected space is a CAT(0) space if and only if it satisfies the Cauchy–Schwarz inequality [12]. Also, complete CAT(0) spaces are called Hadamard spaces.
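These identities are easy to sanity-check numerically in the Euclidean plane, the simplest model CAT(0) space, where the quasilinearization map \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle =\frac{1}{2}(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d))\) reduces to the inner product \((b-a)\cdot (d-c)\). A small illustrative script (ours, not part of the original development):

```python
import math
import random

# Numeric check of the quasilinearization identities in the plane R^2 (a model
# CAT(0) space), where <ab, cd> reduces to the Euclidean inner product (b-a).(d-c).

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def quasi(a, b, c, d):
    # quasilinearization map, written with squared distances only
    return 0.5 * (dist(a, d) ** 2 + dist(b, c) ** 2 - dist(a, c) ** 2 - dist(b, d) ** 2)

random.seed(0)
rnd = lambda: (random.uniform(-5, 5), random.uniform(-5, 5))
a, b, c, d, e = (rnd() for _ in range(5))

assert math.isclose(quasi(a, b, a, b), dist(a, b) ** 2, abs_tol=1e-9)     # <ab,ab> = d^2(a,b)
assert math.isclose(quasi(b, a, c, d), -quasi(a, b, c, d), abs_tol=1e-9)  # antisymmetry
assert math.isclose(quasi(a, b, c, d), quasi(a, e, c, d) + quasi(e, b, c, d), abs_tol=1e-9)
assert math.isclose(quasi(a, b, c, d), quasi(c, d, a, b), abs_tol=1e-9)   # symmetry
assert quasi(a, b, c, d) <= dist(a, b) * dist(c, d) + 1e-9                # Cauchy-Schwarz
print("all quasilinearization identities verified")
```

In a general Hadamard space the same identities hold verbatim, since they involve only squared distances.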
Let \(\{x_{n}\}\) be a bounded sequence in a metric space X and \(r(\cdot ,\{x_{n}\}) : X \to [0, \infty )\) be a continuous functional defined by \(r(x,\{x_{n}\}) = \limsup_{n \to \infty} d(x, x_{n})\). The asymptotic radius of \(\{x_{n}\}\) is given by \(r(\{x_{n}\}) := \inf \{r(x, \{x_{n}\}) : x \in X\}\), while the asymptotic center of \(\{x_{n}\}\) is the set \(A(\{x_{n}\}) = \{x \in X : r(x, \{x_{n}\}) = r(\{x_{n}\})\}\). A sequence \(\{x_{n}\}\) in X is said to be Δ-convergent to a point \(x \in X\) if \(A(\{x_{n_{k}}\}) = \{x\}\) for every subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\). In this case, we say that x is the Δ-limit of \(\{x_{n}\}\) (see [11, 29]). The notion of Δ-convergence in metric spaces was introduced and studied by Lim [33], and it is known as the analog of the notion of weak convergence in Banach spaces.
Let X be a geodesic convex metric space and A be a nonempty subset of X. A subset A is called proximinal (see [9]) if for each \(x\in X\) there exists \(a\in A\) such that
$$ d(x,a)=d(x,A):=\inf \bigl\{ d(x,y):y\in A \bigr\} . $$
It is well known that if A is proximinal, then A is closed. We denote the family of all nonempty proximinal subsets of X by \(P(X)\) and the family of nonempty closed and bounded subsets of X by \(CB(X)\). If A and B are nonempty subsets of X, then the Hausdorff metric H on \(P(X)\) is defined by
$$ H(A,B)=\max \Bigl\{ \sup_{a\in A} d(a,B), \sup_{b\in B} d(b,A) \Bigr\} , $$
where \(d(a,B)=\inf \{d(a,b):b\in B\}\).
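For finite subsets, the Hausdorff metric \(H(A,B)=\max \{\sup_{a\in A}d(a,B), \sup_{b\in B}d(b,A)\}\) can be evaluated directly from this definition; the following brute-force sketch (ours, for illustration only) does exactly that:

```python
# Brute-force Hausdorff distance between finite subsets of a metric space,
# straight from the definition H(A, B) = max{sup_a d(a, B), sup_b d(b, A)}.
def hausdorff(A, B, d):
    sup_a = max(min(d(a, b) for b in B) for a in A)  # sup_{a in A} d(a, B)
    sup_b = max(min(d(a, b) for a in A) for b in B)  # sup_{b in B} d(b, A)
    return max(sup_a, sup_b)

d = lambda p, q: abs(p - q)
print(hausdorff([0.0, 1.0], [0.25, 2.0], d))  # -> 1.0
```

Note the asymmetry of the two inner terms: here \(\sup_{a}d(a,B)=0.75\) while \(\sup_{b}d(b,A)=1\), so taking the maximum of both directions is essential.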
Let \(T:X\rightarrow CB(X)\) be a multivalued mapping. A point \(x\in X\) is called a strict fixed point of T if \(Tx=\{x\}\). In this case, T is said to satisfy the end-point condition, and we denote the set of end points of T by \(E(T)\). The mapping T is said to be multivalued nonexpansive if
$$ H(Tx,Ty)\leq d(x,y), \quad \forall x,y\in X. $$
Remark 2.1
Let \(T:C\to CB(C)\) be a multivalued mapping. If \(p\in C\) is an end point of the mapping T, then p is also a fixed point of T. That is, \(E(T)\subseteq F(T)\), in fact equality holds if T is single valued. However, the converse may not hold.
Example 2.2
Let \(X=\mathbb{R}\) and \(C=\{x:0\leq x \leq 1\}\) with the usual metric. For each \(x\in C\), let \(T:C\to CB(X)\) be defined as \(Tx=[0,x^{n}]\), \(1\leq n <\infty \). It is obvious that T is nonexpansive with \(E(T)=\{0\}\) and \(F(T)=\{0,1\}\).
Definition 2.3
Let X be a Hadamard space. A multivalued nonlinear mapping \(T : X \to 2^{X}\) is said to be demiclosed if for any bounded sequence \(\{x_{n}\}\) in X such that \(\Delta -\lim_{n \to \infty}x_{n} = x^{*}\) and \(\lim_{n \to \infty} d(x_{n}, z_{n}) = 0\), (where \(z_{n}\in Tx_{n}\)) we have that \(x^{*} \in F(T)\).
Lemma 2.4
([17])
Let X be a metric space and A, B are nonempty subsets in \(P(X)\). Then, for all \(a\in A\), there exists \(b\in B\) such that \(d(a,b)\leq H(A, B)\).
Definition 2.5
Let X be a Hadamard space. A function \(h:X\to (-\infty , \infty ]\) is said to be
-
(i)
convex, if
$$ h\bigl(\lambda x\oplus (1-\lambda ) y\bigr)\leq \lambda h(x)+(1-\lambda )h(y),\quad \forall x, y\in X, \lambda \in (0, 1), $$
-
(ii)
lower semicontinuous (or upper semicontinuous) at a point \(x\in C\), if
$$\begin{aligned} h(x)\leq \liminf_{n\to \infty} h(x_{n})\qquad \Bigl(\text{or } h(x)\geq \limsup_{n\to \infty} h(x_{n}) \Bigr), \end{aligned}$$(2.4)
for each sequence \(\{x_{n}\}\) in C such that \(\lim_{n\to \infty}x_{n}=x\),
-
(iii)
lower semicontinuous (or upper semicontinuous) on C, if it is lower semicontinuous (or upper semicontinuous) at every point in C.
The convex programming problem associated with the proper, convex, and lower semicontinuous function h and \(\lambda >0\) is given by
$$ J_{\lambda}(x)=\arg \min \biggl\{ h(y)+\frac{1}{2\lambda}d^{2}(x,y) \biggr\} $$(2.5)
for all \(y\in X\).
Remark 2.6
([24])
The subproblem (2.5) is well defined for all \(\lambda >0\).
Definition 2.7
Let C be a nonempty, closed, and convex subset of a Hadamard space X. A bifunction \(f:C\times C\to \mathbb{R}\) is said to be
-
(i)
monotone, if
$$ f(x, y) + f(y, x) \leq 0, \quad \forall x,y\in C; $$
-
(ii)
pseudomonotone if
$$ f(x, y)\geq 0 \quad \Rightarrow \quad f(y, x) \leq 0, \quad \forall x,y\in C. $$
It is well known that monotone bifunctions are pseudomonotone, but the converse is not true in general (see, for instance [20, 22]).
Definition 2.8
Let X be a Hadamard space. A bifunction f is said to satisfy the Lipschitz-like continuity condition if there exist two constants \(c_{1},c_{2}>0\) such that
$$ f(x,y)+f(y,z)\geq f(x,z)-c_{1}d^{2}(x,y)-c_{2}d^{2}(y,z), \quad \forall x,y,z\in X. $$
To solve the EP, the following assumptions on the bifunction f are needed to establish our result:
-
(A1)
\(f(x,\cdot ):X\to \mathbb{R}\) is convex and lower semicontinuous for all \(x\in X\),
-
(A2)
\(f(\cdot ,y)\) is Δ-upper semicontinuous for all \(y\in X\),
-
(A3)
f satisfies the Lipschitz-type continuity condition,
-
(A4)
f is pseudomonotone.
Definition 2.9
Let C be a nonempty, closed, and convex subset of a Hadamard space X. The metric projection is a mapping \(P_{C}:X\rightarrow C\) that assigns to each \(x\in X\) the unique point \(P_{C}x\in C\) such that
$$ d(x,P_{C}x)\leq d(x,y), \quad \forall y\in C. $$
Lemma 2.10
([12])
Every bounded sequence in a Hadamard space has a Δ-convergent subsequence.
Lemma 2.11
Let X be a Hadamard space, \(x,y,z\in X\), and \(t, s\in [0,1]\). Then,
-
(i)
\(d(t x\oplus (1-t)y,z)\leq t d(x,z)+(1-t)d(y,z)\) (see [12]).
-
(ii)
\(d^{2}(t x\oplus (1-t)y,z)\leq t d^{2}(x,z)+(1-t)d^{2}(y,z)-t(1-t)d^{2}(x,y)\) (see [12]).
-
(iii)
\(d^{2}(t x\oplus (1-t)y,z)\leq t^{2} d^{2}(x,z)+(1-t)^{2}d^{2}(y,z) +2t(1-t) \langle \overrightarrow{xz},\overrightarrow{yz}\rangle \) (see [8]).
Lemma 2.12
([25])
Let X be a Hadamard space, \(\{x_{n}\}\) be a sequence in X, and \(x \in X\). Then, \(\{x_{n}\}\) Δ-converges to x if and only if
$$ \limsup_{n\to \infty} \langle \overrightarrow{xx_{n}},\overrightarrow{xy}\rangle \leq 0, \quad \forall y\in X. $$
Lemma 2.13
([8])
Let C be a nonempty, convex subset of a Hadamard space X, \(x\in X\) and \(u\in C\). Then, \(u=P_{C}x\) if and only if \(\langle \overrightarrow{ux},\overrightarrow{yu}\rangle \leq 0\), for all \(y\in C\).
Lemma 2.14
([10])
Let X be a Hadamard space and \(T:X\to X\) be a nonexpansive mapping. Then, T is Δ-demiclosed.
Lemma 2.15
([36])
Let X be a Hadamard space and \(\{x_{n}\}\) be a sequence in X. If there exists a nonempty subset C in which
-
(i)
\(\lim_{n\to \infty}d(x_{n},z)\) exists for every \(z \in C\), and
-
(ii)
if \(\{x_{n_{k}}\}\) is a subsequence of \(\{x_{n}\}\) that is Δ-convergent to x, then \(x \in C\),
then, there is a \(p \in C\) such that \(\{x_{n}\}\) is Δ-convergent to p in X.
Lemma 2.16
([41])
Let \(\{a_{n}\}\) be a sequence of nonnegative real numbers satisfying
$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\delta _{n}+\gamma _{n}, \quad n\geq 0, $$
where \(\{\alpha _{n}\}\), \(\{\delta _{n}\}\), and \(\{\gamma _{n}\}\) satisfy the following conditions:
-
(i)
\(\{\alpha _{n}\}\subset [0,1]\), \(\Sigma _{n=0}^{\infty}\alpha _{n}= \infty \),
-
(ii)
\(\limsup_{n\rightarrow \infty}\delta _{n}\leq 0\),
-
(iii)
\(\gamma _{n}\geq 0 (n\geq 0)\), \(\Sigma _{n=0}^{\infty}\gamma _{n}< \infty \).
Then, \(\lim_{n\rightarrow \infty}a_{n}=0\).
Lemma 2.17
([34])
Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{j}\}\) of \(\{n\}\) with \(a_{n_{j}}< a_{n_{j}+1}\) \(\forall j\in \mathbb{N}\). Then, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\) such that \(m_{k}\to \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):
$$ a_{m_{k}}\leq a_{m_{k}+1} \quad \text{and} \quad a_{k}\leq a_{m_{k}+1}. $$
In fact, \(m_{k}=\max \{i\leq k :a_{i}< a_{i+1}\}\).
3 Main results
In this section, we present our algorithm and its convergence analysis for approximating the solutions of the EP and the fixed point of a multivalued nonexpansive mapping in Hadamard spaces. In the following, let C be a nonempty, closed, and convex subset of a Hadamard space X, let \(f : X \times X \to \mathbb{R}\) be a bifunction satisfying (A1)–(A4), and let \(T : C \to CB(X)\) be a multivalued nonexpansive mapping such that \(Tx^{*}=\{x^{*}\}\). Suppose the solution set of the aforementioned problems is
$$ \Gamma :=\operatorname{EP}(f,C)\cap F(T). $$
Now, we present our algorithm as follows:
Algorithm 3.1
(Self-Adaptive Extragradient Algorithm (SAEA))
Initialization: Choose \(u,x_{0} \in C\), \(n \geq 0\), \(\lambda _{n}>0\), \(\mu \in (0,1)\).
Iterative steps: Take \(\{\alpha _{n}\}\subseteq (0,1)\) such that \(\lim_{n\to \infty}\alpha _{n}=0\), \(\sum_{n=0}^{ \infty}\alpha _{n}=\infty \) and \(\{\beta _{n}\}\subseteq (0,1)\) such that \(\liminf_{n \to \infty}(1-\alpha _{n})\beta _{n}>0\). Given the nth iterate, compute the \((n+1)\)th iterate via the following procedure:
- Step 1::
-
Compute
$$ y_{n} = \arg \min \biggl\{ f(x_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$(3.1)
If \(x_{n} = y_{n}\), then stop. Otherwise go to Step 2.
- Step 2::
-
Compute
$$ w_{n} = \arg \min \biggl\{ f(y_{n},y)+ \frac{1}{2\lambda _{n}}d^{2}(x_{n},y): y \in C \biggr\} . $$(3.2)
- Step 3::
-
Compute
$$\begin{aligned} x_{n+1} = \alpha _{n}u\oplus (1-\alpha _{n}) \bigl[\beta _{n}h_{n} \oplus (1-\beta _{n})w_{n} \bigr], \end{aligned}$$where \(h_{n}\in Tw_{n}\).
- Step 4::
-
Evaluate
$$\begin{aligned} \lambda _{n+1}= \textstyle\begin{cases} \min \{\lambda _{n}, \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})]_{+}} \}, \\ \quad \text{if } f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})>0, \\ \lambda _{n}, \quad \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$(3.3)
Set \(n:=n+1\) and go back to Step 1.
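In an implementation, Step 4 is a one-line update that only evaluates f and the metric at points already computed in Steps 1–2, which is precisely why no Lipschitz-like constants \(c_{1},c_{2}\) ever enter the algorithm. A minimal sketch (ours; f and d are passed in as callables, and in the otherwise-branch we keep the current step size, which is what the nonincreasing monotonicity asserted in Lemma 3.2 below requires):

```python
# Step 4 of Algorithm 3.1 in isolation (a sketch): the self-adaptive rule (3.3).
# `f` is the bifunction, `d` the metric; mu in (0, 1) controls the shrinkage.
def update_step_size(lam, mu, f, d, x_n, y_n, w_n):
    denom = f(x_n, w_n) - f(x_n, y_n) - f(y_n, w_n)
    if denom > 0:
        return min(lam, mu * (d(x_n, y_n) ** 2 + d(w_n, y_n) ** 2) / (2 * denom))
    return lam  # keep the current step size, so {lambda_n} stays nonincreasing

# Toy illustration on the real line with f(x, y) = x(y - x):
f = lambda x, y: x * (y - x)
d = lambda x, y: abs(x - y)
print(update_step_size(0.9, 0.5, f, d, 2.0, 1.0, 3.0))  # -> 0.625
```

When the denominator is positive, the new candidate is at least \(\mu /(2\max \{c_{1},c_{2}\})\) by the Lipschitz-like condition, which is the bound used in Lemma 3.2.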
We begin with the following lemma, which is crucial for the nonincreasing monotonicity of the step-size sequence (3.3). The lemma has been proved by many authors in the framework of Hilbert and Banach spaces (see [22, 42] and other references therein). We state it here in the Hadamard-space setting and give the proof for completeness.
Lemma 3.2
The sequence \(\{\lambda _{n}\}\) defined by (3.3) is monotonically nonincreasing and bounded, and
$$ \lim_{n\to \infty}\lambda _{n}=\lambda \geq \min \biggl\{ \lambda _{0}, \frac{\mu}{2\max \{c_{1},c_{2}\}} \biggr\} >0. $$
Proof
It is obvious that the sequence \(\{\lambda _{n}\}\) is monotonically nonincreasing. By the Lipschitz-like property of f, we obtain that
$$ \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[f(x_{n},w_{n})-f(x_{n},y_{n})-f(y_{n},w_{n})]} \geq \frac{\mu [d^{2}(x_{n},y_{n})+d^{2}(w_{n},y_{n})]}{2[c_{1}d^{2}(x_{n},y_{n})+c_{2}d^{2}(y_{n},w_{n})]} \geq \frac{\mu}{2\max \{c_{1},c_{2}\}}. $$
Hence, the sequence \(\{\lambda _{n}\}\) is nonincreasing and bounded below by \(\min \{\lambda _{0}, \frac{\mu}{2\max \{c_{1},c_{2}\}} \}\). Therefore, the limit \(\lim_{n\to \infty}\lambda _{n}=\lambda >0\) exists. □
The following lemma is vital in proving the convergence of our proposed Algorithm 3.1.
Lemma 3.3
Assume the bifunction f satisfies Assumption (A1). Suppose \(\{w_{n}\}\), \(\{x_{n}\}\), and \(\{y_{n}\}\) are generated as in Algorithm 3.1and \(y\in C\). Then,
Proof
Take \(y\in C\). Since \(w_{n}\) is a solution of (3.2), let \(v_{n}=t y\oplus (1-t)w_{n}\) such that \(t\in [0,1)\). Then, from Lemma 2.11(i), we have
The inequality (3.5) can be reduced to
Letting \(t\to 1^{-}\) in (3.6), we have
which implies that
□
Lemma 3.4
Suppose \(\{w_{n}\}\), \(\{x_{n}\}\), and \(\{y_{n}\}\) are sequences generated by Algorithm 3.1. Then,
Proof
From (3.2), we have
Since \(\lambda _{n}>0\), we obtain from (3.8) that
This implies from (3.6) and the quasilinearization properties that
Thus, from (3.9) and (3.10), we have
From (3.2) and Remark 2.6, we can obtain that
Hence, from (3.11) and (3.12), we have
which implies that
The following facts are obvious from the quasilinearization properties
Hence, from (3.13) and (3.14), we obtain
For each \(x^{*}\in \Gamma \), Assumption (A4) on f and \(f(x^{*},y_{n})\geq 0\) imply that \(f(y_{n},x^{*})\leq 0\). Hence, taking \(y=x^{*}\) in (3.15), we have
□
Theorem 3.5
Let C be a nonempty, closed, and convex subset of a Hadamard space X. Suppose that \(f:C\times C\to \mathbb{R}\) is a bifunction satisfying conditions (A1)–(A4) and \(T:X\to CB(X)\) is a multivalued nonexpansive mapping such that \(Tx^{*}=\{x^{*}\}\). Suppose that the solution set \(\Gamma \neq \emptyset \). Then, the sequence \(\{x_{n}\}\) generated by Algorithm 3.1converges strongly to \(\hat{u}=P_{\Gamma}\hat{u}\).
Proof
We first show that the sequence \(\{x_{n}\}\) is bounded. Let \(\kappa \in (\mu ,1)\) be some fixed number. From Lemma 3.2,
Thus, there exists \(n\in \mathbb{N}\) such that
This implies that
Let \(x^{*}\in \Gamma \) and from Algorithm 3.1, Lemma 2.11(i), (3.17), and the multivalued nonexpansivity of T we obtain
Therefore, \(\{x_{n}\}\) is bounded. It follows also that \(\{w_{n}\}\) and \(\{y_{n}\}\) are bounded.
From Algorithm 3.1, Lemma 2.11(ii), Lemma 3.4, and (3.17), we obtain that
This implies from (3.19) that
We next divide the rest of the proof into two cases:
Case 1: Assume that \(\{d^{2}(x_{n},x^{*})\}\) is monotonically nonincreasing. Then, \(\{d^{2}(x_{n},x^{*})\}\) is convergent and
Then, by this fact and the condition on \(\alpha _{n}\), we obtain that
From (3.19) we have
This implies that
We obtain from (3.23) that
Also, from (3.22) and (3.23) we obtain that
Again, from (3.22), (3.24), and (3.25), we have
By the nonexpansivity of T, (3.25), and (3.26), we obtain
From Algorithm 3.1 and Lemma 2.11(i), we obtain
Hence, from (3.24), (3.25), and the condition on \(\alpha _{n}\), we have
Since \(\{x_{n}\}\) is bounded, by Lemma 2.10 there exists a subsequence \(\{x_{n_{k}}\}\) of the sequence \(\{x_{n}\}\) such that Δ-\(\lim_{n\to \infty}x_{n_{k}}=z\) for some \(z\in C\). Then, it follows from (3.25) and the demiclosedness property of T that \(z\in F(T)\).
From Algorithm 3.1, \(w_{n}\) solves the subproblem (3.2). By letting \(v=t w_{n}\oplus (1-t)y\) such that \(t\in [0,1)\) and \(y\in C\), we have
By a similar approach as in (3.8)–(3.10), we have from (3.29) that
Letting \(t\to 1^{-}\), we obtain that
This implies from (3.31) that
which by quasilinearization properties is equivalent to
Thus, from (3.23), (3.24), and \(\lambda _{n}>0\), we have that \(f(y_{n},y)\geq 0\). Since \(\{x_{n}\}\) is Δ-convergent to z, by the fact that \(f(y_{n},y)\geq 0\) and Assumption (A2), we conclude that \(f(z,y)\geq 0\). Thus, \(z\in \operatorname{EP}(f,C)\). Hence, \(z\in \Gamma \). Now, let \(s_{n}=\beta _{n}h_{n}\oplus (1-\beta _{n})w_{n}\); then by Lemma 2.11(i), (3.24), and (3.25) we have that
Since \(\{x_{n}\}\) is bounded, we can choose a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) that is Δ-convergent to \({z}\in \Gamma \) such that
Then, by Lemma 2.13, we obtain
Furthermore, by quasilinearization properties, we have
Thus, from (3.34) and (3.35), we obtain that
Also, by the condition on \(\alpha _{n}\) and inequality (3.37), we obtain
Furthermore, we obtain from Algorithm 3.1, (3.18), Lemma 3.4, and Lemma 2.11(iii) that
Therefore, from (3.38) and (3.39), we conclude by Lemma 2.16 that \(\{x_{n}\}\) converges strongly to \(\hat{u}=P_{\Gamma}\hat{u}\).
Case 2: Suppose that \(\{d^{2}(x_{n},\hat{u})\}\) is not monotonically nonincreasing; then there exists a subsequence \(\{d^{2}(x_{n_{k}},\hat{u})\}\) of \(\{d^{2}(x_{n},\hat{u})\}\) such that \(d^{2}(x_{n_{k}},\hat{u})\leq d^{2}(x_{n_{k}+1},\hat{u})\), \(\forall k \in \mathbb{N}\). Then, by Lemma 2.17, there exists a nondecreasing sequence \(\{m_{k}\}\subseteq \mathbb{N}\) such that \(m_{k}\to \infty \) and
Thus, from Algorithm 3.1, Lemma 2.11(i), and (3.18), we have
This implies that
From (3.34), this implies that \(\lim_{k\to \infty}d(s_{m_{k}},x_{m_{k}})=0\). Hence, by this fact and \(\alpha _{m_{k}}\to 0\), we have
Following the same argument as in Case 1, we obtain
and
Hence, from (3.39), we obtain
In addition, from (3.40), we have that
which implies that
Thus, from Cases 1 and 2, we conclude that \(\{x_{n}\}\) converges strongly to \(\hat{u}=P_{\Gamma}\hat{u}\). This completes the proof. □
4 Numerical example
In this section, we present a numerical experiment to demonstrate the performance of our method. All codes were written in MATLAB 2020 on a Dell Core i5 PC.
Example 4.1
Let \(Y:=\{(x, e^{x}):x\in \mathbb{R}\}\) and \(X_{n}:=\{(n, y):y\geq e^{n}\}\) for each \(n \in \mathbb{Z}\). Set \(X:=Y\cup \bigcup_{n\in \mathbb{Z}}X_{n}\) equipped with a metric \(d:X\times X \to [0, \infty )\), defined for all \(x=(x_{1},x_{2})\), \(y=(y_{1},y_{2})\in X\) by
where γ̇ is the derivative of the curve \(\gamma :\mathbb{R}\to X\) given as \(\gamma (t):=(t, e^{t})\) for each \(t\in \mathbb{R}\) (see [5]). Then, \((X, d)\) is a Hadamard space.
Now, let \(T:X\to CB(X)\) be defined by \(Tx = \{(-x_{1}, e^{-x_{1}}), (0, 0)\}\) for all \(x=(x_{1}, x_{2}) \in X\). Clearly, \(F(T) = \{(0,0)\}\), and T also satisfies the end-point condition. We check that T is nonexpansive. Indeed, for each \((x_{1}, x_{2}), (y_{1}, y_{2})\in X\), we have
However,
and
Therefore,
Also,
Similarly,
and
Hence,
Therefore, T is a multivalued nonexpansive mapping.
Furthermore, let \(P(n,\mathbb{R})\) be the space of \((n\times n)\) symmetric positive-definite matrices endowed with the Riemannian metric
for all \(A,B\in T_{E}(P(n,\mathbb{R}))\) and every \(E\in P(n,\mathbb{R})\). The pair \((P(n,\mathbb{R}), \langle A,B\rangle _{E} )\) is a Hadamard space (see [27]). Let \(\mathbb{R}^{+}\) be the set of positive real numbers. Now, consider the space \(P(n,\mathbb{R})\) such that \(n=1\) with an inner product \(\langle a,b\rangle _{\lambda}= \frac{1}{\lambda ^{2}}ab\) for \(\lambda >0\) and \(a,b\in T_{\lambda}\mathbb{R}^{+}=\mathbb{R}\). Let \((X,d)\) be a metric space with \(X=\mathbb{R}^{+}\) and \(d:X\times X \to \mathbb{R}\) be defined by
with the geodesic between \(a,b\in X\) defined as \(\gamma (\kappa )=a (\frac{b}{a} )^{\kappa}\). Therefore, the pair \((X, d)\) is a CAT(0) space with the geodesic between a and b given as
Now, let \(f:X\times X\to \mathbb{R}\) be the bifunction defined by \(f(x,y)=\ln{x} (\ln \frac{y}{x} )\). From (4.2), we have that
Clearly, the bifunction f satisfies assumptions (A1) and (A2). Next, we show that f satisfies assumption (A3). Let \(x,y,z\in X\), then
Hence, f satisfies the Lipschitz-type condition with Lipschitz constants \(c_{1}=c_{2}=\frac{1}{2}\). Moreover,
Hence, f is monotone (and thus pseudomonotone).
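As a quick numerical sanity check (ours), one can sample random triples in this space and verify both the monotonicity of f and the Lipschitz-type condition \(f(x,y)+f(y,z)\geq f(x,z)-\frac{1}{2}d^{2}(x,y)-\frac{1}{2}d^{2}(y,z)\):

```python
import math
import random

# Spot-check that f(x, y) = ln(x) * ln(y/x) on X = (0, inf) with
# d(x, y) = |ln x - ln y| is monotone and Lipschitz-type with c1 = c2 = 1/2.
# (In log coordinates f(x,y) + f(y,z) - f(x,z) = -(b - a)(b - c), which is
# bounded below by -(b - a)^2/2 - (b - c)^2/2 by Young's inequality.)
f = lambda x, y: math.log(x) * math.log(y / x)
d = lambda x, y: abs(math.log(x) - math.log(y))

random.seed(1)
for _ in range(1000):
    x, y, z = (math.exp(random.uniform(-3, 3)) for _ in range(3))
    assert f(x, y) + f(y, x) <= 1e-12                                       # monotone
    assert f(x, y) + f(y, z) >= f(x, z) - 0.5 * d(x, y) ** 2 - 0.5 * d(y, z) ** 2 - 1e-9
print("monotone and Lipschitz-type with c1 = c2 = 1/2")
```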
For the sake of numerical computation, we choose \(\alpha _{n} = \frac{1}{n+1}\), \(\beta _{n} = \frac{2n}{5n+3}\), \(\lambda _{0} = 0.9\), \(\mu = 0.6\), \(u = \frac{\sqrt{3}}{3}\); for Algorithm EA, we choose \(y_{0} = \frac{1}{33}\), \(\lambda _{n} = \frac{1}{2c_{1}}\); for Algorithm EAL, we take \(\delta = 0.4\), \(\lambda _{n} = \frac{1}{2c_{1}}\), \(\alpha _{n} = \frac{1}{n+1}\); and for Algorithm HEAL, we additionally take \(\beta _{n} = \frac{2n}{5n+5}\). We run the algorithms from three different starting points. The stopping criterion used for all algorithms is \(\mathrm{Err} = \|x_{n} - y_{n} \| < 10^{-6}\). The numerical results are shown in Table 1 and Figs. 1–3. The computation shows that our proposed algorithm successfully approximates the common solution of the pseudomonotone equilibrium problem and the fixed-point problem of a nonexpansive mapping, and that it outperforms the other methods in the literature in terms of the number of iterations and the CPU time required.
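For readers who wish to reproduce the qualitative behavior without MATLAB, the following Python sketch (ours) runs Algorithm 3.1 on the model space \(X=\mathbb{R}^{+}\), \(d(x,y)=|\ln x-\ln y|\), with \(f(x,y)=\ln x\ln \frac{y}{x}\). For illustration we replace the multivalued mapping by the single-valued nonexpansive map \(Tx=\{\sqrt{x}\}\) (an assumption of ours: \(d(\sqrt{x},\sqrt{y})=\frac{1}{2}d(x,y)\) and \(F(T)=\{1\}\), so \(\Gamma =\{1\}\)). In log coordinates \(a_{n}=\ln x_{n}\), the proximal subproblems (3.1)–(3.2) have the closed forms \(\ln y_{n}=(1-\lambda _{n})a_{n}\) and \(\ln w_{n}=a_{n}-\lambda _{n}\ln y_{n}\) (derived by hand), and geodesic combinations are affine:

```python
import math

# Sketch of Algorithm 3.1 (SAEA) on X = (0, inf) with d(x, y) = |ln x - ln y|
# and f(x, y) = ln(x) * ln(y/x); T x = {sqrt(x)} is our illustrative choice,
# so the common solution set is Gamma = {1}.
def saea(x0, u, lam0=0.9, mu=0.6, iters=2000):
    x, lam = x0, lam0
    for n in range(iters):
        alpha = 1.0 / (n + 1)           # Halpern parameter alpha_n
        beta = 2.0 * n / (5 * n + 3)    # averaging parameter beta_n
        a = math.log(x)
        y = (1 - lam) * a               # Step 1: ln y_n (closed-form prox)
        w = a - lam * y                 # Step 2: ln w_n (closed-form prox)
        h = 0.5 * w                     # h_n in T(w_n) = {sqrt(w_n)}
        # Step 3: x_{n+1} = alpha u + (1-alpha)[beta h_n + (1-beta) w_n] in log coords
        a_next = alpha * math.log(u) + (1 - alpha) * (beta * h + (1 - beta) * w)
        # Step 4: self-adaptive update (3.3); lambda kept otherwise, so nonincreasing
        num = a * (w - a) - a * (y - a) - y * (w - y)   # f(x,w) - f(x,y) - f(y,w)
        if num > 0:
            lam = min(lam, mu * ((a - y) ** 2 + (w - y) ** 2) / (2 * num))
        x = math.exp(a_next)
    return x, lam

x, lam = saea(x0=2.0, u=math.sqrt(3) / 3)
print(x, lam)  # x drifts toward the common solution 1
```

No Lipschitz constants are supplied anywhere; in this run the step size settles after a single self-adaptive update and the iterates approach 1 at the usual Halpern-type rate. (The stop test \(x_{n}=y_{n}\) is omitted in this sketch.)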
Example 4.1, Case I
Example 4.1, Case II
Example 4.1, Case III
5 Conclusion
In this paper, we studied a self-adaptive extragradient algorithm for approximating a common solution of a pseudomonotone equilibrium problem and a fixed-point problem for a multivalued nonexpansive mapping in Hadamard spaces. We proposed an algorithm and obtained strong convergence without prior knowledge of the Lipschitz constants of the pseudomonotone bifunction. Furthermore, we provided a numerical experiment to demonstrate the efficiency of our algorithm. Our result extends and complements recent results in the literature.
Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Aremu, K.O., Izuchukwu, C., Mebawondu, A.A., Mewomo, O.T.: A viscosity type proximal point algorithm for monotone equilibrium problem and fixed point problem in a Hadamard space. Asian-Eur. J. Math. 14, 2150058 (2021). https://doi.org/10.1142/S1793557121500583
Berg, I.D., Nikolaev, I.G.: Quasilinearization and curvature of Alexandrov spaces. Geom. Dedic. 133, 195–218 (2008)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Bokodisa, A.T., Jolaoso, L.O., Aphane, M.: A parallel hybrid Bregman subgradient extragradient method for a system of pseudomonotone equilibrium and fixed point problems. Symmetry 3, 216 (2021). https://doi.org/10.3390/sym13020216
Chaipunya, P., Kumam, P.: On the proximal point method in Hadamard spaces. Optimization 66, 1647–1665 (2017)
Colao, V., Lopez, G., Marino, G., Martín-Márquez, V.: Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 388, 61–77 (2012)
Dedieu, J.P., Priouret, P., Malajovich, G.: Newton’s method on Riemannian manifolds: covariant alpha theory. IMA J. Numer. Anal. 23(3), 395–419 (2003)
Dehghan, H., Rooin, J.: Metric projection and convergence theorems for nonexpansive mapping in Hadamard spaces (2014). arXiv:1410.1137v1 [math.FA]
Dhompongsa, S., Kaewkhao, A., Panyanak, B.: Lim’s theorem for multivalued mappings in CAT(0) spaces. J. Math. Anal. Appl. 312(2), 478–487 (2005)
Dhompongsa, S., Kirk, W.A., Panyanak, B.: Nonexpansive set-valued mappings in metric and Banach spaces. J. Nonlinear Convex Anal. 8, 35–45 (2007)
Dhompongsa, S., Kirk, W.A., Sims, B.: Fixed points of uniformly Lipschitzian mappings. Nonlinear Anal. 65(4), 762–772 (2006)
Dhompongsa, S., Panyanak, B.: On △-convergence theorems in CAT(0) spaces. Comput. Math. Appl. 56, 2572–2579 (2008)
Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequalities, III, pp. 103–113. Academic Press, New York (1972)
Hieu, D.V.: Common solutions to pseudomonotone equilibrium problems. Bull. Iranian Math. Soc. 42(5), 1207–1219 (2016)
Hieu, D.V., Muu, L.D., Ahn, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016). https://doi.org/10.1007/s11075-015-0092-5
Hieu, D.V., Thai, B.H., Kumam, P.: Parallel modified methods for pseudomonotone equilibrium problems and fixed point problems for quasi-nonexpansive mappings. Adv. Oper. Theory 5, 1684–1717 (2020)
Isiogugu, F.O.: Demiclosedness principle and approximation theorems for certain classes of multivalued mappings in Hilbert spaces. Fixed Point Theory Appl. 2013, 61 (2013)
Iusem, A.N., Mohebbi, V.: Convergence analysis of the extragradient method for equilibrium problems in Hadamard spaces. Comput. Appl. Math. 39(2), 1–22 (2020). https://doi.org/10.1007/s40314-020-1076-1
Izuchukwu, C., Aremu, K.O., Mebawondu, A.A., Mewomo, O.T.: A viscosity iterative technique for equilibrium and fixed point problems in a Hadamard space. Appl. Gen. Topol. 20(1), 193–210 (2019)
Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: An inertial extragradient method via viscoscity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization (2020). https://doi.org/10.1080/02331934.2020.1716752
Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: A parallel combination extragradient method with Armijo line searching for finding common solutions of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo, II Ser. 69, 711–735 (2020)
Jolaoso, L.O., Aphane, M.: A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl. 2020, 9 (2020). https://doi.org/10.1186/s13663-020-00676-y
Jolaoso, L.O., Lukumon, G.A., Aphane, M.: Convergence theorem for system of pseudomonotone equilibrium and split common fixed point problems in Hilbert spaces. Boll. Unione Mat. Ital. (2021). https://doi.org/10.1007/s40574-020-00271-4
Jost, J.: Convex functionals and generalized harmonic maps into spaces of nonpositive curvature. Comment. Math. Helv. 70, 659–673 (1995)
Kakavandi, B.A., Amini, M.: Duality and subdifferential for convex functions on complete CAT(0) metric spaces. Nonlinear Anal. 73, 3450–3455 (2010)
Khammahawong, K., Kumam, P., Chaipunya, P.: An extragradient algorithm for strongly pseudomonotone equilibrium problems on Hadamard manifolds. Thai J. Math. 18(1), 350–371 (2020)
Khatibzadeh, H., Mohebbi, V.: Approximating solutions of equilibrium problems in Hadamard spaces. Miskolc Math. Notes 20, 281–297 (2019)
Kimura, Y., Kishi, Y.: Equilibrium problems and their resolvents in Hadamard spaces. J. Nonlinear Convex Anal. 19(9), 1503–1513 (2018)
Kirk, W.A., Panyanak, B.: A concept of convergence in geodesic spaces. Nonlinear Anal. 68(12), 3689–3696 (2008)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Kumam, P., Chaipunya, P.: Equilibrium problems and proximal algorithms in Hadamard spaces (2018). arXiv:1807.10900v1 [math.OC]
Li, C., Wang, J.H.: Newton’s method on Riemannian manifolds: Smale’s point estimation theory under the γ-condition. IMA J. Numer. Anal. 26(2), 228–251 (2006)
Lim, T.C.: Remarks on some fixed point theorems. Proc. Am. Math. Soc. 60, 179–182 (1976)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)
Ranjbar, S., Khatibzadeh, H.: Convergence and w-convergence of modified Mann iteration for a family of asymptotically nonexpansive type mappings in complete CAT(0) spaces. Fixed Point Theory 17, 151–158 (2016)
Rehman, H., Kumam, P., Cho, Y.J., Yordsorn, P.: Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 282 (2019)
Rehman, H., Kumam, P., Je Cho, Y., Suleiman, Y.I., Kumam, W.: Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 36(1), 82–113 (2021)
Rehman, H.U., Kumam, P., Dong, Q.L., Peng, Y., Deebani, W.: A new Popov’s subgradient extragradient method for two classes of equilibrium programming in a real Hilbert space. Optimization 70(12), 2675–2710 (2021)
Tran, D.Q., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
Yang, J., Liu, H.: A modified projected gradient method for monotone variational inequalities. J. Optim. Theory Appl. 179, 197–211 (2018)
Acknowledgements
The authors appreciate the support of their institutions.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
K.O. wrote the main manuscript, L.O. prepared the figures, and O.K. checked the proofs. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Aremu, K.O., Jolaoso, L.O. & Oyewole, O.K. A self-adaptive extragradient method for fixed-point and pseudomonotone equilibrium problems in Hadamard spaces. Fixed Point Theory Algorithms Sci Eng 2023, 4 (2023). https://doi.org/10.1186/s13663-023-00742-1
DOI: https://doi.org/10.1186/s13663-023-00742-1
MSC
- 65K15
- 47J25
- 65J15
- 90C33
Keywords
- Extragradient method
- Equilibrium problems
- Common fixed point
- Pseudomonotone
- Hadamard spaces