New Tseng’s extragradient methods for pseudomonotone variational inequality problems in Hadamard manifolds
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2021, Article number: 5 (2021)
Abstract
We propose Tseng’s extragradient methods for finding a solution of variational inequality problems associated with pseudomonotone vector fields in Hadamard manifolds. Under standard assumptions, such as pseudomonotonicity and Lipschitz continuity of the vector field, we prove that any sequence generated by the proposed methods converges to a solution of the variational inequality problem, whenever it exists. Moreover, we give some numerical experiments to illustrate our main results.
1 Introduction
The theory of variational inequality problems, which was first introduced by Stampacchia [1], has significant applications in numerous fields, for example, optimal control, boundary value problems, and network equilibrium problems. It has been widely studied in finite- and infinite-dimensional linear spaces; see, for instance, [2–7] and the bibliography therein.
Let K be a nonempty, closed, and convex subset of a real Hilbert space H and \(T : K \to H\) a single-valued operator. The variational inequality problem (VIP) is to find \(x^{*} \in K\) such that
$$ \bigl\langle Tx^{*}, y - x^{*} \bigr\rangle \geq 0, \quad \forall y \in K. \quad (1) $$
Many researchers have proposed and analyzed iterative algorithms for approximating the solution of the variational inequality problem (1), such as the simplest projection method [4, 8], the extragradient method [9], and the subgradient extragradient method [10–12].
One popular method is the well-known Tseng’s extragradient method presented by Tseng [13]. The algorithm is described as follows:
$$ \textstyle\begin{cases} y_{n} = P_{K}(x_{n} - \lambda Tx_{n}), \\ x_{n+1} = y_{n} + \lambda (Tx_{n} - Ty_{n}), \end{cases} $$
where \(\lambda > 0\) is a suitable step size and \(P_{K}\) denotes the metric projection onto K.
The weak convergence of this method was established in [13] under the assumption that T is monotone and Lipschitz continuous, and further studied in [14] for the case when T is pseudomonotone and Lipschitz continuous. Recently, Tseng’s extragradient method has been studied by many authors; see, for instance, [6, 7, 15–18] and the references therein.
Recently, many nonlinear problems arising in fixed point, variational inclusion, equilibrium, and optimization theory have motivated adapting the theory from linear spaces to nonlinear settings, because certain problems cannot be posed in a linear space and require a manifold structure. The extension of concepts, techniques, and methods from linear spaces to Riemannian manifolds has some significant advantages. For example, some constrained optimization problems can be viewed as unconstrained ones from the Riemannian geometry perspective; moreover, some optimization problems with nonconvex objective functions become convex through the introduction of an appropriate Riemannian metric. The investigation of extensions and developments of nonlinear problem techniques has received a lot of attention. Therefore, many authors have focused on the Riemannian framework; see, for example, [19–28] and the references therein.
Let M be an Hadamard manifold, TM the tangent bundle of M, K a nonempty, closed, geodesic convex subset of M, and exp an exponential mapping. In 2003, Németh [22] introduced the variational inequality problem on an Hadamard manifold, which is to find \(x^{*} \in K\) such that
$$ \bigl\langle Tx^{*}, \exp ^{-1}_{x^{*}}y \bigr\rangle \geq 0, \quad \forall y \in K, \quad (2) $$
where \(T : K \to TM\) is a single-valued vector field. The author generalized some basic existence and uniqueness theorems of the classical theory of variational inequality problems from Euclidean spaces to Hadamard manifolds. We use \(VIP(T,K)\) to denote the set of solutions of the variational inequality problem (2). Inspired by [22], many authors further studied this problem in Riemannian manifolds. For instance, Li et al. [29] studied variational inequality problems on general Riemannian manifolds. Tang et al. [30] introduced the proximal point method for variational inequalities with pseudomonotone vector fields. Ferreira et al. [31] suggested an extragradient-type algorithm for solving variational inequality problem (2) on Hadamard manifolds. Korpelevich’s method for solving variational inequality problems was presented by Tang and Huang [32]. Tang et al. [33] extended a projection-type method for variational inequalities from Euclidean spaces to Hadamard manifolds. In 2019, Chen et al. [23] proposed two Tseng’s extragradient methods to solve variational inequality problem (2) in Hadamard manifolds. Under the assumption that T is pseudomonotone and Lipschitz continuous, the authors proved that the sequences generated by the proposed methods converge to solutions of variational inequality problems on Hadamard manifolds. The step sizes in the first algorithm are obtained by a line search, and in the second they are computed from two previous iterates, so it is unnecessary to know the Lipschitz constant.
Motivated by the above results, in this article we propose three effective Tseng’s extragradient methods for solving variational inequality (2) on Hadamard manifolds. The step sizes in the first algorithm depend on the Lipschitz constant, those in the second are obtained by a line search, and those in the third are computed using only two previous iterates. For the last two algorithms, the Lipschitz constant need not be known. Under appropriate assumptions, we prove that any sequence generated by the proposed methods converges to a solution of variational inequality (2).
The rest of the paper is organized as follows: In Sect. 2, we present some fundamental concepts of geometry and nonlinear analysis on Riemannian manifolds, which can be found in any standard book on manifolds, such as [34–37], and will be needed in the sequel. In Sect. 3, our three algorithms based on Tseng’s extragradient method for variational inequality are presented, and we analyze their convergence on Hadamard manifolds. In Sect. 4, we provide numerical examples to show the efficiency of the proposed algorithms. In Sect. 5, we give remarks and conclusions.
2 Preliminaries
Let M be a connected finite-dimensional manifold. For \(p \in M\), let \(T_{p}M\) be the tangent space of M at p, which is a vector space of the same dimension as M. The tangent bundle of M is denoted by \(TM = \bigcup_{p \in M}T_{p}M\). A smooth mapping \(\langle \cdot , \cdot \rangle : TM \times TM \to \mathbb{R}\) is said to be a Riemannian metric on M if \(\langle \cdot , \cdot \rangle _{p} : T_{p}M \times T_{p}M \to \mathbb{R}\) is an inner product for all \(p \in M\). We denote by \(\Vert \cdot \Vert _{p}\) the norm corresponding to the inner product \(\langle \cdot , \cdot \rangle _{p}\) on \(T_{p}M\). If there is no confusion, we omit the subscript p. A differentiable manifold M endowed with a Riemannian metric \(\langle \cdot , \cdot \rangle \) is said to be a Riemannian manifold.
The length of a piecewise smooth curve \(\omega : [a,b] \to M\) joining \(\omega (a) = p\) to \(\omega (b) = q\) is defined by \(L(\omega ) = \int _{a}^{b} \Vert \omega {'}(t)\Vert \,dt\), where \(\omega {'}(t)\) is the tangent vector at \(\omega (t)\) in the tangent space \(T_{\omega (t)}M\). Minimizing this length functional over the set of all such curves, we obtain a Riemannian distance \(d(p,q)\), which induces the original topology on M.
Let ∇ be a Levi-Civita connection associated with the Riemannian manifold M. Given a smooth curve ω, a smooth vector field X along ω is said to be parallel if \(\nabla _{\omega {'}}X ={\mathbf{0}}\) where 0 denotes the zero section of TM. If \(\omega {'}\) itself is parallel, we say that ω is a geodesic, and in this case \(\Vert \omega {'}\Vert \) is a constant. When \(\Vert \omega {'}\Vert = 1\), ω is said to be normalized. A geodesic joining p to q in M is said to be a minimizing geodesic if its length equals to \(d(p,q)\).
The parallel transport \({\mathrm{P}}_{{\omega },{\omega (b)},{\omega (a)}}: T_{\omega (a)}M \to T_{ \omega (b)}M\) on the tangent bundle TM along \(\omega : [a,b] \to M\) with respect to ∇ is defined by
$$ {\mathrm{P}}_{{\omega },{\omega (b)},{\omega (a)}}(v) := V \bigl(\omega (b) \bigr), \quad \forall v \in T_{\omega (a)}M, $$
where V is the unique vector field such that \(\nabla _{\omega {'}(t)}V = {\mathbf{0}}\) for all \(t \in [a,b]\) and \(V(\omega (a)) = v\). If ω is a minimizing geodesic joining p to q, then we write \({\mathrm{P}}_{q,p}\) instead of \({\mathrm{P}}_{\omega ,q,p}\). Note that, for every \(a,b,b_{1},b_{2} \in \mathbb{R}\), we have
$$ {\mathrm{P}}_{\omega (b_{2}),\omega (b_{1})} \circ {\mathrm{P}}_{\omega (b_{1}),\omega (a)} = {\mathrm{P}}_{\omega (b_{2}),\omega (a)} \quad \text{and} \quad {\mathrm{P}}_{\omega (b),\omega (a)}^{-1} = {\mathrm{P}}_{\omega (a),\omega (b)}. $$
Also, \({\mathrm{P}}_{\omega (b),\omega (a)}\) is an isometry from \(T_{\omega (a)}M\) to \(T_{\omega (b)}M\), that is, the parallel transport preserves the inner product:
$$ \bigl\langle {\mathrm{P}}_{\omega (b),\omega (a)}u, {\mathrm{P}}_{\omega (b),\omega (a)}v \bigr\rangle _{\omega (b)} = \langle u, v \rangle _{\omega (a)}, \quad \forall u,v \in T_{\omega (a)}M. $$
A Riemannian manifold M is said to be complete if, for all \(p \in M\), all geodesics emanating from p are defined for all \(t \in \mathbb{R}\). The Hopf–Rinow theorem asserts that if M is complete then any pair of points in M can be joined by a minimizing geodesic; moreover, \((M,d)\) is a complete metric space and every bounded closed subset is compact. If M is a complete Riemannian manifold, then the exponential map \(\exp _{p} : T_{p}M \to M\) at \(p \in M\) is defined by
$$ \exp _{p}\nu := \omega _{\nu }(1,p), \quad \nu \in T_{p}M, $$
where \(\omega _{\nu }(\cdot ,p)\) is the geodesic starting from p with velocity ν (i.e., \(\omega _{\nu }(0,p) = p\) and \(\omega {'}_{\nu }(0,p) = \nu \)). Then, for any value of t, we have \(\exp _{p}t\nu =\omega _{\nu }(t,p)\) and \(\exp _{p}{\mathbf{0}} =\omega _{\nu }(0,p) =p\). Note that the mapping \(\exp _{p}\) is differentiable on \(T_{p}M\) for every \(p \in M\). The exponential map has inverse \(\exp ^{-1}_{p} : M \to T_{p}M\). Moreover, for any \(p,q \in M\), we have \(d(p,q) = \Vert \exp _{p}^{-1}q\Vert \).
A complete simply connected Riemannian manifold of nonpositive sectional curvature is said to be an Hadamard manifold. In the remaining part of this paper, M will denote a finite-dimensional Hadamard manifold.
The following proposition is well known and will be helpful.
Proposition 1
([34])
Let \(p \in M\). The exponential mapping \(\exp _{p} : T_{p}M \to M\) is a diffeomorphism, and for any two points \(p,q \in M\) there exists a unique normalized geodesic joining p to q, which can be expressed by the formula
$$ \omega (t) = \exp _{p} t\exp ^{-1}_{p}q, \quad \forall t \in [0,1]. $$
A geodesic triangle \(\triangle (p_{1},p_{2},p_{3})\) of a Riemannian manifold M is a set consisting of three points \(p_{1}\), \(p_{2}\), and \(p_{3}\), and three minimizing geodesics joining these points.
Proposition 2
([34])
Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M. Then
$$ d^{2}(p_{1},p_{2}) + d^{2}(p_{2},p_{3}) - 2 \bigl\langle \exp ^{-1}_{p_{2}}p_{1}, \exp ^{-1}_{p_{2}}p_{3} \bigr\rangle \leq d^{2}(p_{3},p_{1}) $$
and
$$ d^{2}(p_{2},p_{3}) + d^{2}(p_{3},p_{1}) - 2 \bigl\langle \exp ^{-1}_{p_{3}}p_{2}, \exp ^{-1}_{p_{3}}p_{1} \bigr\rangle \leq d^{2}(p_{1},p_{2}). $$
Moreover, if α is the angle at \(p_{1}\), then we have
$$ \bigl\langle \exp ^{-1}_{p_{1}}p_{2}, \exp ^{-1}_{p_{1}}p_{3} \bigr\rangle = d(p_{1},p_{2})\, d(p_{1},p_{3})\cos \alpha . $$
The following relation between geodesic triangles in Riemannian manifolds and triangles in \(\mathbb{R}^{2}\) can be found in [37].
Lemma 1
([37])
Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M. Then there exists a triangle \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) in \(\mathbb{R}^{2}\) for \(\triangle (p_{1},p_{2},p_{3})\) such that \(d(p_{i},p_{i+1}) = \Vert \overline{p_{i}} - \overline{p_{i+1}}\Vert \), with the indices taken modulo 3; it is unique up to an isometry of \(\mathbb{R}^{2}\).
The triangle \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) in Lemma 1 is said to be a comparison triangle for \(\triangle (p_{1},p_{2},p_{3})\). The points \(\overline{p_{1}}\), \(\overline{p_{2}}\), \(\overline{p_{3}}\) are called comparison points to the points \(p_{1}\), \(p_{2}\), \(p_{3}\), respectively.
Lemma 2
Let \(\triangle (p_{1},p_{2},p_{3})\) be a geodesic triangle in M and \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\) be its comparison triangle.
(i) Let \(\alpha _{1}\), \(\alpha _{2}\), \(\alpha _{3}\) (respectively, \(\overline{\alpha _{1}}\), \(\overline{\alpha _{2}}\), \(\overline{\alpha _{3}}\)) be the angles of \(\triangle (p_{1},p_{2},p_{3})\) (respectively, \(\triangle (\overline{p_{1}}, \overline{p_{2}}, \overline{p_{3}})\)) at the vertices \(p_{1}\), \(p_{2}\), \(p_{3}\) (respectively, \(\overline{p_{1}}\), \(\overline{p_{2}}\), \(\overline{p_{3}}\)). Then
$$ \alpha _{1} \leq \overline{\alpha _{1}}, \quad \quad \alpha _{2} \leq \overline{\alpha _{2}}, \quad \textit{and} \quad \alpha _{3} \leq \overline{\alpha _{3}}. $$
(ii) Let q be a point on the geodesic joining \(p_{1}\) to \(p_{2}\) and q̅ its comparison point in the interval \([\overline{p_{1}}, \overline{p_{2}}]\). If \(d(p_{1},q) = \Vert \overline{p_{1}} -\overline{q} \Vert \) and \(d(p_{2},q) = \Vert \overline{p_{2}} - \overline{q} \Vert \), then \(d(p_{3},q) \leq \Vert \overline{p_{3}} - \overline{q}\Vert \).
Definition 1
A subset K in an Hadamard manifold M is called geodesic convex if for all p and q in K, and for any geodesic \(\omega : [a,b] \to M\), \(a,b \in \mathbb{R}\) such that \(p =\omega (a)\) and \(q =\omega (b)\), one has \(\omega ((1-t)a + tb) \in K\), for all \(t \in [0,1]\).
Definition 2
A function \(f: M \to \mathbb{R}\) is called geodesic convex if for any geodesic ω in M, the composition function \(f \circ \omega : [a,b] \to \mathbb{R}\) is convex, that is,
$$ (f \circ \omega ) \bigl((1-t)a + tb \bigr) \leq (1-t) (f \circ \omega ) (a) + t (f \circ \omega ) (b), \quad \forall a,b \in \mathbb{R}, \forall t \in [0,1]. $$
The following remarks and lemma will be helpful in the sequel.
Remark 1
([25])
If \(x,y \in M\) and \(v \in T_{x}M\), then
Remark 2
([23])
Let \(x,y,z \in M\) and \(v \in T_{x}M\). By using (5) and Remark 1,
Lemma 3
([25])
Let \(x_{0} \in M\) and \(\{x_{n}\} \subset M\) with \(x_{n} \to x_{0}\). Then the following assertions hold:
(i) For any \(y \in M\), we have \(\exp ^{-1}_{x_{n}}y \to \exp ^{-1}_{x_{0}}y\) and \(\exp ^{-1}_{y}x_{n} \to \exp ^{-1}_{y}x_{0}\);
(ii) If \(v_{n} \in T_{x_{n}}M\) and \(v_{n} \to v_{0}\), then \(v_{0} \in T_{x_{0}}M\);
(iii) Let \(u_{n}, v_{n} \in T_{x_{n}}M\) and \(u_{0}, v_{0} \in T_{x_{0}}M\); if \(u_{n} \to u_{0}\) and \(v_{n} \to v_{0}\), then \(\langle u_{n}, v_{n}\rangle \to \langle u_{0},v_{0}\rangle \);
(iv) For every \(u \in T_{x_{0}}M\), the function \(V: M \to TM\), defined by \(V(x) = {\mathrm{P}}_{x,x_{0}}u\) for all \(x \in M\), is continuous on M.
Next, we present some concept of monotonicity and Lipschitz continuity of a single-valued vector field. Let K be a nonempty subset of M and \(\mathcal{X}(K)\) denote the set of all single-valued vector fields \(T : K \to TM\) such that \(Tx \in T_{x}M\), for each \(x \in K\).
Definition 3
A vector field \(T \in \mathcal{X}(K)\) is called
(i) monotone if
$$ \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle + \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq 0, \quad \forall x,y \in K; $$
(ii) pseudomonotone if
$$ \bigl\langle Tx, \exp ^{-1}_{x}y \bigr\rangle \geq 0 \quad \Rightarrow \quad \bigl\langle Ty, \exp ^{-1}_{y}x \bigr\rangle \leq 0, \quad \forall x,y \in K; $$
(iii) Γ-Lipschitz continuous if there is \(\Gamma > 0\) such that
$$ \Vert {\mathrm{P}}_{x,y}Ty-Tx \Vert \leq \Gamma d(x,y), \quad \forall x,y \in K. $$
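As a quick sanity check of Definition 3(ii), the snippet below numerically verifies the pseudomonotonicity implication on the manifold \(\mathbb{R}^{++}\) used in Sect. 4 (metric \(\langle u,v\rangle _{x} = uv/x^{2}\), \(\exp ^{-1}_{x}y = x\ln (y/x)\)). The vector field \(T(x) = \ln x\) is a hypothetical example chosen here for illustration, not a field from the paper:

```python
import math
import random

# Manifold operations on R^{++} with metric <u, v>_x = u*v / x^2.
def log_map(x, y):        # exp_x^{-1}(y) = x * ln(y/x)
    return x * math.log(y / x)

def inner(x, u, v):       # <u, v>_x
    return u * v / (x * x)

def T(x):                 # hypothetical vector field (illustration only)
    return math.log(x)

# Check: <Tx, exp_x^{-1} y>_x >= 0  implies  <Ty, exp_y^{-1} x>_y <= 0.
random.seed(0)
violations = 0
for _ in range(10000):
    x = random.uniform(1.0, 10.0)
    y = random.uniform(1.0, 10.0)
    if inner(x, T(x), log_map(x, y)) >= 0.0:
        if inner(y, T(y), log_map(y, x)) > 1e-12:
            violations += 1
```

On \(K = [1,10]\) the implication holds for every sampled pair, so `violations` stays zero.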
Let us end this section with the following results which are essential in establishing our main convergence theorems.
Definition 4
([19])
Let K be a nonempty subset of M and \(\{x_{n}\}\) be a sequence in M. Then \(\{x_{n}\}\) is said to be Fejér monotone with respect to K if, for all \(p \in K\) and \(n \in \mathbb{N}\),
$$ d(x_{n+1},p) \leq d(x_{n},p). $$
Lemma 4
([19])
Let K be a nonempty subset of M and \(\{x_{n}\} \subset M\) be a sequence in M such that \(\{x_{n}\}\) is Fejér monotone with respect to K. Then the following hold:
(i) For every \(p \in K\), \(d(x_{n},p)\) converges;
(ii) \(\{x_{n}\}\) is bounded;
(iii) Assume that every cluster point of \(\{x_{n}\}\) belongs to K; then \(\{x_{n}\}\) converges to a point in K.
3 Main results
In this section, we discuss three algorithms for solving pseudomonotone variational inequality problems. Throughout the remainder of this paper, unless explicitly stated otherwise, K always denotes a nonempty, closed, geodesic convex subset of an Hadamard manifold M. Consider a vector field \(T \in \mathcal{X}(K)\). In order to solve the variational inequality problem (2), we consider the following assumptions:
(H1) \(VIP(T,K)\) is nonempty.
(H2) The vector field \(T \in \mathcal{X}(K)\) is pseudomonotone and Γ-Lipschitz continuous.
First, we introduce a Tseng’s extragradient method for the variational inequality (2) on Hadamard manifolds. The step sizes in this algorithm are obtained using the Lipschitz constant. The method is described as Algorithm 1.
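For orientation, the iteration of Algorithm 1 can be sketched as follows, reconstructed from the update formula appearing in the proof of Lemma 5 and the step-size bounds used in Theorem 1; writing the first step with the metric projection \(P_{K}\) onto K is an assumption consistent with Tseng-type methods:

```latex
% Sketch of Algorithm 1 (fixed step sizes); P_K is the metric projection onto K.
\begin{aligned}
  y_{n}   &= P_{K}\bigl(\exp_{x_{n}}(-\mu_{n} T x_{n})\bigr),\\
  x_{n+1} &= \exp_{y_{n}}\bigl(\mu_{n}\,(\mathrm{P}_{y_{n},x_{n}} T x_{n} - T y_{n})\bigr),
  \qquad 0 < \mu' \le \mu_{n} \le \mu'' < \tfrac{1}{\Gamma}.
\end{aligned}
```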
The following remark gives us a stopping criterion.
Remark 3
If \(x_{n} = y_{n}\), then \(x_{n}\) is a solution. In view of (8), we get
then \(x_{n} \in VIP(T,K)\).
To prove the convergence of Algorithm 1, we need the following lemma.
Lemma 5
Suppose that assumptions (H1)–(H2) hold. Let \(\{x_{n}\}\) be a sequence generated by Algorithm 1. Then
Proof
Let \(x \in VIP(T,K)\), then from (8), we obtain
that is,
As \(x \in VIP(T,K)\), this implies \(\langle Tx,\exp ^{-1}_{x}y_{n}\rangle \geq 0\). Since T is pseudomonotone, then we get \(\langle Ty_{n},\exp ^{-1}_{y_{n}}x\rangle \leq 0\). Now,
Fix \(n \in \mathbb{N}\). Let \(\triangle (y_{n},x_{n},x) \subseteq M\) be a geodesic triangle with vertices \(y_{n}\), \(x_{n}\), and x, and \(\triangle (\overline{y_{n}},\overline{x_{n}},\overline{x}) \subseteq \mathbb{R}^{2}\) be the corresponding comparison triangle. Then, we have
Again, letting \(\triangle (x_{n+1},y_{n},x) \subseteq M\) be a geodesic triangle with vertices \(x_{n+1}\), \(y_{n}\), and x, and \(\triangle (\overline{x_{n+1}},\overline{y_{n}},\overline{x}) \subseteq \mathbb{R}^{2}\) be the corresponding comparison triangle, one obtains
Now,
If α and α̅ are the angles at the vertices \(y_{n}\) and \(\overline{y_{n}}\), respectively, then in view of Lemma 2 we get \(\alpha \leq \overline{\alpha }\). In addition, by Proposition 2, we have
Repeating the same argument as above yields
and
Substituting (14), (15), and (16) into (13), we get
It follows from Remark 2 that the last inequality becomes
From the definition of \(x_{n+1} = \exp _{y_{n}}\mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n})\), we get \(\exp _{y_{n}}^{-1}x_{n+1} = \mu _{n} ({\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n})\). From the above inequality, we obtain
Substituting (11) and (12) into (17) and using the fact that T is Γ-Lipschitz continuous, we deduce that
Thus,
Therefore, the proof is completed. □
In the light of the above lemma, we have the following result on the convergence of Algorithm 1.
Theorem 1
Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges to a solution of the variational inequality problem (2).
Proof
Since \(0 < \mu {'} \leq \mu _{n} \leq \mu {''} < \frac{1}{\Gamma }\), we deduce that \(0 < \Gamma \mu _{n} < 1\). This implies that \(0 < 1 - \Gamma ^{2} \mu _{n}^{2} < 1\). Let \(x^{*} \in VIP(T,K)\). In view of (10), we obtain
Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).
Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). By rearranging (18), and using \(\mu _{n} \in [\mu {'}, \mu {''} ]\), we have
Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), by (i) of Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. Letting \(n \to \infty \) in (19), we obtain
From the fact that the sequence \(\{x_{n}\}\) is Fejér monotone and by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). In view of (20), we get \(y_{n_{k}} \to u\) as \(k \to \infty \). Next, we show that \(u \in VIP(T,K)\). From (8), we get
and we further have
Noting Remarks 2 and 3 in the last inequality gives
It follows from (21) that
Since \(\mu _{n} \in [\mu {'}, \mu {''} ]\), we have
Utilizing Lemma 3 and letting \(k \to \infty \), we get
which implies that \(u \in VIP(T,K)\). By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 1 converges to a solution of the problem (2). This completes the proof. □
The step sizes in Algorithm 1 rely upon the Lipschitz constants. Unfortunately, these constants are often unknown or difficult to approximate. Next, we present Tseng’s extragradient method when the Lipschitz constant is unknown. Then the algorithm reads as Algorithm 2.
To prove the convergence of Algorithm 2, we need the following lemmas.
Lemma 6
([23])
The Armijo-like search rule (22) is well defined and
$$ \mu _{n} \geq \min \biggl\{ \eta , \frac{\tau l}{\Gamma } \biggr\} . $$
Lemma 7
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 2. Then
Proof
Let \(x \in VIP(T,K)\). Then, according to the proof of Lemma 5, we can obtain the following inequality:
Using (22), one obtains
Therefore, the proof is completed. □
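The Armijo-like rule (22) is not spelled out explicitly above; a standard backtracking search of this type for Tseng’s method, written for the one-dimensional manifold \(\mathbb{R}^{++}\) of Sect. 4, might look like the sketch below. The helper functions, the hypothetical field \(T(x)=\ln x\), and the acceptance test \(\mu \Vert {\mathrm{P}}_{y,x}Tx - Ty\Vert \leq \tau d(x,y)\) are all assumptions:

```python
import math

# Manifold operations on R^{++} (metric <u,v>_x = uv/x^2); K = [1, +inf).
def exp_map(x, v):   return x * math.exp(v / x)
def dist(x, y):      return abs(math.log(x / y))
def norm(x, v):      return abs(v) / x
def ptrans(x, y, v): return v * y / x          # isometric transport T_xM -> T_yM
def proj(y):         return max(y, 1.0)

def armijo_step(x, T, eta=0.5, l=0.5, tau=0.5):
    """Backtracking: largest mu in {eta * l^m} accepted by the Armijo-like test."""
    mu = eta
    while True:
        y = proj(exp_map(x, -mu * T(x)))
        # Accept when mu * ||P_{y,x} Tx - Ty||_y <= tau * d(x, y) (or y = x).
        if y == x or mu * norm(y, ptrans(x, y, T(x)) - T(y)) <= tau * dist(x, y):
            return mu, y
        mu *= l

T = math.log   # hypothetical pseudomonotone, 1-Lipschitz field (illustration only)
mu, y = armijo_step(2.0, T)
```

Because T is Lipschitz continuous, the loop terminates once \(\mu \leq \tau /\Gamma \), so no knowledge of Γ is needed.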
Based on the above two lemmas, we have the following result on the convergence of Algorithm 2.
Theorem 2
Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges to a solution of the variational inequality problem (2).
Proof
Let \(x^{*} \in VIP(T,K)\). Since \(\tau \in (0,1)\), by (23), we get
Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).
Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). In view of (25) and \(\tau \in (0,1)\), we have
and we further have
Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), using Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. By letting \(n \to \infty \) in (26), we obtain
Since the sequence \(\{x_{n}\}\) is Fejér monotone, by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). Now, using an argument similar to the proof of the fact that \(u \in VIP(T,K)\) in Theorem 1, we have \(u \in VIP(T,K)\), as required. By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 2 converges to a solution of the problem (2). Therefore, the proof is completed. □
Next, we present a modified Tseng’s extragradient method to solve the variational inequality (2). The step sizes in this algorithm are obtained by simple updating, as opposed to utilizing the line search, which brings about a lower computational cost. More precisely, the algorithm is designed as Algorithm 3.
To prove the convergence of Algorithm 3, we need the following results.
Lemma 8
([23])
The sequence \(\{\mu _{n}\}\) generated by Algorithm 3 is monotonically decreasing with lower bound \(\min \left \{ \frac{\tau }{\Gamma },\mu _{0} \right \} \).
Remark 4
([23])
By Lemma 8, the limit of \(\{\mu _{n}\}\) exists. We denote \(\mu =\lim_{n \to \infty }\mu _{n}\). Then \(\mu >0\) and \(\lim_{n \to \infty } \left(\frac{\mu _{n}}{\mu _{n+1}} \right ) =1 \).
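Lemma 8 and the bound \(\mu _{n+1} \leq \tau d(x_{n},y_{n})/\Vert {\mathrm{P}}_{y_{n},x_{n}}Tx_{n} - Ty_{n}\Vert \) used in the proof of Lemma 9 suggest the following sketch of Algorithm 3 on \(\mathbb{R}^{++}\); the exact form of the update rule and the hypothetical field \(T(x)=\ln x\) are assumptions:

```python
import math

# Manifold operations on R^{++} (metric <u,v>_x = uv/x^2); K = [1, +inf).
def exp_map(x, v):   return x * math.exp(v / x)
def dist(x, y):      return abs(math.log(x / y))
def norm(x, v):      return abs(v) / x
def ptrans(x, y, v): return v * y / x
def proj(y):         return max(y, 1.0)

def tseng_adaptive(x0, T, mu0=0.9, tau=0.5, tol=1e-12, max_iter=300):
    """Sketch of Algorithm 3: the step size is updated from two previous iterates."""
    x, mu = x0, mu0
    for _ in range(max_iter):
        y = proj(exp_map(x, -mu * T(x)))
        dxy = dist(x, y)
        if dxy <= tol:                      # stopping criterion: x_n = y_n
            break
        x_next = exp_map(y, mu * (ptrans(x, y, T(x)) - T(y)))
        diff = norm(y, ptrans(x, y, T(x)) - T(y))
        if diff > 0.0:                      # mu_{n+1} = min(mu_n, tau*d(x_n,y_n)/||.||)
            mu = min(mu, tau * dxy / diff)
        x = x_next
    return x, mu

x_star, mu_final = tseng_adaptive(5.0, math.log)   # hypothetical T, solution x* = 1
```

Consistent with Lemma 8, the step sizes never fall below \(\min \{\tau /\Gamma , \mu _{0}\}\), here 0.5 up to rounding.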
Lemma 9
Let \(\{x_{n}\}\) be a sequence generated by Algorithm 3. Then
Proof
It is easy to see that, by the definition of \(\{\mu _{n}\}\), we have
Letting \(x \in VIP(T,K)\), similarly as in the proof of Lemma 5 we can deduce the following inequality:
Substituting (31) into the above inequality, we get
Therefore, the proof is completed. □
From the above results, we are now ready to prove the convergence of Algorithm 3.
Theorem 3
Suppose that assumptions (H1)–(H2) hold. Then the sequence \(\{x_{n}\}\) generated by Algorithm 3 converges to a solution of the variational inequality problem (2).
Proof
Let \(x^{*} \in VIP(T,K)\). Then by (27) we have
From Remark 4, we have
that is, there exists \(N \geq 0\) such that \(1 - \mu _{n}^{2} \frac{\tau ^{2}}{\mu _{n+1}^{2}} >0\) for all \(n \geq N\). It follows that \(d(x_{n+1},x^{*}) \leq d(x_{n},x^{*})\) for all \(n \geq N\). Therefore, \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\).
Next, we show that \(\lim_{n \to \infty }d(x_{n},y_{n}) = 0\). From (32), we have
and we further have
Since \(\{x_{n}\}\) is Fejér monotone with respect to \(VIP(T,K)\), using Lemma 4, \(\lim_{n \to \infty }d(x_{n},x^{*})\) exists. By letting \(n \to \infty \) in the last inequality, we obtain
Since the sequence \(\{x_{n}\}\) is Fejér monotone, by (ii) of Lemma 4, \(\{x_{n}\}\) is bounded. Hence, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) which converges to a cluster point u of \(\{x_{n}\}\). Now, using an argument similar to the proof of the fact that \(u \in VIP(T,K)\) in Theorem 1, we have \(u \in VIP(T,K)\), as required. By (iii) of Lemma 4, the sequence \(\{x_{n}\}\) generated by Algorithm 3 converges to a solution of the problem (2). Therefore, the proof is completed. □
4 Numerical experiments
In this section, we present three numerical examples in the framework of Hadamard manifolds to illustrate the convergence of Algorithms 1, 2, and 3. All programs are written in Matlab R2016b and run on a PC with an Intel(R) Core(TM) i7 @ 1.80 GHz and 8.00 GB RAM.
Let \(M := \mathbb{R}^{++} = \{x \in \mathbb{R} : x >0\}\) and \((\mathbb{R}^{++},\langle \cdot , \cdot \rangle )\) be the Riemannian manifold with the Riemannian metric defined by
$$ \langle u, v \rangle := \frac{1}{x^{2}}uv, $$
for all vectors \(u,v \in T_{x}M\), where, for each \(x \in M\), the tangent space \(T_{x}M\) equals \(\mathbb{R}\). In addition, the parallel transport along the unique geodesic joining x to y is given by \({\mathrm{P}}_{y,x}v = \frac{y}{x}v\), which preserves the inner product above. The Riemannian distance \(d: M \times M \to \mathbb{R}^{+} \) is defined by
$$ d(x,y) := \biggl\vert \ln \frac{x}{y} \biggr\vert ; $$
see, for instance, [35]. Then, \((\mathbb{R}^{++},\langle \cdot , \cdot \rangle )\) is an Hadamard manifold, and the unique geodesic \(\omega : \mathbb{R} \to M\) starting from \(\omega (0) = x\) with \(v = \omega {'}(0) \in T_{x}M\) is defined by \(\omega (t) := x e^{(vt/x)}\). Therefore, the exponential map is given by
$$ \exp _{x}tv = x e^{(vt/x)}. $$
The inverse exponential map is given by
$$ \exp ^{-1}_{x}y = x \ln \frac{y}{x}. $$
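These formulas translate directly into code. Below is a minimal sketch of the manifold operations on \(\mathbb{R}^{++}\), with the rescaling parallel transport \({\mathrm{P}}_{y,x}v = (y/x)v\) induced by the metric (an assumption consistent with the isometry property of Sect. 2), together with checks of the identities \(\exp _{x}(\exp ^{-1}_{x}y) = y\) and \(d(x,y) = \Vert \exp ^{-1}_{x}y\Vert _{x}\):

```python
import math

def exp_map(x, v):   return x * math.exp(v / x)     # exp_x(v) = x e^{v/x}
def log_map(x, y):   return x * math.log(y / x)     # exp_x^{-1}(y) = x ln(y/x)
def dist(x, y):      return abs(math.log(x / y))    # d(x, y) = |ln(x/y)|
def inner(x, u, v):  return u * v / (x * x)         # <u, v>_x = uv / x^2
def norm(x, v):      return abs(v) / x
def ptrans(x, y, v): return v * y / x               # transport T_xM -> T_yM

x, y, u, v = 2.0, 5.0, 0.7, -1.3
ok_inverse  = abs(exp_map(x, log_map(x, y)) - y) < 1e-12
ok_distance = abs(dist(x, y) - norm(x, log_map(x, y))) < 1e-12
ok_isometry = abs(inner(y, ptrans(x, y, u), ptrans(x, y, v)) - inner(x, u, v)) < 1e-12
```

All three checks succeed, confirming that the transport is an isometry for this metric.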
Example 1
Let \(K = [1,+\infty )\) be a geodesic convex subset of \(\mathbb{R}^{++}\), and \(T: K \to \mathbb{R}\) be a single-valued vector field defined by
Let \(x,y \in K\) and \(\langle Tx, \exp ^{-1}_{x}y \rangle \geq 0\), then we have
Hence, T is pseudomonotone. Next, we show that T is Lipschitz continuous. Given \({x,y \in K}\),
and thus, T is \(1/2\)-Lipschitz continuous. So, T is pseudomonotone and \(1/2\)-Lipschitz continuous. Clearly, the variational inequality has a unique solution. Hence,
We deduce that \(VIP(T,K) = \{1\}\). Choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). With the initial point \(x_{0} = 2\), the numerical results of Algorithms 1, 2, and 3 are shown in Table 1 and Fig. 1.
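Since the defining formula of T in Example 1 is not reproduced here, the run below uses a hypothetical field \(T(x) = \ln x\), which is pseudomonotone and 1-Lipschitz continuous on \(K = [1,+\infty )\) with \(VIP(T,K) = \{1\}\), to illustrate Algorithm 1 with a fixed step size \(\mu < 1/\Gamma \):

```python
import math

def exp_map(x, v):   return x * math.exp(v / x)
def ptrans(x, y, v): return v * y / x
def proj(y):         return max(y, 1.0)             # projection onto K = [1, +inf)

def tseng_fixed_step(x0, T, mu=0.4, iters=400):
    """Sketch of Algorithm 1 with a fixed step size mu < 1/Gamma."""
    x = x0
    for _ in range(iters):
        y = proj(exp_map(x, -mu * T(x)))
        x = exp_map(y, mu * (ptrans(x, y, T(x)) - T(y)))
    return x

x_star = tseng_fixed_step(2.0, math.log)            # converges toward x* = 1
```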
Example 2
Let \(K = [1,2]\) be a geodesic convex subset of \(\mathbb{R}^{++}\), and \(T: K \to \mathbb{R}\) be a single-valued vector field defined by
Let \(x,y \in K\) and \(\langle Tx, \exp ^{-1}_{x}y \rangle \geq 0\). Then we have
Hence, T is pseudomonotone, and it is easy to see that T is 1-Lipschitz continuous. Thereby, T is pseudomonotone and 1-Lipschitz continuous. Clearly, the variational inequality has a unique solution. Hence,
We deduce that \(VIP(T,K) = \{2\}\). Choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). With the initial point \(x_{0} = 1\), the numerical results of Algorithms 1, 2, and 3 are shown in Table 2 and Fig. 2.
Example 3
Following [38], let \(\mathbb{R}^{++}_{n}\) be the product space of \(\mathbb{R}^{++}\), that is, \(\mathbb{R}^{++}_{n} = \{x = (x_{1},x_{2},\ldots ,x_{n})\in \mathbb{R}^{n} : x_{i} >0, \ i = 1,\ldots ,n\}\). Let \(M = (\mathbb{R}^{++}_{n}, \langle \cdot , \cdot \rangle )\) with the metric defined by \(\langle u, v \rangle := u^{T}V(x)v\), for \(x \in \mathbb{R}_{n}^{++}\) and \(u,v \in T_{x}\mathbb{R}_{n}^{++}\), where \(V(x)\) is the diagonal matrix defined by \(V(x) = \operatorname{diag} (x_{1}^{-2},x_{2}^{-2},\ldots ,x_{n}^{-2} )\). In addition, the Riemannian distance is defined by \(d(x,y) := \sqrt{\sum_{i=1}^{n}\ln ^{2}\frac{x_{i}}{y_{i}}} \), for all \(x, y \in \mathbb{R}_{n}^{++}\).
Let \(K = \{x = (x_{1},x_{2},\ldots ,x_{n}) : 1 \leq x_{i} \leq 10, i = 1,2,\ldots ,n\} \) be a closed, geodesic convex subset of \(\mathbb{R}_{n}^{++}\) and \(T: K \to \mathbb{R}^{n}\) be a single-valued vector field defined by
This vector field is monotone and 1-Lipschitz continuous on \(\mathbb{R}_{n}^{++}\); see [39, Example 1]. We choose \(\eta = l = \tau =0.5\) and let \(\mu _{n} = \frac{1}{2} - \frac{1}{n+3}\). The starting points are randomly generated by the MATLAB built-in function rand:
The terminal criterion is \(d(x_{n},y_{n}) \leq \epsilon \). For the numerical experiment, we take \(\epsilon = 10^{-5}\), and \(n = 50, 100, 500\). The number of iterations (Iteration) and the computing time (Time) measured in seconds are described in Table 3.
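A vectorized sketch in the spirit of Example 3 can be built from the diagonal metric above. The field of [39, Example 1] is not reproduced here, so a hypothetical componentwise field \(T(x)_{i} = \ln x_{i}\) is used, together with the stopping criterion \(d(x_{n},y_{n}) \leq \epsilon \):

```python
import math
import random

def exp_map(x, v):   return [xi * math.exp(vi / xi) for xi, vi in zip(x, v)]
def ptrans(x, y, v): return [vi * yi / xi for xi, yi, vi in zip(x, y, v)]
def proj(y):         return [min(max(yi, 1.0), 10.0) for yi in y]   # K = [1, 10]^n
def dist(x, y):      return math.sqrt(sum(math.log(xi / yi) ** 2 for xi, yi in zip(x, y)))

def T(x):            # hypothetical componentwise field (illustration only)
    return [math.log(xi) for xi in x]

def tseng_fixed_step(x0, mu=0.4, eps=1e-5, max_iter=2000):
    """Algorithm 1 sketch with stopping criterion d(x_n, y_n) <= eps."""
    x, n_iter = x0, 0
    while n_iter < max_iter:
        y = proj(exp_map(x, [-mu * ti for ti in T(x)]))
        if dist(x, y) <= eps:
            break
        p = ptrans(x, y, T(x))
        x = exp_map(y, [mu * (pi - ti) for pi, ti in zip(p, T(y))])
        n_iter += 1
    return x, n_iter

random.seed(1)
x0 = [random.uniform(1.0, 10.0) for _ in range(50)]
sol, iters = tseng_fixed_step(x0)
```

For this hypothetical field the iterates approach the constant vector \((1,\ldots ,1)\) well before the iteration cap.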
The aforementioned results show that Algorithm 1 is much quicker than Algorithms 2 and 3. In particular, if the Lipschitz constant is known, Algorithm 1 works well. When the Lipschitz constant is unavailable, we observe that Algorithm 2 is much quicker than Algorithm 3; however, Algorithm 3 has a lower computational cost than Algorithm 2.
5 Conclusions
In this paper, we focus on the variational inequality problem in Hadamard manifolds. Three Tseng-type extragradient methods are proposed to solve pseudomonotone variational inequality problems. The convergence of the proposed algorithms is established under standard conditions. Moreover, numerical experiments are supplied to illustrate the effectiveness of our algorithms.
Availability of data and materials
Not applicable.
References
Stampacchia, G.: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Classics in Applied Mathematics, vol. 31. SIAM, Philadelphia (2000). https://doi.org/10.1137/1.9780898719451. Reprint of the 1980 original
Iusem, A.N., Nasri, M.: Korpelevich’s method for variational inequality problems in Banach spaces. J. Glob. Optim. 50(1), 59–76 (2011). https://doi.org/10.1007/s10898-010-9613-x
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Vol. II. Springer Series in Operations Research. Springer, New York (2003)
Hu, Y.H., Song, W.: Weak sharp solutions for variational inequalities in Banach spaces. J. Math. Anal. Appl. 374(1), 118–132 (2011). https://doi.org/10.1016/j.jmaa.2010.08.062
Thong, D.V., Hieu, D.V.: Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 341, 80–98 (2018). https://doi.org/10.1016/j.cam.2018.03.019
Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 78(4), 1045–1060 (2018). https://doi.org/10.1007/s11075-017-0412-z
Khanh, P.D., Vuong, P.T.: Modified projection method for strongly pseudomonotone variational inequalities. J. Glob. Optim. 58(2), 341–350 (2014). https://doi.org/10.1007/s10898-013-0042-5
Korpelevič, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12(4), 747–756 (1976)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2011). https://doi.org/10.1007/s10957-010-9757-3
Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26(4–5), 827–845 (2011). https://doi.org/10.1080/10556788.2010.551536
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9), 1119–1132 (2012). https://doi.org/10.1080/02331934.2010.539689
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000). https://doi.org/10.1137/S0363012998338806
Boţ, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 287(1), 49–60 (2020). https://doi.org/10.1016/j.ejor.2020.04.035
Boţ, R.I., Csetnek, E.R.: An inertial Tseng’s type proximal algorithm for nonsmooth and nonconvex optimization problems. J. Optim. Theory Appl. 171(2), 600–616 (2016). https://doi.org/10.1007/s10957-015-0730-z
Wang, F., Xu, H.K.: Weak and strong convergence theorems for variational inequality and fixed point problems with Tseng’s extragradient method. Taiwan. J. Math. 16(3), 1125–1136 (2012). https://doi.org/10.11650/twjm/1500406682
Thong, D.V., Van Hieu, D.: Strong convergence of extragradient methods with a new step size for solving variational inequality problems. Comput. Appl. Math. 38(3), Article ID 136 (2019). https://doi.org/10.1007/s40314-019-0899-0
Thong, D.V., Van Hieu, D.: New extragradient methods for solving variational inequality problems and fixed point problems. J. Fixed Point Theory Appl. 20(3), Article ID 129 (2018). https://doi.org/10.1007/s11784-018-0610-x
Ferreira, O.P., Oliveira, P.R.: Proximal point algorithm on Riemannian manifolds. Optimization 51(2), 257–270 (2002). https://doi.org/10.1080/02331930290019413
Li, C., López, G., Martín-Márquez, V., Wang, J.H.: Resolvents of set-valued monotone vector fields in Hadamard manifolds. Set-Valued Var. Anal. 19(3), 361–383 (2011). https://doi.org/10.1007/s11228-010-0169-1
Ansari, Q.H., Babu, F., Li, X.B.: Variational inclusion problems in Hadamard manifolds. J. Nonlinear Convex Anal. 19(2), 219–237 (2018)
Németh, S.Z.: Variational inequalities on Hadamard manifolds. Nonlinear Anal. 52(5), 1491–1498 (2003). https://doi.org/10.1016/S0362-546X(02)00266-3
Chen, J., Liu, S., Chang, X.: Tseng’s extragradient methods for variational inequality on Hadamard manifolds. Appl. Anal. (2019). https://doi.org/10.1080/00036811.2019.1695783
Ansari, Q.H., Babu, F.: Existence and boundedness of solutions to inclusion problems for maximal monotone vector fields in Hadamard manifolds. Optim. Lett. 14(3), 711–727 (2020). https://doi.org/10.1007/s11590-018-01381-x
Li, C., López, G., Martín-Márquez, V.: Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 79(3), 663–683 (2009). https://doi.org/10.1112/jlms/jdn087
Wang, J.H., López, G., Martín-Márquez, V., Li, C.: Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 146(3), 691–708 (2010). https://doi.org/10.1007/s10957-010-9688-z
Li, C., Yao, J.C.: Variational inequalities for set-valued vector fields on Riemannian manifolds: convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 50(4), 2486–2514 (2012). https://doi.org/10.1137/110834962
Fan, J., Qin, X., Tan, B.: Tseng’s extragradient algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Appl. Anal., 1–14 (2020)
Li, S.L., Li, C., Liou, Y.C., Yao, J.C.: Existence of solutions for variational inequalities on Riemannian manifolds. Nonlinear Anal. 71(11), 5695–5706 (2009). https://doi.org/10.1016/j.na.2009.04.048
Tang, G.J., Zhou, L.W., Huang, N.J.: The proximal point algorithm for pseudomonotone variational inequalities on Hadamard manifolds. Optim. Lett. 7(4), 779–790 (2013). https://doi.org/10.1007/s11590-012-0459-7
Ferreira, O.P., Pérez, L.R.L., Németh, S.Z.: Singularities of monotone vector fields and an extragradient-type algorithm. J. Glob. Optim. 31(1), 133–151 (2005). https://doi.org/10.1007/s10898-003-3780-y
Tang, G.J., Huang, N.J.: Korpelevich’s method for variational inequality problems on Hadamard manifolds. J. Glob. Optim. 54(3), 493–509 (2012). https://doi.org/10.1007/s10898-011-9773-3
Tang, G.J., Wang, X., Liu, H.W.: A projection-type method for variational inequalities on Hadamard manifolds and verification of solution existence. Optimization 64(5), 1081–1096 (2015). https://doi.org/10.1080/02331934.2013.840622
Sakai, T.: Riemannian Geometry. Translations of Mathematical Monographs, vol. 149. Am. Math. Soc., Providence (1996). Translated from the 1992 Japanese original by the author
do Carmo, M.P.: Riemannian Geometry. Mathematics: Theory & Applications. Birkhäuser Boston, Boston (1992). https://doi.org/10.1007/978-1-4757-2201-7. Translated from the second Portuguese edition by Francis Flaherty
Udrişte, C.: Convex Functions and Optimization Methods on Riemannian Manifolds. Mathematics and Its Applications, vol. 297. Kluwer Academic, Dordrecht (1994). https://doi.org/10.1007/978-94-015-8390-9
Bridson, M.R., Haefliger, A.: Metric Spaces of Non-positive Curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319. Springer, Berlin (1999). https://doi.org/10.1007/978-3-662-12494-9
Da Cruz Neto, J.X., Ferreira, O.P., Pérez, L.R.L., Németh, S.Z.: Convex- and monotone-transformable mathematical programming problems and a proximal-like point method. J. Glob. Optim. 35(1), 53–69 (2006). https://doi.org/10.1007/s10898-005-6741-9
Ansari, Q.H., Babu, F.: Proximal point algorithm for inclusion problems in Hadamard manifolds with applications. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01483-0
Acknowledgements
This research is supported by Postdoctoral Fellowship from King Mongkut’s University of Technology Thonburi (KMUTT), Thailand. Moreover, this research project is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005. The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT.
Funding
Postdoctoral Fellowship from King Mongkut’s University of Technology Thonburi (KMUTT), Thailand. Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005. The Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT, Thailand.
Author information
Contributions
All authors contributed equally in writing this article. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Khammahawong, K., Kumam, P., Chaipunya, P. et al. New Tseng’s extragradient methods for pseudomonotone variational inequality problems in Hadamard manifolds. Fixed Point Theory Algorithms Sci Eng 2021, 5 (2021). https://doi.org/10.1186/s13663-021-00689-1