Let Γ and Λ denote the following classes of functions:
- \(\Gamma=\{\eta\colon[0, \infty) \longrightarrow[0, \infty), \eta \mbox{ is continuous and monotonically increasing}\}\);
- \(\Lambda=\{\xi\colon[0, \infty) \longrightarrow[0, \infty), \xi \mbox{ is bounded on any bounded interval in } [0, \infty)\}\).
We now discuss some properties of two special classes of functions in Λ.
Let \(\Theta= \{\theta\in\Lambda: \underline{\lim}\, \theta(z_{n}) > 0 \mbox{ whenever } \{z_{n}\} \mbox{ is any sequence of nonnegative real numbers converging to } l> 0\}\).
We note that Θ is nonempty. For an illustration, define \(\theta_{1}\) on \([0,\infty)\) by \(\theta_{1}(x)=e^{2x}\). Then \(\theta_{1}\in\Theta\), and we observe that \(\theta_{1}(0)=1 > 0\). On the other hand, if \(\theta_{2}(x)=x^{3}\), \(x\in[0, \infty)\), then \(\theta_{2}\in\Theta\) and \(\theta_{2}(0)= 0\).
Also, for any \(\theta\in\Theta\), it is clear that \(\theta(x)> 0\) for \(x>0\), while \(\theta(0)\) need not be equal to 0.
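To verify these memberships explicitly (an expanded check, not needed elsewhere), let \(\{z_{n}\}\) be any sequence of nonnegative real numbers converging to \(l > 0\); then, by continuity,
$$\underline{\lim}\, \theta_{1}(z_{n}) = e^{2l} > 0 \quad\mbox{and}\quad \underline{\lim}\, \theta_{2}(z_{n}) = l^{3} > 0, $$
and both functions are bounded on any bounded interval, so \(\theta_{1}, \theta_{2}\in\Theta\). Moreover, the claim that \(\theta(x) > 0\) for \(x > 0\) follows by applying the defining property of Θ to the constant sequence \(z_{n} = x\).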
Let \(\Upsilon= \{\varphi\in\Lambda: \overline{\lim}\, \varphi (z_{n}) < l \mbox{ whenever } \{z_{n}\} \mbox{ is any sequence of nonnegative real numbers converging to } l > 0\}\).
It follows from the definition that, for any \(\varphi\in\Upsilon\), \(\varphi(y) < y\) for all \(y > 0\).
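Indeed (an expanded justification of this observation), for \(y > 0\) the constant sequence \(z_{n} = y\) converges to \(y > 0\), so
$$\varphi(y) = \overline{\lim}\, \varphi(z_{n}) < y. $$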
Theorem 2.1
Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exist \(\eta\in\Gamma\) and \(\xi, \theta\in\Lambda\) such that
- (i) for \(x, y\in[0, \infty)\), \(\eta(x)\leq\xi(y) \Longrightarrow x\leq y\);
- (ii) \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) > 0\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z> 0\);
- (iii) for all \(x, y, u, v \in A_{0}\),
$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow\quad\eta \bigl(d(u, v)\bigr) \leq\xi\bigl(M(x, y, u, v)\bigr) - \theta \bigl(M(x, y, u, v) \bigr), $$
where \(M(x, y, u, v) = \max \{d(x, y), \frac{d(x, u) + d(y, v)}{2}, \frac{d(y, u) + d(x, v)}{2} \}\).
Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).
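As an illustration (this particular choice is ours and not part of the theorem), the hypotheses (i) and (ii) are satisfied, for example, by
$$\eta(x)=x, \qquad\xi(x)=kx \quad(0< k< 1), \qquad\theta(x)=0, \quad x\in[0, \infty): $$
indeed, \(\eta(x)\leq\xi(y)\) means \(x\leq ky\leq y\), and, for any sequence \(\{z_{n}\}\) of nonnegative real numbers converging to \(z>0\), \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) = (1-k)z > 0\). With this choice, condition (iii) reduces to the Banach-type proximal contraction \(d(u, v)\leq k\, M(x, y, u, v)\).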
Proof
It follows from the definition of \(A_{0}\) and \(B_{0}\) that for every \(x\in A_{0}\) there exists \(y\in B_{0}\) such that \(d(x, y) = d(A, B)\) and conversely, for every \(y'\in B_{0}\) there exists \(x'\in A_{0}\) such that \(d(x', y') = d(A, B)\). Since \(S(A_{0})\subseteq B_{0}\), for every \(x \in A_{0}\) there exists a \(y \in A_{0}\) such that \(d(y, Sx) = d(A, B)\).
By the hypothesis of the theorem there exist \(x_{0}, x_{1} \in A_{0}\) for which \(x_{0} \preceq x_{1}\) and
$$ d(x_{1}, Sx_{0}) = d(A, B). $$
(2.1)
Now, \(x_{1}\in A_{0}\) and \(S(A_{0}) \subseteq B_{0}\) imply the existence of a point \(x_{2} \in A_{0}\) such that
$$ d(x_{2}, Sx_{1}) = d(A, B). $$
(2.2)
As S is proximally increasing on \(A_{0}\), we get \(x_{1} \preceq x_{2}\). In this way we obtain a sequence \(\{x_{n}\}\) in \(A_{0}\) such that for all \(n \geq0\),
$$ x_{n} \preceq x_{n + 1} $$
(2.3)
and
$$ d(x_{n + 1}, Sx_{n}) = d(A, B). $$
(2.4)
By the hypothesis (iii), \(x_{n} \preceq x_{n + 1}\), \(d(x_{n + 1}, Sx_{n}) = d(A, B)\) and \(d(x_{n + 2}, Sx_{n + 1}) = d(A, B)\) imply that
$$ \eta\bigl(d(x_{n + 1}, x_{n + 2})\bigr) \leq\xi \bigl(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) \bigr) - \theta\bigl(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2})\bigr), $$
(2.5)
where
$$\begin{aligned} &M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) \\ &\quad= \max \biggl\{ d(x_{n}, x_{n + 1}), \frac{d(x_{n}, x_{n + 1}) + d(x_{n + 1}, x_{n + 2})}{2}, \frac{d(x_{n + 1}, x_{n + 1}) + d(x_{n}, x_{n + 2})}{2} \biggr\} \\ &\quad= \max \biggl\{ d(x_{n}, x_{n + 1}), \frac{d(x_{n}, x_{n + 1}) + d(x_{n + 1}, x_{n + 2})}{2}, \frac{ d(x_{n}, x_{n + 2})}{2} \biggr\} . \end{aligned}$$
By the triangle inequality, \(\frac{d(x_{n}, x_{n+2})}{2}\leq\frac {d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2}\). So, it follows that
$$ M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2})=\max \biggl\{ d(x_{n}, x_{n+1}), \frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2} \biggr\} . $$
Let \(U_{n} = d(x_{n}, x_{n + 1})\), for all \(n \geq0\).
Case 1: \(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) = d(x_{n}, x_{n +1})\). Then by (2.5),
$$ \eta\bigl(d(x_{n + 1}, x_{n + 2})\bigr) \leq\xi \bigl(d(x_{n}, x_{n+1})\bigr) - \theta \bigl(d(x_{n}, x_{n+1})\bigr), $$
that is,
$$ \eta(U_{n+1})\leq\xi(U_{n})- \theta(U_{n}), $$
(2.6)
which implies that \(\eta(U_{n+1})\leq\xi(U_{n})\). Then it follows by the hypothesis (i) of the theorem that \(U_{n+1} \leq U_{n}\), for all \(n \geq0\).
Case 2: \(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) = \frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2} = \frac{U_{n} + U_{n + 1}}{2} = V_{n}\) (say). Then it follows from (2.5) that
$$ \eta(U_{n+1})\leq\xi(V_{n})- \theta(V_{n}), $$
(2.7)
which implies that \(\eta(U_{n+1})\leq\xi(V_{n})= \xi (\frac {U_{n+1} + U_{n}}{2} )\). Again, by the hypothesis (i) of the theorem, it follows that \(U_{n+1} \leq \frac{U_{n} + U_{n + 1}}{2}\), that is, \(U_{n+1} \leq U_{n}\), for all \(n \geq0\).
From Case 1 and Case 2, we conclude that \(\{U_{n}\}\) is a monotone decreasing sequence of nonnegative real numbers. As \(\{U_{n}\}\) is bounded below by zero, there exists a \(t \geq0\) such that
$$ \lim_{n\rightarrow\infty} U_{n} = \lim_{n\rightarrow\infty} d(x_{n}, x_{n + 1}) = t. $$
(2.8)
Since \(V_{n} = \frac{U_{n} + U_{n+1}}{2}\), it then follows that
$$ \lim_{n\rightarrow\infty} V_{n} = t. $$
(2.9)
Taking the limit supremum on both sides of the inequality (2.6), and using (2.8), the continuity of η, and the properties of ξ and θ, we obtain
$$ \eta(t) \leq \overline{\lim}\, \xi(U_{n}) + \overline{\lim}\, \bigl(- \theta(U_{n})\bigr). $$
Since \(\overline{\lim}\, (- \theta(U_{n})) = - \underline{\lim}\, \theta(U_{n})\), it follows that
$$ \eta(t) \leq \overline{\lim}\, \xi(U_{n}) - \underline{\lim}\, \theta(U_{n}), $$
that is,
$$ \eta(t) - \overline{\lim}\, \xi(U_{n}) + \underline{\lim}\, \theta(U_{n}) \leq0, $$
which, by the hypothesis (ii) and (2.8), is a contradiction unless \(t = 0\).
Arguing similarly as above, from (2.7) and (2.8), we have
$$ \eta(t) - \overline{\lim}\, \xi(V_{n}) + \underline{\lim}\, \theta(V_{n}) \leq0, $$
which, by the hypothesis (ii) and (2.9), is a contradiction unless \(t = 0\). Hence
$$ U_{n} = d(x_{n}, x_{n + 1}) \longrightarrow0 \quad\mbox{as } n\longrightarrow\infty. $$
(2.10)
Next we show that \(\{x_{n}\}\) is a Cauchy sequence.
Suppose that \(\{x_{n}\}\) is not a Cauchy sequence. Then there exist \(\delta> 0\) and two sequences \(\{m(k)\}\) and \(\{n(k)\}\) of positive integers such that for all positive integers k, \(n(k) > m(k) > k\) and \(d(x_{m(k)}, x_{n(k)})\geq\delta\). Choosing \(n(k)\) to be the smallest such positive integer, we get
$$n(k) > m(k) > k,\quad d(x_{m(k)}, x_{n(k)})\geq\delta \quad\mbox{and} \quad d(x_{m(k)}, x_{n(k)-1})< \delta. $$
Now,
$$\delta\leq d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{n(k)-1}) + d(x_{n(k)-1}, x_{n(k)}) < \delta+ d(x_{n(k)-1}, x_{n(k)}). $$
From the above inequality and (2.10), it follows that
$$ \lim_{k\rightarrow\infty} d(x_{m(k)}, x_{n(k)})= \delta. $$
(2.11)
Again,
$$d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{m(k)+1}) + d(x_{m(k)+1}, x_{n(k)+1})+ d(x_{n(k)+1}, x_{n(k)}) $$
and
$$d(x_{m(k)+1}, x_{n(k)+1})\leq d(x_{m(k)+1}, x_{m(k)})+ d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$
The above two inequalities imply that
$$\begin{aligned} &d(x_{m(k)}, x_{n(k)}) - d(x_{m(k)}, x_{m(k)+1}) - d(x_{n(k)+1}, x_{n(k)})\\ &\quad \leq d(x_{m(k)+1}, x_{n(k)+1}) \leq d(x_{m(k)+1}, x_{m(k)})+ d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). \end{aligned}$$
From the above inequality, (2.10) and (2.11), we have
$$ \lim_{k\rightarrow\infty} d(x_{m(k)+1}, x_{n(k)+1}) = \delta. $$
(2.12)
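Explicitly (an expanded step), letting \(k\longrightarrow\infty\) in the last chain of inequalities and using (2.10) and (2.11), we get
$$\delta\leq\underline{\lim}\, d(x_{m(k)+1}, x_{n(k)+1})\leq\overline{\lim}\, d(x_{m(k)+1}, x_{n(k)+1})\leq\delta, $$
so the limit in (2.12) exists and equals δ.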
Again,
$$d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{n(k)+1})+ d(x_{n(k)+1}, x_{n(k)}) $$
and
$$d(x_{m(k)}, x_{n(k)+1})\leq d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$
The above two inequalities imply that
$$d(x_{m(k)}, x_{n(k)}) - d(x_{n(k)+1}, x_{n(k)}) \leq d(x_{m(k)}, x_{n(k)+1})\leq d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$
From the above inequality, (2.10) and (2.11), we have
$$ \lim_{k\rightarrow\infty} d(x_{m(k)}, x_{n(k)+1})= \delta. $$
(2.13)
Similarly, we can prove that
$$ \lim_{k\rightarrow\infty}d(x_{n(k)}, x_{m(k)+1}) = \delta. $$
(2.14)
By the construction of the sequence \(\{x_{n}\}\), we have
$$x_{m(k)} \preceq x_{n(k)},\quad d(x_{m(k) + 1}, Sx_{m(k)}) = d(A, B) \quad\mbox{and}\quad d(x_{n(k) + 1}, Sx_{n(k)}) = d(A, B), $$
which, by the hypothesis (iii), imply that
$$\begin{aligned} &\eta\bigl(d(x_{m(k)+1}, x_{n(k)+1})\bigr) \\ &\quad\leq\xi \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)- \theta\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr), \end{aligned}$$
(2.15)
where
$$\begin{aligned} &M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\\ &\quad=\max \biggl\{ d(x_{m(k)}, x_{n(k)}), \frac{d(x_{m(k)}, x_{m(k)+1}) + d(x_{n(k)}, x_{n(k)+1})}{2},\\ &\qquad{} \frac{d(x_{n(k)}, x_{m(k)+1}) + d(x_{m(k)}, x_{n(k)+1})}{2} \biggr\} . \end{aligned}$$
From (2.10), (2.11), (2.13), and (2.14), it follows that
$$ \lim_{k\rightarrow\infty} M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})= \delta. $$
(2.16)
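In more detail (an expanded step), the three terms of \(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\) converge to δ, 0, and δ by (2.11), (2.10), and (2.13), (2.14), respectively, so that
$$\lim_{k\rightarrow\infty} M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})= \max \biggl\{ \delta, \frac{0 + 0}{2}, \frac{\delta+ \delta}{2} \biggr\} = \delta. $$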
Taking the limit supremum on both sides of the inequality (2.15), and using (2.12), (2.16), the continuity of η, and the properties of ξ and θ, we obtain
$$ \eta(\delta) \leq \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) + \overline{\lim }\, \bigl(- \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)\bigr). $$
As \(\overline{\lim}\, (- \theta(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}))) = - \underline{\lim}\, \theta(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}))\), it follows that
$$ \eta(\delta) \leq \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) - \underline{\lim}\, \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr), $$
that is,
$$ \eta(\delta) - \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) + \underline{\lim}\, \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)\leq0, $$
which, by the hypothesis (ii) and (2.16), is a contradiction. Therefore, \(\{x_{n}\}\) is a Cauchy sequence in \(A_{0}\). Since \(A_{0}\) is a closed subset of the complete metric space \((X, d)\), there exists \(a \in A_{0} \) such that
$$ \lim_{n\rightarrow\infty} x_{n} = a; \quad \mbox{that is}, \lim_{n\rightarrow\infty} d( x_{n}, a) = 0. $$
(2.17)
First, suppose that S is continuous. Taking \(n\longrightarrow\infty\) in (2.4) and using the continuity of S, we have \(d(a, Sa) = d(A, B)\); that is, a is a best proximity point of S.
Next, suppose that X is regular. Then, since \(\{x_{n}\}\) is nondecreasing by (2.3) and converges to a by (2.17), the regularity of X gives
$$ x_{n} \preceq a \quad\mbox{for all } n \geq0. $$
(2.18)
Now \(a \in A_{0}\) and \(S(A_{0}) \subseteq B_{0}\) imply the existence of a point \(p \in A_{0}\) for which
$$ d(p, Sa) = d(A, B). $$
(2.19)
By (2.4), (2.18) and (2.19), we have
$$ x_{n} \preceq a,\quad d(x_{n+1}, Sx_{n}) = d(A, B) \quad \mbox{and} \quad d(p, Sa) = d(A, B), $$
which, by the hypothesis (iii) of the theorem, imply that
$$ \eta\bigl(d(x_{n + 1}, p)\bigr) \leq\xi \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) - \theta \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr), $$
(2.20)
where
$$ M(x_{n}, a, x_{n + 1}, p)=\max \biggl\{ d(x_{n}, a), \frac{d(x_{n}, x_{n + 1})+ d(a, p)}{2}, \frac{d(a, x_{n + 1}) + d(x_{n}, p)}{2} \biggr\} . $$
From (2.17), it follows that
$$ \lim_{n\rightarrow\infty} M(x_{n}, a, x_{n + 1}, p) = \frac{d(a, p)}{2}. $$
(2.21)
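In more detail (an expanded step), by (2.10) and (2.17), \(d(x_{n}, x_{n+1})\longrightarrow0\), \(d(x_{n}, a)\longrightarrow0\), \(d(a, x_{n+1})\longrightarrow0\), and \(d(x_{n}, p)\longrightarrow d(a, p)\), so that
$$\lim_{n\rightarrow\infty} M(x_{n}, a, x_{n + 1}, p) = \max \biggl\{ 0, \frac{0 + d(a, p)}{2}, \frac{0 + d(a, p)}{2} \biggr\} = \frac{d(a, p)}{2}. $$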
Taking the limit supremum on both sides of the inequality (2.20), and using (2.17), (2.21), the continuity and monotonicity of η, and the properties of ξ and θ, we obtain
$$ \eta\biggl(\frac{d(a, p)}{2}\biggr)\leq\eta\bigl(d(a, p)\bigr) \leq \overline { \lim}\, \xi\bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) + \overline{\lim }\, \bigl(- \theta\bigl(M(x_{n}, a, x_{n + 1}, p)\bigr)\bigr). $$
Arguing similarly as discussed above, we have
$$ \eta \biggl(\frac{d(a, p)}{2} \biggr) - \overline{\lim}\, \xi \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) + \underline{\lim}\, \theta \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr)\leq0, $$
which, by the hypothesis (ii) and (2.21), is a contradiction unless \(d(a, p) = 0\); that is, \(p = a\). Then by (2.19) we have \(d(a, Sa) = d(A, B)\); that is, a is a best proximity point of S. □
Theorem 2.2
In addition to the hypotheses of Theorem 2.1, suppose that for every \(x, y \in A_{0}\) there exists \(u \in A_{0}\) such that u is comparable to x and y. Then S has a unique best proximity point.
Proof
By Theorem 2.1, the set of best proximity points of S is nonempty. Suppose \(x, y\in A_{0}\) are two best proximity points of S; that is,
$$ d(x, Sx) = d(A, B) \quad\mbox{and}\quad d(y, Sy) = d(A, B). $$
(2.22)
By the assumption, there exists \(u \in A_{0}\) that is comparable to x and y.
Put \(u_{0} = u\). Suppose that
$$ u_{0} \preceq x \quad\mbox{(in the other case the proof is similar)}. $$
(2.23)
\(S(A_{0}) \subseteq B_{0}\) and \(u_{0} = u\in A_{0}\) imply the existence of a point \(u_{1} \in A_{0}\) for which
$$ d(u_{1}, Su_{0}) = d(A, B). $$
(2.24)
Since S is proximally increasing on \(A_{0}\), from (2.22), (2.23), and (2.24) we have
$$ u_{1} \preceq x. $$
(2.25)
Continuing this process, we obtain a sequence \(\{u_{n}\}\) in \(A_{0}\) such that for all \(n \geq0\),
$$ d(u_{n+1}, Su_{n}) = d(A, B) \quad\mbox{and} \quad u_{n} \preceq x. $$
(2.26)
By (2.22) and (2.26), we have
$$ u_{n} \preceq x,\quad d(u_{n+1}, Su_{n}) = d(A, B) \quad\mbox{and}\quad d(x, Sx) = d(A, B), $$
which, by the hypothesis (iii) of Theorem 2.1, imply that
$$ \eta\bigl(d(u_{n+1}, x )\bigr) \leq\xi \bigl(M(u_{n}, x, u_{n + 1}, x)\bigr) - \theta \bigl(M(u_{n}, x, u_{n + 1}, x)\bigr), $$
(2.27)
where
$$\begin{aligned} M(u_{n}, x, u_{n + 1}, x) &= \max \biggl\{ d(u_{n}, x), \frac{d(u_{n}, u_{n + 1}) + d(x, x)}{2}, \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} \\ & = \max \biggl\{ d(u_{n}, x), \frac{d(u_{n}, u_{n + 1})}{2}, \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} . \end{aligned}$$
By the triangle inequality, \(\frac{d(u_{n}, u_{n+1})}{2}\leq\frac {d(x, u_{n + 1}) + d(u_{n}, x)}{2}\). Then it follows that
$$M(u_{n}, x, u_{n + 1}, x) =\max \biggl\{ d(u_{n}, x), \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} . $$
Let \(Q_{n} = d(u_{n}, x)\), for all \(n \geq0\).
Arguing similarly as in the proof of Theorem 2.1 (Case 1 and Case 2), we can prove that \(\{Q_{n}\}\) is a monotone decreasing sequence of nonnegative real numbers and
$$ \lim_{n\rightarrow\infty}Q_{n} = \lim _{n\rightarrow\infty}d(u_{n}, x) = 0. $$
(2.28)
Similarly, we show that
$$ \lim_{n\rightarrow\infty}d(u_{n}, y) = 0. $$
(2.29)
By the triangle inequality, and using (2.28) and (2.29), we have
$$ 0 \leq d(x, y)\leq\bigl[d(x, u_{n}) + d(u_{n}, y)\bigr] \longrightarrow0 \quad\mbox{as } n\longrightarrow\infty, $$
which implies that \(d(x, y) = 0\); that is, \(x = y\); that is, the best proximity point of S is unique. □
With the help of the P-property, we have the following theorem, which is obtained as an application of Theorem 2.1.
Theorem 2.3
Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty and closed subsets of X such that \(A_{0}\) is nonempty and \((A, B)\) satisfies the P-property. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exist \(\eta\in \Gamma\) and \(\xi, \theta\in\Lambda\) such that
- (i) for \(x, y\in[0, \infty)\), \(\eta(x)\leq\xi(y) \Longrightarrow x\leq y\);
- (ii) \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) > 0\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z> 0\);
- (iii) for all \(x, y, u, v \in A_{0}\),
$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow \quad\eta \bigl(d(Sx, Sy)\bigr) \leq\xi\bigl(M(x, y, u, v)\bigr) - \theta \bigl(M(x, y, u, v)\bigr), $$
where \(M(x, y, u, v) = \max \{d(x, y), \frac{d(x, u) + d(y, v)}{2}, \frac{d(y, u) + d(x, v)}{2} \}\).
Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).
Proof
By Lemma 1.1, \(A_{0}\) is nonempty and closed. Since \((A, B)\) satisfies the P-property, \(d(u, Sx)= d(A, B)\) and \(d(v, Sy)= d(A, B)\) imply that \(d(u, v)= d(Sx, Sy)\). Then condition (iii) of the present theorem reduces to condition (iii) of Theorem 2.1. Therefore, all the conditions of Theorem 2.1 are satisfied, and the conclusion follows. □