The Wiener-Hopf Equation Technique for Solving General Nonlinear Regularized Nonconvex Variational Inequalities
Fixed Point Theory and Applications volume 2011, Article number: 86 (2011)
Abstract
In this paper, we introduce and study some new classes of extended general nonlinear regularized nonconvex variational inequalities and extended general nonconvex Wiener-Hopf equations. Using the projection operator technique, we establish the equivalence between the extended general nonlinear regularized nonconvex variational inequalities and fixed point problems, as well as the extended general nonconvex Wiener-Hopf equations. Using this equivalent formulation, we discuss the existence and uniqueness of a solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. We apply the equivalent alternative formulation and a nearly uniformly Lipschitzian mapping S to construct some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. We also consider the convergence analysis of the suggested iterative schemes under suitable conditions.
Mathematics Subject Classification (2010)
Primary 47H05; Secondary 47J20, 49J40
1 Introduction
The theory of variational inequalities, initially introduced by Stampacchia [1] in 1964, is a branch of the mathematical sciences dealing with general equilibrium problems. It has a wide range of applications in economics, operations research, industry, physics, and the engineering sciences. Many research papers have been written lately, both on the theory and applications of this field. Important connections with main areas of pure and applied sciences have been made; see for example [2, 3] and the references cited therein. The development of variational inequality theory can be viewed as the simultaneous pursuit of two different lines of research. On the one hand, it reveals fundamental facts about the qualitative aspects of the solutions of important classes of problems; on the other hand, it enables us to develop highly efficient and powerful new numerical methods for solving, for example, obstacle, unilateral, free, moving and complex equilibrium problems. One of the most interesting and important problems in variational inequality theory is the development of efficient numerical methods. There is a substantial number of numerical methods, including the projection method and its variant forms, Wiener-Hopf (normal) equations, the auxiliary principle, and descent frameworks, for solving variational inequalities and complementarity problems. For applications, physical formulations, numerical methods and other aspects of variational inequalities, see [1–52] and the references therein.
The projection method and its variant forms represent an important tool for finding approximate solutions of various types of variational and quasi-variational inequalities, the origin of which can be traced back to Lions and Stampacchia [31]. Projection-type methods were developed in the 1970s and 1980s. The main idea of this technique is to establish the equivalence between the variational inequalities and fixed point problems by using the concept of projection. This alternative formulation enables us to suggest some iterative methods for computing the approximate solution. Shi [50, 51] and Robinson [48] considered the problem of solving a system of equations which are called the Wiener-Hopf equations or normal maps. Shi [50] and Robinson [48] proved that the variational inequalities and the Wiener-Hopf equations are equivalent by using the projection technique. It turned out that this alternative equivalent formulation is more general and flexible. It has been shown in [48–53] that the Wiener-Hopf equations provide a simple, elegant and convenient device for developing some efficient numerical methods for solving variational inequalities and complementarity problems.
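The fixed-point idea behind the projection method can be sketched numerically. The following is a hypothetical illustration (the operator, the set K and all parameter values are our own choices, not from the paper): for a convex K, the scheme u_{k+1} = P_K(u_k − ρT(u_k)) converges to the solution of the variational inequality 〈T(u), v − u〉 ≥ 0 for all v ∈ K when T is strongly monotone and Lipschitz continuous and ρ is small enough.

```python
import numpy as np

# Illustrative instance: K = [0, 1]^2 (convex) and an affine, strongly
# monotone operator T(u) = A u + b with A symmetric positive definite.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])
b = np.array([-1.0, 3.0])

def T(u):
    return A @ u + b

def proj_box(u, lo=0.0, hi=1.0):
    # Projection onto the box [lo, hi]^2 is a componentwise clamp.
    return np.clip(u, lo, hi)

rho = 0.1
u = np.zeros(2)
for _ in range(500):
    u = proj_box(u - rho * T(u))

# At a solution, u is a fixed point of the projection map.
residual = np.linalg.norm(u - proj_box(u - rho * T(u)))
print(u, residual)  # u close to (0.5, 0.0); residual ~ 0
```

Here the solution sits on the boundary of K (its second coordinate is clamped at 0), which is exactly the situation where the projection characterization, rather than the equation T(u) = 0, is needed.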
It should be pointed out that almost all the results regarding the existence of solutions and iterative schemes for solving variational inequalities and related optimization problems have been considered in the convex setting. Consequently, all the techniques are based on the properties of the projection operator over convex sets, which may not hold in general when the sets are nonconvex. It is known that the uniformly prox-regular sets are nonconvex and include the convex sets as special cases; for more details, see for example [23, 28, 29, 46]. In recent years, Bounkhel et al. [23], Noor [36, 41] and Pang et al. [45] have considered variational inequalities in the context of uniformly prox-regular sets.
On the other hand, related to the variational inequalities, we have the problem of finding the fixed points of the nonexpansive mappings, which is the subject of current interest in functional analysis. It is natural to consider a unified approach to these two different problems. Motivated and inspired by the research going in this direction, Noor and Huang [43] considered the problem of finding the common element of the set of the solutions of variational inequalities and the set of the fixed points of the nonexpansive mappings. Noor [38] suggested and analyzed some three-step iterative algorithms for finding the common elements of the set of the solutions of the Noor variational inequalities and the set of the fixed points of nonexpansive mappings. He also discussed the convergence analysis of the suggested iterative algorithms under some conditions.
Recently, Qin and Noor [47] established the equivalence between the general variational inequalities and the general Wiener-Hopf equations. They proposed and analyzed a new iterative method for solving variational inequalities and related optimization problems. They also considered the problem of finding a common element of the set of fixed points of nonexpansive mappings and the set of solutions of the general variational inequalities.
It is well known that every nonexpansive mapping is a Lipschitzian mapping. Lipschitzian mappings have been generalized by various authors. Sahu [53] introduced and investigated nearly uniformly Lipschitzian mappings as generalization of Lipschitzian mappings.
Motivated and inspired by the above works, in the present paper, some new classes of extended general nonlinear regularized nonconvex variational inequalities and extended general nonconvex Wiener-Hopf equations are introduced and studied, and, by the projection technique, the equivalence between the extended general nonlinear regularized nonconvex variational inequalities and fixed point problems, as well as the extended general nonconvex Wiener-Hopf equations, is proved. Using this equivalent formulation, the existence and uniqueness of a solution of the problem of extended general nonlinear regularized nonconvex variational inequalities are discussed. Applying the equivalent alternative formulation and a nearly uniformly Lipschitzian mapping S, some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequalities are defined. The convergence analysis of the suggested iterative schemes under suitable conditions is discussed. Some remarks about results established by Noor [38], Noor et al. [44] and Qin and Noor [47] are presented, and it is shown that their results are special cases of ours. The results obtained in this paper may be viewed as a refinement and improvement of previously known results.
2 Preliminaries and basic results
Throughout this article, we let ℋ be a real Hilbert space equipped with an inner product 〈·,·〉 and corresponding norm ||·||, and let K be a nonempty closed subset of ℋ. We denote by d_K(·) or d(·, K) the usual distance function to the subset K, i.e., d_K(u) = inf_{v ∈ K} ||v − u||. Let us recall the following well-known definitions and some auxiliary results of nonlinear convex analysis and nonsmooth analysis [27–29, 46].
Definition 2.1. Let u ∈ ℋ be a point not lying in K. A point v ∈ K is called a closest point or a projection of u onto K if d_K(u) = ||u − v||. The set of all such closest points is denoted by P_K(u), i.e.,
P_K(u) = {v ∈ K : d_K(u) = ||u − v||}.
Definition 2.2. The proximal normal cone of K at a point u ∈ K is given by
N^P_K(u) = {ξ ∈ ℋ : there exists α > 0 such that u ∈ P_K(u + αξ)}.
Clarke et al. [28], in Proposition 1.1.5, give a characterization of N^P_K(u) as follows:
Lemma 2.3. Let K be a nonempty closed subset in ℋ. Then ξ ∈ N^P_K(u) if and only if there exists a constant α = α(ξ, u) > 0 such that 〈ξ, v − u〉 ≤ α||v − u||² for all v ∈ K.
The above inequality is called the proximal normal inequality. The special case in which K is closed and convex is an important one. In Proposition 1.1.10 of [28], the authors give the following characterization of the proximal normal cone of a closed and convex subset K of ℋ:
Lemma 2.4. Let K be a nonempty closed and convex subset in ℋ. Then ξ ∈ N^P_K(u) if and only if 〈ξ, v − u〉 ≤ 0 for all v ∈ K.
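As a numerical sanity check (our own illustration, not part of the paper), Lemma 2.3 can be tested on a concrete nonconvex set: for K the unit circle in ℝ², the outward radial direction at a point of K is a proximal normal, and the proximal normal inequality holds with α = 1/2.

```python
import numpy as np

# K = unit circle in R^2 (nonconvex). At u = (1, 0) the outward radial
# direction xi = (1, 0) is a proximal normal; check the proximal normal
# inequality <xi, v - u> <= alpha*||v - u||^2 over a fine sampling of K.
u = np.array([1.0, 0.0])
xi = np.array([1.0, 0.0])
alpha = 0.5  # for the unit circle, alpha = 1/2 works

ok = True
for t in np.linspace(0.0, 2*np.pi, 1000):
    v = np.array([np.cos(t), np.sin(t)])   # v ranges over K
    lhs = xi @ (v - u)
    rhs = alpha * np.linalg.norm(v - u)**2
    ok = ok and (lhs <= rhs + 1e-12)
print(ok)  # True
```

Note that the convex characterization of Lemma 2.4 (with the right-hand side 0) would fail here: 〈ξ, v − u〉 = cos t − 1 is only bounded by the quadratic term 1 − cos t, not by 0 with v on the far side of the circle replaced by an arbitrary convex combination.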
Definition 2.5. Let X be a real Banach space and f : X → ℝ be Lipschitzian with constant τ near a given point x ∈ X; that is, for some ε > 0, we have |f(y) − f(z)| ≤ τ||y − z|| for all y, z ∈ B(x; ε), where B(x; ε) denotes the open ball of radius ε centered at x. The generalized directional derivative of f at x in the direction v, denoted by f°(x; v), is defined as follows:
f°(x; v) = lim sup_{y→x, t↓0} (f(y + tv) − f(y))/t,
where y is a vector in X and t is a positive scalar.
The generalized directional derivative defined earlier can be used to develop a notion of tangency that does not require K to be smooth or convex.
Definition 2.6. The tangent cone T_K(x) to K at a point x in K is defined as follows:
T_K(x) = {v ∈ ℋ : d°_K(x; v) = 0}.
Having defined a tangent cone, the likely candidate for the normal cone is the one obtained from T_K(x) by polarity. Accordingly, we define the normal cone of K at x by polarity with T_K(x) as follows:
N_K(x) = {ξ ∈ ℋ : 〈ξ, v〉 ≤ 0 for all v ∈ T_K(x)}.
Definition 2.7. The Clarke normal cone, denoted by N^C_K(x), is given by N^C_K(x) = co̅[N^P_K(x)], where co̅(S) means the closure of the convex hull of S. It is clear that one always has N^P_K(x) ⊆ N^C_K(x). The converse is not true in general. Note that N^C_K(x) is always a closed and convex cone, whereas N^P_K(x) is always convex, but may not be closed (see [27, 28, 46]).
In 1995, Clarke et al. [29] introduced and studied a new class of nonconvex sets called proximally smooth sets; subsequently, Poliquin et al. [46] investigated these sets under the name of uniformly prox-regular sets. They have been successfully used in many nonconvex applications in areas such as optimization, economic models, dynamical systems, differential inclusions, etc. For such applications, see [20–22, 24]. This class is particularly well suited to overcoming the difficulties which arise from the nonconvexity assumptions on K. We take the following characterization, proved in [29], as a definition of this class. We point out that the original definition was given in terms of the differentiability of the distance function (see [29]).
Definition 2.8. For any r ∈ (0, +∞], a subset K_r of ℋ is called normalized uniformly prox-regular (or uniformly r-prox-regular [29]) if every nonzero proximal normal to K_r can be realized by an r-ball.
This means that for all x̄ ∈ K_r and all ξ ∈ N^P_{K_r}(x̄) with ||ξ|| = 1,
〈ξ, x − x̄〉 ≤ (1/2r)||x − x̄||², for all x ∈ K_r.
Obviously, the class of normalized uniformly prox-regular sets is sufficiently large to include the class of convex sets, p-convex sets, C^{1,1} submanifolds (possibly with boundary) of ℋ, the images under a C^{1,1} diffeomorphism of convex sets, and many other nonconvex sets; see [25, 29].
Lemma 2.9. [29] A closed set K ⊆ ℋ is convex if and only if it is proximally smooth of radius r for every r > 0.
If r = +∞, then, in view of Definition 2.8 and Lemma 2.9, the uniform r-prox-regularity of K_r is equivalent to the convexity of K_r, which makes this class of great importance. In the case r = +∞, we set K_r = K.
The following proposition summarizes some important consequences of uniform prox-regularity needed in the sequel. The proofs of these results can be found in [29, 46].
Proposition 2.10. Let r > 0 and K_r be a nonempty closed and uniformly r-prox-regular subset of ℋ. Set U(r) = {u ∈ ℋ : 0 < d_{K_r}(u) < r}. Then the following statements hold:
(a) For all x ∈ U(r), one has P_{K_r}(x) ≠ ∅;
(b) For all r′ ∈ (0, r), P_{K_r} is Lipschitz continuous with constant r/(r − r′) on U(r′) = {u ∈ ℋ : 0 < d_{K_r}(u) < r′};
(c) The proximal normal cone is closed as a set-valued mapping.
As a direct consequence of part (c) of Proposition 2.10, we have N^C_{K_r}(x) = N^P_{K_r}(x). Therefore, we will define N_{K_r}(x) := N^C_{K_r}(x) = N^P_{K_r}(x) for such a class of sets.
In order to make the concept of r-prox-regular sets clear, we state the following concrete example: the union of two disjoint intervals [a, b] and [c, d], with b < c, is r-prox-regular with r = (c − b)/2. The finite union of disjoint intervals is also r-prox-regular, and r depends on the distances between the intervals.
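This example can be checked directly. The sketch below (our own illustration; the interval endpoints are arbitrary choices) computes nearest points in K = [0, 1] ∪ [3, 4], for which r = (3 − 1)/2 = 1: points at distance less than r from K have a unique projection, while at distance exactly r the projection fails to be a singleton.

```python
def nearest_points(u, intervals=((0.0, 1.0), (3.0, 4.0))):
    # Candidate projections: clamp u into each interval, keep the closest ones.
    cands = [min(max(u, a), b) for a, b in intervals]
    dists = [abs(u - c) for c in cands]
    dmin = min(dists)
    return sorted({c for c, d in zip(cands, dists) if abs(d - dmin) < 1e-12})

print(nearest_points(1.5))   # [1.0]       d = 0.5 < r: unique projection
print(nearest_points(2.0))   # [1.0, 3.0]  d = 1 = r: projection not unique
```

The midpoint u = 2 is exactly where the "r-ball" realization of Definition 2.8 breaks down, which is why the prox-regularity radius of this set cannot exceed 1.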
Definition 2.11. Let T, g : ℋ → ℋ be two single-valued operators. Then the operator T is said to be:
(a) monotone if 〈T(x) − T(y), x − y〉 ≥ 0 for all x, y ∈ ℋ;
(b) r-strongly monotone if there exists a constant r > 0 such that 〈T(x) − T(y), x − y〉 ≥ r||x − y||² for all x, y ∈ ℋ;
(c) κ-strongly monotone with respect to g if there exists a constant κ > 0 such that 〈T(x) − T(y), g(x) − g(y)〉 ≥ κ||x − y||² for all x, y ∈ ℋ;
(d) (ξ, ς)-relaxed cocoercive if there exist constants ξ, ς > 0 such that 〈T(x) − T(y), x − y〉 ≥ −ξ||T(x) − T(y)||² + ς||x − y||² for all x, y ∈ ℋ;
(e) γ-Lipschitz continuous if there exists a constant γ > 0 such that ||T(x) − T(y)|| ≤ γ||x − y|| for all x, y ∈ ℋ.
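These constants can be estimated numerically for a concrete operator. In the sketch below (our own illustration, not from the paper), T(x) = Ax with A symmetric positive definite, so T is r-strongly monotone with r = λ_min(A) and γ-Lipschitz continuous with γ = λ_max(A); random sampling of the defining ratios stays within these spectral bounds.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite

def T(x):
    return A @ x

rng = np.random.default_rng(0)
ratios_mono, ratios_lip = [], []
for _ in range(10000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    d = x - y
    if np.linalg.norm(d) < 1e-9:
        continue
    # <T(x)-T(y), x-y> / ||x-y||^2 is bounded below by lambda_min(A)
    ratios_mono.append((T(x) - T(y)) @ d / (d @ d))
    # ||T(x)-T(y)|| / ||x-y|| is bounded above by lambda_max(A)
    ratios_lip.append(np.linalg.norm(T(x) - T(y)) / np.linalg.norm(d))

lam = np.linalg.eigvalsh(A)
print(min(ratios_mono), lam[0])    # empirical lower bound vs lambda_min
print(max(ratios_lip), lam[-1])    # empirical upper bound vs lambda_max
```

For symmetric A both bounds are tight (they are attained along the corresponding eigenvectors), which is why the sampled extremes approach the eigenvalues.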
In the next definitions, we state several generalizations of nonexpansive mappings which have been introduced by various authors in recent years.
Definition 2.12. A nonlinear mapping T : ℋ → ℋ is said to be:
(a) nonexpansive if ||Tx − Ty|| ≤ ||x − y|| for all x, y ∈ ℋ;
(b) L-Lipschitzian if there exists a constant L > 0 such that ||Tx − Ty|| ≤ L||x − y|| for all x, y ∈ ℋ;
(c) generalized Lipschitzian if there exists a constant L > 0 such that ||Tx − Ty|| ≤ L(||x − y|| + 1) for all x, y ∈ ℋ;
(d) generalized (L, M)-Lipschitzian [53] if there exist two constants L, M > 0 such that ||Tx − Ty|| ≤ L(||x − y|| + M) for all x, y ∈ ℋ;
(e) asymptotically nonexpansive [54] if there exists a sequence {k_n} ⊆ [1, ∞) with lim_{n→∞} k_n = 1 such that, for each n ∈ ℕ, ||Tⁿx − Tⁿy|| ≤ k_n||x − y|| for all x, y ∈ ℋ;
(f) pointwise asymptotically nonexpansive [55] if, for each integer n ∈ ℕ, ||Tⁿx − Tⁿy|| ≤ α_n(x)||x − y|| for all x, y ∈ ℋ, where α_n → 1 pointwise;
(g) uniformly L-Lipschitzian if there exists a constant L > 0 such that, for each n ∈ ℕ, ||Tⁿx − Tⁿy|| ≤ L||x − y|| for all x, y ∈ ℋ.
Definition 2.13. [53] A nonlinear mapping T : ℋ → ℋ is said to be:
(a) nearly Lipschitzian with respect to the sequence {a_n} if for each n ∈ ℕ, there exists a constant k_n > 0 such that
||Tⁿx − Tⁿy|| ≤ k_n(||x − y|| + a_n), for all x, y ∈ ℋ,
(2.1)
where {a_n} is a fixed sequence in [0, ∞) with a_n → 0 as n → ∞.
The infimum of the constants k_n in (2.1) is called the nearly Lipschitz constant and is denoted by η(Tⁿ). Notice that
η(Tⁿ) = sup{||Tⁿx − Tⁿy||/(||x − y|| + a_n) : x, y ∈ ℋ, x ≠ y}.
A nearly Lipschitzian mapping T with the sequence {(a_n, η(Tⁿ))} is said to be:
(b) nearly nonexpansive if η(Tⁿ) = 1 for all n ∈ ℕ, that is, ||Tⁿx − Tⁿy|| ≤ ||x − y|| + a_n for all x, y ∈ ℋ;
(c) nearly asymptotically nonexpansive if η(Tⁿ) ≥ 1 for all n ∈ ℕ and lim_{n→∞} η(Tⁿ) = 1, in other words, k_n ≥ 1 for all n ∈ ℕ with lim_{n→∞} k_n = 1;
(d) nearly uniformly L-Lipschitzian if η(Tⁿ) ≤ L for all n ∈ ℕ, in other words, k_n = L for all n ∈ ℕ.
Remark 2.14. It should be pointed out that:
(1) Every nonexpansive mapping is an asymptotically nonexpansive mapping, and every asymptotically nonexpansive mapping is a pointwise asymptotically nonexpansive mapping. Also, the class of Lipschitzian mappings properly includes the class of pointwise asymptotically nonexpansive mappings.
(2) It is obvious that every Lipschitzian mapping is a generalized Lipschitzian mapping. Furthermore, every mapping with a bounded range is a generalized Lipschitzian mapping. It is easy to see that the class of generalized (L, M)-Lipschitzian mappings is more general than the class of generalized Lipschitzian mappings.
(3) Clearly, the class of nearly uniformly L-Lipschitzian mappings properly includes the class of generalized (L, M)-Lipschitzian mappings and that of uniformly L-Lipschitzian mappings. Note that every nearly asymptotically nonexpansive mapping is nearly uniformly L-Lipschitzian.
Now, we present some new examples to investigate relations between these mappings.
Example 2.15. Let ℋ = ℝ and define a mapping T : ℝ → ℝ as follows:
where γ > 1 is a constant real number. Evidently, the mapping T is discontinuous at the points x = 0, γ. Since every Lipschitzian mapping is continuous, it follows that T is not Lipschitzian. For each n ∈ ℕ, take . Then,
Since for all z ∈ ℝ and n ≥ 2, it follows that for all x, y ∈ ℝ and n ≥ 2,
Hence T is a nearly nonexpansive mapping with respect to the sequence .
The following example shows that the nearly uniformly L-Lipschitzian mappings are not necessarily continuous.
Example 2.16. Let ℋ = [0, b], where b ∈ (0, 1] is an arbitrary constant real number, and let the self-mapping T of ℋ be defined as below:
T(x) = γx, if x ∈ [0, b); T(b) = 0,
where γ ∈ (0, 1) is also an arbitrary constant real number. It is plain that the mapping T is discontinuous at the point b. Hence T is not a Lipschitzian mapping. For each n ∈ ℕ, take a_n = γⁿ⁻¹. Then, for all n ∈ ℕ and x, y ∈ [0, b), we have
If x ∈ [0, b) and y = b, then, for each n ∈ ℕ, we have Tⁿx = γⁿx and Tⁿy = 0. Since 0 < |x − y| ≤ b ≤ 1, it follows that, for all n ∈ ℕ,
Hence T is a nearly uniformly γ-Lipschitzian mapping with respect to the sequence {a_n} = {γⁿ⁻¹}.
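The computation in Example 2.16 can be verified numerically. The snippet below assumes the mapping T(x) = γx on [0, b) with T(b) = 0 (our reading of the example, reconstructed from the iterates given in the text) and checks the nearly uniformly γ-Lipschitzian inequality |Tⁿx − Tⁿy| ≤ γ(|x − y| + a_n) with a_n = γⁿ⁻¹ on a grid:

```python
import numpy as np

b, gamma = 0.8, 0.5   # sample values: b in (0, 1], gamma in (0, 1)

def Tn(x, n):
    # n-th iterate of T: T^n x = gamma^n * x on [0, b), and T^n b = 0,
    # because T b = 0 and T 0 = 0.
    return 0.0 if x == b else gamma**n * x

xs = list(np.linspace(0.0, b, 41))   # grid on [0, b], endpoint included
ok = True
for n in range(1, 20):
    a_n = gamma**(n - 1)
    for x in xs:
        for y in xs:
            lhs = abs(Tn(x, n) - Tn(y, n))
            ok = ok and (lhs <= gamma * (abs(x - y) + a_n) + 1e-12)
print(ok)  # True
```

The discontinuity at b is what defeats ordinary Lipschitz continuity; the additive slack a_n, which tends to 0, absorbs it for every fixed n.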
Obviously, every nearly nonexpansive mapping is a nearly uniformly Lipschitzian mapping. In the following example, we show that the class of nearly uniformly Lipschitzian mappings properly includes the class of nearly nonexpansive mappings.
Example 2.17. Let ℋ = ℝ and let the self-mapping T of ℋ be defined as follows:
Evidently, the mapping T is discontinuous at the points x = 0, 1, 2. Hence T is not a Lipschitzian mapping. Take, for each n ∈ ℕ, . Then T is not a nearly nonexpansive mapping with respect to the sequence , because taking x = 1 and , we have Tx = 2, and
However,
and for all n ≥ 2,
since for all z ∈ ℝ and n ≥ 2. Hence, for each L ≥ 4, T is a nearly uniformly L-Lipschitzian mapping with respect to the sequence .
It is clear that every uniformly L-Lipschitzian mapping is a nearly uniformly L-Lipschitzian mapping. In the next example, we show that the class of nearly uniformly L-Lipschitzian mappings properly includes the class of uniformly L-Lipschitzian mappings.
Example 2.18. Let ℋ = ℝ and let the self-mapping T of ℋ be defined as in Example 2.17. Then T is not a uniformly 4-Lipschitzian mapping. In fact, if x = 1 and , then we have |Tx − Ty| > 4|x − y| because . But, in view of Example 2.17, T is a nearly uniformly 4-Lipschitzian mapping.
The following example shows that the class of generalized Lipschitzian mappings properly includes the class of Lipschitzian mappings and that of mappings with bounded range.
Example 2.19. [26] Let ℋ = ℝ and let a mapping T : ℝ → ℝ be defined by
Then T is a generalized Lipschitzian mapping which is not Lipschitzian and whose range is not bounded.
3 Extended general regularized nonconvex variational inequality
In this section, we introduce a new problem of extended general nonlinear regularized nonconvex variational inequality, together with some of its special cases, in Hilbert spaces and investigate their relations.
Let T, f, g : ℋ → ℋ be three nonlinear single-valued operators. We consider the problem of finding u ∈ ℋ such that g(u) ∈ K_r and
〈ρT(u) + g(u) − f(u), f(v) − g(u)〉 + (1/2r)||f(v) − g(u)||² ≥ 0, for all v ∈ ℋ with f(v) ∈ K_r,
(3.1)
where ρ > 0 is a constant. The problem (3.1) is called the extended general nonlinear regularized nonconvex variational inequality involving three different nonlinear operators (EGNRNVID).
Proposition 3.1. If K_r is a uniformly prox-regular set, then the problem (3.1) is equivalent to that of finding u ∈ ℋ such that g(u) ∈ K_r and
0 ∈ ρT(u) + g(u) − f(u) + N^P_{K_r}(g(u)),
(3.2)
where N^P_{K_r}(s) denotes the P-normal cone of K_r at s in the sense of nonconvex analysis.
Proof. Let u ∈ ℋ with g(u) ∈ K_r be a solution of the problem (3.1). If ρT(u) + g(u) − f(u) = 0, then, because the zero vector always belongs to any normal cone, we have 0 ∈ ρT(u) + g(u) − f(u) + N^P_{K_r}(g(u)). If ρT(u) + g(u) − f(u) ≠ 0, then, for all v ∈ ℋ with f(v) ∈ K_r, one has
〈−(ρT(u) + g(u) − f(u)), f(v) − g(u)〉 ≤ (1/2r)||f(v) − g(u)||².
Now, by using Lemma 2.3, we conclude that −(ρT(u) + g(u) − f(u)) ∈ N^P_{K_r}(g(u)) and so
0 ∈ ρT(u) + g(u) − f(u) + N^P_{K_r}(g(u)).
Conversely, if u ∈ ℋ with g(u) ∈ K_r is a solution of the problem (3.2), then Definition 2.8 guarantees that u is a solution of the problem (3.1). This completes the proof.
The problem (3.2) is called the extended general nonconvex variational inclusion associated with the EGNRNVID problem.
Some special cases of the problem (3.1) are as follows:
(1) If g ≡ I (the identity operator), then the problem (3.1) collapses to the following problem: Find u ∈ K_r such that
〈ρT(u) + u − f(u), f(v) − u〉 + (1/2r)||f(v) − u||² ≥ 0, for all v ∈ ℋ with f(v) ∈ K_r,
(3.3)
which is a new problem of general nonlinear regularized nonconvex variational inequality involving two nonlinear operators (GNRNVID).
(2) If f = g, then the problem (3.1) reduces to the following problem: Find u ∈ ℋ such that g(u) ∈ K_r and
〈ρT(u), g(v) − g(u)〉 + (1/2r)||g(v) − g(u)||² ≥ 0, for all v ∈ ℋ with g(v) ∈ K_r,
(3.4)
which is also a new problem of general nonlinear regularized nonconvex variational inequality involving two nonlinear operators (GNRNVID).
(3) If g ≡ I, then the problem (3.4) collapses to the following problem: Find u ∈ K_r such that
〈ρT(u), v − u〉 + (1/2r)||v − u||² ≥ 0, for all v ∈ K_r,
(3.5)
which is a new problem of nonlinear regularized nonconvex variational inequality (NRNVI).
(4) If r = ∞, i.e., K_r = K, a convex set in ℋ, then the problem (3.1) changes into that of finding u ∈ ℋ such that g(u) ∈ K and
〈ρT(u) + g(u) − f(u), f(v) − g(u)〉 ≥ 0, for all v ∈ ℋ with f(v) ∈ K.
(3.6)
The inequality of type (3.6) was introduced and studied by Noor [33, 39].
(5) If r = ∞, then the problem (3.3) is equivalent to the problem: Find u ∈ K such that
〈ρT(u) + u − f(u), f(v) − u〉 ≥ 0, for all v ∈ ℋ with f(v) ∈ K.
(3.7)
The problem (3.7) was introduced and studied by Noor [34].
(6) If r = ∞, then the problem (3.4) reduces to the following problem: Find u ∈ ℋ such that g(u) ∈ K and
〈T(u), g(v) − g(u)〉 ≥ 0, for all v ∈ ℋ with g(v) ∈ K,
(3.8)
which is known as the general nonlinear variational inequality introduced and studied by Noor [37] in 1988.
(7) If r = ∞, then the problem (3.5) changes into the problem: Find u ∈ K such that
〈T(u), v − u〉 ≥ 0, for all v ∈ K.
(3.9)
The inequality of type (3.9) is the classical variational inequality, which was introduced and studied by Stampacchia [1] in 1964.
Now, we prove the existence and uniqueness theorem for a solution of the problem of extended general nonlinear regularized nonconvex variational inequality (3.1). To this end, we need the following lemma, in which, by using the projection operator technique, we verify the equivalence between the problem (3.1) and a fixed point problem.
Lemma 3.2. Let T, f, g and ρ > 0 be the same as in the problem (3.1). Then u ∈ ℋ with g(u) ∈ K_r is a solution of the problem (3.1) if and only if
g(u) = P_{K_r}(f(u) − ρT(u)),
(3.10)
where P_{K_r} is the projection of ℋ onto K_r.
Proof. Let u ∈ ℋ with g(u) ∈ K_r be a solution of the problem (3.1). Then, by using Proposition 3.1, we have
0 ∈ ρT(u) + g(u) − f(u) + N^P_{K_r}(g(u))
⇔ f(u) − ρT(u) ∈ (I + N^P_{K_r})(g(u))
⇔ g(u) = P_{K_r}(f(u) − ρT(u)),
where I is the identity operator and we have used the well-known fact that P_{K_r} = (I + N^P_{K_r})⁻¹.
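The fixed-point characterization can be illustrated numerically in the convex case r = +∞. In the sketch below (a hypothetical one-dimensional instance with f = g = I, K = [0, 1] and T(u) = 2u − 1, all our own choices, not from the paper), iterating u ↦ u − g(u) + P_K(f(u) − ρT(u)) produces a point satisfying both the fixed-point relation and the variational inequality:

```python
import numpy as np

def T(u): return 2.0*u - 1.0        # strongly monotone, Lipschitz on R
def f(u): return u
def g(u): return u
def P_K(z): return min(max(z, 0.0), 1.0)   # projection onto K = [0, 1]

rho = 0.2
u = 0.9
for _ in range(200):
    # fixed-point iteration u <- u - g(u) + P_K(f(u) - rho*T(u))
    u = u - g(u) + P_K(f(u) - rho*T(u))

# Verify the fixed-point relation of Lemma 3.2 and the inequality itself
# (with f = g = I the problem reduces to the classical VI (3.9)).
fp_residual = abs(g(u) - P_K(f(u) - rho*T(u)))
vi_holds = all(T(u)*(v - u) >= -1e-9 for v in np.linspace(0.0, 1.0, 101))
print(u, fp_residual, vi_holds)
```

Here the iteration converges to u = 0.5, the zero of T in the interior of K, and both checks pass.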
Theorem 3.3. Let T, f, g and ρ be the same as in the problem (3.1) such that:
(a) T is κ-strongly monotone with respect to f and σ-Lipschitz continuous;
(b) g is τ-strongly monotone and ι-Lipschitz continuous;
(c) f is ϖ-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
(3.11)
where r′ ∈ (0, r), then the problem (3.1) admits a unique solution.
Proof. Define the mapping ϕ : ℋ → ℋ by
ϕ(x) = x − g(x) + P_{K_r}(f(x) − ρT(x)), for all x ∈ ℋ.
(3.12)
Now, we establish that ϕ is a contraction mapping. Let x, y ∈ ℋ be given. It follows from Proposition 2.10 that
||ϕ(x) − ϕ(y)|| ≤ ||x − y − (g(x) − g(y))|| + (r/(r − r′))||f(x) − f(y) − ρ(T(x) − T(y))||.
(3.13)
By using the τ-strong monotonicity and ι-Lipschitz continuity of g, we have
||x − y − (g(x) − g(y))||² ≤ (1 − 2τ + ι²)||x − y||².
(3.14)
Since T is κ-strongly monotone with respect to f and σ-Lipschitz continuous, and f is ϖ-Lipschitz continuous, we obtain
||f(x) − f(y) − ρ(T(x) − T(y))||² ≤ (ϖ² − 2ρκ + ρ²σ²)||x − y||².
(3.15)
Substituting (3.14) and (3.15) in (3.13), we obtain
||ϕ(x) − ϕ(y)|| ≤ γ||x − y||,
(3.16)
where
γ = √(1 − 2τ + ι²) + (r/(r − r′))√(ϖ² − 2ρκ + ρ²σ²).
(3.17)
In view of the condition (3.11), we note that 0 ≤ γ < 1, and so from (3.16) we conclude that the mapping ϕ is a contraction. According to the Banach fixed point theorem, ϕ has a unique fixed point in ℋ; that is, there exists a unique point u ∈ ℋ with g(u) ∈ K_r such that ϕ(u) = u. It follows from (3.12) that g(u) = P_{K_r}(f(u) − ρT(u)). Now, Lemma 3.2 guarantees that u ∈ ℋ with g(u) ∈ K_r is a solution of the problem (3.1). This completes the proof.
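The contraction argument can be observed numerically. The following hypothetical one-dimensional instance (convex case r = +∞ with f = g = I, so ϕ(u) = P_K(u − ρT(u)); the operator and all parameters are our own choices, not from the paper) estimates the contraction factor empirically and iterates ϕ to its unique fixed point:

```python
import numpy as np

def T(u): return 3.0*u - 2.0             # strongly monotone, Lipschitz on R
def P_K(z): return min(max(z, 0.0), 2.0)  # projection onto K = [0, 2]

rho = 0.1
def phi(u): return P_K(u - rho*T(u))      # here phi(u) = P_K(0.7u + 0.2)

# Empirically estimate the contraction factor sup |phi(x)-phi(y)|/|x-y|.
rng = np.random.default_rng(1)
gammas = []
for _ in range(10000):
    x, y = rng.uniform(-1, 3, size=2)
    if abs(x - y) > 1e-9:
        gammas.append(abs(phi(x) - phi(y)) / abs(x - y))
gamma = max(gammas)

u = 0.0
for _ in range(300):
    u = phi(u)
print(gamma, u)   # gamma < 1; u near the unique solution 2/3
```

Because the projection onto a convex set is nonexpansive, the sampled factor stays below |1 − ρ·3| = 0.7 < 1, and the Banach iteration converges geometrically to the fixed point u = 2/3.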
As in the proof of Theorem 3.3, one can prove the existence and uniqueness theorems for solutions of the problems (3.3)-(3.5); we omit the proofs.
Theorem 3.4. Assume that T, f and ρ are the same as in the problem (3.3) such that:
(a) T is κ-strongly monotone with respect to f and σ-Lipschitz continuous;
(b) f is ϖ-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
(3.18)
where r′ ∈ (0, r), then the problem (3.3) admits a unique solution.
Theorem 3.5. Let T, g and ρ be the same as in the problem (3.4) such that:
(a) T is κ-strongly monotone with respect to g and σ-Lipschitz continuous;
(b) g is τ-strongly monotone and ι-Lipschitz continuous.
If the constant ρ > 0 satisfies the following condition:
(3.19)
where r′ ∈ (0, r), then the problem (3.4) admits a unique solution.
Theorem 3.6. Suppose that T and ρ are the same as in the problem (3.5) such that T is κ-strongly monotone and σ-Lipschitz continuous. If the constant ρ > 0 satisfies the following condition:
(3.20)
where r′ ∈ (0, r), then the problem (3.5) admits a unique solution.
4 Nearly uniformly Lipschitzian mappings and finite step projection iterative algorithms
In this section, applying a nearly uniformly Lipschitzian mapping S and using the fixed point formulation (3.10), we suggest and analyze some new p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequality (3.1).
Let S : K_r → K_r be a nearly uniformly Lipschitzian mapping. We denote the set of all fixed points of S by Fix(S) and the set of all solutions of the problem (3.1) by EGNRNVID(K_r, T, f, g). We now characterize the problem. If u ∈ Fix(S) ∩ EGNRNVID(K_r, T, f, g), then it follows from Lemma 3.2 that, for each n ≥ 0,
u = Sⁿu = Sⁿ[u − g(u) + P_{K_r}(f(u) − ρT(u))].
(4.1)
The fixed point formulation (4.1) enables us to define the following p-step projection iterative algorithms with mixed errors for finding a common element of two different sets of solutions of the fixed points of the nearly uniformly Lipschitzian mappings and the extended general nonlinear regularized nonconvex variational inequalities (3.1).
Algorithm 4.1. Let T, f, g and ρ be the same as in the problem (3.1). For an arbitrarily chosen initial point x₀ ∈ K_r, compute the iterative sequence {x_n} by the iterative process
where
S : K_r → K_r is a nearly uniformly Lipschitzian mapping, the 2p scalar sequences lie in the interval [0, 1], and the 3p sequences in ℋ, introduced to take into account a possible inexact computation of the projection point, satisfy the following conditions: p of them are bounded sequences in ℋ and the remaining 2p sequences in ℋ are such that
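Since the displayed scheme is not reproduced above, the following is only a simplified sketch of the flavor of such algorithms: a two-step (p = 2) projection iteration with no error terms, S the identity mapping, f = g = I and a convex K, i.e. an Ishikawa-type scheme (all of these simplifying choices are ours, not the paper's):

```python
def T(u): return 2.0*u - 1.0              # strongly monotone on R
def P_K(z): return min(max(z, 0.0), 1.0)  # projection onto K = [0, 1]

rho = 0.2
x = 0.95
for n in range(1, 400):
    a_n, b_n = 0.9, 0.5                   # step parameters in [0, 1]
    # inner step, then outer step: each is a convex combination of the
    # current point and a projected gradient-type update
    y = (1 - b_n)*x + b_n*P_K(x - rho*T(x))
    x = (1 - a_n)*x + a_n*P_K(y - rho*T(y))

print(x)  # converges to the solution u* = 0.5 of <T(u), v - u> >= 0 on K
```

The full Algorithm 4.1 additionally composes each step with Sⁿ and injects the error sequences; the convergence proof (Theorem 4.8 below) controls those extra terms with Lemma 4.7.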
Algorithm 4.2. Assume that T, f and ρ are the same as in the problem (3.3). For an arbitrarily chosen initial point x₀ ∈ K_r, compute the iterative sequence {x_n} by the iterative process
where S and the sequences are the same as in Algorithm 4.1.
Algorithm 4.3. Let T, g and ρ be the same as in the problem (3.4). For an arbitrarily chosen initial point x₀ ∈ K_r, compute the iterative sequence {x_n} as follows:
where
and S and the sequences are the same as in Algorithm 4.1.
Algorithm 4.4. Let T and ρ be the same as in the problem (3.5). For an arbitrarily chosen initial point x₀ ∈ K_r, compute the iterative sequence {x_n} by using
where S and the sequences are the same as in Algorithm 4.1.
Remark 4.5. It should be pointed out that:
(1) If e_{n,i} = r_{n,i} = 0 for all n ≥ 0 and i = 1, 2, ..., p, then Algorithms 4.1-4.4 change into perturbed iterative processes with mean errors.
(2) When e_{n,i} = l_{n,i} = r_{n,i} = 0 for all n ≥ 0 and i = 1, 2, ..., p, Algorithms 4.1-4.4 reduce to perturbed iterative processes without errors.
Remark 4.6. Algorithms 2.1-2.6 in [38] and Algorithm 2.1 in [44] are special cases of Algorithms 4.1-4.4. In brief, for a suitable and appropriate choice of the operators T, f, g, the constant ρ, and the sequences, one can obtain a number of new and previously known iterative schemes for solving the problems (3.1) and (3.3)-(3.5) and related problems. This clearly shows that Algorithms 4.1-4.4 are quite general and unifying.
Now, we discuss the convergence analysis of the suggested iterative Algorithms 4.1-4.4 under some suitable conditions. To this end, we need the following lemma:
Lemma 4.7. Let {a_n}, {b_n} and {c_n} be three nonnegative real sequences satisfying the following condition: there exists a natural number n₀ such that
a_{n+1} ≤ (1 − t_n)a_n + b_n t_n + c_n, for all n ≥ n₀,
where t_n ∈ [0, 1], Σ_{n=0}^∞ t_n = ∞, lim_{n→∞} b_n = 0, Σ_{n=0}^∞ c_n < ∞. Then lim_{n→∞} a_n = 0.
Proof. The proof follows directly from Lemma 2 in Liu [32].
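Lemma 4.7 can be illustrated numerically (our own example, not from the paper): taking t_n = b_n = 1/(n + 1) and c_n = 1/(n + 1)², which satisfy the hypotheses (Σt_n = ∞, b_n → 0, Σc_n < ∞), and running the recurrence with equality drives a_n toward 0:

```python
# Run a_{n+1} = (1 - t_n)*a_n + b_n*t_n + c_n with admissible sequences.
a = 1.0
for n in range(1, 200000):
    t_n = 1.0 / (n + 1)       # sum diverges
    b_n = 1.0 / (n + 1)       # tends to 0
    c_n = 1.0 / (n + 1)**2    # summable
    a = (1 - t_n)*a + b_n*t_n + c_n

print(a)  # close to 0 (decays roughly like log(n)/n for these choices)
```

In the convergence proofs below, a_n plays the role of ||x_n − x*||, t_n comes from the step-size sequences, b_n collects the vanishing perturbations, and c_n the summable error terms.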
Theorem 4.8. Let T, f, g and ρ be the same as in Theorem 3.3 such that the conditions (a)-(c) and (3.11) in Theorem 3.3 hold. Assume that S : K_r → K_r is a nearly uniformly L-Lipschitzian mapping with the sequence {a_n} such that Fix(S) ∩ EGNRNVID(K_r, T, f, g) ≠ ∅. Further, let Lγ < 1, where γ is the same as in (3.17). If there exists a constant α > 0 such that α_{n,p} ≥ α for each n ≥ 0, then the iterative sequence {x_n} generated by Algorithm 4.1 converges strongly to the only element of Fix(S) ∩ EGNRNVID(K_r, T, f, g).
Proof. According to Theorem 3.3, the problem (3.1) has a unique solution x* ∈ ℋ with g(x*) ∈ K_r. Hence, in view of Lemma 3.2, g(x*) = P_{K_r}(f(x*) − ρT(x*)). Since EGNRNVID(K_r, T, f, g) is a singleton set, it follows from Fix(S) ∩ EGNRNVID(K_r, T, f, g) ≠ ∅ that x* ∈ Fix(S). Accordingly, for each n ≥ 0 and i ∈ {1, 2, ..., p}, we can write
where the sequences are the same as in Algorithm 4.1. Let Γ = sup_{n≥0}{||l_{n,i} − x*|| : i = 1, 2, ..., p}. It follows from (4.2), (4.4), Proposition 2.10 and the assumptions that
Since T is κ-strongly monotone with respect to f and σ-Lipschitz continuous, and g is τ-strongly monotone and ι-Lipschitz continuous, in a similar way to the proofs of (3.14) and (3.15), we can prove that
and
Substituting (4.6) and (4.7) in (4.5), we obtain
As in the proofs of (4.5)-(4.8), we can establish that, for each i ∈ {1, 2, ..., p − 2},
and
By using (4.9) and (4.10), we get
As in the proof of (4.11), applying (4.9) and (4.11), we have
Continuing this procedure in (4.10)-(4.12), we obtain
It follows from (4.8) and (4.13) that
Since Lγ < 1 and lim_{n→∞} b_n = 0, in view of (4.3), it is clear that all the conditions of Lemma 4.7 are satisfied, and so Lemma 4.7 and (4.14) guarantee that x_n → x* as n → ∞. Thus the sequence {x_n} generated by Algorithm 4.1 converges strongly to the only element of Fix(S) ∩ EGNRNVID(K_r, T, f, g). This completes the proof.
As in the proof of Theorem 4.8, one can prove the convergence of the iterative sequences generated by Algorithms 4.2-4.4; we omit the proofs.
Theorem 4.9. Assume that T, f and ρ are the same as in Theorem 3.4 such that the conditions (a), (b) and (3.18) in Theorem 3.4 hold. Let S : K_r → K_r be a nearly uniformly L-Lipschitzian mapping with the sequence {a_n} such that Fix(S) ∩ GNRNVID(K_r, T, f) ≠ ∅, where GNRNVID(K_r, T, f) is the set of solutions of the problem (3.3). Further, let Lθ < 1, where θ = (r/(r − r′))√(ϖ² − 2ρκ + ρ²σ²). If there exists a constant α > 0 such that α_{n,p} ≥ α for each n ≥ 0, then the iterative sequence {x_n} generated by Algorithm 4.2 converges strongly to the only element of Fix(S) ∩ GNRNVID(K_r, T, f).
Theorem 4.10. Suppose that T, g and ρ are the same as in Theorem 3.5 such that the conditions (a), (b) and (3.19) in Theorem 3.5 hold. Let S : K_r → K_r be a nearly uniformly L-Lipschitzian mapping with the sequence {a_n} such that Fix(S) ∩ GNRNVID(K_r, T, g) ≠ ∅, where GNRNVID(K_r, T, g) is the set of solutions of the problem (3.4). Further, let Lϑ < 1, where
ϑ = √(1 − 2τ + ι²) + (r/(r − r′))√(ι² − 2ρκ + ρ²σ²).
If there exists a constant α > 0 such that α_{n,p} ≥ α for each n ≥ 0, then the iterative sequence {x_n} generated by Algorithm 4.3 converges strongly to the only element of Fix(S) ∩ GNRNVID(K_r, T, g).
Theorem 4.11. Let T and ρ be the same as in Theorem 3.6 such that the condition (3.20) in Theorem 3.6 holds. Assume that S : K_r → K_r is a nearly uniformly L-Lipschitzian mapping with the sequence {a_n} such that Fix(S) ∩ NRNVI(K_r, T) ≠ ∅, where NRNVI(K_r, T) is the set of solutions of the problem (3.5). Moreover, let Lη < 1, where η = (r/(r − r′))√(1 − 2ρκ + ρ²σ²). Then the iterative sequence {x_n} generated by Algorithm 4.4 converges strongly to the only element of Fix(S) ∩ NRNVI(K_r, T).
5 Extended general nonconvex Wiener-Hopf equations
In this section, we introduce a new class of extended general nonconvex Wiener-Hopf equations together with some new special cases, and, by using the projection method, we establish that this class is equivalent to the class of extended general nonlinear regularized nonconvex variational inequalities (3.1).
Let T, f, g and ρ be the same as in the problem (3.1) and suppose that the inverse of the operator g exists. Associated with the problem (3.1), we consider the problem of finding z ∈ ℋ such that
T g⁻¹ P_{K_r}(z) + ρ⁻¹ Q_{K_r}(z) = 0,
(5.1)
where Q_{K_r} = I − f ∘ g⁻¹ ∘ P_{K_r}, with I the identity operator and P_{K_r} the projection operator onto K_r.
The problem (5.1) is called the extended general nonconvex Wiener-Hopf equation (EGNWHE) associated with the problem of extended general nonlinear regularized nonconvex variational inequality (3.1). Next, we denote by EGNWHE(K r , T, f, g) the set of the solutions of the extended general nonconvex Wiener-Hopf equation (5.1).
Now, we state some special cases of the problem (5.1) as follows:
(1) If g ≡ I, then the problem (5.1) is equivalent to the following problem: Find such that
(5.2)
where and is called the general nonconvex Wiener-Hopf equation (GNWHE) associated with the problem of general nonlinear regularized nonconvex variational inequality (3.3). We denote by GNWHE(K r , T, f ) the set of the solutions of the general nonconvex Wiener-Hopf equation (5.2).
(2) If f = g, then the problem (5.1) changes into the following problem: Find such that
(5.3)
where and is also called the general nonconvex Wiener-Hopf equation (GNWHE) associated with the problem of general nonlinear regularized nonconvex variational inequality (3.4). We denote by GNWHE (K r , T, g) the set of the solutions of the general nonconvex Wiener-Hopf equation (5.3).
(3) If f = g ≡ I, then the problem (5.1) collapses to the following problem: Find such that
(5.4)
where is the same as in Eq. (5.3). The equation of the type (5.4) is called the nonconvex Wiener-Hopf equation (NWHE) associated with the problem of nonlinear regularized nonconvex variational inequality (3.4).
We denote by NWHE (K r , T ) the set of the solutions of the nonconvex Wiener-Hopf equation (5.4).
(4) If r = ∞, that is, K r = K, then the problem (5.1) reduces to the following problem: Find such that
(5.5)
where Q K = I - f g -1 P K . The equations of the type (5.5) were introduced and studied by Noor [39].
(5) If r = ∞, then the problem (5.2) is equivalent to the following problem: Find such that
(5.6)
where Q K = I - P K . The problem (5.6) was introduced and studied by Noor [35].
(6) If r = ∞, then the problem (5.3) changes into the following problem: Find such that
(5.7)
where Q K = I - P K . The equations of the type (5.7) were introduced and studied by Noor [40].
(7) If r = ∞, then the problem (5.4) reduces to the following problem: Find such that
(5.8)
where Q K is the same as in (5.7). The equation (5.8) is the original Wiener-Hopf equation mainly due to Shi [50].
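For the reader's orientation, the classical equation (5.8) can be written out explicitly; consistent with Shi [50], it asks for z ∈ H such that

```latex
T P_K z + \rho^{-1} Q_K z = 0, \qquad Q_K = I - P_K,
```

and u = P_K z then solves the variational inequality ⟨Tu, v − u⟩ ≥ 0 for all v ∈ K, with z = u − ρTu. (This display is our reconstruction of the standard form, not a reproduction of the elided equation.)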
Remark 5.1. It has been shown that the Wiener-Hopf equations have played an important role in developing several numerical techniques for solving variational inequalities and related optimization problems (see, for example, [30, 31, 35, 39, 50] and references therein).
The following lemma shows that the extended general nonlinear regularized nonconvex variational inequality (3.1) and the extended general nonconvex Wiener-Hopf equation (5.1) are equivalent.
Lemma 5.2. Let T, f, g and ρ be the same as in the problem (3.1) and suppose that the inverse of the operator g exists. Then with g(u) ∈ K r is a solution of the problem (3.1) if and only if the extended general nonconvex Wiener-Hopf equation (5.1) has a solution satisfying the following:
Proof. Let with g(u) ∈ K r be a solution of the problem (3.1). Then, from Lemma 3.2, it follows that
Taking z = f (u) - ρT (u) in (5.9), we have , which leads to
Applying (5.10) and the fact that z = f (u) - ρT (u), we have
Evidently, the above equality is equivalent to the following:
where is the same as in (5.1). Now, (5.11) guarantees that is a solution of the extended general nonconvex Wiener-Hopf equation (5.1).
Conversely, if is a solution of the problem (5.1) satisfying the following:
then it follows from Lemma 3.2 that with g (u) ∈ K r is a solution of the problem (3.1). This completes the proof.
In a similar way to the proof of Lemma 5.2, one can prove the following statements.
Lemma 5.3. Let T, f and ρ be the same as in the problem (3.3). Then u ∈ K r is a solution of the problem (3.3) if and only if the general nonconvex Wiener-Hopf equation (5.2) has a solution satisfying
Lemma 5.4. Suppose that T, g and ρ are the same as in the problem (3.4) and that the inverse of the operator g exists. Then with g(u) ∈ K r is a solution of the problem (3.4) if and only if the general nonconvex Wiener-Hopf equation (5.3) has a solution satisfying the following:
Lemma 5.5. Assume that T and ρ are the same as in the problem (3.5). Then u ∈ K r is a solution of the problem (3.5) if and only if the nonconvex Wiener-Hopf equation (5.4) has a solution satisfying the following:
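In the convex special case (r = ∞, f = g ≡ I), Lemma 5.5 reduces to Shi's classical equivalence, and it can be checked numerically. The following Python sketch is illustrative only: the operator T, the box K and the parameter ρ are our own choices, and the Wiener-Hopf residual is formed from the standard equation T P_K z + ρ⁻¹(z − P_K z) = 0, which we assume matches the elided display (5.4).

```python
import numpy as np

# Illustrative convex special case (r = infinity, f = g = I) of Lemma 5.5:
# u solves the VI  <Tu, v - u> >= 0 for all v in K  iff  z = u - rho*T(u)
# solves the Wiener-Hopf equation  T P_K z + rho^{-1} Q_K z = 0,  Q_K = I - P_K.
A = np.array([[4.0, 1.0], [1.0, 3.0]])      # positive definite => T strongly monotone
b = np.array([-1.0, -2.0])
T = lambda u: A @ u + b
P_K = lambda z: np.clip(z, 0.0, 1.0)        # projection onto the box K = [0, 1]^2

rho = 0.2
u = np.zeros(2)
for _ in range(500):                        # fixed-point iteration u = P_K(u - rho*T(u))
    u = P_K(u - rho * T(u))

z = u - rho * T(u)                          # candidate Wiener-Hopf solution
residual = T(P_K(z)) + (z - P_K(z)) / rho   # vanishes at the solution
print(u, np.linalg.norm(residual))
```

At the computed fixed point the residual is numerically zero, confirming the equivalence u = P_K z, z = u − ρT(u) in this simple setting.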
6 Some new perturbed p-step projection iterative methods
In this section, by using the problems (5.1)-(5.4) and Lemmas 5.2-5.5, we obtain fixed point formulations for constructing some new perturbed p-step projection iterative algorithms with mixed errors for solving the problems (3.1) and (3.3)-(3.5).
(I) By using (5.1) and Lemma 5.2, we have
This fixed point formulation enables us to define the following p-step projection iterative algorithm with mixed errors for solving the problem (3.1).
Algorithm 6.1. Let T, f, g and ρ be the same as in the problem (3.1) such that g is an onto operator. For an arbitrarily chosen initial point , compute the iterative sequence by
where S, , , , , , are the same as in Algorithm 4.1.
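Since the displayed scheme of Algorithm 6.1 is not reproduced in this excerpt, the sketch below shows only the simplest two-step (p = 2) shape of such a projection method in the convex case, with S taken as the identity and all error terms set to zero; the operator T, the set K and the parameters ρ and a_n are hypothetical choices of ours.

```python
import numpy as np

# Hypothetical two-step (p = 2) projection scheme, convex case (r = infinity),
# with S = I and zero error terms; Algorithm 6.1 itself is more general
# (operators f, g, a nearly uniformly Lipschitzian S, and mixed errors).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
T = lambda u: A @ u + b
P_K = lambda z: np.clip(z, 0.0, 1.0)        # projection onto K = [0, 1]^2
rho, a_n = 0.2, 0.5                          # step size and relaxation parameter

u = np.array([1.0, 1.0])
for _ in range(1000):
    y = P_K(u - rho * T(u))                            # inner projection step
    u = (1.0 - a_n) * u + a_n * P_K(y - rho * T(y))    # relaxed outer step
print(u)                                     # approximates the unique VI solution
```

The relaxed outer step is what distinguishes a multi-step scheme from the plain fixed-point iteration; with a strongly monotone T and a suitable ρ, both steps are contractions, so the composite iteration converges.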
(II) From (5.1) and Lemma 5.2, it follows that
By using this fixed point formulation, we can construct the following p-step projection iterative algorithm with mixed errors for solving the problem (3.1).
Algorithm 6.2. Assume that T, f, g and ρ are the same as in Algorithm 6.1. For an arbitrarily chosen initial point , compute the iterative sequence as follows:
where
and S, , , , , are the same as in Algorithm 4.1.
(III) As in (I), by using (5.2) and Lemma 5.3, we have
This fixed point formulation allows us to construct the following p-step projection iterative algorithm with mixed errors for solving the problem (3.3).
Algorithm 6.3. Let T, f and ρ be the same as in the problem (3.3). For an arbitrarily chosen initial point , compute the iterative sequence by
where S, , , , , are the same as in Algorithm 4.1.
(IV) In a similar way, from (5.3) and Lemma 5.4, it follows that
By using the above fixed point formulation, we can define the following p-step projection iterative algorithm with mixed errors for solving the problem (3.4).
Algorithm 6.4. Let T, g and ρ be the same as in the problem (3.4) such that g is an onto operator. For an arbitrarily chosen initial point , compute the iterative sequence by
where S, , , , , are the same as in Algorithm 4.1.
(V) Similarly, by using (5.4) and Lemma 5.5, we have
This fixed point formulation enables us to construct the following p-step projection iterative algorithm with mixed errors for solving the problem (3.5).
Algorithm 6.5. Let T and ρ be the same as in the problem (3.5). For an arbitrarily chosen initial point , compute the iterative sequence by the iterative scheme
where S, , , , , are the same as in Algorithm 4.1.
Remark 6.6. Similarly to Remark 4.5, for suitable and appropriate choices of the sequences , and , Algorithms 6.1-6.5 reduce to algorithms with mean errors and to algorithms without errors.
Remark 6.7. Algorithm 3.1 in [42] is a special case of Algorithms 6.1 and 6.4. Algorithm 3.2 in [42] is a special case of Algorithm 6.2. Also, Algorithms 3.1-3.3 in [44] and Algorithms 2.1-2.3 in [47] are special cases of Algorithms 6.1, 6.2 and 6.4.
Now, we discuss the convergence analysis of iterative sequences generated by perturbed projection iterative Algorithms 6.1-6.5.
Theorem 6.8. Let T, f, g and ρ be the same as in the problem (3.1) and suppose that all the conditions of Theorem 3.3 hold. Assume that S : K r → K r is a nearly uniformly L-Lipschitzian mapping with the sequence such that, for any u ∈ EGNRNVID (K r , T, f, g), g(u) ∈ Fix(S). Further, assume that Lγ < 1, where γ is the same as in (3.17). If there exists a constant α > 0 such that for each n ≥ 0, then the iterative sequence generated by Algorithm 6.1 converges strongly to the only element of EGNWHE(K r , T, f, g).
Proof. Theorem 3.3 guarantees the existence of a unique solution with g(u*) ∈ K r for the problem (3.1). Hence, in view of Lemma 5.2, there exists a unique point satisfying the following:
Since g(u*) ∈ Fix(S), it follows from (6.2) that, for each n ≥ 0,
Let Γ = sup n≥0 {||l n,i - z||, ||z - u*||: i = 1, 2,..., p}. By using (6.1), (6.2) and the assumptions, we have
In a similar way to the proof of (6.4), for each i ∈ {1, 2,..., p - 2}, we can obtain
and
Now, we estimate ||u n - u*||. Applying (6.1), (6.3) and Proposition 2.10, we find that
which leads to
By using (6.6) and (6.7), we conclude that
where . In view of the condition (3.11), we have ϑ < 1. From r' ∈ (0, r) and (6.8), we have
Since and r' ∈ (0, r), we deduce that
By using (6.10), the inequality (6.5), for each i = 1, 2,..., p - 2, can be written as follows:
Thus it follows from (6.9) and (6.11) that
Similarly, by using (6.11) and (6.12), we obtain
Continuing the same procedures, we get
Thus, applying (6.4) and (6.13), one has
If L ≥ 1, then the condition Lγ < 1, where γ is the same as in (3.17), implies that
whence we deduce that Lϑ < 1. In the case L < 1, it is plain that Lϑ < 1. In view of (4.3), all the conditions of Lemma 4.7 hold, and so (6.14) and Lemma 4.7 guarantee that the sequence generated by Algorithm 6.1 converges strongly to the solution of the problem (5.1). This completes the proof.
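The convergence step above leans on Lemma 4.7, whose statement is not reproduced in this excerpt; recursion lemmas of this kind, in the spirit of Liu's result cited in the references, typically take the following form (our hedged rendering, not a quotation of Lemma 4.7):

```latex
a_{n+1} \le (1 - t_n)\,a_n + t_n b_n + c_n, \quad t_n \in [0,1],\ \sum_{n=0}^{\infty} t_n = \infty,\ \lim_{n\to\infty} b_n = 0,\ \sum_{n=0}^{\infty} c_n < \infty
\;\Longrightarrow\; \lim_{n\to\infty} a_n = 0,
```

for nonnegative real sequences {a_n}, {b_n}, {c_n}; the estimate (6.14) is matched against this pattern to conclude strong convergence.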
As in the proof of Theorem 6.8, one can prove the convergence of the iterative sequences generated by Algorithms 6.2-6.5; we omit the proofs.
Theorem 6.9. Suppose that T, f, g, ρ and S are the same as in Theorem 6.8 and let all the conditions of Theorem 6.8 hold. If there exists a constant α > 0 such that for each n ≥ 0, then the iterative sequence generated by Algorithm 6.2 converges strongly to the only element of EGNWHE(K r , T, f, g).
Theorem 6.10. Let T, f, ρ and S be the same as in Theorem 4.9 and let all the conditions of Theorem 4.9 hold. If there exists a constant α > 0 such that for each n ≥ 0, then the iterative sequence generated by Algorithm 6.3 converges strongly to the only element of GNWHE(K r , T, f ).
Theorem 6.11. Let T, g and ρ be the same as in Theorem 3.5 and suppose that the conditions (a), (b) and (3.19) in Theorem 3.5 hold. Let S : K r → K r be a nearly uniformly L-Lipschitzian mapping with the sequence such that, for any u ∈ GNRNVID(K r , T, g), g(u) ∈ Fix(S). Further, let , where is the same as in Theorem 4.10. If there exists a constant α > 0 such that for each n ≥ 0, then the iterative sequence generated by Algorithm 6.4 converges strongly to the only element of GNWHE(K r , T, g).
Theorem 6.12. Assume that T, ρ and S are the same as in Theorem 4.11 and let all the conditions of Theorem 4.11 hold. If there exists a constant α > 0 such that for each n ≥ 0, then the iterative sequence generated by Algorithm 6.5 converges strongly to the only element of NWHE(K r , T ).
7 Some remarks
In view of Definition 2.11, we note that the relaxed cocoercivity condition on the operator T is weaker than the strong monotonicity condition. In other words, the class of relaxed cocoercive operators is more general than the class of strongly monotone operators. In the present section, we show that, contrary to the claims of Noor [38], Noor et al. [44] and Qin and Noor [47], these papers in fact studied the convergence analysis of the proposed iterative algorithms under the strong monotonicity condition, not the milder relaxed cocoercivity condition.
Noor [38] proposed the following three-step iterative algorithm and its special forms for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the general variational inequalities (3.9), and studied the convergence analysis of the suggested iterative algorithm under some conditions.
Algorithm 7.1. (Algorithm 2.1 [38]) For any , compute the approximate solution x n by the iterative schemes
where a n , b n , c n ∈ [0, 1] for all n ≥ 0 and S is a nonexpansive operator.
Theorem 7.2. (Theorem 3.1 [38]) Let K be a closed convex subset of a real Hilbert space . Let T be a relaxed (γ, r)-cocoercive and μ-Lipschitzian mapping of K into . Let g be a relaxed (γ1, r1)-cocoercive and μ1-Lipschitzian mapping of K into and S be a nonexpansive mapping of K into K such that F(S) ∩ GVI(K, T, g) ≠ ∅. Let {x n } be a sequence defined by Algorithm 7.1, for any initial point x0 ∈ K, with the following conditions:
where
a n , b n , c n ∈ [0, 1] and , then x n obtained from Algorithm 7.1 converges strongly to x* ∈ F(S) ∩ GVI(K, T, g).
From , it follows that . Accordingly, the condition should be added to the conditions of Theorem 7.2. On the other hand, the conditions and k < 1 imply that r > γμ². Since T is (γ, r)-relaxed cocoercive and μ-Lipschitz continuous, the condition r > γμ² guarantees that the operator T is (r - γμ²)-strongly monotone. Therefore, one can rewrite Theorem 7.2 as follows.
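The implication invoked here is a one-line estimate combining the relaxed (γ, r)-cocoercivity of T with its μ-Lipschitz continuity: for all x, y,

```latex
\langle Tx - Ty,\, x - y \rangle \;\ge\; -\gamma \|Tx - Ty\|^{2} + r\,\|x - y\|^{2}
\;\ge\; -\gamma \mu^{2} \|x - y\|^{2} + r\,\|x - y\|^{2}
\;=\; (r - \gamma\mu^{2})\,\|x - y\|^{2},
```

since the Lipschitz condition gives ||Tx − Ty|| ≤ μ||x − y||; thus r > γμ² indeed forces (r − γμ²)-strong monotonicity of T.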
Theorem 7.3. Let K be a closed convex subset of a real Hilbert space and let T be a ξ-strongly monotone and μ-Lipschitzian mapping of K into . Let g be a ξ1-strongly monotone and μ1-Lipschitzian mapping of K into and S be a nonexpansive mapping of K into K such that F(S) ∩ GVI(K, T, g) ≠ ∅. If the constant ρ satisfies the following condition:
and the sequence {a n } satisfies , then the iterative sequence {x n } generated by Algorithm 7.1 converges strongly to the only element of F(S) ∩ GVI(K, T, g).
Remark 7.4. Theorem 3.2 in [38] is stated with the condition relaxed cocoercivity of the operators T and g. Similarly, by using the conditions of Theorem 3.2 [38], we note that the operators T and g are in fact strongly monotone. Hence, Theorem 3.2 [38] is proved with the condition strongly monotonicity of the operators T and g instead of the mild condition relaxed cocoercivity.
Noor et al. [44] presented the following iterative scheme and its special forms for finding the common element of the solution sets of the general variational inequalities (3.9) and the nonexpansive mappings.
Algorithm 7.5. (Algorithm 3.1 [44]) For a given and some sequence {a n }, a n ∈ [0, 1], compute the approximate solution z n+1 by the iterative schemes
where S is a nonexpansive operator.
They also studied convergence analysis of the suggested iterative algorithm under some conditions as follows:
Theorem 7.6. (Theorem 3.1 [44]) Let K be a closed convex subset of a real Hilbert space . Let T be a relaxed (γ, r)-cocoercive and μ-Lipschitzian mapping. Let g be a relaxed (γ1, r1)-cocoercive and μ1-Lipschitzian mapping of K into and S be a nonexpansive mapping of into such that . Let {z n } be a sequence defined by Algorithm 7.5, for any initial point z0 ∈ K, with the following conditions:
where
a n ∈ [0,1] and , then z n obtained from Algorithm 7.5 converges strongly to .
As in Theorem 7.2, since , it follows that . Hence the condition should be added to Theorem 7.6. Moreover, by using the conditions and k < 1, the (γ, r)-relaxed cocoercivity and μ-Lipschitz continuity of the operator T and the (γ1, r1)-relaxed cocoercivity and μ1-Lipschitz continuity of the operator g, we note that the operators T and g are (r - γμ²)-strongly monotone and -strongly monotone, respectively. Therefore, Theorem 7.6 collapses to the following theorem.
Theorem 7.7. Let K be a closed convex subset of a real Hilbert space and let be a ξ-strongly monotone and μ-Lipschitzian mapping. Let be a ξ1-strongly monotone and μ1-Lipschitzian mapping and be a nonexpansive mapping such that F(S) ∩ GVI(K, T, g) ≠ ∅. If the constant ρ satisfies the following condition:
and the sequence {a n } satisfies , then the iterative sequence{z n } generated by Algorithm 7.5 converges strongly to the only element of .
Therefore, Noor et al. [44] proved the strong convergence of the iterative sequence {z n } generated by Algorithm 7.5 under the strong monotonicity condition on the operators T and g, not under the milder relaxed cocoercivity condition.
Qin and Noor [47] proposed the following iterative algorithm and its special forms for solving the general variational inequalities (3.9).
Algorithm 7.8. (Algorithm 2.1 [47]) For any z0 ∈ K, compute the sequence {z n } by the iterative processes
where {α n } is a sequence in [0, 1] and S is a nonexpansive mapping.
They studied convergence analysis of the suggested iterative algorithm under some conditions as follows.
Theorem 7.9. (Theorem 3.1 [47]) Let K be a closed convex subset of a real Hilbert space . Let be a relaxed (u1, v1)-cocoercive and μ1-Lipschitz continuous mapping, be a relaxed (u2, v2)-cocoercive and μ2-Lipschitz continuous mapping and S be a nonexpansive mapping from K into itself such that F(S) ≠ ∅. Let {z n }, {u n } and {g(u n )} be the sequences generated by Algorithm 7.8 and {α n } be a sequence in [0, 1]. Assume that the following conditions are satisfied:
(C1) 2θ1 + θ2 < 1, where and ;
(C2) .
Then the sequences {z n }, {u n } and {g(u n )} converge strongly to , u* ∈ VI(K, A) and g(u*) ∈ F(S), respectively.
From the condition (C1), it follows that and . Therefore, these conditions should be added to Theorem 7.9. On the other hand, the condition (C1) implies that , for i = 1, 2. Because g is (u1, v1)-relaxed cocoercive and μ1-Lipschitz continuous, the condition guarantees the -strong monotonicity of the operator g. Similarly, from the (u2, v2)-relaxed cocoercivity and μ2-Lipschitz continuity of the operator A and the condition , it follows that the operator A is -strongly monotone. Hence Theorem 7.9 reduces to the following theorem:
Theorem 7.10. Let K be a closed convex subset of a real Hilbert space and let be a ξ1-strongly monotone and μ1-Lipschitz continuous mapping, be a ξ2-strongly monotone and μ2-Lipschitz continuous mapping and let S be a nonexpansive mapping from K into itself such that F(S) ≠ ∅. Let {z n }, {u n } and {g(u n )} be the sequences generated by Algorithm 7.8. If the following conditions hold:
(C1) 2θ1 + θ2 < 1, where and ;
(C2) , and ,
then the iterative sequences {z n }, {u n } and {g(u n )} generated by Algorithm 7.8 converge strongly to , u* ∈ VI(K, A) and g(u*) ∈ F(S), respectively.
Remark 7.11. (1) Qin and Noor in Remark 3.2 [47] claimed that Theorem 7.9 is obtained under the mild relaxed cocoercivity condition on the operators g and A. However, in view of the above facts, their results are obtained under the strong monotonicity condition on the operators g and A, not under the milder relaxed cocoercivity condition.
(2) The operators A and g in Corollaries 3.3 and 3.4 [47] are relaxed cocoercive. However, the conditions of the aforesaid corollaries guarantee that the operators A and g are in fact strongly monotone. Accordingly, Corollaries 3.3 and 3.4 in [47] are stated under the strong monotonicity condition on the operators A and g instead of the milder relaxed cocoercivity condition.
Remark 7.12. In view of the above facts, we note that Theorems 4.8 and 4.10 generalize and improve Theorem 3.1 in [38]. Theorems 4.8, 4.10 and 4.11 generalize and improve Theorem 3.2 in [38]. Theorems 6.8-6.11 improve and generalize Theorem 3.2 in [42], Theorem 3.1 in [44], Theorem 3.1 in [47] and Corollaries 3.3 and 3.4 in [47].
8 Conclusion
In this paper, we have introduced and considered some new classes of extended general nonlinear regularized nonconvex variational inequalities and extended general nonconvex Wiener-Hopf equations involving three different nonlinear operators. By the projection operator technique, we have established the equivalence between the extended general nonlinear regularized nonconvex variational inequalities and the fixed point problems as well as the extended general nonconvex Wiener-Hopf equations. Using this equivalent formulation, we have established an existence and uniqueness theorem for the solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. This equivalence and a nearly uniformly Lipschitzian mapping S are used to suggest and analyze some new perturbed p-step projection iterative algorithms with mixed errors for finding an element of the set of fixed points of the nearly uniformly Lipschitzian mapping S which is the unique solution of the problem of extended general nonlinear regularized nonconvex variational inequalities. We have presented some remarks on the results established by Noor [38], Noor et al. [44] and Qin and Noor [47], and have also shown that their statements are special cases of our results. Several special cases are also discussed. It is expected that the results proved in this paper may stimulate further research regarding the numerical methods and their applications in various fields of pure and applied sciences.
References
Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. C R Acad Sci Paris 1964, 258: 4413–4416.
Bensoussan A, Lions JL: Application des Inéquations variationelles en control et en Stochastiques. Dunod, Paris 1978.
Harker PT, Pang JS: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithm and applications. Math Program 1990, 48: 161–220. 10.1007/BF01582255
Alimohammady M, Balooee J, Cho YJ, Roohi M: A new system of nonlinear fuzzy variational inclusions involving ( A , η )-accretive mappings in uniformly smooth Banach spaces. J Inequal Appl 2009., 2009: Article ID 806727, 33 pages doi:10.1155/2010/806727
Alimohammady M, Balooee J, Cho YJ, Roohi M: Generalized nonlinear random equations with random fuzzy and relaxed cocoercive mappings in Banach spaces. Advan in Nonlinear Variat Inequal 2010, 13: 37–58.
Alimohammady M, Balooee J, Cho YJ, Roohi M: Iterative algorithms for a new class of extended general nonconvex set-valued variational inequalities. Nonlinear Anal 2010, 73: 3907–3923. 10.1016/j.na.2010.08.022
Alimohammady M, Balooee J, Cho YJ, Roohi M: New perturbed finite step iterative algorithms for a system of extended generalized nonlinear mixed quasi-variational inclusions. Comput Math Appl 2010, 60: 2953–2970. 10.1016/j.camwa.2010.09.055
Agarwal RP, Cho YJ, Petrot N: Systems of general nonlinear set-valued mixed variational inequalities problems in Hilbert spaces. Fixed Point Theory Appl 2011, 2011:31. doi:10.1186/1687-1812-2011-31
Cho YJ, Lan HY: A new class of generalized nonlinear multi-valued quasi-variational-like-inclusions with H -monotone mappings. Math Inequal Appl 2007, 10: 389–401.
Cho YJ, Qin X, Shang MJ, Su YF: Generalized nonlinear variational inclusions involving ( A , η )-monotone mappings in Hilbert spaces. Fixed Point Theory and Appl 2007., 2007: Article ID 29653, 6 pages
Cho YJ, Lan HY: Generalized nonlinear random ( A , η )-accretive equations with random relaxed cocoercive mappings in Banach spaces. Comput Math Appl 2008, 55: 2173–2182. 10.1016/j.camwa.2007.09.002
Cho YJ, Qin X: Systems of generalized nonlinear variational inequalities and its projection methods. Nonlinear Anal 2008, 69: 4443–4451. 10.1016/j.na.2007.11.001
Cho YJ, Qin X: Generalized systems for relaxed cocoercive variational inequalities and projection methods in Hilbert spaces. Math Inequal Appl 2009, 12: 365–375.
Cho YJ, Petrot N: Approximate solvability of a system of nonlinear relaxed cocoercive variational inequalities and Lipschitz continuous mappings in Hilbert spaces. Advan in Nonlinear Variat Inequal 2010, 13: 91–101.
Cho YJ, Argyros IK, Petrot N: Approximation methods for common solutions of generalized equilibrium, systems of nonlinear variational inequalities and fixed point problems. Comput Math Appl 2010, 60: 2292–2301. 10.1016/j.camwa.2010.08.021
Lan HY, Kang JI, Cho YJ: Nonlinear ( A , η )-monotone operator inclusion systems involving non-monotone set-valued mappings. Taiwan J Math 2007, 11: 683–701.
Qin X, Kang JI, Cho YJ: On quasi-variational inclusions and asymptotically pseudo-contractions. J Nonlinear Convex Anal 2010, 11: 441–453.
Yao Y, Cho YJ, Liou Y: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Europ J Operat Res 2011, 212: 242–250. 10.1016/j.ejor.2011.01.042
Bnouhachem A, Noor MA: Numerical methods for general mixed variational inequalities. Appl Math Comput 2008, 204: 27–36. 10.1016/j.amc.2008.05.134
Bounkhel M: Existence results of nonconvex differential inclusions. Port Math (NS) 2002, 59(3):283–309.
Bounkhel M: General existence results for second order nonconvex sweeping process with unbounded perturbations. Port Math (NS) 2003, 60(3):269–304.
Bounkhel M, Azzam L: Existence results on the second order nonconvex sweeping processes with perturbations. Set-valued Anal 2004, 12(3):291–318.
Bounkhel M, Tadji L, Hamdi A: Iterative schemes to solve nonconvex variational problems. J Inequal Pure Appl Math 2003, 4: 1–14.
Bounkhel M, Thibault L: Further characterizations of regular sets in Hilbert spaces and their applications to nonconvex sweeping process. In Preprint, Centro de Modelamiento Matematico (CMM). Universidad de Chile; 2000.
Canino A: On p-convex sets and geodesics. J Diff Equ 1988, 75: 118–157. 10.1016/0022-0396(88)90132-5
Chang SS, Cho YJ, Zhou H: Iterative Methods for Nonlinear Operator Equations in Banach Spaces. Nova Science Publishers Inc., Huntington, NY; 2002:xiv+459.
Clarke FH: Optimization and Nonsmooth Analysis. Wiley, New York 1983.
Clarke FH, Ledyaev YuS, Stern RJ, Wolenski PR: Nonsmooth Analysis and Control Theory. Springer, New York 1998.
Clarke FH, Stern RJ, Wolenski PR: Proximal smoothness and the lower C2 property. J Convex Anal 1995, 2(1/2):117–144.
Lions PL, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J Numer Anal 1979, 16: 964–979. 10.1137/0716071
Lions JL, Stampacchia G: Variational inequalities. Comm Pure Appl Math 1967, 20: 493–512. 10.1002/cpa.3160200302
Liu LS: Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J Math Anal Appl 1995, 194: 114–125. 10.1006/jmaa.1995.1289
Noor MA: Auxiliary principle technique for extended general variational inequalities. Banach J Math Anal 2008, 2: 33–39.
Noor MA: Differentiable nonconvex functions and general variational inequalities. Appl Math Comput 2008, 199: 623–630. 10.1016/j.amc.2007.10.023
Noor MA: Iterative methods for general nonconvex variational inequalities. Albanian J Math 2009, 3(1):117–127.
Noor MA: Iterative schemes for nonconvex variational inequalities. J Optim Theory Appl 2004, 121: 385–395.
Noor MA: General variational inequalities. Appl Math Lett 1988, 1(2):119–122. 10.1016/0893-9659(88)90054-7
Noor MA: General variational inequalities and nonexpansive mappings. J Math Anal Appl 2007, 331: 810–822. 10.1016/j.jmaa.2006.09.039
Noor MA: Sensitivity analysis of extended general variational inequalities. Appl Math E-Notes 2009, 9: 17–26.
Noor MA: Some developments in general variational inequalities. Appl Math Comput 2004, 152: 199–277. 10.1016/S0096-3003(03)00558-7
Noor MA: Variational Inequalities and Applications. Lecture Notes, Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan 2007–2009.
Noor MA: Wiener-Hopf equations and variational inequalities. J Optim Theory Appl 1993, 79(1):197–206. 10.1007/BF00941894
Noor MA, Huang Z: Three-step iterative methods for nonexpansive mappings and variational inequalities. Appl Math Comput 2007, 187: 680–685. 10.1016/j.amc.2006.08.088
Noor MA, Zainab S, Yaqoob H: General Wiener-Hopf equations and nonexpansive mappings. J Math Inequal 2008, 2(2):215–227.
Pang LP, Shen J, Song HS: A modified predictor-corrector algorithm for solving nonconvex generalized variational inequalities. Comput Math Appl 2007, 54: 319–325. 10.1016/j.camwa.2006.07.010
Poliquin RA, Rockafellar RT, Thibault L: Local differentiability of distance functions. Trans Am Math Soc 2000, 352: 5231–5249. 10.1090/S0002-9947-00-02550-2
Qin X, Noor MA: General Wiener-Hopf equation technique for nonexpansive mappings and general variational inequalities in Hilbert spaces. Appl Math Comput 2008, 201: 716–722. 10.1016/j.amc.2008.01.007
Robinson SM: Normal maps induced by linear transformations. Math Oper Res 1992, 17: 691–714. 10.1287/moor.17.3.691
Robinson SM: Sensitivity analysis of variational inequalities by normal-map techniques. In Variational Inequalities and Network Equilibrium Problems. Edited by: Giannessi F, Maugeri A. Plenum Press, New York; 1995:257–276.
Shi P: Equivalence of variational inequalities with Wiener-Hopf equations. Proc Am Math Soc 1991, 111: 339–346. 10.1090/S0002-9939-1991-1037224-3
Shi P: An iterative method for obstacle problems via Green's functions. Nonlinear Anal 1990, 15: 339–344. 10.1016/0362-546X(90)90142-4
Sellami H, Robinson SM: Implementation of a continuation method for normal maps. Math Program 1997, 76: 563–578.
Sahu DR: Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment Math Univ Carolin 2005, 46(4):653–666.
Goebel K, Kirk WA: A fixed point theorem for asymptotically nonexpansive mappings. Proc Am Math Soc 1972, 35: 171–174. 10.1090/S0002-9939-1972-0298500-3
Kirk WA, Xu HK: Asymptotic pointwise contractions. Nonlinear Anal 2008, 69: 4706–4712. 10.1016/j.na.2007.11.023
Acknowledgements
The authors are thankful to the referees for their helpful corrections, comments and valuable suggestions in preparation of the paper. The second author was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant Number: 2011-0021821).
Author information
Authors and Affiliations
Corresponding authors
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Balooee, J., Cho, Y.J. & Kang, M.K. The Wiener-Hopf Equation Technique for Solving General Nonlinear Regularized Nonconvex Variational Inequalities. Fixed Point Theory Appl 2011, 86 (2011). https://doi.org/10.1186/1687-1812-2011-86
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/1687-1812-2011-86