An improved method for solving multiple-sets split feasibility problem
Fixed Point Theory and Applications volume 2012, Article number: 168 (2012)
Abstract
The multiple-sets split feasibility problem (MSSFP) has a variety of real-world applications, such as medical care, image reconstruction and signal processing. Censor et al. proposed solving the MSSFP via a proximity function and then developed a class of simultaneous methods for solving the split feasibility problem. In this paper, we improve a simultaneous method for solving the MSSFP and prove its convergence.
1 Introduction
Throughout this paper, let H be a Hilbert space, let ⟨·,·⟩ denote the inner product and let ‖·‖ denote the corresponding norm. The multiple-sets split feasibility problem (MSSFP) is a generalization of the split feasibility problem (SFP) and the convex feasibility problem (CFP); see [1]. Let C_i (i = 1, …, t) and Q_j (j = 1, …, r) be closed convex sets in the N-dimensional and M-dimensional Euclidean spaces, respectively. The MSSFP is to find a vector x* satisfying

x* ∈ C_i, i = 1, …, t, and Ax* ∈ Q_j, j = 1, …, r, (1)

where A is an M × N real matrix and t ≥ 1, r ≥ 1 are integers. When t = r = 1, the problem becomes to find a vector x with x ∈ C such that Ax ∈ Q, which is just the two-sets split feasibility problem (SFP) introduced in [2]. The MSSFP has many applications in real life such as image restoration, signal processing and medical care (e.g., [3–5]). In order to solve the MSSFP, Censor et al. considered the MSSFP in the following form: find a vector x* such that

x* ∈ C := ⋂_{i=1}^{t} C_i and Ax* ∈ Q := ⋂_{j=1}^{r} Q_j, (2)

where each of C and Q is a nonempty closed convex set. In fact, (2) is equivalent to (1). Many methods have been developed to solve the SFP or the MSSFP. The basic CQ algorithm was proposed by Byrne [6] and later generalized to the MSSFP by Censor et al. [4]. The relaxed CQ algorithm was proposed by Yang [7], the half-space relaxation projection method by Qu and Xiu [8], and the variable Krasnosel'skii-Mann algorithm by Xu [9]. These algorithms first convert the problem into an equivalent optimization problem and then solve it with techniques from numerical optimization. It is easy to see that the MSSFP (1) is equivalent to the minimization problem

min_x (1/2)‖x − P_C(x)‖² + (1/2)‖Ax − P_Q(Ax)‖²,
where P_C and P_Q denote the orthogonal projections onto C and Q, respectively. In general, the projections of a point onto the intersections C and Q are difficult to compute. In practical applications, however, the projections onto the individual sets C_i and Q_j are more easily calculated than the projection onto the intersection. For this purpose, Censor et al. [4] defined a proximity function to measure the distance of a point to all sets:

p(x) = (1/2) Σ_{i=1}^{t} α_i ‖x − P_{C_i}(x)‖² + (1/2) Σ_{j=1}^{r} β_j ‖Ax − P_{Q_j}(Ax)‖², (3)

where α_i > 0 and β_j > 0 for all i and j, respectively, and Σ_{i=1}^{t} α_i + Σ_{j=1}^{r} β_j = 1.
With the proximity function (3), they proposed the optimization model

min_{x ∈ Ω} p(x) (4)

to approach (2) and applied the projection gradient method to solve it:

x^{k+1} = P_Ω(x^k − s∇p(x^k)),

where ∇p denotes the gradient of p(x), which can be written as follows (see [4]):

∇p(x) = Σ_{i=1}^{t} α_i (x − P_{C_i}(x)) + Σ_{j=1}^{r} β_j Aᵀ(Ax − P_{Q_j}(Ax)).
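As an illustration of the projection gradient iteration above, the following Python sketch minimizes the proximity function for a toy MSSFP whose sets are boxes, so every projection is available in closed form. The matrix, sets, weights, and step size are illustrative choices, not taken from the paper; Ω is taken to be the whole space, so P_Ω is the identity.

```python
import numpy as np

def project_box(x, lo, hi):
    """Orthogonal projection onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Toy MSSFP instance (all sets are boxes). These particular sets, weights,
# and step size are illustrative, not from the paper.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
C_boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
           (np.array([0.2, 0.0]), np.array([2.0, 0.8]))]
Q_boxes = [(np.array([0.0, 0.0]), np.array([1.0, 1.5]))]
alpha = [0.3, 0.3]
beta = [0.4]          # weights chosen so that sum(alpha) + sum(beta) = 1

def grad_p(x):
    """Gradient of the proximity function p, as in (3) and its gradient."""
    g = np.zeros_like(x)
    for a, (lo, hi) in zip(alpha, C_boxes):
        g += a * (x - project_box(x, lo, hi))
    Ax = A @ x
    for b, (lo, hi) in zip(beta, Q_boxes):
        g += b * (A.T @ (Ax - project_box(Ax, lo, hi)))
    return g

x = np.array([3.0, -2.0])
s = 0.5   # step size; must be small enough relative to the Lipschitz constant
for _ in range(500):
    x = x - s * grad_p(x)   # here Omega is the whole space, so P_Omega = I

# At a solution, x lies in every C_i and A x lies in every Q_j.
print(x, np.linalg.norm(grad_p(x)))
```

With these sets the iterates settle at a point feasible for all C_i and Q_j, at which the gradient of the proximity function vanishes.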
In this paper, we continue the algorithmic improvement on the constrained MSSFP. More specifically, the constrained MSSFP [10] is to find x* such that

x* ∈ Ω, x* ∈ C_i, i = 1, …, t, and Ax* ∈ Q_j, j = 1, …, r, (5)

where Ω is a nonempty closed convex set.
By the same idea of approaching (2) via the model (4), we define f and g as follows:

f(x) = (1/2) Σ_{i=1}^{t} α_i ‖x − P_{C_i}(x)‖², (6)

g(y) = (1/2) Σ_{j=1}^{r} β_j ‖y − P_{Q_j}(y)‖². (7)
Then we get the following optimization model which can solve (5):
It is easy to see that the objective of model (8) is nonnegative, with minimal value zero. So, we can further reformulate (8) into the following separable form:
2 Preliminaries
In this section, we present some concepts and properties of the MSSFP.
Let M be a positive definite matrix. We denote the M-norm by ‖x‖_M = (xᵀMx)^{1/2}. In particular, ‖x‖ is the Euclidean norm of the vector x.
Lemma 1 Let S be a nonempty closed convex subset of Rⁿ. We denote by P_S the projection onto S, i.e.,

P_S(x) = argmin{‖y − x‖ : y ∈ S}.

Then the following properties hold:

- (1) ⟨x − P_S(x), y − P_S(x)⟩ ≤ 0, for all x ∈ Rⁿ and all y ∈ S;

- (2) ‖P_S(x) − P_S(y)‖² ≤ ⟨P_S(x) − P_S(y), x − y⟩, for all x, y ∈ Rⁿ;

- (3) ‖P_S(x) − P_S(y)‖ ≤ ‖x − y‖, for all x, y ∈ Rⁿ;

- (4) ‖P_S(x) − y‖² ≤ ‖x − y‖² − ‖x − P_S(x)‖², for all x ∈ Rⁿ and all y ∈ S;

- (5) ‖x − P_S(x)‖ ≤ ‖x − y‖, for all x ∈ Rⁿ and all y ∈ S.
Proof See Facchinei and Pang [11, 12]. □
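For a concrete closed convex set whose projection has a closed form (a box), the projection properties above can be spot-checked numerically. The set and the sampling scheme below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj(x, lo=-1.0, hi=1.0):
    """Projection onto the box S = [-1, 1]^n, a closed convex set."""
    return np.clip(x, lo, hi)

# Spot-check two standard projection properties on random points
# (S = [-1, 1]^3 here is an illustrative choice):
for _ in range(1000):
    x = rng.normal(size=3) * 3
    z = rng.normal(size=3) * 3
    y = proj(rng.normal(size=3) * 3)          # an arbitrary point of S
    px, pz = proj(x), proj(z)
    # variational characterization: <x - P_S(x), y - P_S(x)> <= 0 for y in S
    assert (x - px) @ (y - px) <= 1e-12
    # nonexpansiveness: ||P_S(x) - P_S(z)|| <= ||x - z||
    assert np.linalg.norm(px - pz) <= np.linalg.norm(x - z) + 1e-12
print("projection properties verified on random samples")
```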
Definition 1 Let F be a mapping from Rⁿ into Rⁿ. Then

- (a) F is called monotone on S if ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ S;

- (b) F is called strongly monotone on S if there is a γ > 0 such that ⟨F(x) − F(y), x − y⟩ ≥ γ‖x − y‖² for all x, y ∈ S;

- (c) F is called co-coercive (or ν-inverse strongly monotone, ν-ism for short) on S if there is a ν > 0 such that ⟨F(x) − F(y), x − y⟩ ≥ ν‖F(x) − F(y)‖² for all x, y ∈ S;

- (d) F is called pseudo-monotone on S if, for all x, y ∈ S, ⟨F(y), x − y⟩ ≥ 0 implies ⟨F(x), x − y⟩ ≥ 0;

- (e) F is called Lipschitz continuous on S if there exists a constant L > 0 such that ‖F(x) − F(y)‖ ≤ L‖x − y‖ for all x, y ∈ S, and F is called nonexpansive if L = 1.
Remark 1 From Lemma 1 and the above definitions, we can infer that a monotone mapping is pseudo-monotone, and that an inverse strongly monotone mapping is monotone and Lipschitz continuous. Conversely, a Lipschitz continuous and strongly monotone mapping is inverse strongly monotone. The projection operator is 1-ism and nonexpansive.
Lemma 2 A mapping F is 1-ism if and only if the mapping I − F is 1-ism, where I is the identity operator.
Proof See [[3], Lemma 2.3]. □
Remark 2 If F is a 1-inverse strongly monotone mapping, then F is a nonexpansive mapping.
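Lemma 2 and Remark 1 can be checked numerically for F = P_S with S a box: the test below verifies the defining inequality of 1-ism for both F and I − F on random pairs. The set and the sampling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
proj = lambda x: np.clip(x, -1.0, 1.0)   # F = P_S with S = [-1, 1]^n

def is_1_ism(F, trials=1000, dim=3):
    """Numerically test <F(x)-F(y), x-y> >= ||F(x)-F(y)||^2 on random pairs."""
    for _ in range(trials):
        x = rng.normal(size=dim) * 3
        y = rng.normal(size=dim) * 3
        d = F(x) - F(y)
        if d @ (x - y) < d @ d - 1e-12:
            return False
    return True

# F is 1-ism, and by Lemma 2 so is I - F; for F = P_S both tests pass:
assert is_1_ism(proj)
assert is_1_ism(lambda x: x - proj(x))
print("both F and I - F pass the 1-ism test")
```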
Definition 2 Let S be a nonempty closed convex subset of H and let {x^k} be a sequence in H. Then the sequence {x^k} is called Fejér monotone with respect to S if

‖x^{k+1} − x‖ ≤ ‖x^k − x‖, for all x ∈ S and all k ≥ 0.
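Fejér monotonicity is easy to observe numerically: for a gradient-projection sequence on a toy one-set feasibility problem, the distance to a fixed solution never increases. The instance below (the set, starting point, and step size) is an illustrative choice.

```python
import numpy as np

# Fejer monotonicity illustrated: for the iteration
# x_{k+1} = x_k - s * grad(x_k) on a toy feasibility problem,
# the distance to a fixed solution x_sol is nonincreasing.
proj = lambda x: np.clip(x, 0.0, 1.0)   # C = [0, 1]^2
grad = lambda x: x - proj(x)            # gradient of (1/2) dist(x, C)^2
x_sol = np.array([0.5, 0.5])            # a point of C, hence a solution
x, s = np.array([4.0, -3.0]), 1.0       # Lipschitz constant is 1, so s < 2
dists = []
for _ in range(50):
    dists.append(np.linalg.norm(x - x_sol))
    x = x - s * grad(x)
assert all(d2 <= d1 + 1e-12 for d1, d2 in zip(dists, dists[1:]))
print("distance to x_sol is nonincreasing:", dists[:3], "...")
```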
Lemma 3 Let f and g be defined in (6)-(7). Then ∇f and ∇g are both Lipschitz continuous and inverse strongly monotone on X and Y, respectively.

Proof From the definition (6), f is differentiable on X and

∇f(x) = Σ_{i=1}^{t} α_i (x − P_{C_i}(x)).

Since the projection operator P_{C_i} is 1-ism (from Remark 1), it follows from Lemma 2 that the operator I − P_{C_i} is 1-ism and also nonexpansive. So, we have

‖∇f(x) − ∇f(y)‖ ≤ Σ_{i=1}^{t} α_i ‖(I − P_{C_i})(x) − (I − P_{C_i})(y)‖ ≤ (Σ_{i=1}^{t} α_i) ‖x − y‖;

therefore, ∇f is Lipschitz continuous on X with Lipschitz constant Σ_{i=1}^{t} α_i. It also follows from [[13], Corollary 10] that ∇f is (1/Σ_{i=1}^{t} α_i)-ism. Similarly, we can prove that ∇g is Lipschitz continuous on Y with Lipschitz constant Σ_{j=1}^{r} β_j; furthermore, it is (1/Σ_{j=1}^{r} β_j)-ism. □
For notational convenience, let
where τ, σ and β are given positive scalars.
Furthermore, we let W = X × Y. Suppose that w* = (x*, y*) is an optimal solution of the problem (9). Then the constrained MSSFP (5) is equivalent to finding w* ∈ W such that for any w ∈ W, we have

(w − w*)ᵀ F(w*) ≥ 0. (12)
3 Main results
In this section, we will present our method for solving the MSSFP and prove its convergence. Our algorithm is defined as follows:
Algorithm 3.1 Step 1. Choose arbitrary , , , , , and , , . Let ε > 0 be the error tolerance for an approximate solution and set k = 0.
Step 2.
-
(1)
Find the smallest nonnegative integer such that and
(13)
which satisfies
-
(2)
Find the smallest nonnegative integer such that and
(14)
which satisfies
-
(3)
Then we define by
(15)
Step 3.
Let , then we get the new iterate via
where
Step 4.
If , stop. Otherwise, set k := k + 1 and go to Step 2.
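The step-size searches in Step 2 are Armijo-type rules: try η, ηγ, ηγ², … until a trial point passes an acceptance test tied to the Lipschitz behavior of the gradient. The sketch below implements one generic rule of this kind; the acceptance condition and all parameter names are illustrative stand-ins rather than a transcription of the paper's conditions (13)-(14).

```python
import numpy as np

def armijo_step(x, grad, proj_X, eta=1.0, gamma=0.5, nu=0.9, max_m=50):
    """Find the smallest nonnegative integer m such that the trial point
    x_bar = P_X(x - eta*gamma^m * grad(x)) satisfies the Armijo-type test
        eta*gamma^m * ||grad(x) - grad(x_bar)|| <= nu * ||x - x_bar||.
    This generic rule mimics the searches in Step 2; the paper's own
    conditions (13)-(14) use its specific operators and notation."""
    g = grad(x)
    for m in range(max_m):
        t = eta * gamma**m
        x_bar = proj_X(x - t * g)
        if t * np.linalg.norm(g - grad(x_bar)) <= nu * np.linalg.norm(x - x_bar) + 1e-15:
            return t, x_bar
    raise RuntimeError("no admissible step found")

# Illustrative use: f(x) = (1/2)||x - P_C(x)||^2 with C = [0, 1]^2, X = R^2.
proj_C = lambda x: np.clip(x, 0.0, 1.0)
grad_f = lambda x: x - proj_C(x)        # gradient of the distance-type term
t, x_bar = armijo_step(np.array([3.0, -2.0]), grad_f, lambda z: z)
print(t, x_bar)
```

Here the trial step is halved once before the test passes, which mirrors how the smallest admissible integer is found in Step 2.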
Remark 3 In fact, from Lemma 3, we know that is Lipschitz continuous with a constant , so the left-hand side of (13′) satisfies

So, (13′) holds as long as . Since , we denote it by . On the other hand, an analysis similar to [[14], Lemma 3.3] indicates that , so we have
Similarly, we can also have
Next, we analyze the convergence of Algorithm 3.1:
Lemma 4 Suppose and are generated by Algorithm 3.1, and is a solution of (12). Then for any , there exists such that
Proof First, we prove .
From the property of the projection operator in Lemma 1,
Combining it with (13), we have
Multiplying by , we get
From the definitions of and in (10) and (11), this is equivalent to
Similarly, from (14) and (15), we can also get
and
Using the notation defined above, from (20)-(22), we have
namely
Note that F is monotone on W because of the monotonicity of ∇f and ∇g. From (12), we have
Consequently,
where the first inequality follows from (24), the second equality follows from the definition of , and .
Setting in (23), since is a solution, we get
Then
where the last inequality follows from (25).
Next, we prove
From the definition of and , we obtain
where the first inequality follows from (13′) and (14′).
Note that
and
Substituting them into (26), we have
Since the sequences and are bounded by (18) and (19), we let
therefore,
This completes the proof. □
Next, we prove the sequence is Fejér monotone.
Theorem 1 Suppose and are generated by Algorithm 3.1, and is a solution of (12). Then for any , there exists such that
Proof From (16), we have
where the inequalities follow from Lemma 4 and (17).
Because and , and
we have
Let , then we can get .
So, from Lemma 4, we obtain
Therefore,
 □
Theorem 2 The sequence generated by Algorithm 3.1 converges to a solution of (8).
Proof Suppose is a solution of (8). It follows from Theorem 1 that
which means that the sequence is bounded. Thus, it has at least one cluster point.
Furthermore, Theorem 1 also shows that
Summing both sides for all k, we obtain
which means that
So, and have the same cluster points. Without loss of generality, let be a cluster point of and , and be the cluster points of and , respectively. Let , , , be the subsequences converging to them. Then, by taking limits over the subsequences in (13), (14), (15), we have
It then follows from [15] that is a solution of (12).
Since is arbitrary, we can take in (27) and obtain
Therefore, the whole sequence converges to . This completes the proof. □
Remark 4 Our iteration method has a simpler form and improves the corresponding result of [10].
4 Applications
The multiple-sets split feasibility problem (MSSFP) is to find a point closest to a family of closed convex sets in one space such that its image under a linear transformation is closest to another family of closed convex sets in the image space. It serves as a model for real-world inverse problems where constraints are imposed on the solution in the domain of a linear operator as well as in the operator's range.
In this paper, our algorithm converges to a solution of the multiple-sets split feasibility problem (MSSFP), for any starting vector, whenever the MSSFP has a solution. In the inconsistent case, it finds a point that least violates feasibility by being 'closest' to all sets, as 'measured' by a proximity function.
In the general case, the projections in the MSSFP are difficult to compute. So, Yang [7] solved this problem by the relaxed CQ algorithm. Without loss of generality, take the two-sets split feasibility problem as an instance. He assumed the sets C and Q are nonempty and given by

C = {x ∈ Rᴺ : c(x) ≤ 0} and Q = {y ∈ Rᴹ : q(y) ≤ 0},

where c and q are convex functions, respectively. He then used subgradient projections instead of the orthogonal projections: at each iterate, C and Q are replaced by half-spaces obtained by linearizing c and q, and the projections onto these half-spaces have closed forms. This makes the split feasibility problem computationally tractable.
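A simplified variant of such a subgradient-projection scheme can be sketched as follows. The functions c and q, the matrix A, and the step size below are made-up illustrative choices, and the iteration is a simplified relative of Yang's relaxed CQ algorithm rather than a transcription of it.

```python
import numpy as np

# Illustrative instance: C = {x : c(x) <= 0} with c(x) = ||x||^2 - 1
# (the unit ball) and Q = {y : q(y) <= 0} with q(y) = ||y - b||^2 - 1.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.0])
c  = lambda x: x @ x - 1.0
dc = lambda x: 2.0 * x                 # gradient (hence a subgradient) of c
q  = lambda y: (y - b) @ (y - b) - 1.0
dq = lambda y: 2.0 * (y - b)

def proj_halfspace(x, val, g, at):
    """Project x onto the half-space {z : val + <g, z - at> <= 0} (closed form)."""
    viol = val + g @ (x - at)
    if viol <= 0.0 or not g.any():
        return x
    return x - (viol / (g @ g)) * g

x = np.array([2.0, 2.0])
s = 0.1                                 # step in (0, 2/||A||^2)
for _ in range(2000):
    Ax = A @ x
    # subgradient projection of Ax onto the linearized (relaxed) Q
    PQAx = proj_halfspace(Ax, q(Ax), dq(Ax), Ax)
    # gradient step on (1/2)||Ax - P_Q(Ax)||^2
    x = x - s * (A.T @ (Ax - PQAx))
    # subgradient projection of x onto the linearized (relaxed) C
    x = proj_halfspace(x, c(x), dc(x), x)

print(x, c(x), q(A @ x))
```

Only half-space projections are ever computed, which is the point of the relaxation: the final iterate satisfies both convex constraints up to a small tolerance.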
Lastly, we emphasize that our work is related to significant real-world applications. The multiple-sets split feasibility problem was applied to the inverse problem of intensity-modulated radiation therapy (IMRT). In this field, beams of penetrating radiation are directed at the lesion (tumor) from external sources in order to eradicate the tumor without causing irreparable damage to surrounding healthy tissues; see, e.g., [4].
References
Censor Y, Segal A: The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16: 587–600.
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
Censor Y, Bortfeld T, Martin B, Trofimov A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51: 2353–2365. 10.1088/0031-9155/51/10/001
Byrne C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
Yang Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20: 1261–1266. 10.1088/0266-5611/20/4/014
Qu B, Xiu N: A new half space-relaxation projection method for the split feasibility problem. Linear Algebra Appl. 2008, 428: 1218–1229. 10.1016/j.laa.2007.03.002
Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
Zhang WX, Han DR, Yuan XM: An efficient simultaneous method for the constrained multiple-sets split feasibility problem. Comput. Optim. Appl. 2012. doi:10.1007/s10589-011-9429-8
Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I. Springer, Berlin; 2003.
Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. II. Springer, Berlin; 2003.
Baillon J, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
Zhang WX, Han DR, Li ZB: A self-adaptive projection method for solving the multiple-sets split feasibility problem. Inverse Probl. 2009, 25: Article ID 115001. doi:10.1088/0266-5611/25/11/115001
Bertsekas DP, Tsitsiklis JN: Parallel and Distributed Computation. Numerical Methods. Prentice Hall, Englewood Cliffs; 1989.
Acknowledgements
We wish to thank the referees for their helpful comments and suggestions. This research was supported by the National Natural Science Foundation of China, under the Grant No.11071279.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
YD proposed the algorithm, carried out the proof of this paper and drafted the manuscript. RC participated in the design of this paper and coordination. Both of the authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Du, Y., Chen, R. An improved method for solving multiple-sets split feasibility problem. Fixed Point Theory Appl 2012, 168 (2012). https://doi.org/10.1186/1687-1812-2012-168