Viscosity iteration methods for a split feasibility problem and a mixed equilibrium problem in a Hilbert space
Fixed Point Theory and Applications volume 2012, Article number: 226 (2012)
Abstract
In this paper, we consider and analyze two viscosity iteration algorithms (one implicit and one explicit) for finding a common element of the solution set of a mixed equilibrium problem and the solution set Γ of a split feasibility problem in a real Hilbert space. Furthermore, we derive the strong convergence of the viscosity iteration algorithms to a common element of these two sets under mild assumptions.
1 Introduction
The split feasibility problem (SFP) in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise in phase retrieval and in medical image reconstruction [2]. In this paper we work in the framework of infinite-dimensional Hilbert spaces, where the SFP is formulated as finding a point with the property
where C and D are nonempty closed convex subsets of the infinite-dimensional real Hilbert spaces and , and is a bounded linear operator. For related works, please refer to [2–8].
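Since the displayed formulas above are stated abstractly, a finite-dimensional sketch may help fix ideas. The following Python snippet illustrates Byrne's CQ algorithm [2, 3] for the SFP with toy choices of C, D and the linear operator; the sets, the matrix, and the names `proj_box` and `cq_algorithm` are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (toy data, not from the paper) of Byrne's CQ
# algorithm for the SFP: find x in C with Ax in D, via
#   x_{n+1} = P_C( x_n - gamma * A^T (A x_n - P_D(A x_n)) ),
# where 0 < gamma < 2 / ||A||^2.

def proj_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, proj_C, proj_D, x0, gamma, iters=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A @ x
        # gradient of (1/2)||(I - P_D) A x||^2
        grad = A.T @ (Ax - proj_D(Ax))
        x = proj_C(x - gamma * grad)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: proj_box(x, -1.0, 1.0)   # C = [-1, 1]^2
proj_D = lambda y: proj_box(y, 0.0, 2.0)    # D = [0, 2]^2
gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 2/||A||^2
x = cq_algorithm(A, proj_C, proj_D, np.array([5.0, -3.0]), gamma)
residual = np.linalg.norm(A @ x - proj_D(A @ x))
```

For a consistent toy instance such as this one, the iterates settle in C with the image residual driven to zero.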
Let H be a real Hilbert space whose inner product and norm are denoted by and , respectively. Let C be a nonempty closed convex subset of H, and let F be a bifunction from into ℝ, the set of real numbers. The equilibrium problem for F is to find such that
The set of solutions of (1.1) is denoted by . The theory of equilibrium problems has emerged as an interesting and fascinating branch of applicable mathematics. The mixed equilibrium problem is as follows:
In the sequel, we denote by the set of solutions of our mixed equilibrium problem. If , we write . The mixed equilibrium problem (1.3) has become a rich source of inspiration and motivation for the study of a large number of problems arising in economics, optimization, variational inequalities, minimax problems, the Nash equilibrium problem in noncooperative games, and others (e.g., [9–14]).
It is our purpose in this paper to consider and analyze two viscosity iteration algorithms (one implicit and one explicit) for finding a common element of a solution set Γ of the split feasibility problem (1.1) and a set of the mixed equilibrium problem (1.3) in a real Hilbert space. Furthermore, we prove that the proposed viscosity iteration methods converge strongly to a particular solution of the mixed equilibrium problem (1.3) and the split feasibility problem (1.1).
2 Preliminaries
Assume H is a Hilbert space and C is a nonempty closed convex subset of H. The metric projection from H onto C, denoted by , assigns to each the unique nearest point in C, so that
Proposition 2.1 (Basic properties of projections [15])
-
(i)
for all and ;
-
(ii)
for all ;
-
(iii)
for all and .
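The standard projection properties collected in Proposition 2.1 — the variational characterization ⟨x − P_C x, z − P_C x⟩ ≤ 0 for all z ∈ C, and the firm nonexpansiveness of P_C — can be spot-checked numerically. Below is a hedged sketch with C taken to be the closed unit ball in R³ (a toy choice of ours):

```python
import numpy as np

# Spot-check of standard projection properties (cf. Proposition 2.1)
# for C the closed unit ball in R^3 (a toy choice):
#   (a)  <x - P_C x, z - P_C x> <= 0   for all z in C,
#   (b)  ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>   (firm nonexpansiveness).

def proj_ball(x, radius=1.0):
    """Metric projection onto the closed ball of the given radius."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

rng = np.random.default_rng(0)
x = rng.normal(size=3) * 3.0
y = rng.normal(size=3) * 3.0
px, py = proj_ball(x), proj_ball(y)

# (a): sample points z in C and record the worst (largest) inner product
zs = [proj_ball(rng.normal(size=3)) for _ in range(200)]
worst = max(np.dot(x - px, z - px) for z in zs)

# (b): firm nonexpansiveness
lhs = np.dot(px - py, px - py)
rhs = np.dot(px - py, x - y)
```

Both inequalities hold up to floating-point error, as the theory predicts.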
We also consider some nonlinear operators which are introduced in the following.
Definition 2.2 Let be a nonlinear mapping. A is said to be
-
(i)
Monotone if
-
(ii)
Strongly monotone if there exists a constant such that
In this case, A is said to be α-strongly monotone.
-
(iii)
Inverse-strongly monotone (ism) if there exists a constant such that
In this case, A is said to be α-inverse-strongly monotone (α-ism).
-
(iv)
k-Lipschitz continuous if there exists a constant such that
Remark 2.3 Let , where f is an L-Lipschitz mapping on H with coefficient , . It is simple to see that the operator F is -strongly monotone over H, i.e.,
Definition 2.4 A mapping is said to be
-
(a)
nonexpansive if , ;
-
(b)
firmly nonexpansive if 2T − I is nonexpansive (equivalently, , where is nonexpansive). Alternatively, T is firmly nonexpansive if and only if
-
(c)
averaged if , where and is nonexpansive. In this case, we also say that T is ϵ-averaged. A firmly nonexpansive mapping is -averaged.
Proposition 2.5 ([3])
Let be a given mapping.
-
(i)
T is nonexpansive if and only if the complement is -ism.
-
(ii)
If T is v-ism, then for , γT is -ism.
-
(iii)
T is averaged if and only if the complement is v-ism for . Indeed, for , T is α-averaged if and only if is -ism.
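Proposition 2.5(i), in its standard form (a mapping T is nonexpansive if and only if its complement I − T is ½-ism), lends itself to a quick numerical spot-check. Here is a sketch with T taken to be the projection onto the unit ball, a toy nonexpansive choice of ours:

```python
import numpy as np

# Numerical spot-check of Proposition 2.5(i) in its standard form:
# if T is nonexpansive, then the complement I - T is (1/2)-ism, i.e.
#   <(I-T)x - (I-T)y, x - y> >= (1/2) ||(I-T)x - (I-T)y||^2.
# We take T = projection onto the unit ball (toy choice, nonexpansive).

def T(x):
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

rng = np.random.default_rng(1)
violations = 0
for _ in range(200):
    x = rng.normal(size=4) * 2.0
    y = rng.normal(size=4) * 2.0
    u = (x - T(x)) - (y - T(y))          # (I-T)x - (I-T)y
    if np.dot(u, x - y) < 0.5 * np.dot(u, u) - 1e-9:
        violations += 1
```

No sampled pair violates the ½-ism inequality, consistent with the proposition.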
Proposition 2.6 ([3])
Given operators .
-
(i)
If for some and if S is averaged and V is nonexpansive, then T is averaged.
-
(ii)
T is firmly nonexpansive if and only if the complement is firmly nonexpansive.
-
(iii)
If for some , S is firmly nonexpansive and V is nonexpansive, then T is averaged.
-
(iv)
The composite of finitely many averaged mappings is averaged. That is, if each of the mappings is averaged, then so is the composite . In particular, if is -averaged and is -averaged, where , then the composite is α-averaged, where .
-
(v)
If the mappings are averaged and have a common fixed point, then
(Here the notation denotes the set of fixed points of the mapping T, that is, .)
Definition 2.7 A bifunction is monotone if , . A function is upper hemicontinuous if
For solving the mixed equilibrium problem for a bifunction , let us assume that F satisfies the following conditions:
(A1) for all ;
(A2) F is monotone, that is, for all ;
(A3) for each ,
(A4) for each , is convex and lower semicontinuous.
Lemma 2.8 ([16])
Let C be a convex closed subset of a Hilbert space H. Let be a bifunction such that
(f1) , ;
(f2) is monotone and upper hemicontinuous;
(f3) is lower semicontinuous and convex.
Let be a bifunction such that
(h1) , ;
(h2) is monotone and upper semicontinuous;
(h3) is convex.
Moreover, let us suppose that
-
(H)
for fixed and , there exists a bounded set and such that for all , , for and . Let be a mapping defined by
(2.1)
called a resolvent of and .
Then
-
(i)
;
-
(ii)
is single-valued;
-
(iii)
is firmly nonexpansive;
-
(iv)
and it is closed and convex.
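A concrete special case of the resolvent (2.1) may help: when the bifunction has the form F(z, y) = φ(y) − φ(z) for a proper convex function φ (and the second bifunction vanishes), the resolvent reduces to the classical proximity operator prox_{rφ}. For φ = |·| in one dimension this is soft thresholding, and the firm nonexpansiveness asserted in (iii) can be checked directly. A sketch under these assumptions (the naming is ours):

```python
# Special case of the resolvent in Lemma 2.8: for F(z, y) = phi(y) - phi(z)
# with phi convex (and the second bifunction zero), the resolvent is the
# proximity operator
#   prox_{r*phi}(x) = argmin_y  phi(y) + (1/(2r)) * (y - x)**2.
# For phi(t) = |t| this is soft thresholding; we check firm nonexpansiveness:
#   (Tx - Ty)^2 <= (Tx - Ty)(x - y).

def soft_threshold(x, r):
    """prox of r*|.| in one dimension."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

r = 0.7
pairs = [(-3.0, 2.5), (0.3, -0.2), (1.0, 0.9), (-0.5, 4.0)]
firmly_nonexpansive = all(
    (soft_threshold(x, r) - soft_threshold(y, r)) ** 2
    <= (soft_threshold(x, r) - soft_threshold(y, r)) * (x - y) + 1e-12
    for x, y in pairs
)
```

The inequality holds with equality when both points fall on the same affine piece, which is the boundary case of firm nonexpansiveness.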
Definition 2.9 Let H be a real Hilbert space and be a function.
-
(i)
Minimization problem:
-
(ii)
Tikhonov’s regularization problem:
(2.2)
where is the regularization parameter.
Proposition 2.10 ([17])
If the SFP is consistent, then the strong limit exists and is the minimum-norm solution of the SFP.
Proposition 2.11 ([17])
A necessary and sufficient condition for to converge in norm as is that the minimum
is attained at a point in the set .
Remark 2.12 ([17])
Assume that the SFP is consistent, and let be its minimum-norm solution, namely has the property
From (2.2), observing that the gradient
is an -Lipschitzian and α-strongly monotone mapping, the mapping is a contraction with the coefficient
where
Remark 2.13 The mapping is nonexpansive.
In fact, we have seen that is -inverse strongly monotone and is -inverse strongly monotone; by Proposition 2.5(iii), the complement is -averaged. Therefore, noting that is -averaged and applying Proposition 2.6(iv), we know that for each , is α-averaged, with
Hence, it is clear that T is nonexpansive.
Lemma 2.14 ([17])
Assume that the SFP (1.1) is consistent. Define a sequence by the iterative algorithm
where and satisfy the following conditions:
-
(i)
for all n;
-
(ii)
and ;
-
(iii)
;
-
(iv)
.
Then converges in norm to the minimum-norm solution of the SFP (1.1).
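To make the role of the vanishing regularization concrete, here is an illustrative sketch in the spirit of Lemma 2.14 (toy data and parameter choices of ours): a projected gradient step on the SFP objective plus a Tikhonov term with a slowly vanishing α_n drives the iterates toward the minimum-norm solution.

```python
import numpy as np

# Sketch of a regularized scheme in the spirit of Lemma 2.14 (toy data):
#   x_{n+1} = P_C( x_n - gamma * (grad_f(x_n) + alpha_n * x_n) ),
# with f(x) = (1/2)||(I - P_Q) A x||^2 and alpha_n -> 0 slowly.
# Toy SFP: C = [-5,5]^2, A = [1 1], Q = [1, +inf).  Solutions are the
# points of C with x1 + x2 >= 1; the minimum-norm solution is (0.5, 0.5).

A = np.array([[1.0, 1.0]])
proj_C = lambda x: np.clip(x, -5.0, 5.0)
proj_Q = lambda y: np.maximum(y, 1.0)

x = np.array([4.0, -2.0])
gamma = 0.25                                  # fixed step; ||A||^2 = 2
for n in range(5000):
    alpha_n = 1.0 / np.sqrt(n + 1.0)          # vanishing regularization
    grad_f = A.T @ (A @ x - proj_Q(A @ x))    # gradient of the SFP objective
    x = proj_C(x - gamma * (grad_f + alpha_n * x))
```

Without the α_n term the iterates could stop at any feasible point; the regularization is what selects the minimum-norm one.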
Lemma 2.15 ([18])
Let and be bounded sequences in a Banach space X and let be a sequence in . Suppose that for all and . Then, .
Lemma 2.16 ([19])
Let K be a nonempty closed convex subset of a real Hilbert space H and be a nonexpansive mapping with . If is a sequence in K weakly converging to x and if converges strongly to y, then ; in particular, if , then .
Lemma 2.17
Assume is a sequence of nonnegative real numbers such that
where is a sequence in and is a sequence such that
-
(1)
,
-
(2)
or .
Then .
3 Main results
In this section, we introduce two algorithms for solving the mixed equilibrium problem (1.3). Namely, we want to find a solution of the mixed equilibrium problem (1.3) that also solves the following variational inequality:
where B is a k-Lipschitz and η-strongly monotone operator on H with , and , and is a β-contraction mapping, . Let be two bifunctions. In order to find a particular solution of the variational inequality (3.1), we construct the following implicit algorithm.
Algorithm 3.1 For an arbitrary initial point , we define a sequence iteratively
for all , where is a real sequence in , is defined by Lemma 2.8 and is introduced in Remark 2.12.
We show that the sequence defined by (3.2) converges to a particular solution of the variational inequality (3.1). As a matter of fact, in this paper, we study a general algorithm for solving the variational inequality (3.1).
Let be a β-contraction mapping. For each , we consider the following mapping given by:
Lemma 3.2 is a contraction. Indeed,
where , and the sequences and satisfy conditions (i)-(iv) of Lemma 2.14.
Proof It is clear that is a self-mapping. Observe that
Let and . Then we obtain
Note that and are nonexpansive, is a contraction mapping with the coefficient and . Hence, , we obtain
Therefore, is a contraction mapping when . □
From Lemma 3.2 and using the Banach contraction principle, there exists a unique fixed point of in C, i.e., we obtain the following algorithm.
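The Banach contraction principle invoked here guarantees a unique fixed point, and Picard iteration reaches it at a geometric rate governed by the contraction constant. A minimal one-dimensional sketch (the toy contraction and names are ours):

```python
import math

# Picard iteration for a beta-contraction: g(t) = 0.5*cos(t) is a
# (1/2)-contraction on R, so it has a unique fixed point t* = g(t*),
# and t_{n+1} = g(t_n) converges to t* from any starting point.

def picard(g, x0, tol=1e-13, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

g = lambda t: 0.5 * math.cos(t)
p1 = picard(g, 0.0)
p2 = picard(g, 100.0)   # different start, same limit (uniqueness)
```

Starting points far apart produce the same limit, which is the uniqueness part of the principle.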
Algorithm 3.3 For an arbitrary initial point , we define a sequence iteratively
for all , where and are two real sequences in , is defined by Lemma 2.8 and is introduced in Remark 2.12.
At this point, we would like to point out that Algorithm 3.3 includes Algorithm 3.1 as a special case because the contraction g may be a nonself-mapping.
Theorem 3.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with , and , and let the sequences and satisfy conditions (i)-(iv) of Lemma 2.14. Let be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) of Lemma 2.8. Let be a β-contraction. Assume . Then the sequence generated by the implicit Algorithm 3.3 converges in norm, as , to the unique solution of the variational inequality (3.1). In particular, if we take , then the sequence defined by Algorithm 3.1 converges in norm, as , to the unique solution of the following variational inequality:
Proof We divide the proof into several steps.
Step 1. We prove that the sequence is bounded.
Set for all . Take . It is clear that . From Remark 2.13, we know that is nonexpansive, then we have
From (3.5), (3.6) and the fact that is nonexpansive, it follows that
It follows by induction that
This indicates that is bounded. It is easy to deduce that and are also bounded.
Now, we can choose a constant such that
Step 2. We prove that .
From (3.5), (3.6) and the fact that is nonexpansive, we have
Note that is an -Lipschitzian and α-strongly monotone mapping. From Lemma 2.8, (3.5) and (3.6), we have
which implies that
By (3.7) and (3.8), we obtain
It follows that
This together with and implies that
Setting , we have
From , is bounded and (3.9), we obtain
By (3.9) and (3.10), we also have
Step 3. We prove .
By (3.4) and (3.5), we deduce
It follows that
Since is bounded, without loss of generality, we may assume that converges weakly to a point . Hence, and .
Step 4. We show .
Since , for any , we obtain
From the monotonicity of and , we get
Hence,
Since and , from (A4), it follows that for all . Put for all and ; then we have . So, from (A1) and (A4), we have
and hence . From (A3), we have for all . Therefore, .
Next, we prove .
From Remark 2.13, we know that is nonexpansive, then we have
So, from , , , and the boundedness of , it follows that
Thus, taking and into account, and applying Lemma 2.16, we get . Therefore, we have . This shows that
Step 5. .
We substitute for z in (3.12) to get
Hence, the weak convergence of implies that strongly.
Now, we return to (3.12) and take the limit as to obtain
In particular, solves the following variational inequality:
or the equivalent dual variational inequality
Therefore, . That is, is the unique fixed point in Ω of the contraction . □
Remark 3.5 If we take , then (3.15) is reduced to
Equivalently,
This clearly implies that
Therefore, is a particular solution of the variational inequality (3.1).
Next, we introduce an explicit algorithm for finding a solution of the variational inequality (3.1). This scheme is obtained by discretizing the implicit scheme (3.3). We show the strong convergence of this algorithm.
Theorem 3.6 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with , and , and let the sequences and satisfy conditions (i)-(iv) of Lemma 2.14. Let be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) of Lemma 2.8. Let be a β-contraction. Assume . For a given , let the sequence be generated by
where and are two sequences in satisfying the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
.
Then the sequence converges strongly to , which is the unique solution of the variational inequality (3.1). In particular, if , then the sequence generated by
converges strongly to a solution of the following variational inequality:
Proof First, we prove that the sequence is bounded. Indeed, pick .
Let . Set for all . From (3.16), we have
and
By induction, we have, ,
Hence, is bounded. Consequently, we deduce that , and are all bounded. Let be a constant such that
Next, we show .
Define for all . It follows from (3.16) that
This together with (i) implies that
Hence, by Lemma 2.15, we get . Consequently,
By the convexity of the norm , we obtain
Letting and using , we obtain
Thus, we deduce
By (3.17) and (3.19), we obtain
It follows that
Since , , , is bounded and , we derive that
Setting , from (3.16), we have
Thus,
From and the boundedness of , we obtain
By (3.20) and (3.21), we also have
Next, we prove
Indeed, we can choose a subsequence of such that
Without loss of generality, we may further assume that . By the same argument as in Step 4 of the proof of Theorem 3.4, we can deduce that . Therefore,
From (3.16), we have
where and . It is clear that and . Hence, all the conditions of Lemma 2.17 are satisfied, and we immediately deduce that . This completes the proof. □
Remark 3.7 If we take , then by an argument similar to that of Theorem 3.6, we deduce immediately that is a particular solution of the variational inequality (3.1).
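For orientation, the explicit scheme of Theorem 3.6 generalizes the classical viscosity approximation of Moudafi and Xu [21], x_{n+1} = α_n g(x_n) + (1 − α_n) T x_n with α_n → 0, which already exhibits the selection property: the limit is the fixed point of T that solves a variational inequality governed by the contraction. A toy sketch (our choices of T, g and α_n):

```python
import numpy as np

# Classical viscosity approximation (the scheme Theorem 3.6 generalizes):
#   x_{n+1} = alpha_n * g(x_n) + (1 - alpha_n) * T(x_n),  alpha_n -> 0.
# Toy setup: T = projection onto C = [1,2]^2 (so Fix(T) = C) and
# g = the constant map 0 (a contraction).  The limit should then be the
# point of Fix(T) nearest the anchor 0, namely (1, 1).

T = lambda x: np.clip(x, 1.0, 2.0)      # nonexpansive, Fix(T) = [1,2]^2
g = lambda x: np.zeros_like(x)          # constant map: a 0-contraction

x = np.array([5.0, -4.0])
for n in range(20000):
    alpha_n = 1.0 / (n + 2.0)
    x = alpha_n * g(x) + (1.0 - alpha_n) * T(x)
```

Any fixed point of T would satisfy the iteration approximately for small α_n; the viscosity term singles out the one solving the variational inequality, here the minimum-norm fixed point (1, 1).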
4 Application in the multiple-set split feasibility problem
Recall that the multiple-set split feasibility problem (MSSFP) [4] is to find a point such that
where are integers, and are closed convex subsets of Hilbert spaces and , and is a bounded linear operator. The special case where , called the split feasibility problem (1.1), was introduced by Censor and Elfving [1] for modeling phase retrieval and other image restoration problems.
Let Γ be the solution set of SFP, and let . Assume that . Thus, which implies the equation which in turn implies the equation , and hence the fixed point equation . Requiring that , we consider the fixed point equation
It is claimed that the solutions of the fixed point equation (4.2) are exactly the solutions of the SFP. According to Byrne [2] and Xu [17], we obtain the following proposition.
Proposition 4.1 Given , solves the SFP if and only if solves the fixed point equation (4.2).
From this proposition, we can easily obtain that MSSFP (4.1) is equivalent to a common fixed point problem of finitely many nonexpansive mappings, as we show in the following.
Decompose MSSFP into N subproblems ():
Next, we define a mapping as follows:
where the proximity function g is defined by
where are such that . Consider the minimization of g over C:
Observe that the gradient ∇g is
which is L-Lipschitz continuous with the constant and thus is -ism. It is claimed that if , is nonexpansive. Therefore, fixed point algorithms for nonexpansive mappings can be applied to MSSFP (4.1).
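The proximity-function approach just described can be sketched numerically. In the standard formulation (cf. Censor et al. [4]), the proximity function is g(x) = ½Σᵢ aᵢ‖x − P_{Cᵢ}x‖² + ½Σⱼ bⱼ‖Ax − P_{Qⱼ}Ax‖² with gradient ∇g(x) = Σᵢ aᵢ(x − P_{Cᵢ}x) + Σⱼ bⱼA*(Ax − P_{Qⱼ}Ax). The following toy instance (our sets, weights, and names) runs gradient projection on it:

```python
import numpy as np

# Gradient projection for a toy MSSFP with proximity function
#   g(x) = (1/2) * sum_i a_i ||x - P_{C_i} x||^2
#        + (1/2) * sum_j b_j ||A x - P_{Q_j} A x||^2,
#   grad g(x) = sum_i a_i (x - P_{C_i} x) + sum_j b_j A^T (A x - P_{Q_j} A x).
# Toy instance in R^2: C_1 = [-1,1]^2, C_2 = ball of radius 1.5,
# A = [1 1], Q_1 = [2, +inf).  The unique common point is (1, 1).

def proj_ball(x, radius):
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

A = np.array([[1.0, 1.0]])
proj_C1 = lambda x: np.clip(x, -1.0, 1.0)
proj_C2 = lambda x: proj_ball(x, 1.5)
proj_Q1 = lambda y: np.maximum(y, 2.0)
a1 = a2 = b1 = 1.0 / 3.0                 # positive weights summing to 1

def grad_g(x):
    return (a1 * (x - proj_C1(x))
            + a2 * (x - proj_C2(x))
            + b1 * (A.T @ (A @ x - proj_Q1(A @ x))))

x = np.array([-2.0, 0.5])
gamma = 1.0     # step below 2/L, with L <= a1 + a2 + b1 * ||A||^2 = 4/3
for _ in range(3000):
    x = np.clip(x - gamma * grad_g(x), -3.0, 3.0)   # main set [-3, 3]^2
```

Since g is convex with minimum value zero exactly on the MSSFP solution set, the iterates approach the unique common point.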
From Algorithm 3.1, Algorithm 3.3 and Proposition 4.1, we consider our results on the optimization method for solving MSSFP (4.1), and obtain the following two algorithms.
Algorithm 4.2 For an arbitrary initial point , we define a sequence iteratively
for all , where is defined by Lemma 2.8 and ∇g is introduced in (4.3).
Algorithm 4.3 For an arbitrary initial point , we define a sequence iteratively
for all , where are two real sequences in , is defined by Lemma 2.8 and ∇g is introduced in (4.3).
In addition, we would like to point out that Algorithm 4.3 includes Algorithm 4.2 as a special case because the contraction f may be a nonself-mapping. According to Theorem 3.4, we obtain the following theorem.
Theorem 4.4 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with , and , and let the sequences and satisfy conditions (i)-(iv) of Lemma 2.14. Let be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) of Lemma 2.8. Let be a β-contraction. Assume , where Γ is the solution set of MSSFP (4.1). Then the sequence generated by the implicit Algorithm 4.3 converges in norm, as , to the unique solution of the variational inequality (3.1). In particular, if we take , then the sequence defined by Algorithm 4.2 converges in norm, as , to the unique solution of the variational inequality (3.1).
Proof Let
Then, as the composition of finitely many nonexpansive mappings, U is nonexpansive. Also Algorithm 4.3 can be written as
Since and U are nonexpansive, following the proof of Theorem 3.4, we obtain that the sequence converges strongly to a fixed point of U, which is also a common fixed point of , that is, a solution of MSSFP (4.1). □
From Theorem 3.6, we introduce an explicit algorithm for finding a common fixed point, solving the variational inequality (3.1) and the multiple-set split feasibility problem (4.1). This scheme is obtained by discretizing the implicit scheme (4.8).
Theorem 4.5 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with , and , and let the sequences and satisfy conditions (i)-(iv) of Lemma 2.14. Let be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) of Lemma 2.8. Let be a β-contraction. Assume , where Γ is the solution set of MSSFP (4.1). For a given , let the sequence be generated by
where and are two sequences in satisfying the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
.
Then the sequence converges strongly to , which is the unique solution of the variational inequality (3.1). In particular, if , then the sequence generated by
converges strongly to a solution of the following variational inequality:
Proof Under the assumption of (4.6), (4.8) can be written as
Since and U are nonexpansive, following the proof of Theorem 3.6, we deduce that the sequence converges strongly to a common fixed point of and U, which solves the mixed equilibrium problem () and MSSFP (4.1). □
According to [22], we can obtain the following proposition.
Proposition 4.6 is a solution of MSSFP (4.1) if and only if .
Observe that if MSSFP (4.1) is consistent, then any solution x is a minimizer of f with minimum value zero. Note that the proximity function f is as follows:
where for all and for all . Then the gradient of f is
It is claimed that the gradient ∇f is Lipschitz with the constant
To see this, we notice that projections and their complements are nonexpansive. Thus, both and are nonexpansive for each i and j. In addition, we can easily obtain that is a nonexpansive mapping. Therefore, we can use the gradient projection method to solve the minimization problem:
where Ω is a closed convex subset of whose intersection with the solution set of the MSSFP is nonempty, and obtain a solution of the so-called constrained multiple-set split feasibility problem (CMSSFP):
From Proposition 4.6 and Algorithm 3.3, we obtain the corresponding algorithm and the convergence theorems for MSSFP (4.1).
Algorithm 4.7 For an arbitrary initial point , we define a sequence iteratively
for all , where and are two real sequences in , is defined by Lemma 2.8 and ∇f is introduced in (4.9).
Theorem 4.8 Let C be a nonempty closed convex subset of a real Hilbert space H. Let B be a k-Lipschitz and η-strongly monotone operator on H with , and , and let the sequences and satisfy conditions (i)-(iv) of Lemma 2.14. Let be two bifunctions which satisfy conditions (f1)-(f3), (h1)-(h3) and (H) of Lemma 2.8. Let be a β-contraction. Assume , where Γ is the solution set of MSSFP (4.1). For a given , let the sequence be generated by Algorithm 4.7, where and are two sequences in satisfying the following conditions:
-
(i)
and ;
-
(ii)
;
-
(iii)
.
Then the sequence converges strongly to which is the unique solution of the variational inequality (3.1). In particular, if , then the sequence generated by
converges strongly to a solution of the following variational inequality:
Proof From Proposition 4.6, we know that is a nonexpansive mapping. Thus, following the proof of Theorem 3.4, we obtain that the sequence converges strongly to a fixed point of , that is, a solution of MSSFP (4.1); this fixed point is also a solution of the mixed equilibrium problem (1.3). □
References
Censor Y, Elfving T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8: 221–239. 10.1007/BF02142692
Byrne C: Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18: 441–453. 10.1088/0266-5611/18/2/310
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20: 103–120.
Censor Y, Elfving T, Kopf N, Bortfeld T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21: 2071–2084. 10.1088/0266-5611/21/6/017
Lopez G, Martin V, Xu HK: Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Edited by: Censor Y, Jiang M, Wang G. Medical Physics Publishing, Madison; 2009:243–279.
Qu B, Xiu N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21: 1655–1665. 10.1088/0266-5611/21/5/009
Wang F, Xu HK: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011. doi:10.1016/j.na.2011.03.044
Xu HK: A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22: 2021–2034. 10.1088/0266-5611/22/6/007
Jaiboona C, Kumam P: A general iterative method for addressing mixed equilibrium problems and optimization problems. Nonlinear Anal. 2010, 73: 1180–1202. 10.1016/j.na.2010.04.041
Kumam P, Jaiboon C: Approximation of common solutions to system of mixed equilibrium problems, variational inequality problem, and strict pseudo-contractive mappings. Fixed Point Theory Appl. 2011., 2011: Article ID 347204
Saewan S, Kumam P: A modified hybrid projection method for solving generalized mixed equilibrium problems and fixed point problems in Banach spaces. Comput. Math. Appl. 2011, 62: 1723–1735. 10.1016/j.camwa.2011.06.014
Saewan S, Kumam P: Convergence theorems for mixed equilibrium problems, variational inequality problem and uniformly quasi-asymptotically nonexpansive mappings. Appl. Math. Comput. 2011, 218: 3522–3538. 10.1016/j.amc.2011.08.099
Xie DP: Auxiliary principle and iterative algorithm for a new system of generalized mixed equilibrium problems in Banach spaces. Appl. Math. Comput. 2011, 218: 3507–3514. 10.1016/j.amc.2011.08.097
Yao YH, Noor MA, Liou YC, Kang SM: Some new algorithms for solving mixed equilibrium problems. Comput. Math. Appl. 2010, 60: 1351–1359. 10.1016/j.camwa.2010.06.016
Goebel K, Kirk WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics 28. Cambridge University Press, Cambridge; 1990.
Cianciaruso F, Marino G, Muglia L, Hong Y: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010., 2010: Article ID 383740
Xu HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010., 26: Article ID 105018
Nadezhkina N, Takahashi W: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006, 128: 191–201. 10.1007/s10957-005-7564-z
Browder FE: Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53: 1272–1276.
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256.
Xu HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
Yao YH, Chen RD, Marino G, Liou YC: Applications of fixed-point and optimization methods to the multiple-set split feasibility problem. J. Appl. Math. 2012., 2012: Article ID 927530
Acknowledgements
This work is supported in part by National Natural Science Foundation of China (71272148), the Ph.D. Programs Foundation of Ministry of Education of China (20120032110039) and China Postdoctoral Science Foundation (Grant No. 20100470783).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Deng, BC., Chen, T. & Dong, QL. Viscosity iteration methods for a split feasibility problem and a mixed equilibrium problem in a Hilbert space. Fixed Point Theory Appl 2012, 226 (2012). https://doi.org/10.1186/1687-1812-2012-226