The contraction-proximal point algorithm with square-summable errors

Abstract

In this paper, we study the contraction-proximal point algorithm for approximating a zero of a maximal monotone mapping. The norm convergence of this algorithm is established under two new conditions, which extends a recent result of Ceng, Wu and Yao to a more general setting.

MSC: 47J20, 49J40, 65J15, 90C25.

1 Introduction

We consider the problem of finding $\hat{x} \in H$ such that

$$0 \in A\hat{x},$$

where $H$ is a Hilbert space and $A : H \to 2^H$ is a given maximal monotone mapping. This problem is important owing to its various applications in concrete disciplines, including convex programming and variational inequalities. A classical way to solve such a problem is the proximal point algorithm (PPA) [1]. For any initial guess $x_0 \in H$, the PPA generates an iterative sequence as

$$x_{n+1} = J_{c_n}(x_n + e_n),$$
(1)

where $J_{c_n}$ denotes the resolvent of $A$ with parameter $c_n > 0$ and $(e_n)$ is the error sequence. In general, the following accuracy criterion on the error sequence:

$$\|e_n\| \le \epsilon_n \quad \text{with } \sum_{n=0}^{\infty} \epsilon_n < \infty,$$
(I)

is needed to ensure the convergence of the PPA. In [1], Rockafellar also presented another accuracy criterion on the error sequence:

$$\|e_n\| \le \eta_n \|\tilde{x}_n - x_n\| \quad \text{with } \sum_{n=0}^{\infty} \eta_n < \infty,$$

where

$$\tilde{x}_n = J_{c_n}(x_n + e_n).$$
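To make iteration (1) concrete, here is a minimal numerical sketch. The scalar operator $A(x) = x$ is an illustrative assumption (not the paper's setting), chosen because its resolvent has the closed form $J_c(x) = (I + cA)^{-1}(x) = x/(1+c)$; with a summable error sequence as in criterion (I), the iterates approach the unique zero $x = 0$:

```python
# Inexact PPA (1) for the toy operator A(x) = x on the real line
# (an illustrative assumption), whose resolvent is J_c(x) = x / (1 + c).

def resolvent(c, x):
    # J_c = (I + cA)^{-1} for A(x) = x
    return x / (1.0 + c)

def ppa(x0, c=1.0, steps=50, errors=None):
    # errors: an optional summable error sequence (e_n), as in criterion (I)
    x = x0
    for n in range(steps):
        e = errors[n] if errors is not None else 0.0
        x = resolvent(c, x + e)  # x_{n+1} = J_{c_n}(x_n + e_n)
    return x

# With summable errors e_n = 2^{-n}, the iterates still approach the
# unique zero x = 0 of A.
approx = ppa(10.0, c=1.0, steps=60, errors=[2.0 ** -n for n in range(60)])
```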

This criterion was then improved by Han and He [2] as

$$\|e_n\| \le \eta_n \|\tilde{x}_n - x_n\| \quad \text{with } \sum_{n=0}^{\infty} \eta_n^2 < \infty.$$
(II)

It is well known that the PPA does not necessarily converge strongly [3]. How to modify the PPA so that strong convergence is guaranteed has therefore attracted the attention of many researchers (see, e.g., [4–8]). In particular, one such modification has the following scheme:

$$x_{n+1} = \lambda_n u + (1 - \lambda_n) J_{c_n}(x_n + e_n),$$
(2)

where $u \in H$ is fixed and $(\lambda_n)$ is a real sequence. This algorithm, introduced independently by Xu [8] and Kamimura-Takahashi [5], is known as the contraction-proximal point algorithm (CPPA) [9]; it is indeed a combination of Halpern’s iteration and the PPA. There are various conditions that ensure the norm convergence of the CPPA under criterion (I) (cf. [7, 10–12]), and the weakest one so far may be the following [13]:

  1. (i)

    $c_n \ge c > 0$;

  2. (ii)

    $\lim_{n\to\infty} \lambda_n = 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$;

  3. (iii)

    $\|e_n\| \le \eta_n$, $\sum_{n=0}^{\infty} \eta_n < \infty$.
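The scheme (2) under conditions (i)–(iii) can be sketched numerically. The operator $A(x) = x$ and the parameter choices $\lambda_n = 1/(n+2)$, $\eta_n = 1/(n+2)^2$ below are illustrative assumptions, not taken from the paper:

```python
# CPPA (2) under conditions (i)-(iii) for the toy operator A(x) = x
# (resolvent x / (1 + c)); lambda_n and eta_n are illustrative choices.

def cppa(x0, u, c=1.0, steps=2000):
    x = x0
    for n in range(steps):
        lam = 1.0 / (n + 2)        # lambda_n -> 0, sum lambda_n = infinity
        eta = 1.0 / (n + 2) ** 2   # sum eta_n < infinity
        e = eta                    # |e_n| <= eta_n, as in criterion (I)
        x = lam * u + (1 - lam) * (x + e) / (1.0 + c)
    return x
```

Since $S = \{0\}$ here, $P_S(u) = 0$ for every anchor $u$, so the iterates should approach $0$ regardless of $u$ and $x_0$.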

Let us now turn our attention to the CPPA under criterion (II). In this situation, Ceng, Wu and Yao [14] obtained the norm convergence under the following conditions:

  1. (i)

    $\lim_{n\to\infty} c_n = \infty$;

  2. (ii)

    $\lim_{n\to\infty} \lambda_n = 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$;

  3. (iii)

    $\|e_n\| \le \eta_n \|\tilde{x}_n - x_n\|$ with $\sum_{n=0}^{\infty} \eta_n^2 < \infty$.

In the hypotheses above, the sequence $(c_n)$ is assumed to tend to infinity, so it is natural to ask whether norm convergence is still guaranteed for bounded $(c_n)$, in particular for a constant sequence. In the present paper, we answer this question affirmatively and relax the condition $\lim_{n\to\infty} c_n = \infty$ to the more general case:

$$c_n \ge c > 0,$$

that is, we only assume that the sequence $(c_n)$ is bounded below away from zero. The paper is organized as follows. In Section 2, we prove two lemmas that are useful for establishing the boundedness of the iterates. In Section 3, we establish norm convergence of the CPPA under two different sets of conditions, thereby extending the corresponding result of [14].

2 Some lemmas

We denote by ‘$\to$’ strong convergence and by ‘$\rightharpoonup$’ weak convergence. An operator $A : H \to 2^H$ is called monotone if

$$\langle u - v, x - y \rangle \ge 0,$$

for any $u \in Ax$, $v \in Ay$; it is maximal monotone if its graph

$$G(A) = \{(x, y) : x \in D(A),\ y \in Ax\}$$

is not properly contained in the graph of any other monotone operator.

Let $C$ be a nonempty, closed and convex subset of $H$. We use $P_C$ to denote the projection from $H$ onto $C$; namely, for $x \in H$, $P_C x$ is the unique point in $C$ with the property:

$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$

It is well known that $P_C x$ is characterized by

$$\langle x - P_C x, z - P_C x \rangle \le 0, \quad \forall z \in C.$$
(3)

A mapping $T : H \to H$ is called firmly nonexpansive if

$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2$$

for all $x, y \in H$. Here and hereafter, we denote by

$$J_c = (I + cA)^{-1}$$

the resolvent of $A$, where $c > 0$ and $I$ is the identity operator. The zero set of $A$ is denoted by $S := \{x \in D(A) : 0 \in Ax\}$. The resolvent operator has the following properties (see [15]).
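As a concrete instance of a resolvent, one may take $A = \partial|\cdot|$ on the real line, the subdifferential of the absolute value (an illustrative assumption, not from the paper); then $J_c$ is the classical soft-thresholding map, with $S = \{0\}$, and its firm nonexpansiveness can be checked numerically:

```python
import math

# Resolvent of A = subdifferential of |.| on the real line:
# the classical soft-thresholding map, with S = Fix(J_c) = {0}.

def soft_threshold(c, x):
    # J_c = (I + c * d|.|)^{-1}
    return math.copysign(max(abs(x) - c, 0.0), x)

def firm_gap(c, x, y):
    # Nonnegative whenever J_c is firmly nonexpansive:
    # ||Tx - Ty||^2 <= ||x - y||^2 - ||(I - T)x - (I - T)y||^2
    tx, ty = soft_threshold(c, x), soft_threshold(c, y)
    return (x - y) ** 2 - ((x - tx) - (y - ty)) ** 2 - (tx - ty) ** 2
```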

Lemma 1 Let A be a maximal monotone operator. Then

  1. (i)

    $D(J_c) = H$;

  2. (ii)

    $J_c$ is single-valued and firmly nonexpansive;

  3. (iii)

    $\operatorname{Fix}(J_c) = S$, where $\operatorname{Fix}(J_c)$ denotes the fixed point set of $J_c$;

  4. (iv)

    its graph $G(A)$ is weak-to-strong closed in $H \times H$.

Since $J_c$ is firmly nonexpansive, it follows that

$$\|J_c x - z\|^2 \le \|x - z\|^2 - \|(I - J_c)x\|^2$$
(4)

for all $x \in H$ and all $z \in S$. In what follows, we present two lemmas that are useful for proving the boundedness of the iterative sequence.

Lemma 2 Given $\beta > 0$, let $(s_n)$ be a nonnegative real sequence satisfying

$$s_{n+1} \le (1 - \lambda_n)(1 + \epsilon_n) s_n + \lambda_n \beta,$$

where $(\lambda_n) \subset (0, 1)$ and $(\epsilon_n) \in \ell^1$ are real sequences. Then $(s_n)$ is bounded; more precisely,

$$s_n \le \max\{\beta, s_0\} \exp\left(\sum_{n=0}^{\infty} \epsilon_n\right) < \infty.$$

Proof

We first show the following estimates:

$$s_{n+1} \le \max\{\beta, s_0\} \prod_{k=0}^{n} (1 + \epsilon_k), \quad n \ge 0.$$
(5)

For n=0, we have

$$s_1 \le (1 - \lambda_0)(1 + \epsilon_0) s_0 + \lambda_0 \beta \le (1 + \epsilon_0)\left[(1 - \lambda_0) s_0 + \lambda_0 \beta\right] \le \max\{\beta, s_0\}(1 + \epsilon_0).$$

Assume $s_n \le \max\{\beta, s_0\} \prod_{k=0}^{n-1}(1 + \epsilon_k)$. Since $\max\{\beta, s_0\} \prod_{k=0}^{n-1}(1 + \epsilon_k) \ge \beta$, we have

$$s_{n+1} \le (1 + \epsilon_n)(1 - \lambda_n) s_n + \lambda_n \beta \le (1 + \epsilon_n)\left[(1 - \lambda_n) s_n + \lambda_n \beta\right] \le (1 + \epsilon_n) \max\{s_0, \beta\} \prod_{k=0}^{n-1}(1 + \epsilon_k) = \max\{\beta, s_0\} \prod_{k=0}^{n}(1 + \epsilon_k).$$

We thus verify inequality (5) by induction. Hence,

$$s_{n+1} \le \max\{\beta, s_0\} \prod_{k=0}^{n}(1 + \epsilon_k) = \max\{\beta, s_0\} \exp\left(\sum_{k=0}^{n} \ln(1 + \epsilon_k)\right) \le \max\{\beta, s_0\} \exp\left(\sum_{k=0}^{\infty} \epsilon_k\right) < \infty,$$

where the last inequality follows from the elementary inequality $\ln(1 + x) \le x$ for all $x \ge 0$. □
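The bound of Lemma 2 can be probed numerically. In the sketch below, the recursion is run with equality, and the parameter choices (random $\lambda_n$, $\epsilon_n = 1/(n+1)^2$) are illustrative assumptions:

```python
import math
import random

# Numerical probe of Lemma 2: run the recursion
#   s_{n+1} = (1 - lambda_n)(1 + eps_n) s_n + lambda_n * beta
# with summable eps_n and compare against the bound
#   max{beta, s_0} * exp(sum eps_n).

random.seed(0)
beta = 5.0
s = [40.0]
for n in range(200):
    lam = random.uniform(0.01, 0.99)   # (lambda_n) in (0, 1)
    eps = 1.0 / (n + 1) ** 2           # summable (eps_n)
    s.append((1 - lam) * (1 + eps) * s[-1] + lam * beta)

# pi^2 / 6 is the exact value of sum_{n >= 1} 1 / n^2, which dominates
# the finite sum of eps_n used above.
bound = max(beta, s[0]) * math.exp(math.pi ** 2 / 6)
```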

Lemma 3 Given $\beta > 0$, let $(s_n)$ be a nonnegative real sequence satisfying

$$s_{n+1} \le (1 - \lambda_n)(1 + \epsilon_n) s_n + \lambda_n \beta,$$

where $(\lambda_n) \subset (0, 1)$ and $(\epsilon_n) \subset [0, \infty)$ are real sequences. If $2\epsilon_n(1 - \lambda_n) \le \lambda_n$, then $(s_n)$ is bounded; more precisely, $s_n \le \max\{2\beta, s_0\} < \infty$.

Proof Let $\tau_n = \lambda_n - \epsilon_n(1 - \lambda_n)$. Then $\tau_n \in (0, 1)$. It follows that

$$s_{n+1} \le (1 + \epsilon_n)(1 - \lambda_n) s_n + \lambda_n \beta = (1 - \tau_n) s_n + \tau_n(\lambda_n \beta / \tau_n) \le \max\{\lambda_n \beta / \tau_n, s_n\}.$$

Since $2\epsilon_n(1 - \lambda_n) \le \lambda_n$, we have

$$\frac{\lambda_n}{\tau_n} = \frac{\lambda_n}{\lambda_n - \epsilon_n(1 - \lambda_n)} \le 2,$$

which implies that

$$s_{n+1} \le \max\{2\beta, s_n\}.$$

By induction, we obtain the desired result. □
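Lemma 3 admits a similar numerical check, with $\epsilon_n$ sampled so that the hypothesis $2\epsilon_n(1 - \lambda_n) \le \lambda_n$ holds (the concrete values below are illustrative assumptions):

```python
import random

# Numerical probe of Lemma 3: eps_n is sampled so that
# 2 * eps_n * (1 - lam_n) <= lam_n, and the iterates of the recursion
# should stay below max{2 * beta, s_0}.

random.seed(1)
beta = 3.0
s = [1.0]
for n in range(500):
    lam = random.uniform(0.05, 0.9)
    eps = random.random() * lam / (2 * (1 - lam))  # 2*eps*(1-lam) <= lam
    s.append((1 - lam) * (1 + eps) * s[-1] + lam * beta)

cap = max(2 * beta, s[0])
```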

We end this section with two useful lemmas. The first is due to Maingé [16] and the second to Xu [8].

Lemma 4 Let $(s_n)$ be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence $(s_{n_k})$ such that

$$s_{n_k} \le s_{n_k + 1} \quad \text{for all } k \ge 0.$$

For every $n > n_0$, define the integer sequence $(\tau(n))$ by

$$\tau(n) = \max\{n_0 \le k \le n : s_k < s_{k+1}\}.$$

Then $\tau(n) \to \infty$ as $n \to \infty$ and, for all $n > n_0$,

$$\max(s_{\tau(n)}, s_n) \le s_{\tau(n)+1}.$$
(6)

Lemma 5 Let $(s_n), (c_n) \subset \mathbb{R}_+$, $(\lambda_n) \subset (0, 1)$ and $(b_n) \subset \mathbb{R}$ be sequences such that

$$s_{n+1} \le (1 - \lambda_n) s_n + b_n + c_n \quad \text{for all } n \ge 0.$$

If $\lambda_n \to 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$, $\sum_{n=0}^{\infty} c_n < \infty$ and $\limsup_{n\to\infty} b_n / \lambda_n \le 0$, then $\lim_{n\to\infty} s_n = 0$.

3 Convergence analysis

In what follows, we assume that $A$ is a maximal monotone mapping whose zero set $S$ is nonempty. To establish the convergence, we need the following lemma, which was in fact proved in [2]. We present here a different proof, based mainly on the firm nonexpansiveness of the resolvent.

Lemma 6 Let $\eta \in (0, 1/2)$, $x, e \in H$ and $\tilde{x} := J_c(x + e)$. If $\|e\| \le \eta \|x - \tilde{x}\|$, then

$$\|\tilde{x} - z\|^2 \le \left(1 + (2\eta)^2\right) \|x - z\|^2 - \frac{1}{2} \|\tilde{x} - x\|^2, \quad \forall z \in S.$$
(7)

Proof Since $z \in \operatorname{Fix}(J_c)$, it follows from (4) that

$$\|\tilde{x} - z\|^2 \le \|x + e - z\|^2 - \|x + e - J_c(x + e)\|^2 = \|(x - z) + e\|^2 - \|(x - \tilde{x}) + e\|^2 = \|x - z\|^2 + 2\langle \tilde{x} - z, e \rangle - \|\tilde{x} - x\|^2.$$
(8)

By using the inequality $2\langle a, b \rangle \le 2\eta^2 \|a\|^2 + \|b\|^2 / (2\eta^2)$, we have

$$2\langle \tilde{x} - z, e \rangle \le 2\eta^2 \|\tilde{x} - z\|^2 + \frac{1}{2\eta^2} \|e\|^2.$$

Substituting this into (8) and noting $\|e\| \le \eta \|x - \tilde{x}\|$, we see that

$$\|\tilde{x} - z\|^2 \le \|x - z\|^2 + 2\eta^2 \|\tilde{x} - z\|^2 - \frac{1}{2} \|\tilde{x} - x\|^2,$$

from which it follows that

$$\|\tilde{x} - z\|^2 \le \left(1 + \frac{2\eta^2}{1 - 2\eta^2}\right) \|x - z\|^2 - \frac{1}{2(1 - 2\eta^2)} \|\tilde{x} - x\|^2.$$

Consequently, since $\eta \in (0, 1/2)$, we have $\frac{2\eta^2}{1 - 2\eta^2} \le (2\eta)^2$ and $\frac{1}{2(1 - 2\eta^2)} \ge \frac{1}{2}$, which yields the desired inequality (7). □

We are now ready to prove our main results.

Theorem 1 For any $x_0 \in H$, the sequence $(x_n)$ generated by

$$\tilde{x}_n = J_{c_n}(x_n + e_n), \qquad x_{n+1} = \lambda_n u + (1 - \lambda_n) \tilde{x}_n,$$
(9)

converges strongly to $P_S(u)$, provided that

  1. (i)

    $c_n \ge c > 0$;

  2. (ii)

    $\lim_{n\to\infty} \lambda_n = 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$;

  3. (iii)

    $\|e_n\| \le \eta_n \|\tilde{x}_n - x_n\|$, $\sum_{n=0}^{\infty} \eta_n^2 < \infty$.

Proof Let $z = P_S(u)$. By our hypothesis, we may assume without loss of generality that $\eta_n \in (0, 1/2)$. Then by Lemma 6, we have

$$\|\tilde{x}_n - z\|^2 \le (1 + \epsilon_n) \|x_n - z\|^2 - \frac{1}{2} \|\tilde{x}_n - x_n\|^2,$$
(10)

where $\epsilon_n := (2\eta_n)^2$ satisfies $\sum_{n=0}^{\infty} \epsilon_n < \infty$. It then follows from (9) that

$$\|x_{n+1} - z\|^2 = \|(1 - \lambda_n)(\tilde{x}_n - z) + \lambda_n(u - z)\|^2 \le (1 - \lambda_n) \|\tilde{x}_n - z\|^2 + \lambda_n \|u - z\|^2,$$

which together with (10) yields

$$\|x_{n+1} - z\|^2 \le (1 - \lambda_n)(1 + \epsilon_n) \|x_n - z\|^2 + \lambda_n \|u - z\|^2.$$

Applying Lemma 2 to the last inequality, we conclude that ( x n ) is bounded.

It follows from the subdifferential inequality that

$$\|x_{n+1} - z\|^2 = \|(1 - \lambda_n)(\tilde{x}_n - z) + \lambda_n(u - z)\|^2 \le (1 - \lambda_n) \|\tilde{x}_n - z\|^2 + 2\lambda_n \langle u - z, x_{n+1} - z \rangle.$$

Combining this with (10) yields

$$\|x_{n+1} - z\|^2 \le (1 - \lambda_n) \|x_n - z\|^2 - \frac{1 - \lambda_n}{2} \|\tilde{x}_n - x_n\|^2 + 2\lambda_n \langle u - z, x_{n+1} - z \rangle + M \epsilon_n,$$
(11)

where $M > 0$ is a sufficiently large number. Since $(\epsilon_n) \in \ell^1$, we may set

$$s := \lim_{n\to\infty} \sum_{k=0}^{n} \epsilon_k < \infty$$

and define $t_n := s - \sum_{k=0}^{n-1} \epsilon_k$. Setting $s_n = \|x_n - z\|^2 + M t_n$, we rewrite (11) as

$$s_{n+1} - s_n + \lambda_n \|x_n - z\|^2 + \frac{1 - \lambda_n}{2} \|\tilde{x}_n - x_n\|^2 \le 2\lambda_n \langle u - z, x_{n+1} - z \rangle.$$
(12)

Since $t_n \to 0$, it is obvious that $s_n \to 0$ if and only if $\|x_n - z\| \to 0$.

We next consider two possible cases on the sequence ( s n ).

Case 1. $(s_n)$ is eventually decreasing (i.e., there exists $N \ge 0$ such that $(s_n)$ is decreasing for $n \ge N$). In this case, $(s_n)$ must be convergent, and from (12) it follows that

$$\frac{1 - \lambda_n}{2} \|\tilde{x}_n - x_n\|^2 \le (s_n - s_{n+1}) + M \lambda_n \to 0,$$

from which we have $\|\tilde{x}_n - x_n\| \to 0$. Extract a subsequence $(x_{n_k})$ of $(x_n)$ such that $x_{n_k} \rightharpoonup \hat{x}$ and

$$\limsup_{n\to\infty} \langle u - z, x_{n+1} - z \rangle = \lim_{k\to\infty} \langle u - z, x_{n_k} - z \rangle.$$

Since $\tilde{x}_n = J_{c_n}(x_n + e_n)$, this implies

$$0 \leftarrow \frac{x_{n_k} + e_{n_k} - \tilde{x}_{n_k}}{c_{n_k}} \in A(\tilde{x}_{n_k})$$

and $\tilde{x}_{n_k} = x_{n_k} + (\tilde{x}_{n_k} - x_{n_k}) \rightharpoonup \hat{x}$. Hence, the weak-to-strong closedness of $G(A)$ implies $0 \in A(\hat{x})$, i.e., $\hat{x} \in S$. Consequently, we have

$$\limsup_{n\to\infty} \langle u - z, x_{n+1} - z \rangle = \langle u - z, \hat{x} - z \rangle \le 0,$$

where the inequality follows from (3). Again it follows from (11) that

$$\|x_{n+1} - z\|^2 \le (1 - \lambda_n) \|x_n - z\|^2 + 2\lambda_n \langle u - z, x_{n+1} - z \rangle + M \epsilon_n.$$

By using Lemma 5, we conclude that $\|x_n - z\| \to 0$.

Case 2. $(s_n)$ is not eventually decreasing. Hence, we can find a subsequence $(s_{n_k})$ such that $s_{n_k} \le s_{n_k+1}$ for all $k \ge 0$. In this case, we may define an integer sequence $(\tau(n))$ as in Lemma 4. Since $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$, it follows again from (12) that

$$\frac{1 - \lambda_{\tau(n)}}{2} \|\tilde{x}_{\tau(n)} - x_{\tau(n)}\|^2 \le M \lambda_{\tau(n)} \to 0,$$

so that $\|\tilde{x}_{\tau(n)} - x_{\tau(n)}\| \to 0$ as $n \to \infty$. Analogously,

$$\limsup_{n\to\infty} \langle u - z, x_{\tau(n)} - z \rangle \le 0.$$
(13)

On the other hand, we deduce from (9) that

$$\|x_{\tau(n)} - x_{\tau(n)+1}\| \le \lambda_{\tau(n)} \|u - x_{\tau(n)}\| + \|\tilde{x}_{\tau(n)} - x_{\tau(n)}\| \to 0,$$

which together with (13) gives

$$\limsup_{n\to\infty} \langle u - z, x_{\tau(n)+1} - z \rangle \le 0.$$
(14)

Noting $s_{\tau(n)+1} - s_{\tau(n)} \ge 0$ and dividing by $\lambda_{\tau(n)}$ in (12), we arrive at

$$\|x_{\tau(n)} - z\|^2 \le 2 \langle u - z, x_{\tau(n)+1} - z \rangle$$

for all n> n 0 , which together with (14) yields

$$\limsup_{n\to\infty} \|x_{\tau(n)} - z\|^2 \le 0, \quad \text{that is, } x_{\tau(n)} \to z.$$

In view of (6), we have

$$\|x_n - z\|^2 + M t_n \le \|x_{\tau(n)+1} - z\|^2 + M t_{\tau(n)+1}.$$

Since $x_{\tau(n)} \to z$ and $\|x_{\tau(n)+1} - x_{\tau(n)}\| \to 0$ together imply $x_{\tau(n)+1} \to z$, this along with the fact $t_n \to 0$ immediately yields $x_n \to z$. □
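Theorem 1 can be illustrated numerically. Assuming $A$ is the normal cone operator of $C = [-1, 1]$ on the real line (an illustrative choice, not from the paper), the resolvent $J_{c_n}$ reduces to the projection $P_C$ for every $c_n > 0$, $S = C$, and the iterates of (9) with $e_n = 0$ (which trivially satisfies condition (iii)) should approach $P_S(u) = P_C(u)$:

```python
# Illustration of Theorem 1 for A = normal cone of C = [-1, 1]:
# J_{c_n} = P_C for every c_n > 0, and S = C.

def project(x):
    # P_C = J_c for the normal cone of C = [-1, 1]
    return max(-1.0, min(1.0, x))

def cppa_theorem1(x0, u, steps=4000):
    x = x0
    for n in range(steps):
        lam = 1.0 / (n + 2)   # lambda_n -> 0 and sum lambda_n = infinity
        xt = project(x)       # x~_n = J_{c_n}(x_n + e_n) with e_n = 0
        x = lam * u + (1 - lam) * xt
    return x
```

With $u = 5$ and $x_0 = -3$, the iterates approach $P_C(5) = 1$ at roughly the $O(\lambda_n)$ rate typical of Halpern-type schemes.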

For criterion (I), Boikanyo and Morosanu [10] introduced a new condition:

$$\|e_n\| \le \eta_n \quad \text{with } \lim_{n\to\infty} \frac{\eta_n}{\lambda_n} = 0$$

to ensure the convergence of the CPPA. In the following theorem, we shall present a similar condition under the accuracy criterion (II).

Theorem 2 For any $x_0 \in H$, the sequence $(x_n)$ generated by

$$\tilde{x}_n = J_{c_n}(x_n + e_n), \qquad x_{n+1} = \lambda_n u + (1 - \lambda_n) \tilde{x}_n,$$
(15)

converges strongly to $P_S(u)$, provided that

  1. (i)

    $c_n \ge c > 0$;

  2. (ii)

    $\lim_{n\to\infty} \lambda_n = 0$, $\sum_{n=0}^{\infty} \lambda_n = \infty$;

  3. (iii)

    $\|e_n\| \le \eta_n \|\tilde{x}_n - x_n\|$, $\lim_{n\to\infty} \eta_n^2 / \lambda_n = 0$.

Proof Let $z = P_S(u)$. Similarly, we have

$$\|x_{n+1} - z\|^2 \le (1 - \lambda_n)(1 + \epsilon_n) \|x_n - z\|^2 + \lambda_n \|u - z\|^2,$$
(16)

where $\epsilon_n := (2\eta_n)^2$ satisfies $\epsilon_n / \lambda_n \to 0$; we may therefore assume without loss of generality that $2\epsilon_n(1 - \lambda_n) \le \lambda_n$. Applying Lemma 3, we conclude that $(x_n)$ is bounded.

From inequality (11), we also obtain

$$s_{n+1} - s_n + \lambda_n s_n + \frac{1 - \lambda_n}{2} \|\tilde{x}_n - x_n\|^2 \le 2\lambda_n \langle u - z, x_{n+1} - z \rangle + s_n \epsilon_n,$$
(17)

where we define $s_n := \|x_n - z\|^2$.

To show $s_n \to 0$, we consider two possible cases for $(s_n)$.

Case 1. $(s_n)$ is eventually decreasing (i.e., there exists $N \ge 0$ such that $(s_n)$ is decreasing for $n \ge N$). In this case, $(s_n)$ must be convergent, and from (17) it follows that

$$\frac{1 - \lambda_n}{2} \|\tilde{x}_n - x_n\|^2 \le (s_n - s_{n+1}) + M(\epsilon_n + \lambda_n) \to 0,$$

where $M > 0$ is a sufficiently large number. Analogous to the previous theorem,

$$\limsup_{n\to\infty} \langle u - z, x_{n+1} - z \rangle \le 0.$$

Rearranging terms in (17) yields that

$$s_{n+1} \le (1 - \lambda_n) s_n + \lambda_n \left(2 \langle u - z, x_{n+1} - z \rangle + M \epsilon_n / \lambda_n\right).$$

We note that by our hypothesis $\epsilon_n / \lambda_n \to 0$, and thus apply Lemma 5 to the previous inequality to conclude that $s_n \to 0$.

Case 2. $(s_n)$ is not eventually decreasing. In this case, we may define an integer sequence $(\tau(n))$ as in Lemma 4. Since $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$, it follows again from (17) that

$$\frac{1 - \lambda_{\tau(n)}}{2} \|\tilde{x}_{\tau(n)} - x_{\tau(n)}\|^2 \le M(\lambda_{\tau(n)} + \epsilon_{\tau(n)}) \to 0,$$

so that $\|\tilde{x}_{\tau(n)} - x_{\tau(n)}\| \to 0$, and furthermore $\|x_{\tau(n)} - x_{\tau(n)+1}\| \to 0$. Analogously,

$$\limsup_{n\to\infty} \langle u - z, x_{\tau(n)+1} - z \rangle \le 0.$$

It follows from (17) that for all n> n 0

$$s_{\tau(n)} \le 2 \langle u - z, x_{\tau(n)+1} - z \rangle + \frac{M \epsilon_{\tau(n)}}{\lambda_{\tau(n)}}.$$

By combining the last two inequalities, we have

$$\limsup_{n\to\infty} s_{\tau(n)} \le 0,$$

from which we arrive at

$$\sqrt{s_{\tau(n)+1}} = \|(x_{\tau(n)} - z) - (x_{\tau(n)} - x_{\tau(n)+1})\| \le \sqrt{s_{\tau(n)}} + \|x_{\tau(n)} - x_{\tau(n)+1}\| \to 0.$$

Consequently, $s_n \to 0$ follows immediately from (6). □

References

  1. Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056

  2. Han D, He BS: A new accuracy criterion for approximate proximal point algorithms. J. Math. Anal. Appl. 2001, 263: 343–354. 10.1006/jmaa.2001.7535

  3. Güler O: On the convergence of the proximal point algorithm for convex optimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022

  4. Bauschke HH, Combettes PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26: 248–264. 10.1287/moor.26.2.248.10558

  5. Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493

  6. Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.

  7. Wang F: A note on the regularized proximal point algorithm. J. Glob. Optim. 2011, 50: 531–535. 10.1007/s10898-010-9611-z

  8. Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332

  9. Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.

  10. Boikanyo OA, Morosanu G: A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4: 635–641. 10.1007/s11590-010-0176-z

  11. Boikanyo OA, Morosanu G: Four parameter proximal point algorithms. Nonlinear Anal. 2011, 74: 544–555. 10.1016/j.na.2010.09.008

  12. Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7

  13. Wang F, Cui H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54: 485–491. 10.1007/s10898-011-9772-4

  14. Ceng LC, Wu SY, Yao JC: New accuracy criteria for modified approximate proximal point algorithms in Hilbert space. Taiwan. J. Math. 2008, 12: 1691–1705.

  15. Bauschke HH, Combettes PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York; 2011.

  16. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z

Acknowledgements

The authors thank the referees for their useful comments and suggestions. This work is supported by the National Natural Science Foundation of China, Tianyuan Foundation (11226227), and the Basic Science and Technological Frontier Project of Henan (122300410268, 122300410375).

Author information

Correspondence to Fenghui Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

Both authors contributed equally and significantly to writing this manuscript. Both authors read and approved the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Tian, C., Wang, F. The contraction-proximal point algorithm with square-summable errors. Fixed Point Theory Appl 2013, 93 (2013). https://doi.org/10.1186/1687-1812-2013-93
