
New complexity analysis for primal-dual interior-point methods for self-scaled optimization problems

Abstract

A linear optimization problem over a symmetric cone, defined in a Euclidean Jordan algebra and called a self-scaled optimization problem (SOP), is considered. We formulate an algorithm for a large-update primal-dual interior-point method (IPM) for the SOP by using a proximity function defined by a new kernel function, and we obtain the best known complexity result for large-update IPMs for the SOP by using Euclidean Jordan algebra techniques.

MSC: 90C51, 90C25, 65K05.

1 Introduction and preliminaries

Primal-dual interior-point methods (IPMs) are well known as among the most effective methods for solving wide classes of optimization problems, for example, the linear optimization (LO) problem, the quadratic optimization problem (QOP), the semidefinite optimization (SDO) problem, the second-order cone optimization (SOCO) problem, and the convex optimization problem (CP).

The so-called barrier update parameter θ in algorithms for IPMs plays an important role in both the theory and practice of IPMs. Usually, if θ is a constant independent of the dimension of the problem, then the algorithm is called a large-update method; if it depends on the dimension, then the algorithm is said to be a small-update method. Large-update methods are much more efficient than small-update methods in practice [1], but they have a worse worst-case iteration bound. Such a gap between theory and practice has been referred to as the irony of IPMs [2]. Recently, many authors have tried to reduce this gap in worst-case iteration bounds between large-update and small-update IPMs.

Using self-regular proximity functions instead of the classical logarithmic barrier function, Peng et al. [3–5] improved the complexity of large-update IPMs for the LO problem, the SDO problem, and the SOCO problem. Bai et al. [6] introduced a new class of eligible kernel functions, defined by some simple conditions on the kernel function and its derivatives. The best iteration bound for the LO problem given by Bai et al. [6] is $O(\sqrt{n}\log n\log\frac{n}{\epsilon})$. Recently, Wang et al. [7] obtained the complexity result $O(n\log\frac{n}{\epsilon})$ for the SDO problem based on a simple kernel function. Bai and Wang [8] obtained the best known complexity result for the SOCO problem based on a parametric kernel function including the classical logarithmic function, the prototype self-regular kernel function, and a non-self-regular kernel function. Very recently, using the kernel function $\phi(t)=\frac{t^2-1}{2}+\frac{e^{t^{-q}-1}-1}{q}$, Choi and Lee [9, 10] obtained the complexity results $O(\sqrt{n}(\log n)^{(q+1)/q}\log\frac{n}{\epsilon})$ and $O(\sqrt{N}(\log N)^{(q+1)/q}\log\frac{N}{\epsilon})$ for large-update primal-dual IPMs for SDO and SOCO, respectively.

In this paper, we consider a linear optimization problem over a symmetric cone defined in a Euclidean Jordan algebra. Nesterov and Todd [11] first proposed this kind of optimization problem under the name of convex programming for self-scaled cones and established the polynomial complexity of the primal-dual interior-point method using the so-called NT (Nesterov-Todd) direction [12]. We call the linear optimization problem over the symmetric cone the self-scaled optimization problem (SOP).

Faybusovich first studied the SOP from the viewpoint of Euclidean Jordan algebras. He gave a theoretical background for nondegeneracy assumptions and the uniqueness of solutions of the Newton systems in IPMs for the SOP [13], presented a short-step path-following algorithm for a quadratic programming problem defined on the intersection of a symmetric cone with an affine subspace [14], and obtained complexity estimates for a long-step primal-dual interior-point algorithm for minimizing a linear function over the intersection of an affine subspace and a symmetric cone [15]. SOPs include linear optimization problems, semidefinite optimization problems, second-order cone optimization problems, and various combinations of these types of problems as special cases. Schmieta and Alizadeh [16] extended primal-dual interior-point algorithms for LO, SDO, and SOCO to SOPs by using logarithmic barrier functions.

Baes raised an open question in his monograph [17] as follows: The theory of self-regular functions has been created for linear programming by Jiming Peng, Cornelius Roos, and Tamás Terlaky [5]. They subsequently extended it to second-order programming and semidefinite programming separately using implicitly the aforementioned construction. However, the unified treatment of this theory using the Jordan algebraic framework is not accomplished yet.

Choi and Lee [18] gave primal-dual interior-point algorithms for the SOP by using the very simple self-regular function $\psi(t)=\frac{1}{2}(t-\frac{1}{t})^2$, $t>0$, and gave partial answers to the question of Baes. Very recently, Vieira [19, 20] gave complete answers to the open question of Baes by proving the e-convexity property of eligible kernel functions; in particular, he presented iteration complexity results for ten eligible kernel functions. Among the ten kernel functions in [19], the best iteration complexity for a large-update method was obtained for $\psi(t)=\frac{t^2-1}{2}+\frac{t^{1-q}-1}{q-1}$ with $q=\log r$, and its iteration complexity is $O(\sqrt{r}\log r\log\frac{r}{\epsilon})$, which is the best known one.

In this paper, we define a new eligible kernel function $\psi(t)=\frac{t^2-1}{2}+\frac{e^{p(t^{-q}-1)}-1}{pq}$, $p\ge1$ and $q\ge1$, for $t>0$, modified from the one in [9, 10], and we obtain the best known iteration complexity result for the large-update IPM for the SOP by means of an analysis based on the kernel function and on Euclidean Jordan algebra techniques. In our algorithm, we use the well-known lemma for the upper bound of the proximity after the μ-update (see Lemma 3.1) instead of Theorem 5.4 in [20]; this lemma simplifies our analysis of the outer while loop. We refer to Theorem 4.9 and Proposition 5.6 in [20] for the complexity analysis, but we use Proposition 3.1 in [18], obtained from the technique of Sun and Sun [21], instead of Proposition 5.7 in [20].

This paper is organized as follows. In Section 2, we introduce our kernel function, formulate the Newton system for the SOP, and present a useful inequality for our proximity function. In Section 3, we give an algorithm for the SOP and calculate an upper bound for the proximity function after the μ-update. We calculate an upper bound for the difference between the proximity values before and after one step of the inner iteration and then determine our default step size for the search directions. Finally, we present a worst-case iteration bound for our large-update primal-dual interior-point method for the SOP.

Now, we give definitions and preliminary properties for a Euclidean Jordan algebra which are found in [22] and will be used in the next sections.

Definition 1.1 ([22])

A finite-dimensional real vector space V is called an algebra if a bilinear mapping $(x,y)\mapsto xy$ from $V\times V$ to $V$ is defined.

An algebra V is called a Jordan algebra if the following hold:

  1. (i)

commutativity: for all $x,y\in V$, $xy=yx$;

  2. (ii)

Jordan’s axiom: for all $x,y\in V$, $x^2(xy)=x(x^2y)$, where $x^2=xx$.

A Jordan algebra V is said to be Euclidean if

  1. (iii)

$x^2+y^2=0\ \Rightarrow\ x=y=0$; equivalently, there exists an inner product $(\cdot|\cdot)$ on V such that $(xy|z)=(y|xz)$.

A Jordan algebra V is simple if it does not contain any non-trivial ideal. A Jordan algebra need not be associative, but it is power-associative, i.e., $x^px^q=x^{p+q}$. We assume that a Jordan algebra V has an identity element, i.e., there exists $e$ such that $xe=ex=x$ for all $x\in V$. Since V is finite-dimensional, given $x\in V$, there exists a minimal positive integer k such that the vectors $e,x,\ldots,x^k$ are linearly dependent. Denote this integer by $m(x)$. We define the rank of V as

$\operatorname{rank}(V)=r=\max\{m(x)\mid x\in V\}.$
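For example (a standard illustration, cf. [22]): $V=\mathbb{R}^n$ with the componentwise product $xy=(x_1y_1,\ldots,x_ny_n)$ is a Euclidean Jordan algebra with identity $e=(1,\ldots,1)$ and rank n; likewise, the space of real symmetric $n\times n$ matrices with the product $x\circ y=\frac{1}{2}(xy+yx)$ (juxtaposition denoting the matrix product) is a Euclidean Jordan algebra of rank n whose identity element is the identity matrix.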

An element $x\in V$ is said to be invertible if there exists an element $y\in\mathbb{R}[x]$ such that $xy=e$, where $\mathbb{R}[x]$ is the algebra of polynomials in one variable with real coefficients, evaluated at x; this y is denoted by $x^{-1}$. An element $v\in V$ is called idempotent if $v^2=v$. For an element $x\in V$, let $L(x)$ be the linear map of V defined by $L(x)y=xy$. The cone of squares

$\bar{\Omega}:=\{x^2\mid x\in V\}$

is a symmetric cone; that is, the following conditions hold:

  1. (i)

for every pair $x,y\in\operatorname{int}\bar{\Omega}$, there is an invertible linear transformation $L:V\to V$ such that $L(\bar{\Omega})=\bar{\Omega}$ and $L(x)=y$;

  2. (ii)

$\bar{\Omega}=\bar{\Omega}^*$, where $\bar{\Omega}^*:=\{y\in V\mid(x,y)\ge0\text{ for any }x\in\bar{\Omega}\}$.

Let $\Omega=\operatorname{int}\bar{\Omega}$. Then $\Omega=\{x^2\mid x\in V\text{ is invertible}\}=\{x\in V\mid L(x)\text{ is positive definite}\}$.

Definition 1.2 ([22])

Let $c_1,\ldots,c_k\in V$. Then $\{c_1,\ldots,c_k\}$ is said to be a Jordan frame if the $c_i$, $i=1,\ldots,k$, are non-zero and cannot be written as a sum of two other non-zero idempotents, and the following properties hold:

$c_i^2=c_i,\qquad c_ic_j=0\ \text{if }i\ne j,\qquad\sum_{i=1}^kc_i=e.$

Theorem 1.1 (Theorem III.1.2 in [22])

For every $x\in V$, there exist a Jordan frame $\{c_1(x),\ldots,c_r(x)\}$ and real numbers $\lambda_1(x),\ldots,\lambda_r(x)$ such that

$x=\lambda_1(x)c_1(x)+\cdots+\lambda_r(x)c_r(x).$
(1)

The numbers $\lambda_i(x)$, $i=1,\ldots,r$, are said to be the eigenvalues of x, and (1) is called the eigenvalue (or spectral) decomposition of x. It is now possible to extend the definition of any real-valued function $\psi(\cdot)$ to elements of the Euclidean Jordan algebra via their eigenvalues:

$\psi(x):=\psi(\lambda_1(x))c_1(x)+\cdots+\psi(\lambda_r(x))c_r(x).$
(2)

In particular, we have the following examples:

  1. (i)

    Square root: $x^{1/2}=\lambda_1^{1/2}(x)c_1(x)+\cdots+\lambda_r^{1/2}(x)c_r(x)$, provided all $\lambda_i(x)\ge0$.

  2. (ii)

    Inverse: $x^{-1}=\lambda_1^{-1}(x)c_1(x)+\cdots+\lambda_r^{-1}(x)c_r(x)$, provided all $\lambda_i(x)\ne0$.

  3. (iii)

    Square: $x^2=\lambda_1^2(x)c_1(x)+\cdots+\lambda_r^2(x)c_r(x)$.

From the above examples, we see that for $x\in\bar{\Omega}$, $\lambda_i(x^{1/2})=\lambda_i^{1/2}(x)$, and for $x\in\Omega$, $\lambda_i(x^{-1})=\lambda_i^{-1}(x)$. Let us denote by $\psi'(x)$ the element obtained by applying the derivative $\psi'$ to the eigenvalues of x:

$\psi'(x):=\psi'(\lambda_1(x))c_1(x)+\cdots+\psi'(\lambda_r(x))c_r(x).$
(3)

In the Jordan algebra, we define the determinant of x and the trace of x as follows:

$\det(x)=\prod_{i=1}^r\lambda_i(x),\qquad\operatorname{tr}(x)=\sum_{i=1}^r\lambda_i(x).$
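For instance (standard facts, included for orientation): in $V=\mathbb{R}^n$ with the componentwise product, a Jordan frame is the standard basis, the eigenvalues of x are its coordinates, $\det(x)=\prod_ix_i$, $\operatorname{tr}(x)=\sum_ix_i$, and $\bar{\Omega}=\mathbb{R}^n_+$; in the algebra of real symmetric matrices, det and tr reduce to the usual matrix determinant and trace, and $\bar{\Omega}$ is the cone of positive semidefinite matrices.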

Since V is a Euclidean Jordan algebra, $\langle x,y\rangle:=\operatorname{tr}(xy)$ is a scalar product on V (see Proposition III.1.5 in [22]). The following lemma is the second Peirce decomposition theorem, which will be used in Section 3.

Lemma 1.1 (Theorem IV.2.1 in [22], Theorem 2.6.6 (Second Peirce decomposition theorem) in [17])

Let $\{c_1,\ldots,c_r\}$ be a Jordan frame of V. If

$V_{ij}:=\begin{cases}\{v\in V\mid c_iv=v\}&\text{if }i=j,\\ \{v\in V\mid c_iv=\frac{1}{2}v\}\cap\{v\in V\mid c_jv=\frac{1}{2}v\}&\text{if }i\ne j,\end{cases}$

we have

  1. (i)

$V=\bigoplus_{1\le i\le j\le r}V_{ij}$;

  2. (ii)

$V_{ij}V_{kl}=\{0\}$, if $\{i,j\}\cap\{k,l\}=\emptyset$;

  3. (iii)

$V_{ij}V_{jk}\subset V_{ik}$, if $i\ne k$;

  4. (iv)

$\operatorname{tr}(v_{ik})=0$, for $v_{ik}\in V_{ik}$ if $i\ne k$.
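As a concrete illustration (our example, for V the algebra of real symmetric $r\times r$ matrices): taking the Jordan frame $c_i=e_ie_i^T$, one gets $V_{ii}=\mathbb{R}e_ie_i^T$ and, for $i\ne j$, $V_{ij}=\mathbb{R}(e_ie_j^T+e_je_i^T)$; property (iv) then simply says that such off-diagonal matrices have zero trace.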

Consider the following self-scaled optimization problem (SOP):

$(P)\qquad\text{Minimize }\langle c,x\rangle\quad\text{subject to }\langle a_i,x\rangle=b_i,\ i=1,\ldots,m,\ x\in\bar{\Omega},$

and its dual problem:

$(D)\qquad\text{Maximize }\sum_{i=1}^mb_iy_i\quad\text{subject to }\sum_{i=1}^my_ia_i+s=c,\ s\in\bar{\Omega},\ y\in\mathbb{R}^m,$

where $c,a_1,\ldots,a_m\in V$ and $b\in\mathbb{R}^m$ are given. We call $x\in\bar{\Omega}$ primal feasible if $\langle a_i,x\rangle=b_i$ for $i=1,\ldots,m$. Similarly, $(y,s)\in\mathbb{R}^m\times\bar{\Omega}$ is called dual feasible if $\sum_{i=1}^my_ia_i+s=c$. Let $Ax=(\langle a_1,x\rangle,\ldots,\langle a_m,x\rangle)^T$ for any $x\in V$. Then $A:V\to\mathbb{R}^m$ is a linear transformation. Throughout this paper, we assume that A is surjective. Then its adjoint $A^T$ is injective and $A^Ty=\sum_{i=1}^my_ia_i$, where $y=(y_1,\ldots,y_m)^T\in\mathbb{R}^m$. So, we can reformulate (P) and (D) as follows:

$(P)\qquad\text{Minimize }\langle c,x\rangle\quad\text{subject to }Ax=b,\ x\in\bar{\Omega},$

and its dual problem:

$(D)\qquad\text{Maximize }b^Ty\quad\text{subject to }A^Ty+s=c,\ s\in\bar{\Omega},\ y\in\mathbb{R}^m.$

We can check that weak duality between (P) and (D) holds, that is, $\inf(P)\ge\sup(D)$. From now on, we assume that both (P) and (D) satisfy the interior-point condition (IPC), that is, there exists $(x^0,y^0,s^0)$ such that $Ax^0=b$, $x^0\in\Omega$, $A^Ty^0+s^0=c$, $s^0\in\Omega$. Then there exists a pair of optimal solutions $(x,y,s)$ of (P) and (D), and $\inf(P)=\sup(D)$ [11, 23].

The following lemma is well known [13, 17, 22, 24].

Lemma 1.2 For $x,s\in V$, the following statements are equivalent:

  1. (i)

$x,s\in\bar{\Omega}$ and $\langle x,s\rangle=0$;

  2. (ii)

$x,s\in\bar{\Omega}$ and $xs=0$.

Using Lemma 1.2, we can check (see Proposition 2.1 in [13]) that finding a pair of optimal solutions (x,y,s) of (P) and (D) is equivalent to solving the following Newton system:

$Ax=b,\qquad A^Ty+s=c,\qquad xs=0,\qquad x,s\in\bar{\Omega},\ y\in\mathbb{R}^m.$
(4)

The basic idea of primal-dual IPMs is to replace the third equation in (4), the so-called complementarity condition for the SOP, by the parameterized system with a positive parameter μ:

$Ax=b,\qquad A^Ty+s=c,\qquad xs=\mu e,\qquad x,s\in\Omega,\ y\in\mathbb{R}^m.$
(5)

For each $x\in V$, we define the quadratic representation of x as follows:

$Q_x:=2L^2(x)-L(x^2).$

Lemma 1.3 ([16])

Let $x,s\in\Omega$ and let p be invertible. Then $xs=\mu e$ if and only if $(Q_px)(Q_{p^{-1}}s)=\mu e$.

Proposition 1.1 (Proposition 18 in [16])

If $x,s\in\Omega$, then $Q_xs\in\Omega$.

Let $x,s\in\Omega$. Then there uniquely exists $p\in\Omega$ such that $Q_p^2x=s$ [25, 26]. So, we can choose $p\in\Omega$ such that $Q_px=Q_{p^{-1}}s$. Such a choice exists and is unique, and it leads to the Nesterov-Todd (NT) method.

From Lemma 1.3, the system (5) becomes

$Ax=b,\qquad A^Ty+s=c,\qquad(Q_px)(Q_{p^{-1}}s)=\mu e,\qquad x,s\in\Omega,\ y\in\mathbb{R}^m.$
(6)

Then, for each $\mu>0$, the parameterized system (6) has a unique solution $(x(\mu),y(\mu),s(\mu))$ [11, 27], which is called the μ-center of (P) and (D). The set of μ-centers, that is, $\mathcal{C}=\{(x(\mu),y(\mu),s(\mu))\mid\mu>0\}$, is said to be the central path of (P) and (D). As μ tends to zero, $(x(\mu),y(\mu),s(\mu))$ converges to a pair of optimal solutions of (P) and (D) [13, 28].

In general, IPMs for the SOP consist of two schemes. The first one, called the inner iteration scheme, keeps the iterative sequence in a certain neighborhood of the current μ-center (and hence of the central path). The second one, called the outer iteration scheme, decreases the parameter μ to $\mu_+:=(1-\theta)\mu$ for some $\theta\in(0,1)$.

2 Proximity functions and search directions

Newton’s method is a well-known procedure to solve a system of nonlinear equations. Most IPMs for solving the SOP employ different search directions together with suitable strategies for following the central path appropriately.

Assume that a starting point $(x^0,s^0)$ in a certain neighborhood of the central path corresponding to $\mu=1$ is available. We then decrease μ to $\mu_+:=(1-\theta)\mu$ for some fixed $\theta\in(0,1)$ and, writing μ again for the updated value, linearize the Newton system for (6) by replacing x, y, s with $x_+:=x+\Delta x$, $y_+:=y+\Delta y$, $s_+:=s+\Delta s$, respectively. Then we get the following system from [16]:

$A\Delta x=0,\qquad A^T\Delta y+\Delta s=0,\qquad(Q_px)(Q_{p^{-1}}\Delta s)+(Q_p\Delta x)(Q_{p^{-1}}s)=\mu e-(Q_px)(Q_{p^{-1}}s).$
(7)

To describe our new search direction, we need more notation:

$\bar{A}:=\frac{1}{\sqrt{\mu}}AQ_{p^{-1}},\qquad v:=\frac{1}{\sqrt{\mu}}Q_px=\frac{1}{\sqrt{\mu}}Q_{p^{-1}}s,\qquad d_x:=\frac{1}{\sqrt{\mu}}Q_p\Delta x,\qquad d_s:=\frac{1}{\sqrt{\mu}}Q_{p^{-1}}\Delta s.$
(8)

In this case,

$p=\bigl[Q_{x^{1/2}}(Q_{x^{1/2}}s)^{-1/2}\bigr]^{-1/2}=\bigl[Q_{s^{-1/2}}(Q_{s^{1/2}}x)^{1/2}\bigr]^{-1/2}.$
(9)

From Proposition 1.1, $v\in\Omega$. Hence, $L(v)$ is positive definite. Thus, the system (7) is equivalent to the following system:

$\bar{A}d_x=0,\qquad\bar{A}^T\Delta y+d_s=0,\qquad d_x+d_s=v^{-1}-v.$
(10)

The solution $(d_x,\Delta y,d_s)$ of (10) is called the NT search direction for the SOP. Furthermore, $\langle d_x,d_s\rangle=0$, which follows from the first and second equations of (10), or from the orthogonality of Δx and Δs.
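To see how the third equation of (10) arises from (7) (a routine computation included for completeness): dividing the third equation of (7) by μ and using the notations (8) yields

$vd_s+d_xv=e-v^2,\quad\text{i.e.,}\quad L(v)(d_x+d_s)=e-v^2.$

Since $L(v)(v^{-1}-v)=vv^{-1}-v^2=e-v^2$ and $L(v)$ is positive definite (hence invertible), we conclude that $d_x+d_s=v^{-1}-v$.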

For our IPM, we use the following new eligible kernel function:

$\psi(t)=\frac{t^2-1}{2}+\frac{e^{p(t^{-q}-1)}-1}{pq},\qquad p\ge1\text{ and }q\ge1,\ \text{for }t>0.$
(11)

See [6] for the definition of an eligible kernel function. The new kernel function (11) satisfies

$\psi''(t)>1,\qquad\psi'''(t)<0,\qquad\text{and}\qquad\lim_{t\to0^+}\psi(t)=\lim_{t\to\infty}\psi(t)=\infty.$

Note that $\psi(1)=\psi'(1)=0$. Then ψ(t) is determined by

$\psi(t)=\int_1^t\int_1^\xi\psi''(\zeta)\,d\zeta\,d\xi.$
(12)
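For later use, we record the first two derivatives of (11), obtained by direct differentiation (our computation, stated for the reader's convenience):

$\psi'(t)=t-t^{-q-1}e^{p(t^{-q}-1)},\qquad\psi''(t)=1+\bigl((q+1)t^{-q-2}+pq\,t^{-2q-2}\bigr)e^{p(t^{-q}-1)},$

from which $\psi''(t)>1$ is immediate, and differentiating once more shows $\psi'''(t)<0$.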

The proximity function (measure) for (P) and (D) is

$\Phi(x,s;\mu):=\Psi(v):=\operatorname{tr}(\psi(v))=\sum_{i=1}^r\psi(\lambda_i(v)),$
(13)

where ψ(v) is defined by (2). Note that $\Psi(v)=0$ if $v=e$ (i.e., $xs=\mu e$), and $\Psi(v)>0$ otherwise. Replacing the right-hand side of the last equation in (10) by $-\psi'(v)$, we obtain the following system from (10):

$\bar{A}d_x=0,\qquad\bar{A}^T\Delta y+d_s=0,\qquad d_x+d_s=-\psi'(v).$
(14)

Let $X=\{x\in V\mid\bar{A}x=0\}$. Then $X^{\perp}=\{\bar{A}^Ty\mid y\in\mathbb{R}^m\}$. Hence, the system (14) has a unique solution. We introduce the norm-based proximity measure as follows:

$\sigma:=\|d_x+d_s\|=\|\psi'(v)\|=\sqrt{\|d_x\|^2+\|d_s\|^2}.$
(15)

The following lemma gives a lower bound of σ in terms of Ψ(v).

Lemma 2.1 For any $v\in\Omega$,

$\sigma\ge\sqrt{2\Psi(v)}.$

Proof The kernel function (11) satisfies $2\psi(t)\le(\psi'(t))^2$ for all $t>0$: both sides vanish at $t=1$, and $\frac{d}{dt}[(\psi'(t))^2-2\psi(t)]=2\psi'(t)(\psi''(t)-1)$ has the same sign as $\psi'(t)$ because $\psi''(t)>1$. Since $\sigma^2=\sum_{i=1}^r(\psi'(\lambda_i(v)))^2$,

$2\Psi(v)\le\sigma^2.$

This completes the proof. □

Also, our new kernel function (11) satisfies the following exponential convexity property.

Lemma 2.2 Let $t_1>0$ and $t_2>0$. Then

$\psi(\sqrt{t_1t_2})\le\frac{1}{2}\bigl(\psi(t_1)+\psi(t_2)\bigr).$

The following proposition can be found in [20], but for completeness, we give its proof.

Proposition 2.1 (Theorem 4.9 in [20])

Let Ψ be the proximity function defined in (13). Then, for any $x,s\in\Omega$,

$\Psi\bigl((Q_{x^{1/2}}s)^{1/2}\bigr)\le\frac{1}{2}\bigl(\Psi(x)+\Psi(s)\bigr).$

Proof Since $Q_{x^{1/2}}s\in\Omega$,

$\lambda_i\bigl((Q_{x^{1/2}}s)^{1/2}\bigr)=\lambda_i^{1/2}(Q_{x^{1/2}}s)\quad\text{and}\quad\Psi\bigl((Q_{x^{1/2}}s)^{1/2}\bigr)=\sum_{i=1}^r\psi\bigl(\lambda_i^{1/2}(Q_{x^{1/2}}s)\bigr).$

By Theorem 3.5 in [20], with the eigenvalues arranged in decreasing order,

$\prod_{i=1}^k\lambda_i(Q_{x^{1/2}}s)\le\prod_{i=1}^k\lambda_i(x)\lambda_i(s),\quad\text{for }k=1,\ldots,r-1,$

and

$\prod_{i=1}^r\lambda_i(Q_{x^{1/2}}s)=\prod_{i=1}^r\lambda_i(x)\lambda_i(s).$

Thus,

$\prod_{i=1}^k\lambda_i^{1/2}(Q_{x^{1/2}}s)\le\prod_{i=1}^k\lambda_i^{1/2}(x)\lambda_i^{1/2}(s),\quad\text{for }k=1,\ldots,r-1,$

and

$\prod_{i=1}^r\lambda_i^{1/2}(Q_{x^{1/2}}s)=\prod_{i=1}^r\lambda_i^{1/2}(x)\lambda_i^{1/2}(s).$

Let $\alpha_i=\lambda_i^{1/2}(Q_{x^{1/2}}s)$ and $\beta_i=\lambda_i^{1/2}(x)\lambda_i^{1/2}(s)$. Then $\alpha_i>0$ and $\beta_i>0$. Moreover, since these conditions satisfy the assumptions of Corollary 3.3.10 in [29], applying (iii) of that corollary with our kernel function (11) gives

$\sum_{i=1}^r\psi\bigl(\lambda_i^{1/2}(Q_{x^{1/2}}s)\bigr)\le\sum_{i=1}^r\psi\bigl(\lambda_i^{1/2}(x)\lambda_i^{1/2}(s)\bigr).$

By Lemma 2.2, we obtain

$\sum_{i=1}^r\psi\bigl(\lambda_i^{1/2}(x)\lambda_i^{1/2}(s)\bigr)\le\frac{1}{2}\Bigl(\sum_{i=1}^r\psi(\lambda_i(x))+\sum_{i=1}^r\psi(\lambda_i(s))\Bigr)=\frac{1}{2}\bigl(\Psi(x)+\Psi(s)\bigr).$

 □

3 Algorithm and its complexity analysis

Now, we explain our algorithm for the large-update primal-dual IPM for the SOP. Assuming that a starting point in a certain neighborhood of the central path is available, we set out from this point. We then enter the outer ‘while loop’: if μ satisfies $r\mu\ge\epsilon$, then μ is reduced by the factor $1-\theta$, where $\theta\in(0,1)$. Then we make use of the inner ‘while loop’, and we repeat the procedure until we find iterates that are ‘close’ to $(x(\mu),y(\mu),s(\mu))$, that is, until the proximity satisfies $\Phi(x,s;\mu)\le\tau$. Here, we apply Newton’s method targeting the new μ-centers to determine a search direction $(\Delta x,\Delta y,\Delta s)$. We then return to the outer ‘while loop’. The whole process is repeated until μ is small enough, say until $r\mu<\epsilon$.

The choice of the step size α is another crucial issue in the analysis of the algorithm. It has to be taken so that the closeness of the iterates to the current μ-center can improve by a sufficient amount. In the algorithm, the inner ‘while loop’ is called the inner iteration and the outer ‘while loop’ is called the outer iteration. Each outer iteration consists of an update of the parameter μ and a sequence of (one or more) inner iterations. The total number of inner iterations is the worst-case iteration bound for our algorithm.

The algorithm for our large-update primal-dual IPM for the SOP is given as follows:
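(The algorithm display is reproduced here as a sketch; it follows the standard generic large-update scheme of [6, 20], instantiated with our proximity function Φ and our default step size $\bar{\alpha}$ of (26) below.)

Generic large-update primal-dual IPM for the SOP

Input: a threshold parameter $\tau\ge1$; an accuracy parameter $\epsilon>0$; a barrier update parameter $\theta\in(0,1)$; a strictly feasible starting point $(x^0,y^0,s^0)$ with $\mu^0=1$ and $\Phi(x^0,s^0;\mu^0)\le\tau$.

begin
  $x:=x^0$; $y:=y^0$; $s:=s^0$; $\mu:=\mu^0$;
  while $r\mu\ge\epsilon$ do (outer iteration)
    $\mu:=(1-\theta)\mu$;
    while $\Phi(x,s;\mu)>\tau$ do (inner iteration)
      compute the search direction $(\Delta x,\Delta y,\Delta s)$ from (14);
      choose a step size α (the default step size $\bar{\alpha}$ of (26));
      $x:=x+\alpha\Delta x$; $y:=y+\alpha\Delta y$; $s:=s+\alpha\Delta s$;
    end while
  end while
end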

3.1 Bound of the proximity function after μ-update

We have $\Psi(v)\le\tau$ before the update of μ with the factor $1-\theta$ at the start of each outer iteration. After updating μ in an outer iteration, the vector v is divided by the factor $\sqrt{1-\theta}$, which in general leads to an increase in the value of Ψ(v). Then, during the inner iterations, the value of Ψ(v) decreases until it passes the threshold τ again.

As we mentioned, our kernel function (11) is eligible. To obtain an upper bound for the μ-updated proximity function in each outer iteration of the algorithm, we use the well-known Lemma 3.1, which can be deduced from the decreasing barrier part of the kernel function, instead of using theorems obtained from properties of eligible functions (for example, Theorem 3.2 in [6] and Theorem 5.4 in [20]). The following two lemmas make our analysis of the outer while loop easy, and we use them to prove a theorem expressing an upper bound for $\Psi(\frac{1}{\sqrt{1-\theta}}v)$ in terms of Ψ(v).

Lemma 3.1 Let $\beta\ge1$. Then

$\psi(\beta t)\le\psi(t)+\frac{\beta^2-1}{2}t^2.$

Proof Define $\psi_b(t):=\frac{e^{p(t^{-q}-1)}-1}{pq}$. Then $\psi_b(t)$ is monotonically decreasing in t. So, since $\beta\ge1$, we can easily obtain

$\psi(\beta t)=\frac{\beta^2t^2-1}{2}+\psi_b(\beta t)=\frac{t^2-1}{2}+\psi_b(t)+\frac{\beta^2t^2-t^2}{2}+\psi_b(\beta t)-\psi_b(t)\le\psi(t)+\frac{\beta^2-1}{2}t^2.$

 □

Lemma 3.2 For any $v\in\Omega$,

$\|v\|^2\le2\bigl(\Psi(v)+2r\bigr).$

Proof Since $\frac{e^{p(t^{-q}-1)}}{pq}$ is positive and $pq\ge1$, the kernel function (11) is bounded below as follows:

$\psi(t)\ge\frac{t^2-1}{2}-\frac{1}{pq}\ge\frac{t^2}{2}-1-1=\frac{t^2}{2}-2.$

This implies $\frac{1}{2}\sum_{i=1}^r\lambda_i^2(v)\le\Psi(v)+2r$. □

Theorem 3.1 Let θ be such that $0<\theta<1$. Then, for any $v\in\Omega$,

$\Psi\Bigl(\frac{1}{\sqrt{1-\theta}}v\Bigr)\le\frac{2}{1-\theta}\bigl(\Psi(v)+r\bigr).$

Proof From Lemma 3.1 with $\beta=\frac{1}{\sqrt{1-\theta}}$ and Lemma 3.2,

$\Psi\Bigl(\frac{1}{\sqrt{1-\theta}}v\Bigr)=\sum_{i=1}^r\psi\Bigl(\frac{1}{\sqrt{1-\theta}}\lambda_i(v)\Bigr)\le\Psi(v)+\frac{1}{2}\Bigl(\frac{1}{1-\theta}-1\Bigr)\|v\|^2\le\Psi(v)+\frac{\theta}{1-\theta}\bigl(\Psi(v)+2r\bigr)\le\frac{2}{1-\theta}\bigl(\Psi(v)+r\bigr),$

where the last inequality comes from $\theta\in(0,1)$. □

By the assumption $\Psi(v)\le\tau$ just before the update of μ,

$\Psi\Bigl(\frac{1}{\sqrt{1-\theta}}v\Bigr)\le\frac{2}{1-\theta}(\tau+r).$

We define

$L(r,\theta,\tau)=\frac{2}{1-\theta}(\tau+r).$

Since $\tau=O(r)$ and $\theta=\Theta(1)$,

$L=O(r).$

3.2 Determining a default step size

In this section, we compute a feasible step size α for which the proximity function decreases, and we bound the decrease during the inner iterations; we then give our default step size

$\bar{\alpha}=\Bigl(3\Bigl(1+3\sigma(1+pq+q)\Bigl(1+\frac{1}{p}\log3\sigma\Bigr)^{(q+1)/q}\Bigr)\Bigr)^{-1}.$

We will show that this step size not only keeps the iterates feasible but also gives rise to a sufficiently large decrease of the barrier function Ψ(v) in each inner iteration. Let us denote the difference between the proximity values before and after one step as a function of the step size, that is,

$f(\alpha):=\Psi(v_+)-\Psi(v).$
(16)

The main task in the rest of this section is to study the decreasing behavior of f(α).

Now, in equation (16), $v_+$ and $p_+$ are determined by (8) and (9) with x, s replaced by $x_+:=x+\alpha\Delta x$, $s_+:=s+\alpha\Delta s$, respectively; that is,

$v_+:=\frac{1}{\sqrt{\mu}}Q_{p_+}(x+\alpha\Delta x)=\frac{1}{\sqrt{\mu}}Q_{p_+^{-1}}(s+\alpha\Delta s).$

Lemma 3.3 (Proposition II.3.3 in [22])

Let x and s be elements in V. Then

  1. (i)

$(Q_xs)^{-1}=Q_{x^{-1}}s^{-1}$ if x and s are invertible.

  2. (ii)

$Q_{Q_sx}=Q_sQ_xQ_s$.

Lemma 3.4 ([16])

Let $x,s,p\in\Omega$. Then

  1. (i)

$Q_{x^{1/2}}s$ and $Q_{s^{1/2}}x$ have the same eigenvalues.

  2. (ii)

$Q_{x^{1/2}}s$ and $Q_{(Q_px)^{1/2}}(Q_{p^{-1}}s)$ have the same eigenvalues.

The following proposition was given by Vieira in [20] (see Proposition 5.6 in [20]), but we provide its proof using Lemma 3.3 and Lemma 3.4.

Proposition 3.1 Let Ψ be the proximity function defined in (13). Then we have

$\Psi(v_+)=\Psi\bigl(\bigl(Q_{(v+\alpha d_x)^{1/2}}(v+\alpha d_s)\bigr)^{1/2}\bigr).$

Proof From $Q_{p^{-1}}s=Q_px$ and (i) in Lemma 3.4 (applied with $p^2$ in place of x), we know that $Q_px$ and $Q_{x^{1/2}}p^2$ have the same eigenvalues. By the definition (9) of p and Lemma 3.3,

$Q_{x^{1/2}}p^2=Q_{x^{1/2}}\bigl(Q_{x^{1/2}}(Q_{x^{1/2}}s)^{-1/2}\bigr)^{-1}=Q_{x^{1/2}}Q_{x^{-1/2}}(Q_{x^{1/2}}s)^{1/2}=(Q_{x^{1/2}}s)^{1/2}.$

In the same way, $Q_{p_+^{-1}}s_+$ and $(Q_{x_+^{1/2}}s_+)^{1/2}$ have the same eigenvalues. Here, $\sqrt{\mu}v_+=Q_{p_+^{-1}}s_+$. We know that $x_+=\sqrt{\mu}Q_{p^{-1}}(v+\alpha d_x)$ and $s_+=\sqrt{\mu}Q_p(v+\alpha d_s)$ by the definition (8); then, by (ii) in Lemma 3.4, $(Q_{x_+^{1/2}}s_+)^{1/2}=\sqrt{\mu}\bigl(Q_{(Q_{p^{-1}}(v+\alpha d_x))^{1/2}}(Q_p(v+\alpha d_s))\bigr)^{1/2}$ and $\sqrt{\mu}\bigl(Q_{(v+\alpha d_x)^{1/2}}(v+\alpha d_s)\bigr)^{1/2}$ have the same eigenvalues. Since Ψ depends only on the eigenvalues, the proximity function satisfies the stated equality. □

Then Proposition 2.1 and Proposition 3.1 imply the following inequality:

$\Psi(v_+)\le\frac{1}{2}\Psi(v+\alpha d_x)+\frac{1}{2}\Psi(v+\alpha d_s).$

So, we can define $f_1(\alpha)$:

$f(\alpha)\le f_1(\alpha):=\frac{1}{2}\bigl(\Psi(v+\alpha d_x)+\Psi(v+\alpha d_s)\bigr)-\Psi(v).$

To facilitate the forthcoming analysis, we also define, for any $x\in V$,

$\lambda_{\min}(x):=\min\{\lambda_i(x)\mid i=1,\ldots,r\}.$

The following lemma is obtained from Lemma 14 in [16]; it gives a common lower bound for the eigenvalues of $v+\alpha d_x$ and $v+\alpha d_s$ for step sizes α such that $v+\alpha d_x\in\Omega$ and $v+\alpha d_s\in\Omega$.

Lemma 3.5 For any $\alpha\in(0,\frac{\lambda_{\min}(v)}{\sigma})$,

$\lambda_{\min}(v+\alpha d_x)\ge\lambda_{\min}(v)-\alpha\sigma\quad\text{and}\quad\lambda_{\min}(v+\alpha d_s)\ge\lambda_{\min}(v)-\alpha\sigma,$

where σ is the number defined in (15).

Proof Let α be a fixed number in $(0,\frac{\lambda_{\min}(v)}{\sigma})$. From Lemma 14 in [16],

$\lambda_{\min}(v+\alpha d_x)\ge\lambda_{\min}(v)-\alpha\|d_x\|.$

Since $\sigma\ge\|d_x\|$, we have

$\lambda_{\min}(v+\alpha d_x)\ge\lambda_{\min}(v)-\alpha\sigma.$

Similarly, we obtain

$\lambda_{\min}(v+\alpha d_s)\ge\lambda_{\min}(v)-\alpha\sigma.$

 □

The proof of the following proposition can be found in [18], but for completeness, we give a detailed proof.

Proposition 3.2 ([18])

Suppose that the functions ψ(x) and Ψ(x) are defined by (2) and (13), respectively. Then, for any $\alpha\in(0,\frac{\lambda_{\min}(v)}{\sigma})$,

$\frac{d}{d\alpha}f_1(\alpha)=\frac{1}{2}\operatorname{tr}\bigl(\psi'(v+\alpha d_x)d_x\bigr)+\frac{1}{2}\operatorname{tr}\bigl(\psi'(v+\alpha d_s)d_s\bigr)$

and

$\frac{d^2}{d\alpha^2}f_1(\alpha)\le\frac{3}{2}\max\{\Delta\psi'(\lambda_i(v+\alpha d_x),\lambda_j(v+\alpha d_x))\mid i,j=1,\ldots,r\}\|d_x\|^2+\frac{3}{2}\max\{\Delta\psi'(\lambda_i(v+\alpha d_s),\lambda_j(v+\alpha d_s))\mid i,j=1,\ldots,r\}\|d_s\|^2,$

where, for a differentiable function g,

$\Delta g(\lambda_i(\cdot),\lambda_j(\cdot)):=\begin{cases}g'(\lambda_i(\cdot))&\text{if }\lambda_i(\cdot)=\lambda_j(\cdot),\\ \frac{g(\lambda_i(\cdot))-g(\lambda_j(\cdot))}{\lambda_i(\cdot)-\lambda_j(\cdot)}&\text{if }\lambda_i(\cdot)\ne\lambda_j(\cdot),\end{cases}$

applied above with $g=\psi'$.

Proof Using Lemma 3.1 in [21], we have

$\frac{d}{d\alpha}\psi(v+\alpha d_x)=\sum_{i=1}^r\Delta\psi\bigl(\lambda_i(v+\alpha d_x),\lambda_i(v+\alpha d_x)\bigr)\langle c_i(v+\alpha d_x),d_x\rangle c_i(v+\alpha d_x)+\sum_{1\le j<l\le r}4\Delta\psi\bigl(\lambda_j(v+\alpha d_x),\lambda_l(v+\alpha d_x)\bigr)c_j(v+\alpha d_x)\bigl(c_l(v+\alpha d_x)d_x\bigr).$
(17)

Then we have

$\frac{d}{d\alpha}\operatorname{tr}\bigl(\psi(v+\alpha d_x)\bigr)=\Bigl\langle\frac{d}{d\alpha}\psi(v+\alpha d_x),e\Bigr\rangle=\sum_{i=1}^r\psi'\bigl(\lambda_i(v+\alpha d_x)\bigr)\langle c_i(v+\alpha d_x),d_x\rangle\operatorname{tr}\bigl(c_i(v+\alpha d_x)\bigr),$

where the off-diagonal terms of (17) vanish because, by the associativity of the trace, $\operatorname{tr}(c_j(c_ld_x))=\operatorname{tr}((c_jc_l)d_x)=0$ for $j<l$. From Baes [17, 30] we know that $\operatorname{tr}(c_i(v+\alpha d_x))=1$, and hence, from the definition (3), we get

$\frac{d}{d\alpha}\operatorname{tr}\bigl(\psi(v+\alpha d_x)\bigr)=\operatorname{tr}\bigl(\psi'(v+\alpha d_x)d_x\bigr).$

Thus, we have

$\frac{d}{d\alpha}f_1(\alpha)=\frac{1}{2}\operatorname{tr}\bigl(\psi'(v+\alpha d_x)d_x\bigr)+\frac{1}{2}\operatorname{tr}\bigl(\psi'(v+\alpha d_s)d_s\bigr).$

So, the first equality holds.

For the second inequality, we use the analogue of (17) with ψ replaced by $\psi'$. Using $\langle c_i(v+\alpha d_x),d_x\rangle=\operatorname{tr}(c_i(v+\alpha d_x)d_x)$ and the associativity of the trace,

$\frac{d^2}{d\alpha^2}\operatorname{tr}\bigl(\psi(v+\alpha d_x)\bigr)=\frac{d}{d\alpha}\operatorname{tr}\bigl(\psi'(v+\alpha d_x)d_x\bigr)=\operatorname{tr}\Bigl(\Bigl(\frac{d}{d\alpha}\psi'(v+\alpha d_x)\Bigr)d_x\Bigr)=\sum_{i=1}^r\Delta\psi'\bigl(\lambda_i(v+\alpha d_x),\lambda_i(v+\alpha d_x)\bigr)\bigl(\operatorname{tr}(c_i(v+\alpha d_x)d_x)\bigr)^2+\sum_{1\le j<l\le r}4\Delta\psi'\bigl(\lambda_j(v+\alpha d_x),\lambda_l(v+\alpha d_x)\bigr)\operatorname{tr}\bigl((c_j(v+\alpha d_x)d_x)(c_l(v+\alpha d_x)d_x)\bigr).$

Here, let $d_x=\sum_{j=1}^r\lambda_j(d_x)c_j(d_x)$. Then we have

$\sum_{i=1}^r\bigl(\operatorname{tr}(c_i(v+\alpha d_x)d_x)\bigr)^2=\sum_{i=1}^r\Bigl(\sum_{j=1}^r\lambda_j(d_x)\operatorname{tr}\bigl(c_i(v+\alpha d_x)c_j(d_x)\bigr)\Bigr)^2.$

Since $c_i(v+\alpha d_x)$ and $c_j(d_x)$ lie in $\bar{\Omega}$, which is a self-dual cone,

$\operatorname{tr}\bigl(c_i(v+\alpha d_x)c_j(d_x)\bigr)\ge0,\quad\text{for }i,j=1,\ldots,r.$

Furthermore, $\sum_{j=1}^r\operatorname{tr}(c_i(v+\alpha d_x)c_j(d_x))=\operatorname{tr}(c_i(v+\alpha d_x))=1$. Then, by the convexity of $t\mapsto t^2$, we have

$\sum_{i=1}^r\bigl(\operatorname{tr}(c_i(v+\alpha d_x)d_x)\bigr)^2\le\sum_{i=1}^r\sum_{j=1}^r\lambda_j^2(d_x)\operatorname{tr}\bigl(c_i(v+\alpha d_x)c_j(d_x)\bigr)=\sum_{j=1}^r\lambda_j^2(d_x)\sum_{i=1}^r\operatorname{tr}\bigl(c_i(v+\alpha d_x)c_j(d_x)\bigr).$

Since $\sum_{i=1}^r\operatorname{tr}(c_i(v+\alpha d_x)c_j(d_x))=\operatorname{tr}(c_j(d_x))=1$, we have

$\sum_{i=1}^r\bigl(\operatorname{tr}(c_i(v+\alpha d_x)d_x)\bigr)^2\le\|d_x\|^2.$
(18)

Now, we decompose $d_x$ along Lemma 1.1 as $d_x=\sum_{1\le i\le k\le r}dx_{ik}$ with respect to the Jordan frame $\{c_1(v+\alpha d_x),\ldots,c_r(v+\alpha d_x)\}$. Then, for $j<l$,

$c_j(v+\alpha d_x)d_x=dx_{jj}+\frac{1}{2}\sum_{k\ne j}dx_{jk},$

and hence, by the orthogonality of the Peirce spaces and (ii), (iv) of Lemma 1.1,

$\operatorname{tr}\bigl((c_j(v+\alpha d_x)d_x)(c_l(v+\alpha d_x)d_x)\bigr)=\frac{1}{4}\operatorname{tr}(dx_{jl}^2)=\frac{1}{4}\|dx_{jl}\|^2.$

This means, for each $j<l$,

$\operatorname{tr}\bigl((c_j(v+\alpha d_x)d_x)(c_l(v+\alpha d_x)d_x)\bigr)\ge0.$

Moreover, we have

$4\sum_{1\le j<l\le r}\operatorname{tr}\bigl((c_j(v+\alpha d_x)d_x)(c_l(v+\alpha d_x)d_x)\bigr)=\sum_{1\le j<l\le r}\|dx_{jl}\|^2\le2\|d_x\|^2.$
(19)

Since for each i the terms $(\operatorname{tr}(c_i(v+\alpha d_x)d_x))^2$ are nonnegative, for each j, l with $j<l$ the terms $\operatorname{tr}((c_j(v+\alpha d_x)d_x)(c_l(v+\alpha d_x)d_x))$ are nonnegative, and $\Delta\psi'\ge0$ because $\psi''>0$, we get from (18) and (19)

$\frac{d^2}{d\alpha^2}\operatorname{tr}\bigl(\psi(v+\alpha d_x)\bigr)\le3\max\{\Delta\psi'(\lambda_i(v+\alpha d_x),\lambda_j(v+\alpha d_x))\mid i,j=1,\ldots,r\}\|d_x\|^2.$

Similarly,

$\frac{d^2}{d\alpha^2}\operatorname{tr}\bigl(\psi(v+\alpha d_s)\bigr)\le3\max\{\Delta\psi'(\lambda_i(v+\alpha d_s),\lambda_j(v+\alpha d_s))\mid i,j=1,\ldots,r\}\|d_s\|^2.$

From the definition of $f_1(\alpha)$,

$\frac{d^2}{d\alpha^2}f_1(\alpha)=\frac{d^2}{d\alpha^2}\Bigl(\frac{1}{2}\operatorname{tr}\bigl(\psi(v+\alpha d_x)\bigr)+\frac{1}{2}\operatorname{tr}\bigl(\psi(v+\alpha d_s)\bigr)\Bigr).$

Thus, we have the conclusion. □

The next result presents an upper bound for the second derivative of $f_1(\alpha)$, which is usable for establishing the polynomial complexity of the algorithm.

Proposition 3.3 For any $\alpha\in(0,\frac{\lambda_{\min}(v)}{\sigma})$,

$\frac{d^2}{d\alpha^2}f_1(\alpha)\le\frac{3}{2}\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\sigma^2.$

Proof Since $\psi''(t)$ is a decreasing function on $t\in(0,\infty)$, using Lemma 3.5 and the mean value theorem, we have

$\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\ge\max\{\Delta\psi'(\lambda_i(v+\alpha d_x),\lambda_j(v+\alpha d_x))\mid i,j=1,\ldots,r\}$

and

$\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\ge\max\{\Delta\psi'(\lambda_i(v+\alpha d_s),\lambda_j(v+\alpha d_s))\mid i,j=1,\ldots,r\}.$

Thus, by Proposition 3.2,

$\frac{d^2}{d\alpha^2}f_1(\alpha)\le\frac{3}{2}\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\|d_x\|^2+\frac{3}{2}\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\|d_s\|^2=\frac{3}{2}\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)\sigma^2.$

 □

We can easily check that $f_1(0)=0$ and $f_1'(0)=-\frac{\sigma^2}{2}$. By Proposition 3.3, we obtain an upper bound $f_2(\alpha)$ for $f_1(\alpha)$ as follows:

$f_1(\alpha)=f_1(0)+f_1'(0)\alpha+\int_0^\alpha\int_0^\xi f_1''(\zeta)\,d\zeta\,d\xi\le f_2(\alpha):=f_1(0)+f_1'(0)\alpha+\frac{3}{2}\sigma^2\int_0^\alpha\int_0^\xi\psi''\bigl(\lambda_{\min}(v)-\zeta\sigma\bigr)\,d\zeta\,d\xi.$

Note that $f_2(0)=0$. Furthermore, since $f_2'(\alpha)=-\frac{\sigma^2}{2}+\frac{3\sigma}{2}\bigl(\psi'(\lambda_{\min}(v))-\psi'(\lambda_{\min}(v)-\alpha\sigma)\bigr)$, we have $f_2'(0)=-\frac{\sigma^2}{2}$, which is the same value as $f_1'(0)$, and $f_2''(\alpha)=\frac{3\sigma^2}{2}\psi''(\lambda_{\min}(v)-\alpha\sigma)$, which is increasing in α on $[0,\frac{\lambda_{\min}(v)}{\sigma})$. Using $f_1'(0)=f_2'(0)$ and $f_1''(\alpha)\le f_2''(\alpha)$, we can easily check that

$f_1'(\alpha)=f_1'(0)+\int_0^\alpha f_1''(\xi)\,d\xi\le f_2'(\alpha).$

This relation gives

$f_1'(\alpha)\le0,\quad\text{if }f_2'(\alpha)\le0.$

To compute a feasible step size α such that the proximity measure decreases when we take a new iterate for fixed μ, we want to find the step size α satisfying $f_2'(\alpha)\le0$ with α as large as possible. Since $f_2''(\alpha)>0$, that is, $f_2'(\alpha)$ is monotonically increasing in α, the largest possible value of α satisfying $f_2'(\alpha)\le0$ occurs when $f_2'(\alpha)=0$, that is,

$-\psi'\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)+\psi'\bigl(\lambda_{\min}(v)\bigr)=\frac{\sigma}{3}.$
(20)

Since $\psi''(t)$ is monotonically decreasing, the derivative of the left-hand side of (20) with respect to $\lambda_{\min}(v)$ is

$-\psi''\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)+\psi''\bigl(\lambda_{\min}(v)\bigr)<0.$

So, the left-hand side of (20) is decreasing in $\lambda_{\min}(v)$. This implies that if $\lambda_{\min}(v)$ becomes smaller, then α gets smaller for fixed σ. Note that

$\sigma=\sqrt{\sum_{i=1}^r\bigl(\psi'(\lambda_i(v))\bigr)^2}\ge\bigl|\psi'(\lambda_{\min}(v))\bigr|\ge-\psi'\bigl(\lambda_{\min}(v)\bigr),$

and equality holds throughout if and only if $\lambda_{\min}(v)$ is the only coordinate in $(\lambda_1(v),\ldots,\lambda_r(v))$ which is different from 1 and $\lambda_{\min}(v)<1$, that is, $\psi'(\lambda_{\min}(v))<0$. Hence, the worst situation for the largest step size occurs when $\lambda_{\min}(v)$ satisfies

$-\psi'\bigl(\lambda_{\min}(v)\bigr)=\sigma.$
(21)

In that case, the largest α satisfying (20) is minimal. For our purpose, we need to deal with this worst case, and so we assume that (21) holds.

From now on, we denote by $\rho:[0,\infty)\to(0,1]$ the inverse function of the restriction of $-\psi'(t)$ to the interval $(0,1]$. Then (21) implies

$\lambda_{\min}(v)=\rho(\sigma).$
(22)

By using (20) and (21), we immediately obtain

$-\psi'\bigl(\lambda_{\min}(v)-\alpha\sigma\bigr)=\frac{4}{3}\sigma.$

By the definition of ρ and (22), the largest step size $\alpha^*$ in the worst case is given by

$\alpha^*=\frac{\rho(\sigma)-\rho(\frac{4}{3}\sigma)}{\sigma}.$
(23)

For the purpose of finding an upper bound of f(α), we need a default step size $\bar{\alpha}$ that is a lower bound for $\alpha^*$ and is expressed in terms of σ.

Lemma 3.6 Let $\sigma\ge1$. Then

$\psi''\bigl(\rho(\tfrac{4}{3}\sigma)\bigr)\le1+3\sigma(1+pq+q)\Bigl(1+\frac{1}{p}\log3\sigma\Bigr)^{\frac{q+1}{q}}.$

Proof From $\psi'(t)=t-t^{-q-1}e^{p(t^{-q}-1)}$, set $\eta(t):=t^{-q-1}e^{p(t^{-q}-1)}$ (we write η to avoid a clash with the notation $\psi_b$ of Lemma 3.1) and let $\underline{\rho}:[1,\infty)\to(0,1]$ denote the inverse function of the restriction of $\eta(t)$ to the interval $(0,1]$. Let $\rho(\frac{4}{3}\sigma)=\tilde{t}$. Then $0<\tilde{t}\le1$ and $\frac{4}{3}\sigma=-\psi'(\tilde{t})=\eta(\tilde{t})-\tilde{t}$. So, $\eta(\tilde{t})=\tilde{t}+\frac{4}{3}\sigma\le1+2\sigma\le3\sigma$. Since $\underline{\rho}$ is a decreasing function, $\rho(\frac{4}{3}\sigma)=\tilde{t}=\underline{\rho}(\eta(\tilde{t}))\ge\underline{\rho}(3\sigma)$. Let $\underline{\rho}(3\sigma)=\hat{t}$. Then

$3\sigma=\eta(\hat{t})=\hat{t}^{-q-1}e^{p(\hat{t}^{-q}-1)}$
(24)

implies $e^{p(\hat{t}^{-q}-1)}=3\sigma\hat{t}^{q+1}\le3\sigma$, and hence

$\hat{t}^{-q}\le1+\frac{1}{p}\log3\sigma.$
(25)

Since $\psi''(t)=1+\bigl((q+1)t^{-1}+pq\,t^{-q-1}\bigr)\eta(t)$, we obtain from (24)

$\psi''(\hat{t})=1+3\sigma\bigl((q+1)\hat{t}^{-1}+pq\,\hat{t}^{-q-1}\bigr)\le1+3\sigma(1+pq+q)\hat{t}^{-(q+1)}\le1+3\sigma(1+pq+q)\Bigl(1+\frac{1}{p}\log3\sigma\Bigr)^{\frac{q+1}{q}},$

where the first inequality comes from $\hat{t}\in(0,1]$ and the last from (25). Since $\psi''$ is decreasing and $\hat{t}\le\tilde{t}$, we conclude $\psi''(\rho(\frac{4}{3}\sigma))=\psi''(\tilde{t})\le\psi''(\hat{t})$. □

Now, we present a lower bound for the value of $\alpha^*$.

Theorem 3.2 Let $\alpha^*$ be as defined in (23). Then

$\alpha^*\ge\frac{1}{3\bigl(1+3\sigma(1+pq+q)\bigl(1+\frac{1}{p}\log3\sigma\bigr)^{\frac{q+1}{q}}\bigr)}.$

Proof Since $-\psi'(\rho(\sigma))=\sigma$, taking the derivative with respect to σ on both sides, we get

$\rho'(\sigma)=-\frac{1}{\psi''(\rho(\sigma))}.$

Moreover, we have

$\alpha^*=\frac{\rho(\sigma)-\rho(\frac{4}{3}\sigma)}{\sigma}=-\frac{1}{\sigma}\int_\sigma^{\frac{4}{3}\sigma}\rho'(\xi)\,d\xi=\frac{1}{\sigma}\int_\sigma^{\frac{4}{3}\sigma}\frac{d\xi}{\psi''(\rho(\xi))}\ge\frac{1}{\sigma}\cdot\frac{\sigma}{3}\cdot\frac{1}{\psi''(\rho(\frac{4}{3}\sigma))}=\frac{1}{3\psi''(\rho(\frac{4}{3}\sigma))},$

where the inequality follows from $\sigma\le\xi\le\frac{4}{3}\sigma$ and the fact that ρ and $\psi''$ are monotonically decreasing. By Lemma 3.6, the proof is complete. □

To use $\bar{\alpha}$ as the default step size in the algorithm for the SOP, we define it as follows:

$\bar{\alpha}=\frac{1}{3\bigl(1+3\sigma(1+pq+q)\bigl(1+\frac{1}{p}\log3\sigma\bigr)^{\frac{q+1}{q}}\bigr)}.$
(26)

We will use α ¯ as the default step size in our algorithm.
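As a numerical illustration (our sketch, not part of the original analysis; the function names and the bracket 0.01 in the bisection are our choices), the following Python fragment computes $\bar{\alpha}$ from (26) and, via bisection on $-\psi'$, the worst-case step size $\alpha^*$ from (23); one can check numerically that $\bar{\alpha}\le\alpha^*$ for the values tried:

    import math

    def dpsi(t, p=1.0, q=1.0):
        # psi'(t) for the kernel (11): psi(t) = (t^2-1)/2 + (e^{p(t^{-q}-1)}-1)/(pq)
        return t - t ** (-q - 1.0) * math.exp(p * (t ** (-q) - 1.0))

    def rho(s, p=1.0, q=1.0, tol=1e-12):
        # rho(s): inverse of -psi' restricted to (0,1], by bisection
        # (-psi' decreases from +infinity to 0 on (0,1]; 0.01 is a safe lower bracket
        # for the moderate values of s used below)
        lo, hi = 0.01, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if -dpsi(mid, p, q) > s:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def alpha_star(sigma, p=1.0, q=1.0):
        # largest worst-case step size (23)
        return (rho(sigma, p, q) - rho(4.0 * sigma / 3.0, p, q)) / sigma

    def alpha_bar(sigma, p=1.0, q=1.0):
        # default step size (26)
        return 1.0 / (3.0 * (1.0 + 3.0 * sigma * (1.0 + p * q + q)
                             * (1.0 + math.log(3.0 * sigma) / p) ** ((q + 1.0) / q)))

    for sigma in (1.0, 2.0, 5.0, 10.0):
        print(sigma, alpha_bar(sigma), alpha_star(sigma))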

3.3 Decrease of the proximity function during an inner iteration

Now, we show that our proximity function Ψ decreases in each inner iteration with our default step size $\bar{\alpha}$. This can be easily established by using the following result.

Lemma 3.7 ([4])

Let h(t) be a twice differentiable convex function with $h(0)=0$, $h'(0)<0$, and let h(t) attain its (global) minimum at $t^*>0$. If $h''(t)$ is increasing for $t\in[0,t^*]$, then

$h(t)\le\frac{th'(0)}{2},\qquad0\le t\le t^*.$

Since $f_2(\alpha)$ satisfies the assumptions of the above lemma,

$f(\alpha)\le f_1(\alpha)\le f_2(\alpha)\le\frac{f_2'(0)}{2}\alpha\quad\text{for all }0\le\alpha\le\alpha^*.$

Since $f_2'(0)=-\frac{\sigma^2}{2}$, we can obtain an upper bound for the decrease of the proximity in an inner iteration by Lemma 3.7.

Theorem 3.3 Let $\bar{\alpha}$ be the default step size as defined in (26). Then we have

$f(\bar{\alpha})\le-\frac{1}{6}\cdot\frac{\sqrt{\Psi}}{1+3\sqrt{2}(1+pq+q)\bigl(1+\frac{1}{p}\log(3\sqrt{2\Psi_0})\bigr)^{\frac{q+1}{q}}}.$

Proof Since $f_2'(0)=-\frac{\sigma^2}{2}$ and $\bar{\alpha}\in(0,\alpha^*]$, we have

$f(\bar{\alpha})\le\frac{1}{2}\bar{\alpha}f_2'(0)=\frac{1}{2}\cdot\frac{1}{3\bigl(1+3\sigma(1+pq+q)(1+\frac{1}{p}\log3\sigma)^{\frac{q+1}{q}}\bigr)}\cdot\Bigl(-\frac{\sigma^2}{2}\Bigr)=-\frac{1}{12}\cdot\frac{\sigma^2}{1+3\sigma(1+pq+q)(1+\frac{1}{p}\log3\sigma)^{\frac{q+1}{q}}}.$

This expresses the decrease in one inner iteration in terms of σ. Since the right-hand side is monotonically decreasing in σ, we can express the decrease in terms of $\Psi=\Psi(v)$ by Lemma 2.1 ($\sigma\ge\sqrt{2\Psi}$) as follows:

$f(\bar{\alpha})\le-\frac{1}{6}\cdot\frac{\Psi}{1+3\sqrt{2\Psi}(1+pq+q)(1+\frac{1}{p}\log(3\sqrt{2\Psi}))^{\frac{q+1}{q}}}\le-\frac{1}{6}\cdot\frac{\Psi}{\sqrt{\Psi}\bigl(1+3\sqrt{2}(1+pq+q)(1+\frac{1}{p}\log(3\sqrt{2\Psi_0}))^{\frac{q+1}{q}}\bigr)}=-\frac{1}{6}\cdot\frac{\sqrt{\Psi}}{1+3\sqrt{2}(1+pq+q)(1+\frac{1}{p}\log(3\sqrt{2\Psi_0}))^{\frac{q+1}{q}}},$

where the second inequality follows from $\Psi_0\ge\Psi\ge\tau\ge1$. This proves the theorem. □

3.4 Iteration bound

We need to count how many inner iterations are required to return to the situation $\Psi(v)\le\tau$ after a μ-update. We denote the value of Ψ(v) after the μ-update by $\Psi_0$; the subsequent values in the same outer iteration are denoted by $\Psi_k$, $k=1,\ldots$ . If K denotes the total number of inner iterations in an outer iteration, then we have

$\Psi_0\le L(r,\theta,\tau)=O(r),\qquad\Psi_{K-1}>\tau,\qquad0\le\Psi_K\le\tau,$

and, according to Theorem 3.3,

$\Psi_{k+1}\le\Psi_k-\frac{1}{6+18\sqrt{2}(1+pq+q)\bigl(1+\frac{1}{p}\log(3\sqrt{2\Psi_0})\bigr)^{\frac{q+1}{q}}}\Psi_k^{1/2}.$

At this stage, we invoke Lemma 14 in [4].

Lemma 3.8 ([4])

Let $t_0,t_1,\ldots,t_K$ be a sequence of positive numbers such that

$t_{k+1}\le t_k-\beta t_k^{1-\gamma},\qquad k=0,1,\ldots,K-1,$

where $\beta>0$ and $0<\gamma\le1$. Then

$K\le\frac{t_0^\gamma}{\beta\gamma}.$

Letting $t_k=\Psi_k$, $\beta=\frac{1}{6+18\sqrt{2}(1+pq+q)(1+\frac{1}{p}\log(3\sqrt{2\Psi_0}))^{(q+1)/q}}$ and $\gamma=\frac{1}{2}$, we can get the following lemma from Lemma 3.8.

Lemma 3.9 Let K be the total number of inner iterations in an outer iteration. Then we have

$K\le2\Bigl(6+18\sqrt{2}(1+pq+q)\Bigl(1+\frac{1}{p}\log(3\sqrt{2\Psi_0})\Bigr)^{\frac{q+1}{q}}\Bigr)\Psi_0^{1/2},$

where $\Psi_0$ is the value of Ψ(v) after the μ-update in the outer iteration.

Now, we estimate the total number of iterations of our algorithm.

Theorem 3.4 If $\tau\ge1$ and $0<\theta<1$, the total number of iterations is not more than

$2\Bigl(6+18\sqrt{2}(1+pq+q)\Bigl(1+\frac{1}{p}\log(3\sqrt{2\Psi_0})\Bigr)^{\frac{q+1}{q}}\Bigr)\Psi_0^{1/2}\,\frac{1}{\theta}\log\frac{r}{\epsilon}.$

Proof In the algorithm, the outer loop runs while $r\mu\ge\epsilon$, with $\mu_k:=(1-\theta)^k\mu_0$ and $\mu_0=1$. By a simple computation, we have

$k\le\frac{1}{\theta}\log\frac{r}{\epsilon}.$

Therefore, the number of outer iterations is bounded above by

$\frac{1}{\theta}\log\frac{r}{\epsilon}.$

Multiplying this bound by the bound of Lemma 3.9 proves the theorem. □

Since $\Psi_0^{1/2}=O(\sqrt{r})$, if we take $p=O(\log r)$ and $q=1$, then the upper bound for the total number of inner iterations in an outer iteration becomes

$O(\sqrt{r}\log r).$

Also, we take for θ a constant (not depending on r), namely $\frac{1}{\theta}=\Theta(1)$. With $\tau=O(r)$, the complexity of the large-update primal-dual interior-point method for the SOP based on our new proximity function with $p=\log r$ and $q=1$ is

$O\Bigl(\sqrt{r}\log r\log\frac{r}{\epsilon}\Bigr),$

which is the best known complexity result.
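To make the bound concrete (our illustration; the formulas are exactly those of Lemma 3.9 and Theorem 3.4 with $\tau=r$ and $\Psi_0\le L=\frac{2}{1-\theta}(\tau+r)$, while the variable names are ours):

    import math

    def total_iteration_bound(r, eps, theta=0.5, p=None, q=1.0):
        # worst-case iteration bound of Theorem 3.4 with tau = r
        p = p if p is not None else max(1.0, math.log(r))  # p = O(log r)
        tau = float(r)
        psi0 = 2.0 * (tau + r) / (1.0 - theta)             # upper bound L for Psi_0
        c = (1.0 + p * q + q) * (1.0 + math.log(3.0 * math.sqrt(2.0 * psi0)) / p) ** ((q + 1.0) / q)
        inner = 2.0 * (6.0 + 18.0 * math.sqrt(2.0) * c) * math.sqrt(psi0)
        outer = math.log(r / eps) / theta
        return inner * outer

    for r in (10, 100, 1000):
        print(r, total_iteration_bound(r, 1e-6))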

References

  1. Andersen ED, Gondzio J, Mészáros C, Xu X: Implementation of interior point methods for large scale linear programming. In Interior Point Methods of Mathematical Programming. Edited by: Terlaky T. Kluwer Academic, Dordrecht; 1996:189-252.

  2. Renegar J: A Mathematical View of Interior-Point Methods in Convex Optimization. MPS/SIAM Ser. Optim. SIAM, Philadelphia; 2001.

  3. Peng J, Roos C, Terlaky T: Primal-dual interior-point methods for second-order conic optimization based on self-regular proximities. SIAM J. Optim. 2002, 13:179-203. doi:10.1137/S1052623401383236.

  4. Peng J, Roos C, Terlaky T: Self-regular functions and new search directions for linear and semidefinite optimization. Math. Program. 2002, 93:129-171. doi:10.1007/s101070200296.

  5. Peng J, Roos C, Terlaky T: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton; 2002.

  6. Bai YQ, El Ghami M, Roos C: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 2004, 15:101-128. doi:10.1137/S1052623403423114.

  7. Wang GQ, Bai YQ, Roos C: Primal-dual interior-point algorithms for semidefinite optimization based on a simple kernel function. J. Math. Model. Algorithms 2005, 4:409-433. doi:10.1007/s10852-005-3561-3.

  8. Bai YQ, Wang GQ: Primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function. Acta Math. Sin. Engl. Ser. 2007, 23:2027-2042. doi:10.1007/s10114-007-0967-z.

  9. Choi BK, Lee GM: On complexity analysis of the primal-dual interior-point methods for semidefinite optimization problem based on a new proximity function. Nonlinear Anal., Theory Methods Appl. 2009, 71:e2628-e2640. doi:10.1016/j.na.2009.05.078.

  10. Choi BK, Lee GM: On complexity analysis of the primal-dual interior-point method for second-order cone optimization problem. J. Korean Soc. Ind. Appl. Math. 2010, 14:93-111.

  11. Nesterov YE, Todd MJ: Primal-dual interior-point methods for self-scaled cones. SIAM J. Optim. 1998, 8:324-364. doi:10.1137/S1052623495290209.

  12. Muramatsu M: On a commutative class of search directions for linear programming over symmetric cones. J. Optim. Theory Appl. 2002, 112:595-625. doi:10.1023/A:1017920200889.

  13. Faybusovich L: Linear systems in Jordan algebras and primal-dual interior-point algorithms. J. Comput. Appl. Math. 1997, 86:149-175. doi:10.1016/S0377-0427(97)00153-2.

  14. Faybusovich L: Euclidean Jordan algebras and interior-point algorithms. Positivity 1997, 1:331-357. doi:10.1023/A:1009701824047.

  15. Faybusovich L, Arana R: A long-step primal-dual algorithm for symmetric programming problems. Syst. Control Lett. 2001, 43:3-7. doi:10.1016/S0167-6911(01)00092-5.

  16. Schmieta S, Alizadeh F: Extensions of primal-dual interior-point algorithms to symmetric cones. Math. Program. 2003, 96:409-438. doi:10.1007/s10107-003-0380-z.

  17. Baes M: Optimization Methods for Convex Symmetric Problems. Monograph, April 2007.

  18. Choi BK, Lee GM: Complexity analysis for primal-dual interior-point methods for self-scaled optimization problems (submitted).

  19. Vieira MVC: Jordan algebraic approach to symmetric optimization. PhD thesis, Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands; 2007.

  20. Vieira MVC: Interior-point methods based on kernel functions for symmetric optimization. Optim. Methods Softw. 2011, 27:513-537.

  21. Sun D, Sun J: Löwner's operator and spectral functions in Euclidean Jordan algebras. Math. Oper. Res. 2008, 33:421-445. doi:10.1287/moor.1070.0300.

  22. Faraut J, Korányi A: Analysis on Symmetric Cones. Oxford University Press, London; 1994.

  23. Nesterov YE, Nemirovskii A: Interior-Point Polynomial Algorithms in Convex Programming. SIAM Stud. Appl. Math. 13. SIAM, Philadelphia; 1994.

  24. Gowda MS, Sznajder R, Tao J: Some P-properties for linear transformations on Euclidean Jordan algebras. Linear Algebra Appl. 2004, 393:203-232.

  25. Faybusovich L: A Jordan-algebraic approach to potential-reduction algorithms. Math. Z. 2002, 239:117-129. doi:10.1007/s002090100286.

  26. Lim Y: Applications of geometric means on symmetric cones. Math. Ann. 2001, 319:457-468. doi:10.1007/PL00004442.

  27. Tunçel L: Potential reduction and primal-dual methods. In Handbook of Semidefinite Programming: Theory, Algorithms and Applications. Edited by: Wolkowicz H, Saigal R, Vandenberghe L. Kluwer Academic, Boston; 2000:235-265.

  28. Alizadeh F, Schmieta S: Symmetric cones, potential reduction methods and word-by-word extensions. In Handbook of Semidefinite Programming: Theory, Algorithms and Applications. Edited by: Wolkowicz H, Saigal R, Vandenberghe L. Kluwer Academic, Boston; 2000:195-233.

  29. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1991.

  30. Baes M: Convexity and differentiability properties of spectral functions and spectral mappings on Euclidean Jordan algebras. Linear Algebra Appl. 2007, 422:664-700. doi:10.1016/j.laa.2006.11.025.


Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012-0006236).

Author information


Correspondence to Gue Myung Lee.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors together discussed and solved the problems in the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
