
JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION
Volume 11, Number 2, April 2015, pp. 619–630
doi:10.3934/jimo.2015.11.619

A FAMILY OF EXTRAGRADIENT METHODS FOR SOLVING

EQUILIBRIUM PROBLEMS

Thi Phuong Dong Nguyen

Institute for Computational Science and Technology (ICST)

Ho Chi Minh City, Vietnam

Jean Jacques Strodiot1

Institute for Computational Science and Technology (ICST)
Ho Chi Minh City, Vietnam

and

University of Namur, Belgium

Thi Thu Van Nguyen

Institute for Computational Science and Technology (ICST)

Ho Chi Minh City, Vietnam

and
University of Science at Ho Chi Minh City (VNU), Vietnam

Van Hien Nguyen

Institute for Computational Science and Technology (ICST)

Ho Chi Minh City, Vietnam
and

University of Namur, Belgium

(Communicated by Masao Fukushima)

Abstract. In this paper we introduce a class of numerical methods for solving an equilibrium problem. This class depends on a parameter and contains the classical extragradient method and a generalization of the two-step extragradient method proposed recently by Zykina and Melen'chuk for solving variational inequality problems. Convergence of each algorithm of this class to a solution of the equilibrium problem is obtained under the condition that the equilibrium function associated with the problem is pseudomonotone and Lipschitz continuous. Some preliminary numerical results are given to compare the numerical behavior of the two-step extragradient method with respect to the other methods of the class and in particular to the extragradient method.

1. Introduction. Let K be a nonempty closed convex subset of Rn and let f : K × K → R be a mapping such that, for every point x ∈ K, f(x, x) = 0. The equilibrium problem, denoted by EP(f,K), consists in finding a point x∗ ∈ K such that

f(x∗, y) ≥ 0 ∀y ∈ K.

2010 Mathematics Subject Classification. Primary: 49J40, 90C25; Secondary: 65K10.
Key words and phrases. Extragradient method, two-step extragradient method, equilibrium problems, variational inequalities.
1 Corresponding author.



The solution set of EP(f,K) is denoted S∗. This problem has been introduced in [6] under the name of Ky Fan inequality problem and recently revisited by Blum and Oettli in [3]. The equilibrium problem is very general in the sense that it contains, among others, the optimization problem, the variational inequality, the saddle point problem, the Nash equilibrium problem in noncooperative games, and the fixed point problem; see, for instance, [1, 2, 5, 10] and the references therein. Its interest comes from the fact that it allows us to unify all these particular problems in a convenient way.

An important example of function f is given by

f(x, y) = 〈F (x), y − x〉 ∀x, y ∈ K

where F : K → K. In that case, problem EP(f,K) can be expressed as finding x∗ ∈ K such that

〈F (x∗), y − x∗〉 ≥ 0 ∀y ∈ K.

This problem is called a variational inequality and is denoted by VIP(F,K). There are a lot of computational methods for solving this problem; see the monographs of Nagurney [12], of Facchinei and Pang [4], of Konnov [8] and the references therein. It is well-known that VIP(F,K) can be reformulated as a fixed point problem

x∗ = PK(x∗ − βF (x∗))

where β is any positive number and PK(·) denotes the orthogonal projection on K. The corresponding iterative algorithm is given by

xk+1 = PK(xk − βF (xk)) ∀k

where x0 is chosen in K. Unfortunately, this method is only guaranteed to converge to a solution of VIP(F,K) when F is both strongly monotone and Lipschitz continuous. These conditions are very strong and as such they narrow down the scope of applications of the method. To overcome these drawbacks, many modified methods have been proposed. A first important modification is due to Korpelevich [9]. She proposed an extragradient method consisting of two projections instead of a single projection as above. More precisely, given xk ∈ K, the iterate xk+1 is computed in two steps as follows:

x̄k = PK(xk − βF (xk))

xk+1 = PK(xk − βF (x̄k)).

A prediction step (giving x̄k) is followed by a correction step (giving xk+1). It is proven that the extragradient method is globally convergent when F is monotone and Lipschitz continuous on K and when 0 < β < 1/L, where L > 0 is the Lipschitz constant associated with F.

Since then, many variants of the extragradient method were developed to improve the efficiency of the method. See, for example, [4, 7, 8] for solving VIP(F,K) and [2, 11, 13, 14, 18, 19] for solving EP(f,K).


Recently, the authors of [21] proposed a two-step extragradient method for solving VIP(F,K). Given xk ∈ K, the vectors x̄k, x̃k, and xk+1 are successively computed as

x̄k = PK(xk − βF (xk))

x̃k = PK(x̄k − βF (x̄k))

xk+1 = PK(xk − βF (x̃k)).

So, at each iteration, three steps are necessary to find xk+1 from xk. First, two projection steps are successively performed to find a good search direction F(x̃k) from the current point xk; then a new point xk+1 is computed along this direction. A convergence theorem is given without proof, and computational results are presented to show the efficiency of the two-step extragradient method for solving linear programming problems with filled (without zeroes) matrices of dimension 10 to 50. The proof of convergence appeared in 2012, with some other numerical examples, in [20]. In that paper, the authors proved that the distance between the iterate xk and the solution x∗ decreases sufficiently at each iteration. This property allows them to show that the sequence {xk} generated by their algorithm converges to a solution of VIP(F,K) when F is both monotone and Lipschitz continuous on K, provided that 0 < β < 1/L, where L > 0 is the Lipschitz constant associated with F. Furthermore, conditions on F are given in [24] under which convergence is obtained in a finite number of iterations. Finally, let us also mention that the two-step extragradient method has been applied to a resource management problem (see [22, 23]).

In order to compare the efficiency of the two-step extragradient method with that of the extragradient method, we consider a class of methods containing these two methods. So, for solving VIP(F,K), we propose the following iteration: Given xk ∈ K, the vectors x̄k, x̃k and xk+1 are successively computed as

x̄k = PK(xk − αkF (xk))

x̃k = PK(x̄k − βkF (x̄k))

xk+1 = PK(xk − βkF (x̃k)) (1)

where βk > 0 and 0 ≤ αk ≤ βk. Notice that when αk = 0, the vectors x̄k and xk coincide and the corresponding iteration becomes the classical extragradient iteration. On the other hand, when αk = βk = β > 0, the iteration corresponds to the two-step extragradient iteration [20].
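For concreteness, the following is a minimal Python sketch of iteration (1), under the simplifying assumption that K is a box so that PK reduces to componentwise clipping; the affine operator F used in the demonstration is a hypothetical example of ours, not one of the test problems of Section 3. Setting alpha = 0 recovers the extragradient method, and alpha = beta the two-step extragradient method.

```python
import numpy as np

def project_box(x, lo, hi):
    # Exact projection onto the box K = [lo, hi]^n.
    return np.clip(x, lo, hi)

def family_iteration(F, x0, alpha, beta, lo=-5.0, hi=5.0, tol=1e-6, max_iter=1000):
    # Iteration (1): alpha = 0 gives the classical extragradient method,
    # alpha = beta gives the two-step extragradient method.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_bar = project_box(x - alpha * F(x), lo, hi)         # first prediction
        x_tld = project_box(x_bar - beta * F(x_bar), lo, hi)  # second prediction
        x_new = project_box(x - beta * F(x_tld), lo, hi)      # correction
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Hypothetical monotone affine operator F(x) = Mx + q with M positive definite;
# its Lipschitz constant is L = ||M|| = 3, so beta = 0.1 < 1/L.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 0.5])
F = lambda x: M @ x + q
print(family_iteration(F, np.zeros(2), alpha=0.1, beta=0.1))  # two-step variant
```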

The aim of the paper is twofold: firstly, to study the convergence of a class of methods extending the methods given by (1) to the equilibrium problem, and secondly, to compare the numerical behavior of the proposed algorithms on a test problem. In particular, we want to show that the two-step extragradient method gives better numerical results than the classical extragradient method when filled (without zeroes) matrices are used (see Section 3).

The remainder of the paper is organized as follows: In Section 2, a family of extragradient methods is introduced for solving problem EP(f,K), depending on the values of the parameters αk and βk. When αk = 0 for all k, the corresponding method coincides with the classical extragradient method used for solving an equilibrium problem. When αk = βk = β > 0 for all k, and the variational inequality problem


is considered, the method corresponds to the two-step extragradient method of [20]. The convergence of the new methods is established depending on the values of the parameters αk and βk, and under a Lipschitz continuity assumption on f. In Section 3, some preliminary tests are reported to illustrate the efficiency of the two-step extragradient method.

2. A family of extragradient methods for the equilibrium problem. In this section, we introduce and analyze the following General Extragradient Algorithm for solving problem EP(f,K).

Algorithm GEA.

Step 0. Let x0 ∈ K. Choose ε > 0. Set k = 0.
Step 1. Choose αk ≥ 0 and βk > 0.
Step 2. Compute
x̄k = arg min{ αk f(xk, y) + (1/2)‖y − xk‖2 : y ∈ K }
and
x̃k = arg min{ βk f(x̄k, y) + (1/2)‖y − x̄k‖2 : y ∈ K }.
Step 3. Compute
xk+1 = arg min{ βk f(x̃k, y) + (1/2)‖y − xk‖2 : y ∈ K }.
Step 4. Set k = k + 1 and go to Step 1.
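To make the steps concrete, here is a minimal sketch of Algorithm GEA in Python. The paper's own experiments (Section 3) were run in MATLAB with FMINCON; here scipy's SLSQP solver plays the analogous role for the strongly convex subproblems of Steps 2 and 3, an implementation choice of ours, not the authors'. Since Steps 0–4 loop indefinitely as stated, the sketch adds the stopping criterion ‖xk+1 − xk‖ ≤ ε used in Section 3.

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(f, t, a, x, bounds, constraints):
    # argmin over y in K of { t * f(a, y) + 0.5 * ||y - x||^2 }  (Steps 2 and 3),
    # solved numerically with SLSQP, started from the prox-center x.
    obj = lambda y: t * f(a, y) + 0.5 * np.dot(y - x, y - x)
    return minimize(obj, x, method="SLSQP", bounds=bounds, constraints=constraints).x

def gea(f, x0, alpha, beta, bounds, constraints, eps=1e-6, max_iter=500):
    # General Extragradient Algorithm with constant parameters alpha, beta;
    # K is described by variable bounds plus SLSQP-style constraint dicts.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_bar = solve_subproblem(f, alpha, x, x, bounds, constraints)         # Step 2
        x_tld = solve_subproblem(f, beta, x_bar, x_bar, bounds, constraints)  # Step 2
        x_new = solve_subproblem(f, beta, x_tld, x, bounds, constraints)      # Step 3
        if np.linalg.norm(x_new - x) <= eps:  # stopping rule of Section 3
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Concrete data for f, bounds and constraints are given in the sketch accompanying Example 1 in Section 3.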

In order to obtain the convergence of the sequence {xk} generated by Algorithm GEA, we introduce the next assumption that we assume to hold from now on.

Assumption A

(a) f(·, ·) is lower semicontinuous on K × K and f(·, y) is upper semicontinuous on K;
(b) f is pseudomonotone on K;
(c) f(x, ·) is convex on K for every x ∈ K;
(d) f is Lipschitz continuous on K × K with Lipschitz constants c1 and c2.

Let us recall the following two definitions:

Definition 2.1. A mapping f : K × K → R is said to be
(a) pseudomonotone if
f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0 ∀x, y ∈ K;
(b) Lipschitz continuous if there exist positive numbers c1 and c2 such that
f(x, y) + f(y, z) ≥ f(x, z) − c1‖x − y‖2 − c2‖y − z‖2 ∀x, y, z ∈ K.
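As an illustration (a standard computation, not spelled out in the text): let f(x, y) = 〈F(x), y − x〉 with F Lipschitz continuous with constant L > 0. Then for all x, y, z ∈ K,

f(x, y) + f(y, z) − f(x, z) = 〈F(y) − F(x), z − y〉 ≥ −L‖x − y‖ ‖y − z‖ ≥ −(L/2)‖x − y‖2 − (L/2)‖y − z‖2,

where the last step uses 2ab ≤ a2 + b2, so f is Lipschitz continuous in the sense of (b) with c1 = c2 = L/2. Moreover, if F is monotone, then f(x, y) + f(y, x) = 〈F(x) − F(y), y − x〉 ≤ 0 for all x, y ∈ K, so f is pseudomonotone in the sense of (a).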

The reader is referred to [17] for any undefined terms concerning Convex Analysis.
Before proving the convergence of Algorithm GEA, we can observe that when αk = 0 for all k, the iterate x̄k, defined in Step 2, coincides with xk. So, in this case, Algorithm GEA reduces to the classical extragradient algorithm. When the function f is defined for every x, y ∈ K by f(x, y) = 〈F(x), y − x〉 where F : K → K, the equilibrium problem becomes a variational inequality problem. In that case, if for all k we choose the parameters αk = βk = β where β > 0, then our algorithm corresponds to the algorithm studied in [20].


In order to prove the convergence of Algorithm GEA, we need the following lemmas.

Lemma 2.2. For each k ∈ N and every y ∈ K, the iterates x̄k, x̃k and xk+1 generated by Algorithm GEA satisfy the following inequalities:
(a) αkf(xk, y) − αkf(xk, x̄k) ≥ 〈xk − x̄k, y − x̄k〉;
(b) βkf(x̄k, y) − βkf(x̄k, x̃k) ≥ 〈x̄k − x̃k, y − x̃k〉;
(c) βkf(x̃k, y) − βkf(x̃k, xk+1) ≥ 〈xk − xk+1, y − xk+1〉;
where αk ≥ 0 and βk > 0.

Proof. We only prove inequality (c), the proofs of (a) and (b) being similar. Let k ∈ N and y ∈ K. By definition of xk+1, we have xk − xk+1 − βkωk ∈ NK(xk+1), where ωk ∈ ∂2f(x̃k, xk+1) ≡ ∂f(x̃k, ·)(xk+1) and NK(xk+1) denotes the normal cone to K at xk+1, namely

NK(xk+1) = {z ∈ Rn | 〈z, y − xk+1〉 ≤ 0 ∀y ∈ K}.

This implies that

〈xk − xk+1, y − xk+1〉 ≤ 〈βkωk, y − xk+1〉 ∀y ∈ K.

Since ωk ∈ ∂2f(x̃k, xk+1), we obtain

f(x̃k, y) ≥ f(x̃k, xk+1) + 〈ωk, y − xk+1〉 ∀y ∈ K.

Therefore, we have

βkf(x̃k, y) ≥ βkf(x̃k, xk+1) + 〈xk − xk+1, y − xk+1〉 ∀y ∈ K.

Lemma 2.3. For every x∗ ∈ S∗, y ∈ K, and for all k ∈ N, the iterates x̃k and xk+1 generated by Algorithm GEA satisfy the inequality

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − xk+1‖2 − 2βkf(x̃k, xk+1).

Proof. Substituting x∗ for y in Lemma 2.2 (c) and using the equality

‖xk − x∗‖2 = ‖xk − xk+1‖2 + ‖xk+1 − x∗‖2 + 2〈xk − xk+1, xk+1 − x∗〉,

we obtain that

‖xk − x∗‖2 − ‖xk − xk+1‖2 − ‖xk+1 − x∗‖2 ≥ 2βkf(x̃k, xk+1)− 2βkf(x̃k, x∗).

Since f(x∗, x̃k) ≥ 0 and f is pseudomonotone, we have f(x̃k, x∗) ≤ 0, and thus

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − xk+1‖2 − 2βkf(x̃k, xk+1).

In the next two theorems, we prove that for every x∗ ∈ S∗, the sequence {‖xk − x∗‖} is nonincreasing.

Theorem 2.4. Suppose that for some k, the following inequality holds

αk[f(xk, xk+1)− f(xk, x̄k)] ≤ 0.

Then, for every x∗ ∈ S∗, we have

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 2βkc2)‖xk+1 − x̃k‖2

− (1− 2βkc1)‖x̃k − x̄k‖2 − ‖x̄k − xk‖2. (2)


Proof. 1) Since f is Lipschitz continuous, we have that

f(x̃k, xk+1) ≥ f(x̄k, xk+1) − f(x̄k, x̃k) − c1‖x̃k − x̄k‖2 − c2‖xk+1 − x̃k‖2.

So, it follows from Lemma 2.3 that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − xk+1‖2 − 2βk[f(x̄k, xk+1)− f(x̄k, x̃k)]

+ 2βkc1‖x̃k − x̄k‖2 + 2βkc2‖xk+1 − x̃k‖2. (3)

Substituting xk+1 for y in Lemma 2.2 (b), we can write

βk[f(x̄k, xk+1) − f(x̄k, x̃k)] ≥ 〈x̃k − x̄k, x̃k − xk+1〉. (4)

So, from (3) and (4), we obtain that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk+1 − xk‖2 − 2〈x̃k − x̄k, x̃k − xk+1〉

+ 2βkc1‖x̃k − x̄k‖2 + 2βkc2‖xk+1 − x̃k‖2. (5)

2) Using successively the following equalities

‖xk+1 − xk‖2 = ‖xk+1 − x̃k‖2 + ‖x̃k − xk‖2 + 2〈xk+1 − x̃k, x̃k − xk〉

and

〈x̃k − x̄k, x̃k − xk+1〉 = 〈x̃k − xk, x̃k − xk+1〉 + 〈xk − x̄k, x̃k − xk+1〉,

we deduce from (5) that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 2βkc2)‖xk+1 − x̃k‖2 − ‖x̃k − xk‖2

− 2〈xk − x̄k, x̃k − xk+1〉+ 2βkc1‖x̃k − x̄k‖2. (6)

Observing that ‖x̃k − xk‖2 = ‖x̃k − x̄k‖2 + ‖x̄k − xk‖2 + 2〈x̃k − x̄k, x̄k − xk〉, we obtain from (6) that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 2βkc2)‖xk+1 − x̃k‖2 − (1− 2βkc1)‖x̃k − x̄k‖2

− ‖x̄k − xk‖2 − 2〈x̄k − xk, xk+1 − x̄k〉. (7)

Substituting xk+1 for y in Lemma 2.2 (a) and using the assumption, we can write

〈xk − x̄k, xk+1 − x̄k〉 ≤ αk[f(xk, xk+1)− f(xk, x̄k)] ≤ 0. (8)

So, gathering (7) and (8), we obtain the desired inequality.

Theorem 2.5. Suppose that for some k, the following inequalities hold

f(xk, xk+1) − f(xk, x̄k) > 0 and 0 < αk ≤ βk.

Then, for every x∗ ∈ S∗, we have

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 6βkc1)‖xk − x̄k‖2 − (1− 4βkc2)‖xk+1 − x̄k‖2

− (2− 4βkc1 − 6βkc2)‖x̄k − x̃k‖2. (9)

Proof. 1) It follows from Lemma 2.3 that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − xk+1‖2 − 2βkf(x̃k, xk+1).

Consequently, developing ‖xk − xk+1‖2 with respect to ‖xk − x̄k‖2 and ‖x̄k − xk+1‖2, we deduce the following inequality

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − x̄k‖2 − ‖x̄k − xk+1‖2

− 2〈xk − x̄k, x̄k − xk+1〉 − 2βkf(x̃k, xk+1). (10)


Substituting xk+1 for y in Lemma 2.2 (a) gives

〈xk − x̄k, xk+1 − x̄k〉 ≤ αkf(xk, xk+1)− αkf(xk, x̄k).

Using this inequality in (10), we can deduce

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − x̄k‖2 − ‖x̄k − xk+1‖2 + 2αkf(xk, xk+1)

− 2αkf(xk, x̄k)− 2βkf(x̃k, xk+1). (11)

Substituting x̄k for y in Lemma 2.2 (b) and remembering that f(x̄k, x̄k) = 0, we can write

‖x̄k − x̃k‖2 ≤ βkf(x̄k, x̄k) − βkf(x̄k, x̃k) = −βkf(x̄k, x̃k)

and thus,

0 ≤ −‖x̄k − x̃k‖2 − βkf(x̄k, x̃k). (12)

Multiplying (12) by 2, and adding to (11), we get

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − x̄k‖2 − ‖x̄k − xk+1‖2 + 2αkf(xk, xk+1)

− 2αkf(xk, x̄k)− 2βkf(x̃k, xk+1)− 2βkf(x̄k, x̃k)− 2‖x̄k − x̃k‖2.(13)

Since, by assumption, f(xk, xk+1) − f(xk, x̄k) > 0 and 0 < αk ≤ βk, we deduce from the previous inequality that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − ‖xk − x̄k‖2 − ‖x̄k − xk+1‖2 + 2βkf(xk, xk+1)

− 2βkf(xk, x̄k)− 2βkf(x̃k, xk+1)− 2βkf(x̄k, x̃k)− 2‖x̄k − x̃k‖2.(14)

2) Now using the Lipschitz continuity of f, first with x = xk, y = x̃k, z = xk+1, and then with x = xk, y = x̄k, z = x̃k, we obtain that

f(xk, xk+1)− f(x̃k, xk+1)− f(xk, x̄k)− f(x̄k, x̃k)

≤ f(xk, x̃k) + c1‖xk − x̃k‖2 + c2‖x̃k − xk+1‖2 − f(xk, x̄k)− f(x̄k, x̃k)

= f(xk, x̃k)− f(xk, x̄k)− f(x̄k, x̃k) + c1‖xk − x̃k‖2 + c2‖xk+1 − x̃k‖2

≤ c1‖xk − x̄k‖2 + c2‖x̄k − x̃k‖2 + c1‖xk − x̃k‖2 + c2‖xk+1 − x̃k‖2.(15)

This implies, from (14), that

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 2βkc1)‖xk − x̄k‖2 − ‖x̄k − xk+1‖2

− (2− 2βkc2)‖x̄k − x̃k‖2 + 2βkc1‖xk − x̃k‖2 + 2βkc2‖xk+1 − x̃k‖2. (16)

Using twice the inequality ‖a + b‖2 ≤ 2‖a‖2 + 2‖b‖2 valid for every a, b ∈ Rn, we get

‖xk − x̃k‖2 ≤ 2‖xk − x̄k‖2 + 2‖x̄k − x̃k‖2 (17)

and

‖xk+1 − x̃k‖2 ≤ 2‖xk+1 − x̄k‖2 + 2‖x̄k − x̃k‖2. (18)

From (16), (17) and (18), we have

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 2βkc1)‖xk − x̄k‖2 − ‖x̄k − xk+1‖2

− (2− 2βkc2)‖x̄k − x̃k‖2 + 4βkc1‖xk − x̄k‖2 + 4βkc1‖x̄k − x̃k‖2

+ 4βkc2‖xk+1 − x̄k‖2 + 4βkc2‖x̄k − x̃k‖2.


Hence

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − (1− 6βkc1)‖xk − x̄k‖2 − (1− 4βkc2)‖xk+1 − x̄k‖2

− (2− 4βkc1 − 6βkc2)‖x̄k − x̃k‖2.

Let x∗ ∈ S∗. Our aim is now to use inequalities (2) and (9) to give sufficient conditions on the sequences {αk} and {βk} to obtain that for all k

‖xk+1 − x∗‖2 ≤ ‖xk − x∗‖2 − b‖xk − x̄k‖2 − d‖x̄k − x̃k‖2 (19)

where b and d are positive numbers. First we consider the case when αk = 0 for all k.

Proposition 2.6. Let x∗ ∈ S∗ and suppose that for all k,

αk = 0 and 0 < βk ≤ β∗ < min{ 1/(2c1), 1/(2c2) }.

Then, there exist b > 0 and d > 0 such that inequality (19) holds for all k.

Proof. Since αk = 0 for all k, we have that x̄k = xk for all k. So, from Theorem 2.4, we can write that for all k,

‖xk+1−x∗‖2 ≤ ‖xk−x∗‖2−(1−2βkc2)‖xk+1− x̃k‖2−(1−2βkc1)‖x̃k− x̄k‖2. (20)

On the other hand, by assumption, we have that

1− 2βkc1 ≥ 1− 2β∗c1 > 0 and 1− 2βkc2 ≥ 1− 2β∗c2 > 0.

Consequently, since x̄k = xk, we can deduce from (20) that (19) holds for b = 1 and d = 1 − 2β∗c1.

Using Theorems 2.4 and 2.5, we obtain a similar result when αk > 0 for all k.

Proposition 2.7. Let x∗ ∈ S∗ and suppose that for all k,

0 < αk ≤ βk ≤ β∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) }.

Then, there exist b > 0 and d > 0 such that inequality (19) holds for all k.

Proof. We consider two cases. When f(xk, xk+1) − f(xk, x̄k) ≤ 0, it follows from Theorem 2.4 that (2) is satisfied. Hence, since

β∗ < 1/(6c1) < 1/(2c1) and β∗ < 1/(4c2) < 1/(2c2),

we obtain immediately that (19) is satisfied with 0 < b ≤ 1 and 0 < d ≤ 1 − 2β∗c1. On the other hand, when f(xk, xk+1) − f(xk, x̄k) > 0, it follows from Theorem 2.5 that (9) is satisfied. Hence, since

β∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) },

we have that (19) is satisfied with

0 < b ≤ 1 − 6β∗c1 and 0 < d ≤ 2 − 4β∗c1 − 6β∗c2.

Consequently, whatever the sign of f(xk, xk+1)− f(xk, x̄k), (19) is satisfied with

b = 1 − 6β∗c1 and d = min{1 − 2β∗c1, 2 − 4β∗c1 − 6β∗c2}.


Proposition 2.8. Let x∗ ∈ S∗. Assume that inequality (19) holds for all k. Then the sequence {xk} generated by Algorithm GEA is bounded. Moreover, the sequences {‖x̃k − x̄k‖} and {‖xk − x̄k‖} converge to zero as k → ∞.

Proof. Since, by (19), the sequence {‖xk − x∗‖} is nonincreasing, we have that for all k

‖xk‖ ≤ ‖xk − x∗‖ + ‖x∗‖ ≤ ‖x0 − x∗‖ + ‖x∗‖,

and thus that the sequence {xk} is bounded. On the other hand, adding the inequalities (19) for k going from 0 to N, we can write

b ∑_{k=0}^{N} ‖xk − x̄k‖2 + d ∑_{k=0}^{N} ‖x̃k − x̄k‖2 ≤ ‖x0 − x∗‖2 − ‖xN+1 − x∗‖2 ≤ ‖x0 − x∗‖2.

Hence, taking the limit as N → ∞, we obtain that the series ∑_{k=0}^{∞} ‖xk − x̄k‖2 and ∑_{k=0}^{∞} ‖x̃k − x̄k‖2 are convergent, and thus that

‖xk − x̄k‖ → 0 and ‖x̃k − x̄k‖ → 0 as k →∞. (21)

Finally, we are now in a position to give the convergence of the sequence generated by Algorithm GEA.

Theorem 2.9. Suppose that 0 < β_* ≤ βk ≤ β∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) } and that 0 ≤ αk ≤ βk for all k. Let {xk} be the sequence generated by Algorithm GEA. Then {xk} converges to a solution z∗ of EP(f,K).

Proof. First, it follows from Propositions 2.6 and 2.7 that (19) is satisfied for every x∗ ∈ S∗. Consequently, by Proposition 2.8, the sequence {xk} is bounded and the sequences {‖xk − x̄k‖} and {‖x̄k − x̃k‖} converge to zero as k → ∞. So, there exist some z∗ ∈ K and a subsequence {xki} of {xk} such that

xki → z∗, x̄ki → z∗ and x̃ki → z∗.

By definition of x̃ki, we have that, for every y ∈ K,

βki f(x̄ki, x̃ki) + (1/2)‖x̃ki − x̄ki‖2 ≤ βki f(x̄ki, y) + (1/2)‖y − x̄ki‖2.

Dividing both members of this inequality by βki and using the lower bound β_* in the right-hand side, we can write, for every y ∈ K, that

f(x̄ki, x̃ki) + (1/(2βki))‖x̃ki − x̄ki‖2 ≤ f(x̄ki, y) + (1/(2β_*))‖y − x̄ki‖2.

Using the lower semicontinuity of f(·, ·) in the left-hand side and the upper semicontinuity of f(·, y) in the right-hand side, we easily deduce, after taking the limit on i, that for every y ∈ K

0 = f(z∗, z∗) ≤ f(z∗, y) + (1/(2β_*))‖y − z∗‖2.

So z∗ ∈ K is an optimal solution of the problem

min{ f(z∗, y) + (1/(2β_*))‖y − z∗‖2 : y ∈ K }.

Since the gradient of the function (1/(2β_*))‖· − z∗‖2 is equal to 0 at z∗, we have that z∗ is also a minimum of the function f(z∗, ·) over K. But this means that z∗ is a solution


of EP(f,K), and thus that z∗ ∈ S∗. So, by (19), the whole sequence {‖xk − z∗‖} is convergent. Since xki → z∗, we immediately obtain that xk → z∗.
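As a worked illustration of the step-size condition (our computation, combining Theorem 2.9 with the example given after Definition 2.1, not a statement from the paper): in the variational inequality case f(x, y) = 〈F(x), y − x〉 with F monotone and L-Lipschitz, one may take c1 = c2 = L/2, and the bound of Theorem 2.9 becomes min{ 1/(3L), 1/(2L), 2/(5L) } = 1/(3L). Any constant choice αk = α and βk = β with 0 ≤ α ≤ β < 1/(3L) then guarantees convergence.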

3. Numerical results. In this section we consider two numerical examples to study the behavior of Algorithm GEA depending on the values of the parameters αk and βk. Here, these parameters have been chosen constant and equal to α ≥ 0 and β > 0, respectively. In particular, we examine the special cases α = 0 and α = β, corresponding to the extragradient method and the two-step extragradient method, respectively. The stopping criterion ‖xk+1 − xk‖ ≤ ε has been chosen with ε = 10⁻⁶. Algorithm GEA has been implemented in MATLAB 7.14 and the optimization subproblems have been solved with the solver FMINCON from the Optimization Toolbox, on a PC with a 2.10 GHz Intel(R) Core(TM) processor and 4.00 GB of RAM.

FMINCON is a solver that attempts to find a constrained minimum of a scalar function of several variables starting at an initial estimate. It contains four algorithmic options depending on the numerical method used for finding the minimum: the interior-point algorithm, the sequential quadratic programming algorithm, the active-set algorithm and the trust-region-reflective algorithm (see [15, 16] for a description of these algorithms). In our test problems, we have used the active-set option because of its better numerical performance.

Example 1. Consider the equilibrium problem given in [19] where the function f : K × K → R is defined for every x, y ∈ K by

f(x, y) = 〈Px+Qy + q, y − x〉

and where the set K is a polyhedral convex set given by

K = { x ∈ R5 | ∑_{i=1}^{5} xi ≥ −1, −5 ≤ xi ≤ 5, i = 1, . . . , 5 }.

Here the vector q ∈ R5, and the matrices P and Q are two square matrices of order 5 such that Q is symmetric positive semidefinite and P − Q is positive semidefinite (this ensures that f is monotone, hence pseudomonotone, and that f(x, ·) is convex). In our test, the matrices P, Q and the vector q are chosen as follows:

P = [ 3.1  2    0    0    0
      2    3.6  0    0    0
      0    0    3.5  2    0
      0    0    2    3.3  0
      0    0    0    0    3 ],

Q = [ 1.6  1    0    0    0
      1    1.5  0    0    0
      0    0    1.5  1    0
      0    0    1    1.5  0
      0    0    0    0    2 ],

q = ( 1, −2, −1, 2, −1 )ᵀ.

The numerical behavior of Algorithm GEA has been examined for several values of the parameter α, the value of β being fixed to 0.5. Three values are considered for α: α = 0 (which corresponds to the classical extragradient method), α = β (which corresponds to the two-step extragradient method) and α between 0 and β. Three starting points have also been chosen. The numerical results are collected in Table 1 where, for each starting point and each value of α, the number of iterations and the corresponding CPU time (in seconds) are reported. A sketch of how this experiment can be reproduced is given below.
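The following Python sketch is our translation of the setup of Example 1, not the authors' MATLAB code; it reuses the gea routine sketched in Section 2, passing K to the subproblem solver as variable bounds plus one linear inequality.

```python
import numpy as np
# Reuses gea(...) from the sketch of Algorithm GEA in Section 2.

P = np.array([[3.1, 2.0, 0.0, 0.0, 0.0],
              [2.0, 3.6, 0.0, 0.0, 0.0],
              [0.0, 0.0, 3.5, 2.0, 0.0],
              [0.0, 0.0, 2.0, 3.3, 0.0],
              [0.0, 0.0, 0.0, 0.0, 3.0]])
Q = np.array([[1.6, 1.0, 0.0, 0.0, 0.0],
              [1.0, 1.5, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.5, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.5, 0.0],
              [0.0, 0.0, 0.0, 0.0, 2.0]])
q = np.array([1.0, -2.0, -1.0, 2.0, -1.0])

f = lambda x, y: (P @ x + Q @ y + q) @ (y - x)               # equilibrium function
bounds = [(-5.0, 5.0)] * 5                                    # -5 <= x_i <= 5
cons = [{"type": "ineq", "fun": lambda y: np.sum(y) + 1.0}]   # sum_i x_i >= -1

for alpha in (0.0, 0.25, 0.5):                                # beta fixed to 0.5
    x, iters = gea(f, np.array([1.0, 3.0, 1.0, 1.0, 2.0]), alpha, 0.5, bounds, cons)
    print(f"alpha = {alpha}: {iters} iterations, x = {np.round(x, 4)}")
```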

Table 1. Results of Example 1

Starting point    (1, 3, 1, 1, 2)     (1, 4, 0, 0, 0)     (−1, −3, −1, −1, −2)
α = 0.00          14 iter / 1.07 s    16 iter / 1.20 s    19 iter / 1.38 s
α = 0.25          16 iter / 1.63 s    15 iter / 1.68 s    18 iter / 2.04 s
α = 0.50          17 iter / 1.99 s    17 iter / 1.77 s    19 iter / 2.15 s

Example 2. Consider the equilibrium problem given in Example 1 but with the sparse matrix P replaced by the full matrix

P = [ 3.1  2    1    1    1
      2    3.6  1.5  1.4  1.3
      1.1  1.2  3.5  2    1
      1.1  1.2  2    3.3  2
      1.1  1.2  1.3  1.4  3 ].   (22)

Using the same starting points and the same values of the parameters α and β as in Example 1, we obtain, this time, that in all tests the methods with α > 0, and in particular the two-step extragradient method (α = β), are faster than the classical extragradient method (α = 0). The numerical results are displayed in Table 2.

Table 2. Results of Example 2

Starting point    (1, 3, 1, 1, 2)     (1, 4, 0, 0, 0)     (−1, −3, −1, −1, −2)
α = 0.00          57 iter / 4.66 s    56 iter / 4.52 s    53 iter / 4.11 s
α = 0.25          15 iter / 1.57 s    15 iter / 1.70 s    16 iter / 1.74 s
α = 0.50          20 iter / 2.05 s    18 iter / 1.93 s    17 iter / 1.87 s

Remark 3.1. As mentioned in the Introduction of this paper, other examples where the two-step extragradient method is faster than the classical extragradient method have been displayed in [21]. These examples have been obtained when linear programming problems are solved with filled matrices of dimension greater than 10.

4. Concluding remarks. In this paper we have introduced a family of extragradient methods for solving equilibrium problems. The convergence of each method has been obtained depending on the value of a parameter α. Two examples of equilibrium problems have been tested with different starting points. From these preliminary numerical results, it follows that the two-step extragradient method (corresponding to α = β) gives rise to better results when filled matrices are used. However, further numerical experiments on other test problems should be performed in order to confirm these results.

Acknowledgments. The authors would like to thank the Associate Editor and two anonymous referees for their valuable comments that allowed us to improve the original version of this paper substantially. This research is funded by the Department of Science and Technology at Ho Chi Minh City, Vietnam. Computing resources and support provided by the Institute for Computational Science and Technology (ICST) at Ho Chi Minh City are gratefully acknowledged.


REFERENCES

[1] J. Bello Cruz, P. Santos and S. Scheimberg, A two-phase algorithm for a variational inequality formulation of equilibrium problems, J. Optim. Theory Appl., 159 (2013), 562–575.
[2] G. Bigi, M. Castellani, M. Pappalardo and M. Passacantando, Existence and solution methods for equilibria, Eur. J. Oper. Research, 227 (2013), 1–11.
[3] E. Blum and W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Student, 63 (1994), 123–145.
[4] F. Facchinei and J. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Vols I and II, Springer-Verlag, New York, 2003.
[5] A. Heusinger and C. Kanzow, Relaxation methods for generalized Nash equilibrium problems with inexact line search, J. Optim. Theory Appl., 143 (2009), 159–183.
[6] K. Fan, A minimax inequality and applications, in Inequality III (ed. O. Shisha), Academic Press, (1972), 103–113.
[7] E. Khobotov, Modification of the extragradient method for solving variational inequalities and certain optimization problems, USSR Comput. Math. Math. Phys., 27 (1987), 1462–1473.
[8] I. Konnov, Equilibrium Models and Variational Inequalities, Elsevier, Amsterdam, 2007.
[9] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[10] J. Krawczyk and S. Uryasev, Relaxation algorithms to find Nash equilibria with economic applications, Environmental Modeling and Assessment, 5 (2000), 63–73.
[11] G. Mastroeni, On auxiliary principle for equilibrium problems, in Equilibrium Problems and Variational Models (eds. P. Daniele, F. Giannessi and A. Maugeri), Kluwer Academic Publishers, Dordrecht, 68 (2003), 289–298.
[12] A. Nagurney, Network Economics: A Variational Inequality Approach, Kluwer Academic Publishers, Dordrecht, 1993.
[13] T. T. V. Nguyen, J. J. Strodiot and V. H. Nguyen, The interior proximal extragradient method for solving equilibrium problems, J. Glob. Optim., 44 (2009), 175–192.
[14] T. T. V. Nguyen, J. J. Strodiot and V. H. Nguyen, A bundle method for solving equilibrium problems, Math. Program., 116 (2009), 529–552.
[15] J. Nocedal and S. Wright, Numerical Optimization, Springer, New York, 2006.
[16] Optimization Toolbox User's Guide. For Use with MATLAB, The MathWorks Inc., 2014.
[17] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.
[18] J. J. Strodiot, T. T. V. Nguyen and V. H. Nguyen, A new class of hybrid extragradient algorithms for solving quasi-equilibrium problems, J. Global Optim., 56 (2013), 373–397.
[19] D. Q. Tran, L. D. Muu and V. H. Nguyen, Extragradient algorithms extended to equilibrium problems, Optimization, 57 (2008), 749–776.
[20] D. Zaporozhets, A. Zykina and N. Melen'chuk, Comparative analysis of the extragradient methods for solution of the variational inequalities of some problems, Automation and Remote Control, 73 (2012), 626–636.
[21] A. Zykina and N. Melen'chuk, A two-step extragradient method for variational inequalities, Russian Mathematics, 54 (2010), 71–73.
[22] A. Zykina and N. Melen'chuk, A doublestep extragradient method for solving a resource management problem, Modeling and Analysis of Information Systems, 17 (2010), 65–75.
[23] A. Zykina and N. Melen'chuk, A doublestep extragradient method for solving a problem of the management of resources, Automatic Control and Computer Science, 45 (2011), 452–459.
[24] A. Zykina and N. Melen'chuk, Convergence of the two-step extragradient method in a finite number of iterations, III International Conference: Optimization and Applications, Optima-2012, Costa da Caparica, Portugal, (2012), 23–30.

Received November 2013, 1st revision March 2014; 2nd revision April 2014.

E-mail address: [email protected]

E-mail address: [email protected]

E-mail address: [email protected]

E-mail address: [email protected]