
J Optim Theory Appl
DOI 10.1007/s10957-013-0400-y

Hybrid Methods for Solving Simultaneously an Equilibrium Problem and Countably Many Fixed Point Problems in a Hilbert Space

Thi Thu Van Nguyen · Jean Jacques Strodiot · Van Hien Nguyen

Received: 5 February 2013 / Accepted: 12 August 2013
© Springer Science+Business Media New York 2013

Abstract This paper presents a framework of iterative methods for finding a common solution to an equilibrium problem and a countable number of fixed point problems defined in a Hilbert space. A general strong convergence theorem is established under mild conditions. Two hybrid methods are derived from the proposed framework by coupling the fixed point iterations with the iterations of the proximal point method or the extragradient method, which are well-known methods for solving equilibrium problems. The strategy is to obtain the strong convergence from the weak convergence of the iterates without additional assumptions on the problem data. To achieve this aim, the solution set of the problem is outer approximated by a sequence of polyhedral subsets.

Communicated by Jen-Chih Yao.

T.T.V. Nguyen · J.J. Strodiot · V.H. Nguyen
Institute for Computational Science and Technology at Ho Chi Minh City (ICST), Ho Chi Minh City, Vietnam

T.T.V. Nguyen
e-mail: [email protected]

V.H. Nguyen
e-mail: [email protected]

T.T.V. Nguyen
Faculty of Mathematics and Computer Science, University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
e-mail: [email protected]

J.J. Strodiot (B) · V.H. Nguyen
University of Namur, Namur, Belgium
e-mail: [email protected]

V.H. Nguyen
e-mail: [email protected]


Keywords Equilibrium problems · Variational inequalities · Fixed-point problems · Proximal point method · Hybrid extragradient methods · Armijo backtracking linesearch

1 Introduction

Recently, there has been an increasing interest in the design of methods for finding a common fixed point of countably many nonexpansive mappings defined on a real Hilbert space (see, for instance, [1–4]). The strategy exploited by many authors has been to extend the existing algorithms for finding a fixed point of a single nonexpansive mapping to the case of an infinite sequence of such mappings. In this spirit, Takahashi et al. [2] generalized the Nakajo–Takahashi method [5] and gave conditions on the sequence of operators to obtain an algorithm strongly convergent to a common fixed point of these operators. Since then, many papers have proposed methods for solving this problem. Most of these methods assume that the sequence of operators satisfies a so-called 'NST condition' (for Nakajo, Shimoji, and Takahashi [6]) in order to prove the strong convergence of the iterates to the metric projection of the starting point onto the solution set.

Another problem of great interest is the so-called equilibrium problem [7], which can be considered as a more general model than the variational inequality problem; see, for instance, [8–10]. This problem, also called the Ky Fan inequality problem [11, 12], is very general in the sense that it includes, as special cases, the optimization problem, the variational inequality, the saddle point problem, the Nash equilibrium problem in noncooperative games, the fixed-point problem, and others; see, for instance, the excellent review by Bigi et al. [13].

More recently, particular attention has been paid to the problem of finding common elements of the set of fixed points of operators and the set of solutions of variational inequalities. The motivation for studying such a problem is its possible application to mathematical models whose constraints can be expressed as fixed-point problems, such as signal processing, network resource allocation, and image recovery; see, for instance, [14, 15]. Let us mention that existence theorems related to this problem are also studied in [15] in the framework of Banach spaces.

The purpose of this work is to propose a strongly convergent method for finding a common element of the set of fixed points of a wide class of operators and the set of solutions of an equilibrium problem. The strategy is to consider the problem in a general algorithmic framework. So we propose and study general conditions on the equilibrium function and the sequence of nonexpansive mappings to obtain a general algorithmic scheme that generates a sequence of iterates strongly converging to a solution of the problem. Then, we incorporate the proximal point method [16–18] and the extragradient method [10] into this general algorithmic setting to obtain two practical hybrid methods. In particular, we show that the iterative sequences generated by our algorithms converge strongly to the metric projection of the starting point onto the solution set. The results of this paper generalize the strong convergence results obtained by Takahashi et al. [2] for finding a common fixed point of countably many nonexpansive mappings.

However, the proximal and the extragradient methods have some drawbacks: the subproblems can be difficult to solve for the first method, and the Lipschitz continuity assumption on the equilibrium function is a rather strong condition for the convergence of the second method. In order to avoid these difficulties, we introduce, in our second hybrid algorithm, an inexact linesearch procedure into the extragradient steps. This strategy allows us to avoid the Lipschitz constant and also to consider optimization subproblems that are easier to solve than the inequality subproblems used in the proximal point method.

Recently, several authors [4, 19, 20] have obtained strong convergence results for finding a common solution of an equilibrium problem and an infinite number of fixed-point problems. Contrary to our results, which are derived from a general theorem, their results are directly based on the proximal point method for solving the equilibrium problem. In [4, 20], (shrinking) hybrid methods are derived in the framework of Banach spaces and their strong convergence is obtained thanks to a so-called NST-condition. In [19], a hybrid pseudoviscosity approximation algorithm is considered in the framework of Hilbert spaces. Finally, other works have also been done in that direction, but for a finite number of fixed-point problems [21–24], or with other types of projection [25–28].

The remainder of the paper is organized as follows. In Sect. 2, some basic definitions and results are recalled. In Sect. 3, a general scheme is presented for solving simultaneously an equilibrium problem and countably many fixed-point problems in a Hilbert space. Conditions to obtain the strong convergence of this scheme are given. In Sect. 4, the proximal point method and the extragradient method are successively incorporated into the general scheme to obtain strong convergence results. In Sect. 5, the introduction of inexact linesearches into the extragradient steps allows us to get strong convergence results under weaker assumptions. Finally, some numerical experience is reported in the last section.

2 Preliminaries

Let H be a real Hilbert space with scalar product 〈·, ·〉 and induced norm ‖ · ‖. We denote by xn → x the strong convergence of the sequence {xn} to x, and by xn ⇀ x its weak convergence to x. The following equality is useful for proving the convergence of our algorithms: for all x, y ∈ H and α ∈ [0,1], we have

‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖². (1)
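Identity (1) can be verified directly by expanding both sides; here is a short check (ours, for completeness):

```latex
\begin{aligned}
\|\alpha x+(1-\alpha)y\|^2
 &= \alpha^2\|x\|^2 + 2\alpha(1-\alpha)\langle x,y\rangle + (1-\alpha)^2\|y\|^2 \\
 &= \alpha\|x\|^2 + (1-\alpha)\|y\|^2
    - \alpha(1-\alpha)\bigl(\|x\|^2 - 2\langle x,y\rangle + \|y\|^2\bigr) \\
 &= \alpha\|x\|^2 + (1-\alpha)\|y\|^2 - \alpha(1-\alpha)\|x-y\|^2,
\end{aligned}
```

where the second equality uses α² = α − α(1 − α) and (1 − α)² = (1 − α) − α(1 − α).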

Let C be a nonempty, closed and convex subset of H. Denoting by PC the metric projection of H onto C, we have, for x ∈ H and z ∈ C,

z = PC(x) is equivalent to 〈x − z, z − u〉 ≥ 0 for all u ∈ C.
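As a quick illustration of this characterization (ours, not from the paper), take C the closed Euclidean ball of radius r centered at the origin, for which PC(x) = rx/‖x‖ when ‖x‖ > r; the inequality can then be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 1.0
x = np.array([2.0, 1.0, 2.0])                 # point outside the ball
z = r * x / np.linalg.norm(x)                 # closed-form projection P_C(x)

# verify <x - z, z - u> >= 0 for random points u in C
for _ in range(1000):
    u = rng.normal(size=3)
    u *= rng.uniform(0.0, r) / np.linalg.norm(u)   # rescale u into the ball
    assert (x - z) @ (z - u) >= -1e-12
print("variational characterization verified on samples")
```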

Let us also recall that a Hilbert space H satisfies the Opial condition [29], namely: for any sequence {xn} ⊂ H with xn ⇀ x, the inequality

lim infn→∞ ‖xn − x‖ < lim infn→∞ ‖xn − y‖

holds for every y ∈ H, y ≠ x.

In order to formulate our problem, let f : C × C → R and let {Tn}n∈N be a sequence of operators from C into C. Our aim is to find a point x∗ ∈ C that is a fixed point of each operator Tn, n ≥ 0, and a solution of the equilibrium problem (EP) associated with f and C. In other words, we are looking for a point x∗ ∈ C such that

Tn(x∗) = x∗ for all n ∈ N and f(x∗, y) ≥ 0 for all y ∈ C.

The set of solutions of (EP) is denoted by EP(f,C) and the set of fixed points of Tn by Fix(Tn). Here we assume that each operator Tn : C → C, n ∈ N, is nonexpansive, i.e.

‖Tn(x) − Tn(y)‖ ≤ ‖x − y‖ ∀x, y ∈ C.

We also assume that the equilibrium function f satisfies the following conditions:

(A1) f(x, x) = 0 for all x ∈ C.
(A2) lim supn→∞ f(xn, y) ≤ f(x, y) for all sequences {xn} ⊂ C converging weakly to x.
(A3) f(x, ·) is convex, lower semicontinuous (l.s.c.), and subdifferentiable on C, for each x ∈ C.
(A4) f is pseudomonotone on C × C, i.e., for all x, y ∈ C, f(x, y) ≥ 0 =⇒ f(y, x) ≤ 0.

In this paper, we suppose that the set Ω := EP(f,C) ∩ (⋂n≥0 Fix(Tn)) is nonempty. Each Tn being nonexpansive, the sets Fix(Tn) are closed and convex (see, for instance, [30]). On the other hand, it is easy to see that, if f satisfies conditions (A1)–(A4), then the set EP(f,C) is closed and convex.

The nonemptiness of the set ⋂n≥0 Fix(Tn), and a fortiori of the set Ω, remains an open question in general. There are examples of sequences {Fix(Tn)} with a nonempty intersection and others with an empty intersection (see [31], Example 4.4, p. 256). In this paper, from now on, we assume that the solution set Ω is nonempty, closed and convex.

Now, in order to get the convergence of our algorithms, we need to impose some assumptions on the sequence of operators {Tn}. The next definitions are motivated by [6]; see also [2].

Let {Tn} and T be two families of nonexpansive mappings from C into C such that

∅ ≠ Fix(T) = ⋂n≥0 Fix(Tn),

where Fix(T) denotes the set of all common fixed points of T. Then, the sequence {Tn} is said to satisfy the NST-condition (I) with T iff, for each bounded sequence {zn} ⊂ C,

limn→∞ ‖zn − Tnzn‖ = 0 =⇒ limn→∞ ‖zn − T zn‖ = 0 for all T ∈ T.

{Tn} is said to satisfy the NST-condition (II) iff, for each bounded sequence {zn} ⊂ C,

limn→∞ ‖zn+1 − Tnzn‖ = 0 =⇒ limn→∞ ‖zn − Tmzn‖ = 0 for all m ∈ N.


Let us give here a first example of a sequence {Tn} satisfying the NST-condition (I) with T. For each n, we consider the operator Tn defined by (see [31], Example 4.4)

Tn : H → H, Tn := PHn, (2)

where Hn = {x ∈ H | 〈x − (1 − 2^{−n−1})d, d〉 ≥ 0} with d ∈ H, d ≠ 0, given, and PHn denotes the orthogonal projection onto Hn. Then Fix(Tn) = Hn and

H̄ ≡ ⋂n≥0 Fix(Tn) = {x ∈ H | 〈x − d, d〉 ≥ 0} ≠ ∅.
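Since the projection onto a halfspace has a closed form, these operators Tn are cheap to evaluate; the following minimal NumPy sketch (ours) implements Tn = PHn in R^k:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection of x onto the halfspace {z : <a, z> >= b}."""
    gap = a @ x - b
    if gap >= 0:                 # x is already feasible, hence a fixed point
        return x
    return x - gap * a / (a @ a)

def T_n(x, d, n):
    """T_n = P_{H_n} with H_n = {z : <z - (1 - 2^{-n-1}) d, d> >= 0}."""
    return proj_halfspace(x, d, (1.0 - 2.0 ** (-n - 1)) * (d @ d))

d = np.array([0.5, 0.0])
print(T_n(np.array([0.2, 0.3]), d, 0))   # infeasible point: moved onto H_0
print(T_n(d, d, 5))                      # d lies in every H_n: unchanged
```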

It is easy to see that the sequence {Tn} defined in (2) satisfies the NST-condition (I) with T = {PH̄}. Here are some other examples of sequences {Tn} satisfying the NST-conditions (I) and (II).

Lemma 2.1 [3, Theorem 3.3] Let C be a nonempty, closed and convex subset of a Hilbert space H and let T be a nonexpansive mapping of C into itself with Fix(T) ≠ ∅. Let {βn} be a sequence of real numbers with 0 < a ≤ βn ≤ b < 1. For n ∈ N, define a mapping Tn of C into itself by

Tnx := (1 − βn)x + βnT x for all x ∈ C.

Then {Tn} satisfies the NST-condition (I) with T = {T} and the NST-condition (II).

Lemma 2.2 [6, Lemma 3.2(i)] Let C be a nonempty, closed and convex subset of a Hilbert space H and let S and T be two nonexpansive mappings of C into itself with Fix(S) ∩ Fix(T) ≠ ∅. Let {βn} be a sequence of real numbers with 0 < a ≤ βn ≤ b < 1. For every n ∈ N, consider a mapping Tn of C into itself defined by

Tn(x) := βnSx + (1 − βn)T x for all x ∈ C.

Then, the sequence {Tn} satisfies the NST-condition (I) with T = {S, T}.

Lemma 2.3 [6, Lemma 3.5(iii)] Let H be a Hilbert space, let M : H ⇒ H be a maximal monotone operator with M−1{0} ≠ ∅ and let Jr = (I + rM)−1 be the resolvent of M for r > 0. Let {λn} be a sequence of real numbers such that λn ∈ ]0,∞[ and limn→∞ λn = +∞. Define Tn := Jλn for any n ∈ N. Then, the sequence {Tn} satisfies the NST-condition (I) with T = {J1} := {(I + M)−1} and the NST-condition (II).
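For intuition on Lemma 2.3, recall that when M = ∂g for a convex function g, the resolvent Jr is the classical proximal map of rg. A tiny sketch (ours), assuming M = ∂|·| on R, whose resolvent is soft-thresholding:

```python
import numpy as np

def J_r(x, r):
    """Resolvent (I + r M)^{-1} for M = subdifferential of |.| on R,
    i.e. soft-thresholding with threshold r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

# T_n := J_{lambda_n} with lambda_n -> +infinity, as in Lemma 2.3
for n, lam in enumerate([1.0, 2.0, 4.0, 8.0]):
    print(n, J_r(1.5, lam))   # the iterates are driven toward M^{-1}(0) = {0}
```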

Many other examples of sequences of operators {Tn} satisfying the NST-conditions are given in [6]. Finally, to prove the strong convergence of the sequences generated by our algorithms from their weak convergence, we will use the following lemma.

Lemma 2.4 [32, Lemma 1.5] Let Ω be a nonempty, closed and convex subset of H. Let u ∈ H and let {xn} be a sequence in H such that any weak limit point of {xn} belongs to Ω and

‖xn − u‖ ≤ ‖u − PΩ(u)‖ ∀n ∈ N. (3)

Then xn → PΩ(u).


3 A Hybrid General Model

Let C be a nonempty, closed and convex subset of H and let x0 ∈ C. Let f : C × C → R be an equilibrium function satisfying properties (A1)–(A4) and let {Tn} be an infinite sequence of nonexpansive mappings from C into C.

Recall that our problem is to find the metric projection PΩ(x0) of x0 onto Ω, where Ω = EP(f,C) ∩ (⋂n≥0 Fix(Tn)). The strategy used in this paper is to construct a sequence {xn} of iterates satisfying the assumptions of Lemma 2.4, i.e. a sequence {xn} that satisfies the following properties:

(a) any weak limit point of {xn} belongs to Ω;
(b) ‖xn − x0‖ ≤ ‖x0 − PΩ(x0)‖ for all n ∈ N, i.e. we have (3) with u = x0.

The sequence {xn} so constructed will be such that xn → PΩ(x0). Since Ω = EP(f,C) ∩ (⋂n≥0 Fix(Tn)), we first give sufficient conditions for any weak limit point of {xn} to belong to ⋂n≥0 Fix(Tn). This is the subject of the next proposition.

Proposition 3.1 Let {Tn} and T be two families of nonexpansive mappings such that

⋂n≥0 Fix(Tn) = Fix(T) ≠ ∅. (4)

Let also {xn} be a bounded sequence contained in C. If the sequence {Tn} satisfies the NST-condition (I) with T or the NST-condition (II), then

‖xn+1 − Tnxn‖ → 0 ⇒ w-Lim(xn) ⊂ ⋂n≥0 Fix(Tn),

where w-Lim(xn) denotes the set of weak limit points of the sequence {xn}.

Proof Assume that the sequence {Tn} satisfies the NST-condition (I) with T and let {xn} ⊂ C be a bounded sequence such that ‖xn − Tnxn‖ → 0. Let also x be a weak limit point of {xn}, i.e. x ∈ w-Lim(xn). Then there exists a subsequence {xni} of {xn} that converges weakly to x. We have to prove that x ∈ Fix(Tn) for all n and thus, by (4), that T x = x for every T ∈ T. Let T ∈ T and suppose, to get a contradiction, that T x ≠ x. Using the Opial condition, we can write

lim infi→∞ ‖xni − x‖ < lim infi→∞ ‖xni − T x‖.

However, by assumption, we have ‖xni − T xni‖ → 0. So, T being nonexpansive, we get

lim infi→∞ ‖xni − T x‖ ≤ lim infi→∞ {‖xni − T xni‖ + ‖T xni − T x‖}
                       ≤ lim infi→∞ {‖xni − T xni‖ + ‖xni − x‖}
                       = lim infi→∞ ‖xni − x‖.

This is a contradiction, and consequently, x ∈ Fix(Tn) for all n. The proof is similar when the NST-condition (II) is satisfied. □

From this proposition, we can obtain the following general strong convergence theorem.

Theorem 3.1 (General Strong Convergence Theorem) Let C be a nonempty, closed and convex subset of H. Let {xn} and {un} be two sequences in C, let f : C × C → R be a function satisfying properties (A1)–(A3), and let {Tn} and T be two families of nonexpansive mappings such that

Ω = EP(f,C) ∩ (⋂n≥0 Fix(Tn)) ≠ ∅ and ⋂n≥0 Fix(Tn) = Fix(T).

Suppose that the sequence {Tn} satisfies the NST-condition (I) with T and that the sequences {xn} and {un} in H are such that

(i) ‖xn − x0‖ ≤ ‖x0 − PΩ(x0)‖ ∀n ∈ N;
(ii) ‖Tnxn − xn‖ → 0 as n → ∞;
(iii) ‖un − xn‖ → 0 as n → ∞;
(iv) lim supi→∞ f(uni, y) ≥ 0 for all y ∈ C and every subsequence {uni} of {un}.

Then the sequences {xn} and {un} both converge strongly to PΩ(x0). The same result holds when the sequence {Tn} satisfies the NST-condition (II) and property (ii) of the sequence {xn} is replaced with ‖Tnxn − xn+1‖ → 0 as n → ∞.

Proof Since the sequence {xn} is bounded (thanks to (i)) and satisfies condition (ii), it follows from Proposition 3.1 that any weak limit point of the sequence {xn} belongs to ⋂n≥0 Fix(Tn). On the other hand, let x be a weak limit point of the sequence {xn}. Then, by condition (iii), x is also a weak limit point of the sequence {un}. So, there exists a subsequence {uni} of {un} converging weakly to x. Using successively assumption (iv) and condition (A2), we obtain

0 ≤ lim supi→∞ f(uni, y) ≤ f(x, y) for all y ∈ C,

i.e. x ∈ EP(f,C). Furthermore, since ‖xn − x0‖ ≤ ‖x0 − PΩ(x0)‖ for all n ∈ N, it follows from Lemma 2.4 that the sequence {xn} converges strongly to PΩ(x0). The proof is similar when the NST-condition (II) is satisfied. □

Now, in order to apply Theorem 3.1 in our context, we have to specify how to generate the sequences {xn} and {un} in such a way that conditions (i)–(iv) of Theorem 3.1 are fulfilled. First, we consider the construction of a sequence {xn} that verifies conditions (i)–(ii) of Theorem 3.1. However, to get a bounded sequence, we impose on the sequence {xn} a stronger condition than (i), namely that, for all n ≥ 1, the inequality

‖xn − x0‖ ≤ ‖xn+1 − x0‖ ≤ ‖x0 − PΩ(x0)‖ (5)


is satisfied, where x0 ∈ C denotes the starting point of the sequence. Under these conditions, the sequence {‖xn − x0‖} is convergent and the sequence {xn} is bounded.

Next, to obtain a sequence {xn} verifying (5) and (ii), we consider an outer approximation method, i.e. we construct a sequence {Ωn} of subsets of C such that Ωn ⊃ Ω for all n ∈ N, and we set

xn = PΩn(x0) ∀n ∈ N.

Our aim, in this section, is to construct sequences {Ωn} such that

PΩn(x0) → PΩ(x0). (6)

In the next algorithm, we consider a first example of the construction of a sequence {Ωn} that satisfies property (6). Temporarily, we do not specify the choice of zn in Step 1 of the algorithm; we only impose an inequality that zn must satisfy. This choice of zn will be made in the next section to ensure that conditions (iii) and (iv) of Theorem 3.1 are satisfied.

Algorithm 1 Let x0 ∈ H, Ω1 = C, x1 = PΩ1(x0). Let 0 < a < b < 1. Set n = 1.

Step 1. Choose zn ∈ C such that ‖zn − x∗‖ ≤ ‖xn − x∗‖ for every x∗ ∈ Ω.
Step 2. Calculate tn = αnzn + (1 − αn)Tnzn, where a ≤ αn ≤ b.
Step 3. Compute xn+1 = PΩn+1(x0), where

Ωn+1 = {z ∈ Ωn | ‖tn − z‖² ≤ ‖xn − z‖² − (1 − αn)αn‖Tnzn − zn‖²}. (7)

Step 4. Set n := n + 1 and go to Step 1.

First of all, we observe that Ωn+1 can be written as

Ωn+1 = {z ∈ Ωn | 2〈xn − tn, z〉 ≤ 2〈xn − tn, xn〉 − ‖tn − xn‖² − (1 − αn)αn‖Tnzn − zn‖²}.

Thus this set is closed, convex and contained in C because the first set Ω1 = C. So xn+1 is well defined when Ωn+1 ≠ ∅.
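In other words, each Ωn+1 adds one linear cut to Ωn, so the projection xn+1 = PΩn+1(x0) amounts to a small quadratic program over the accumulated cuts. A minimal sketch (ours, using scipy; the box C = [0,1]² and the sample data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

class OuterApprox:
    """Accumulate the halfspace cuts <a_i, z> <= b_i defining Omega_n and
    project x0 onto their intersection with the box C = [0, 1]^2."""
    def __init__(self):
        self.cuts = []

    def add_cut(self, x_n, t_n, slack):
        # linear form of (7):
        # 2<x_n - t_n, z> <= 2<x_n - t_n, x_n> - ||t_n - x_n||^2 - slack
        a = 2.0 * (x_n - t_n)
        b = a @ x_n - np.sum((t_n - x_n) ** 2) - slack
        self.cuts.append((a, b))

    def project(self, x0):
        cons = [{"type": "ineq", "fun": lambda z, a=a, b=b: b - a @ z}
                for (a, b) in self.cuts]
        res = minimize(lambda z: np.sum((z - x0) ** 2), x0,
                       bounds=[(0.0, 1.0)] * len(x0), constraints=cons)
        return res.x

om = OuterApprox()
om.add_cut(np.array([0.9, 0.1]), np.array([0.6, 0.4]), slack=0.0)
print(om.project(np.array([1.0, 0.0])))   # x_{n+1} = P_{Omega_{n+1}}(x0)
```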

The sets Ωn, defined as in (7), have been introduced by Kimura and Nakajo [1]. In many papers (see, for instance, Takahashi et al. [2]), the sequence {Ωn} is replaced with the sequence {Ω̂n} where, for all n ≥ 1,

Ω̂n+1 = {z ∈ Ω̂n | ‖tn − z‖ ≤ ‖xn − z‖}.

Since Ω ⊂ Ωn ⊂ Ω̂n for all n ≥ 1, the sets Ωn are better approximations of Ω than the sets Ω̂n.

In the next theorem, we prove that Ω ⊂ Ωn+1 for all n, and we give conditions to ensure that any sequence {xn} generated by Algorithm 1 satisfies conditions (i) and (ii) of Theorem 3.1.

Theorem 3.2 Let {xn} and {zn} be the two sequences generated by Algorithm 1. Assume that 0 < a ≤ αn ≤ b < 1 for all n ≥ 1. Then, for all n ≥ 1 and every x∗ ∈ Ω, x∗ ∈ Ωn and the following assertions hold:


(i) ‖xn − x0‖ ≤ ‖x∗ − x0‖;
(ii) ‖tn − xn‖ → 0, ‖Tnzn − zn‖ → 0 as n → ∞, and ‖tn − x∗‖ ≤ ‖zn − x∗‖;
(iii) If ‖zn − xn‖ → 0 as n → ∞, then ‖Tnxn − xn‖ → 0 as n → ∞.

Proof Let x∗ ∈ Ω. First we prove by induction that x∗ ∈ Ωn for all n ≥ 1. Since Ω1 = C and x∗ ∈ C, we have x∗ ∈ Ω1. Suppose x∗ ∈ Ωn for some n ≥ 1. Then, using Step 2 of Algorithm 1, (1), the nonexpansiveness of Tn, and Step 1 of Algorithm 1, we have

‖tn − x∗‖² = ‖αn(zn − x∗) + (1 − αn)(Tnzn − x∗)‖²
           = αn‖zn − x∗‖² + (1 − αn)‖Tnzn − x∗‖² − (1 − αn)αn‖Tnzn − zn‖²
           ≤ αn‖zn − x∗‖² + (1 − αn)‖zn − x∗‖² − (1 − αn)αn‖Tnzn − zn‖²
           = ‖zn − x∗‖² − (1 − αn)αn‖Tnzn − zn‖² (8)
           ≤ ‖xn − x∗‖² − (1 − αn)αn‖Tnzn − zn‖². (9)

This means that x∗ ∈ Ωn+1 by definition of Ωn+1; see (7). So Ω ⊂ Ωn for all n ≥ 1. Therefore, for all n ≥ 1, since xn = PΩn(x0) and xn+1 ∈ Ωn+1 ⊂ Ωn, we have

‖xn − x0‖ ≤ ‖x∗ − x0‖ and 〈x0 − xn, xn − xn+1〉 ≥ 0.

Consequently, assertion (i) of Theorem 3.1 is proven, and for all n ∈ N, one can observe that

‖xn+1 − x0‖² = ‖xn+1 − xn + xn − x0‖²
             = ‖xn+1 − xn‖² + ‖xn − x0‖² + 2〈xn+1 − xn, xn − x0〉
             ≥ ‖xn+1 − xn‖² + ‖xn − x0‖²
             ≥ ‖xn − x0‖². (10)

Hence, ‖xn − x0‖ ≤ ‖xn+1 − x0‖ ≤ ‖x∗ − x0‖ holds for all n ∈ N. Then limn→∞ ‖xn − x0‖ exists and, by (10), limn→∞ ‖xn+1 − xn‖ = 0.

Since xn+1 = PΩn+1(x0), it follows from the definition of Ωn+1 that ‖tn − xn+1‖ ≤ ‖xn − xn+1‖. Consequently,

‖tn − xn‖ ≤ ‖tn − xn+1‖ + ‖xn+1 − xn‖ ≤ 2‖xn+1 − xn‖, (11)

and limn→∞ ‖tn − xn‖ = 0. This is the first part of assertion (ii) of the theorem. On the other hand, (9) implies successively that

‖Tnzn − zn‖² ≤ (1/((1 − αn)αn))(‖xn − x∗‖² − ‖tn − x∗‖²)
             = (1/((1 − αn)αn))(‖xn − x∗‖ − ‖tn − x∗‖)(‖xn − x∗‖ + ‖tn − x∗‖)
             ≤ (1/((1 − αn)αn))‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖)
             ≤ (1/((1 − b)a))‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖). (12)

Since limn→∞ ‖xn − tn‖ = 0 and the two sequences {‖xn − x∗‖} and {‖tn − x∗‖} are bounded, it follows that limn→∞ ‖Tnzn − zn‖ = 0. So the second part of assertion (ii) is proven.

The third and last part of assertion (ii) of the theorem follows immediately from (8). Finally, suppose ‖zn − xn‖ → 0. Since each operator Tn is nonexpansive, we get

‖Tnxn − xn‖ = ‖(Tnxn − Tnzn) + (Tnzn − zn) + (zn − xn)‖
            ≤ ‖Tnxn − Tnzn‖ + ‖Tnzn − zn‖ + ‖zn − xn‖
            ≤ ‖Tnzn − zn‖ + 2‖zn − xn‖.

Then it is easy to see that the last assertion of the theorem is satisfied. This completes the proof. □

A second example of a sequence {Ωn} that satisfies property (6) is proposed in the next algorithm.

Algorithm 2 Let x0 ∈ H, Ω1 = C, x1 = PΩ1(x0). Let 0 < a < b < 1. Set n = 1.

Step 1. Choose zn ∈ C such that ‖zn − x∗‖ ≤ ‖xn − x∗‖ for every x∗ ∈ Ω.
Step 2. Calculate tn = αnzn + (1 − αn)Tnzn, where a ≤ αn ≤ b.
Step 3. Compute xn+1 = PΩn+1(x0), where Ωn+1 = Cn+1 ∩ Qn+1 with

Cn+1 = {z ∈ C | ‖tn − z‖² ≤ ‖xn − z‖² − (1 − αn)αn‖Tnzn − zn‖²} and
Qn+1 = {z ∈ C | 〈x0 − xn, xn − z〉 ≥ 0}.

Step 4. Set n := n + 1 and go to Step 1.

With this other choice for Ωn, the previous theorem becomes the following.

Theorem 3.3 Let {xn} and {zn} be the two sequences generated by Algorithm 2. Assume that 0 < a ≤ αn ≤ b < 1 for all n ≥ 1. Then, for all n ≥ 1 and every x∗ ∈ Ω, x∗ ∈ Ωn and the following assertions hold:

(i) ‖xn − x0‖ ≤ ‖x∗ − x0‖;
(ii) ‖tn − xn‖ → 0, ‖Tnzn − zn‖ → 0 as n → ∞, and ‖tn − x∗‖ ≤ ‖zn − x∗‖;
(iii) If ‖zn − xn‖ → 0 as n → ∞, then ‖Tnxn − xn‖ → 0 as n → ∞.

Proof From the proof of Theorem 3.2, we easily observe that we only need to prove that, for every x∗ ∈ Ω and all n ≥ 1,

x∗ ∈ Ωn, ‖xn − x0‖ ≤ ‖x∗ − x0‖, and 〈x0 − xn, xn − xn+1〉 ≥ 0.


The last inequality is obvious because xn+1 ∈ Qn+1. Let x∗ ∈ Ω. Then, using Step 2 of Algorithm 2, (1), the nonexpansiveness of Tn, and Step 1 of Algorithm 2, we can prove (exactly as in the proof of Theorem 3.2) that (9) holds true for all n ≥ 1. However, this means that x∗ ∈ Cn for all n ≥ 2. Next, we prove by induction that x∗ ∈ Qn for all n ≥ 2. So, let n = 2. Since x1 = PΩ1(x0) and x∗ ∈ C = Ω1, we have 〈x0 − x1, x1 − x∗〉 ≥ 0, i.e. x∗ ∈ Q2. Now suppose that x∗ ∈ Qn for some n ≥ 2. Since x∗ ∈ Cn and xn = PCn∩Qn(x0), we obtain

〈x0 − xn, xn − x∗〉 ≥ 0, (13)

i.e. x∗ ∈ Qn+1. Since x∗ ∈ Ω1, we can conclude that (13) is valid for all n ≥ 1, and thus x∗ ∈ Ωn for all n ≥ 1. Moreover, from (13), and using the Cauchy–Schwarz inequality, we can write, for all n ≥ 1,

‖x0 − xn‖² = 〈x0 − xn, (x0 − x∗) + (x∗ − xn)〉
           ≤ 〈x0 − xn, x0 − x∗〉 + 〈x0 − xn, x∗ − xn〉
           ≤ ‖x0 − xn‖ ‖x0 − x∗‖,

which implies that ‖xn − x0‖ ≤ ‖x∗ − x0‖ for all n ≥ 1. □

Remark 1 An obvious choice for zn in Algorithm 1 is to take zn = xn in Step 1. For this choice, it follows from Theorem 3.2 that conditions (i) and (ii) of Theorem 3.1 are satisfied. Consequently, setting f = 0 and un = xn in Theorem 3.1, we obtain Ω = ⋂n≥0 Fix(Tn) and the sequence {xn} generated by Algorithm 1 converges strongly to PΩ(x0), provided that the sequence {Tn} satisfies one of the NST-conditions. We thus recover a strong convergence result similar to the one obtained in Takahashi et al. [2]. The same remark applies to Algorithm 2 and Theorem 3.3.

4 Two Particular Hybrid Methods

In order to obtain a well-defined algorithm, we have to specify how to choose zn ∈ C in Step 1 of Algorithms 1 and 2. Since this step must be related to the equilibrium problem, we propose to use either a proximal step or an extragradient step for computing zn. In so doing, our method can be seen as a combination of the proximal point method or the extragradient method with a fixed-point method associated with each operator Tn. When Tn = T for all n ∈ N, such methods have been recently studied (see, for instance, [30] for the proximal point method and [22] for the extragradient method). However, before considering the case of an infinite sequence of operators {Tn}, we prove in the next two lemmas the main property that the iterates generated by each of these methods must satisfy, namely that, for all n ≥ 1 and for every x∗ ∈ Ω,

‖zn − x∗‖ ≤ ‖xn − x∗‖. (14)

Let us recall that this property is supposed to hold in Step 1 of Algorithms 1 and 2. First, we consider the proximal point method. In that case, we assume that the function f satisfies the following condition, which is stronger than (A4).


(A4a) f is monotone on C × C, i.e. f(x, y) + f(y, x) ≤ 0 for all x, y ∈ C.

Lemma 4.1 Let C be a nonempty, closed and convex subset of H and let f be a function from C × C into R satisfying conditions (A1)–(A3) and (A4a). Suppose that x∗ ∈ EP(f,C). Let xn ∈ C and rn > 0. Let also zn be defined by

f(zn, y) + (1/rn)〈y − zn, zn − xn〉 ≥ 0 ∀y ∈ C.

Then the following inequality holds:

‖zn − x∗‖² ≤ ‖xn − x∗‖² − ‖xn − zn‖². (15)

Proof For r > 0 and x ∈ H, define

Tr(x) := {z ∈ C | f(z, y) + (1/r)〈y − z, z − x〉 ≥ 0 ∀y ∈ C}.

Since f satisfies conditions (A1)–(A3) and (A4a), it follows from [33, Lemma 2.12] or [7, Corollary 1, and p. 135] that x → Tr(x) is single-valued and firmly nonexpansive for all r > 0, i.e. ‖Tr(x) − Tr(y)‖² ≤ 〈Tr(x) − Tr(y), x − y〉 for all x, y ∈ H. So, in virtue of the definitions of Trn and EP(f,C), we can write successively

‖zn − x∗‖² = ‖Trn(xn) − Trn(x∗)‖²
           ≤ 〈Trn(xn) − Trn(x∗), xn − x∗〉
           = 〈zn − x∗, xn − x∗〉
           = (1/2)(‖zn − x∗‖² + ‖xn − x∗‖² − ‖xn − zn‖²),

and thus we have

‖zn − x∗‖² ≤ ‖xn − x∗‖² − ‖xn − zn‖². □
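To get a feel for this proximal subproblem, note that in the special case f(x, y) = g(y) − g(x) with g convex (an assumption of ours, more restrictive than (A4a)), the resolvent zn = Trn(xn) is exactly the proximal point of g over C, i.e. zn = arg min_{y∈C} {g(y) + (1/(2rn))‖y − xn‖²}. A minimal sketch:

```python
import numpy as np
from scipy.optimize import minimize

def prox_step(x_n, r_n, g, bounds):
    """Resolvent T_{r_n}(x_n) when f(x, y) = g(y) - g(x) with g convex:
    z_n = argmin_{y in C} g(y) + ||y - x_n||^2 / (2 r_n)."""
    obj = lambda y: g(y) + np.sum((y - x_n) ** 2) / (2.0 * r_n)
    return minimize(obj, x_n, bounds=bounds).x

g = lambda y: np.sum(np.abs(y))            # illustrative convex function g
z = prox_step(np.array([0.8, 0.3]), 1.0, g, bounds=[(0.0, 1.0)] * 2)
print(z)   # here EP(f, C) = argmin_C g = {0}, and z moves toward it
```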

In order to obtain a similar result, but for the extragradient method, we need to introduce, in addition to the conditions (A1)–(A4), the following assumption on the function f : C × C → R:

(A5) f satisfies the following Lipschitz-type condition: there exist constants c1 > 0 and c2 > 0 such that, for every x, y, z ∈ C,

f(x, y) + f(y, z) ≥ f(x, z) − c1‖y − x‖² − c2‖z − y‖².

An example of a function f verifying assumption (A5) is given by

f(x, y) = 〈F(x), y − x〉 ∀x, y ∈ C,

where F : C → H is Lipschitz continuous on C (with constant L > 0). In this example, we have c1 = c2 = L/2. Contrary to the proximal point method, where f is supposed to be monotone, here we only assume that f is pseudomonotone.
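For this particular bifunction, the two minimization subproblems of Lemma 4.2 below have closed forms as projections, and the scheme reduces to the classical extragradient method for the variational inequality defined by F: yn = PC(xn − λnF(xn)) and zn = PC(xn − λnF(yn)). A minimal sketch (ours) on a box, where the map F and all parameter values are illustrative assumptions:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Projection onto the box C = [lo, hi]^k."""
    return np.clip(x, lo, hi)

def extragradient_step(x, F, lam):
    """One extragradient step for f(x, y) = <F(x), y - x>:
    both prox subproblems of Lemma 4.2 reduce to projections."""
    y = proj_box(x - lam * F(x))   # y_n = argmin lam f(x_n, .) + 0.5||x_n - .||^2
    z = proj_box(x - lam * F(y))   # z_n = argmin lam f(y_n, .) + 0.5||x_n - .||^2
    return y, z

F = lambda x: np.array([x[1], -x[0]]) + 0.1 * x   # monotone, Lipschitz map
x = np.array([0.9, 0.2])
for _ in range(50):
    _, x = extragradient_step(x, F, lam=0.4)
print(x)   # approaches the solution x* = 0 of the VI <F(x*), y - x*> >= 0
```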


Lemma 4.2 [10, Theorem 3.2] Let C be a nonempty, closed and convex subset of H and let f be a function from C × C into R satisfying conditions (A1)–(A5). Suppose that x∗ ∈ EP(f,C). Let xn ∈ C and λn > 0. Let also yn and zn be defined by

yn := arg min_{y∈C} {λnf(xn, y) + (1/2)‖xn − y‖²};
zn := arg min_{y∈C} {λnf(yn, y) + (1/2)‖xn − y‖²}.

Then the following inequalities hold for every y ∈ C:

(i) λnf(xn, y) ≥ λnf(xn, yn) + 〈y − yn, xn − yn〉;
(ii) λnf(yn, y) ≥ λnf(yn, zn) + 〈y − zn, xn − zn〉;
(iii) ‖zn − x∗‖² ≤ ‖xn − x∗‖² − (1 − 2λnc1)‖xn − yn‖² − (1 − 2λnc2)‖yn − zn‖².

As a consequence of these two lemmas, whichever method we choose (the proximal point method or the extragradient method), (14) holds true for all n ≥ 1 and for every x∗ ∈ Ω.

To obtain implementable algorithms, it remains to prove that if, at each iteration, zn is computed thanks to a proximal step or an extragradient step, then the sequences {xn} and {zn} generated by Algorithm 1 or Algorithm 2 satisfy the assumptions (i)–(iv) of Theorem 3.1, with un depending on the choice of the step. As a result, the sequence {xn} will be strongly convergent to PΩ(x0).

First, we consider the case when a proximal point iteration is done in Step 1 of Algorithm 1. A similar result can be obtained if Algorithm 2 is considered instead of Algorithm 1.

Theorem 4.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let f be a function from C × C into R satisfying conditions (A1)–(A3) and (A4a). Let {Tn} be a sequence of nonexpansive mappings from C into itself satisfying the NST-condition (I) with T. Suppose that Ω ≠ ∅ and that, for all n ≥ 1, the vector zn in Step 1 of Algorithm 1 is defined by

f(zn, y) + (1/rn)〈y − zn, zn − xn〉 ≥ 0 ∀y ∈ C, (16)

where rn > 0. Then the inequality ‖zn − x∗‖ ≤ ‖xn − x∗‖ holds for all n ≥ 1 and every x∗ ∈ Ω. Furthermore, the sequence {xn} generated by Algorithm 1 converges strongly to PΩ(x0), provided that lim infn→∞ rn > 0 and 0 < a ≤ αn ≤ b < 1 for all n ≥ 1.

Proof Let {xn} be the sequence generated by Algorithm 1. Then, it follows from Theorem 3.2(i)–(ii) that the sequence {xn} is bounded, ‖xn − tn‖ → 0, and

‖tn − x∗‖ ≤ ‖zn − x∗‖ for all n ≥ 1 and every x∗ ∈ Ω.


So, using (15), we can write

‖tn − x∗‖² ≤ ‖xn − x∗‖² − ‖xn − zn‖².

Hence,

‖xn − zn‖² ≤ ‖xn − x∗‖² − ‖tn − x∗‖²
           ≤ ‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖).

Since ‖xn − tn‖ → 0 and the sequences {xn} and {tn} are bounded, we can conclude that ‖xn − zn‖ → 0. So, from Theorem 3.2, we can say that assumptions (i)–(iii) of Theorem 3.1 are satisfied with un = zn. In particular, the sequence {zn} is bounded and 〈y − zn, zn − xn〉 → 0 for every y ∈ C. Taking the inferior limit in (16), we get lim infn→∞ f(zn, y) ≥ 0 for every y ∈ C. So assumption (iv) of Theorem 3.1 is also satisfied with un = zn. Then, the conclusion follows from Theorem 3.1. □

As an application of our general theorem (Theorem 3.1), we find again a convergence theorem that is similar to Theorems 5.1 and 5.2 in [4], but in the framework of Hilbert spaces.

Now, we consider the case when an extragradient iteration is done in Step 1 of Algorithm 1. A similar result can be obtained if Algorithm 2 is considered instead of Algorithm 1.

Theorem 4.2 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let f be a function from C × C into R satisfying conditions (A1)–(A5). Let {Tn} be a sequence of nonexpansive mappings from C into itself satisfying the NST-condition (I) with T. Suppose that Ω ≠ ∅ and that, for all n ≥ 1, the vector zn in Step 1 of Algorithm 1 is defined from xn via a vector yn as follows:

yn := arg min_{y∈C} {λnf(xn, y) + (1/2)‖xn − y‖²},
zn := arg min_{y∈C} {λnf(yn, y) + (1/2)‖xn − y‖²},

where λn > 0. Then the inequality ‖zn − x∗‖ ≤ ‖xn − x∗‖ holds for all n ≥ 1 and every x∗ ∈ Ω. Furthermore, the sequence {xn} generated by Algorithm 1 converges strongly to PΩ(x0), provided that, for all n ≥ 1,

0 < a ≤ αn ≤ b < 1 and 0 < λ̲ ≤ λn ≤ λ̄ < min{1/(2c1), 1/(2c2)}.

Proof Let {xn} be the sequence generated by Algorithm 1. Then, it follows from Theorem 3.2(i)–(ii) that the sequence {xn} is bounded, ‖xn − tn‖ → 0, and

‖tn − x∗‖ ≤ ‖zn − x∗‖ for all n ≥ 1 and every x∗ ∈ Ω.


So, using Lemma 4.2(iii), we can write

‖tn − x∗‖² ≤ ‖xn − x∗‖² − (1 − 2λnc1)‖xn − yn‖² − (1 − 2λnc2)‖yn − zn‖².

Consequently,

‖xn − yn‖² ≤ (1/(1 − 2λnc1))(‖xn − x∗‖² − ‖tn − x∗‖²)
           = (1/(1 − 2λnc1))(‖xn − x∗‖ − ‖tn − x∗‖)(‖xn − x∗‖ + ‖tn − x∗‖)
           ≤ (1/(1 − 2λnc1))‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖)
           ≤ (1/(1 − 2λ̄c1))‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖).

Since limn→∞ ‖xn − tn‖ = 0 and the sequences {xn} and {tn} are bounded, we obtain limn→∞ ‖xn − yn‖ = 0. So, from Theorem 3.2, we can say that assumptions (i), (ii), and (iii) of Theorem 3.1 are satisfied with un = yn. Using a similar reasoning, we can also prove that limn→∞ ‖zn − yn‖ = 0. Therefore, limn→∞ ‖xn − zn‖ = 0.

Furthermore, for all y ∈ C, we have lim infn→∞ f(yn, y) ≥ 0. Indeed, let y ∈ C and n ∈ N. Since zn ∈ C, it follows from Lemma 4.2(i) that

λnf(xn, zn) ≥ λnf(xn, yn) + 〈zn − yn, xn − yn〉.

Using successively Lemma 4.2(ii), (A5), and the last inequality, we can write

f(yn, y) ≥ f(yn, zn) + (1/λn)〈y − zn, xn − zn〉
         ≥ f(xn, zn) − f(xn, yn) − c1‖yn − xn‖² − c2‖zn − yn‖² + (1/λn)〈y − zn, xn − zn〉
         ≥ (1/λn)〈zn − yn, xn − yn〉 − c1‖yn − xn‖² − c2‖zn − yn‖² + (1/λn)〈y − zn, xn − zn〉.

Since limn→∞ ‖zn − yn‖ = limn→∞ ‖yn − xn‖ = 0, the sequence {zn} is bounded, and 0 < λ̲ ≤ λn ≤ λ̄, we easily deduce that lim infn→∞ f(yn, y) ≥ 0. In other words, assumption (iv) of Theorem 3.1 is satisfied with un = yn. Consequently, according to Theorem 3.1, the sequence {xn} converges strongly to PΩ(x0). □

At each iteration of the proximal point algorithm studied in Theorem 4.1, we have to solve the following subproblem:

Find zn ∈ C such that f(zn, y) + (1/rn)〈y − zn, zn − xn〉 ≥ 0 ∀y ∈ C.

This subproblem does not seem easy to solve. Recently, Mordukhovich et al. [34] suggested solving this inequality by using descent methods based on gap functions; see the cited reference for more details. On the other hand, if the extragradient method is used instead of the proximal point method, two constrained convex minimization problems must be solved at each iteration. The difficulty is that the strong convergence of the iterates to the projection of x0 onto the solution set Ω is obtained under the rather strong Lipschitz-type condition (A5). So, our aim in the next section is to embed a linesearch procedure into the extragradient method to avoid the use of the Lipschitz constant. In so doing, we obtain a method whose subproblems are minimization problems and whose strong convergence is established under the assumption that the function f satisfies assumptions (A1)–(A4). In fact, the Lipschitz-type condition (A5) is replaced by the condition:

(A6) The function f is jointly weakly continuous on the product Δ × Δ, where Δ ⊃ C is an open convex set containing C, in the sense that, if x, y ∈ Δ and if {xn} and {yn} are two sequences in Δ converging weakly to x and y, respectively, then f(xn, yn) → f(x, y).

5 A Linesearch Extragradient Method

In the next algorithm, we embed a linesearch procedure into the hybrid extragradient method for solving simultaneously an equilibrium problem and countably many fixed-point problems. We introduce and analyze the corresponding algorithm as follows:

Algorithm 3 Let x0 ∈ H, Ω1 = C, x1 = PΩ1(x0). Let α ∈ ]0,2[, γ ∈ ]0,1[, and let {αn} and {λn} ⊂ ]0,1[ with 0 < a ≤ αn ≤ b < 1. Set n = 1.

Step 1. Compute yn = arg min_{y∈C} {λnf(xn, y) + (1/2)‖y − xn‖²} and wn = (1 − γ^m)xn + γ^m yn, where m is the smallest positive integer such that

f(wn, xn) − f(wn, yn) ≥ (α/(2λn))‖xn − yn‖². (17)

Step 2. Calculate zn = PC(xn − σngn), where gn ∈ ∂2f(wn, xn) := ∂[f(wn, ·)](xn), and σn = f(wn, xn)/‖gn‖² if yn ≠ xn and σn = 0 otherwise.
Step 3. Calculate tn = αnzn + (1 − αn)Tnzn, where 0 < a ≤ αn ≤ b < 1.
Step 4. Compute xn+1 = PΩn+1(x0), where

Ωn+1 = {z ∈ Ωn | ‖tn − z‖² ≤ ‖xn − z‖² − (1 − αn)αn‖Tnzn − zn‖²}.

Step 5. Set n := n + 1 and go to Step 1.
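To make Steps 1–2 concrete, here is a compact sketch (ours) of the Armijo backtracking and the subgradient projection step for f(x, y) = 〈F(x), y − x〉, in which case yn = PC(xn − λnF(xn)) and ∂2f(wn, xn) = {F(wn)}; the map F and the parameter values are illustrative assumptions:

```python
import numpy as np

def proj_box(x):                       # C = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

def linesearch_step(x, F, lam=0.5, alpha=1.0, gamma=0.5, max_m=30):
    """Steps 1-2 of Algorithm 3 for f(x, y) = <F(x), y - x>."""
    y = proj_box(x - lam * F(x))       # prox subproblem has a closed form
    if np.allclose(y, x):
        return x                       # sigma_n = 0 and z_n = x_n
    f = lambda u, v: F(u) @ (v - u)
    for m in range(1, max_m + 1):      # Armijo backtracking for (17)
        w = (1.0 - gamma ** m) * x + gamma ** m * y
        if f(w, x) - f(w, y) >= alpha / (2.0 * lam) * np.sum((x - y) ** 2):
            break
    g = F(w)                           # g_n, a subgradient of f(w_n, .) at x_n
    sigma = f(w, x) / (g @ g)
    return proj_box(x - sigma * g)     # z_n = P_C(x_n - sigma_n g_n)

F = lambda x: np.array([x[1], -x[0]]) + 0.1 * x   # illustrative monotone map
x = np.array([0.9, 0.2])
for _ in range(100):
    x = linesearch_step(x, F)
print(x)   # Fejer-monotone iterates with respect to EP(f, C), cf. (20)
```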

Before proceeding to the strong convergence of the iterates {xn} generated by Algorithm 3, we give an important property of the subdifferential of f in the next proposition.


Proposition 5.1 [22, Proposition 4.3] Let f : Δ × Δ → R be a function satisfying conditions (A3) and (A6). Let x, v ∈ Δ and let {xn} and {vn} be two sequences in Δ converging weakly to x and v, respectively. Then, for any ε > 0, there exist η > 0 and nε ∈ N such that

∂2f(vn, xn) ⊂ ∂2f(v, x) + (ε/η)B

for every n ≥ nε, where B denotes the closed unit ball in H.

Theorem 5.1 Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let f be a function from Δ × Δ into R satisfying conditions (A1)–(A4) and (A6). Let {Tn} be a sequence of nonexpansive mappings from C into itself satisfying the NST-condition (I) with T. Suppose that Ω ≠ ∅. Then the sequence {xn} generated by Algorithm 3 converges strongly to PΩ(x0), provided that, for all n ≥ 1,

0 < λ̲ ≤ λn ≤ λ̄ < 1 and 0 < a ≤ αn ≤ b < 1.

Proof When yn = xn, we have wn = xn = zn, so the corresponding iteration is well defined. When yn ≠ xn, it was proven (see [22, Proposition 4.1]) that f(wn, xn) > 0, 0 ∉ ∂2f(wn, xn), and the linesearch procedure is well defined in the sense that (17) is obtained after finitely many iterations. So, σn and the corresponding iteration are well defined.

Now we prove that the vector zn computed at Step 2 of Algorithm 3 satisfies the condition

‖zn − x∗‖ ≤ ‖xn − x∗‖ for all n ≥ 1 and every x∗ ∈ Ω. (18)

For this purpose, let n ≥ 1 and x∗ ∈ Ω. Since gn ∈ ∂2f(wn, xn), we have, for all y ∈ C,

f(wn, y) − f(wn, xn) ≥ 〈gn, y − xn〉.

Taking y = x∗ in this inequality, using assumption (A4), and the definition of σn, we can write successively

〈gn, xn − x∗〉 ≥ f(wn, xn) − f(wn, x∗) ≥ f(wn, xn) = σn‖gn‖². (19)

Thus, using this inequality, we have

‖zn − x∗‖² = ‖PC(xn − σngn) − PC(x∗)‖²
           ≤ ‖xn − σngn − x∗‖²
           = ‖xn − x∗‖² − 2σn〈gn, xn − x∗〉 + σn²‖gn‖²
           ≤ ‖xn − x∗‖² − 2σn²‖gn‖² + σn²‖gn‖²
           = ‖xn − x∗‖² − σn²‖gn‖². (20)


Hence, we obviously obtain (18). Noting that this zn verifies Step 1 of Algorithm 1 and Algorithm 2, it follows from Theorem 3.2 (Theorem 3.3, respectively) that, for all n ≥ 1 and every x∗ ∈ Ω, we can write

‖xn − x0‖ ≤ ‖x∗ − x0‖, ‖tn − x∗‖ ≤ ‖zn − x∗‖ (21)

and

‖Tnzn − zn‖ → 0, ‖tn − xn‖ → 0 as n → ∞. (22)

We shall now show that ‖Tnxn − xn‖ → 0 as n → ∞, i.e. assumption (ii) of Theorem 3.1 (General Strong Convergence Theorem) holds true. For this purpose, it will be sufficient, in virtue of conclusion (iii) of Theorem 3.2 (Theorem 3.3, respectively), to show that ‖zn − xn‖ → 0 as n → +∞. Indeed, using (21) and (20), we have

‖tn − x∗‖² ≤ ‖zn − x∗‖² ≤ ‖xn − x∗‖² − σn²‖gn‖² (23)

and thus

σn²‖gn‖² ≤ ‖xn − tn‖(‖xn − x∗‖ + ‖tn − x∗‖). (24)

Now, the two sequences {xn} and {tn} being bounded and limn→∞ ‖xn − tn‖ = 0, we can conclude from (24) that limn→∞ σn‖gn‖ = 0. However, by the definition of xn and zn, and the nonexpansiveness of the projection onto C, we can write

‖zn − xn‖ = ‖PC(xn − σngn) − PC(xn)‖ ≤ ‖σngn‖,

and thus ‖zn − xn‖ → 0 as n → ∞. So, from Theorem 3.2 (Theorem 3.3, respectively), we can conclude that assumption (ii), and also assumption (i) of Theorem 3.1 (because we can take x∗ = PΩ(x0) in (21)), are satisfied.

In order to finish the proof, we shall now prove that conditions (iii) and (iv) of Theorem 3.1 hold true with the sequence {un} = {xn}. Let {xni} be any subsequence of {xn}. Since {xn} is bounded, we can suppose that xni ⇀ x. Then we can prove exactly as in Step 4 of the proof of [23, Theorem 4.4] that the sequences {yni} and {wni} are bounded. So there exist subsequences of {xni}, {yni}, and {wni}, again denoted {xni}, {yni}, and {wni}, that converge weakly to x, y, and w, respectively. Then, using Proposition 5.1, we obtain that the sequence {gni} is bounded. Consequently, we can deduce that

f(wni, xni) = σni‖gni‖² = (σni‖gni‖)‖gni‖ → 0 as i → +∞.

Furthermore, the function f(wni, ·) being convex, and wni = (1 − ρni)xni + ρniyni with ρni ∈ (0,1), we can write

ρni[f(wni, xni) − f(wni, yni)] ≤ f(wni, xni).

Now, by the linesearch property (17), we have, after multiplication by ρni,

(ρniα/(2λni))‖xni − yni‖² ≤ ρni[f(wni, xni) − f(wni, yni)]
                           ≤ f(wni, xni) → 0. (25)

Since 0 < λ̲ ≤ λni ≤ 1 for all i, and xni ⇀ x, we deduce that

‖xni − yni‖ → 0 and yni ⇀ x as i → +∞. (26)

Indeed, in the case when lim sup ρni > 0, there exist ρ > 0 and a subsequence of {ρni}, denoted again {ρni}, such that ρni → ρ. Then, by (25), we obtain ‖xni − yni‖ → 0 as i → ∞.

In the case when ρni → 0, let {mi} be the sequence of the smallest nonnegative integers of the linesearch such that, for all i,

f(wni, xni) − f(wni, yni) ≥ (α/(2λni))‖xni − yni‖²,

where wni = (1 − γ^mi)xni + γ^mi yni. Since ρni = γ^mi → 0, it follows that mi > 1 for i sufficiently large and, consequently, that

f(w̄ni, xni) − f(w̄ni, yni) < (α/(2λni))‖xni − yni‖², (27)

where w̄ni = (1 − γ^{mi−1})xni + γ^{mi−1}yni. On the other hand, by virtue of Lemma 4.2(i), we have

〈xn − yn, y − yn〉 ≤ λnf(xn, y) − λnf(xn, yn) (28)

for all n ∈ N. Taking n = ni and y = xni in (28), we can write

‖xni − yni‖² ≤ −λni f(xni, yni). (29)

Combining (27) with (29), we obtain

f(w̄ni, xni) − f(w̄ni, yni) < −(α/2)f(xni, yni). (30)

Taking the limit in (30) as i → ∞, using the weak continuity of f, and recalling that xni ⇀ x, yni ⇀ y, and γ^mi → 0, we have w̄ni ⇀ x and

−f(x, y) ≤ −(α/2)f(x, y).

So f(x, y) ≥ 0 because α ∈ ]0,2[. Then it follows from (29) that (26) is satisfied. To finish the proof, by definition of yni (see (28)), we can write

f(xni, y) ≥ f(xni, yni) + (1/λni)〈xni − yni, y − yni〉 ∀y ∈ C.

Taking the superior limit of both sides as i → ∞, we obtain lim supi→∞ f(xni, y) ≥ 0. As we only consider subsequences of subsequences, this superior limit inequality also holds for the subsequence chosen at the beginning of the second part of the proof. So, from Theorem 3.2 (Theorem 3.3, respectively), we can conclude that assumption (iv), and also assumption (iii) of Theorem 3.1 (because {xn} = {un}), are satisfied. □

A similar theorem can be derived when Step 4 of Algorithm 3 is replaced by Step 3 of Algorithm 2. The corresponding algorithm will be called Algorithm 4 in the next section.

6 Numerical Results

In this section, we consider some numerical examples to compare the behavior of Algorithm 2, when a pure extragradient iteration is done in Step 1, with Algorithm 4, where a linesearch is incorporated into the extragradient iteration. In all the examples, the Hilbert space H is R² and the operators Tn are defined for each n as in (2). In particular, the sequence {Tn} satisfies the NST-condition (I) with T = {PH̄}, where

H̄ = {x ∈ R² | 〈x − d, d〉 ≥ 0}.

For the equilibrium problem, we consider two academic examples proposed in [34], Examples 1 and 3. The corresponding equilibrium functions f1 and f2 are defined, for every x = (x1, x2), y = (y1, y2) ∈ C = [0,1] × [0,1], by

f1(x, y) := (y1 − y2)² − (x1 − x2)² and f2(x, y) := (y1 − x1)(2y1 + x1).

The function f1 is monotone on C × C while the function f2 is pseudomonotone on C × C. Then it is easy to see that the corresponding solution sets are

EP(f1,C) = {x ∈ C | x1 = x2} and EP(f2,C) = {x ∈ C | x1 = 0}.

The problem is to find the projection of a vector x0 ∈ C onto the sets Ωi, i = 1, 2, where

Ωi = EP(fi,C) ∩ (⋂n≥0 Hn) and Hn = {x ∈ H | 〈x − (1 − 2^{−n−1})d, d〉 ≥ 0}.

In Example 1, we take f = f1 and d = (0.5, 0), while in Example 2, we consider f = f2 and d = (−0.25, 0.25). The intersection set is given, for Example 1, by

Ω1 = {(x1, x2) ∈ C | x1 = x2 and x1 ≥ 0.5}

and, for Example 2, by

Ω2 = {(x1, x2) ∈ C | x1 = 0 and x2 ≥ 0.5}.

On the other hand, we choose five starting points and, for each of these points, we use Algorithm 2 (with a pure extragradient step) and Algorithm 4 (with a linesearch incorporated into the extragradient step). Moreover, in Example 1, the parameters are chosen, for all n, as follows:

λn = 0.75, αn = 0.5, α = 0.5, γ = 0.5

and, in Example 2, as

λn = 1.0, αn = 0.5, α = 1.99, γ = 0.5.

Table 1 The results of Example 1 for the extragradient and linesearch methods

Starting point   Solution       Iterations (Extragradient / Linesearch)   CPU in sec. (Extragradient / Linesearch)
(0, 0.5)         (0.5, 0.5)     29 / 45                                   1.14 / 2.42
(0, 1)           (0.5, 0.5)     19 / 39                                   0.45 / 1.23
(0.5, 0)         (0.5, 0.5)     215 / 553                                 5.40 / 9.20
(0.5, 1)         (0.75, 0.75)   13 / 22                                   0.45 / 0.76
(1, 0)           (0.5, 0.5)     14 / 25                                   0.53 / 0.71

Table 2 The results of Example 2 for the extragradient and linesearch methods

Starting point   Solution    Iterations (Extragradient / Linesearch)   CPU in sec. (Extragradient / Linesearch)
(0.5, 0)         (0, 0.5)    15 / 15                                   0.62 / 0.42
(1, 0)           (0, 0.5)    26 / 77                                   0.92 / 1.68
(1, 0.5)         (0, 0.5)    22 / 47                                   0.81 / 0.81
(1, 1)           (0, 1)      16 / 31                                   0.62 / 0.65
(0.25, 0.75)     (0, 0.75)   13 / 25                                   0.40 / 0.43

We have implemented Algorithms 2 and 4 in MATLAB 7.0, and we have solved the optimization subproblems with the solver FMINCON from the Optimization Toolbox. Since the solution set SOL is known for each test problem, we have used as stopping criterion the following error measure:

‖x − SOL‖ ≤ 10^{−3}. (31)

The numerical results, namely the number of iterations and the CPU time to get an approximate solution satisfying condition (31), are reported for each starting point in Table 1 for Example 1 and in Table 2 for Example 2. From these preliminary results, it seems that the numerical behavior of Algorithm 2 with pure extragradient steps is better than that of Algorithm 4, where linesearches are considered.

7 Conclusions

In this paper, we have presented a general algorithmic framework for finding the projection of a point onto the set of common solutions of an equilibrium problem and a sequence of fixed-point problems in real Hilbert spaces. The convergence results have been developed under the assumption that the solution set Ω is nonempty. Only preliminary numerical results have been reported; many other test problems should be considered, and other choices of parameter values should be studied, to improve the performance of these algorithms.

Future work aims to modify the present method to obtain a convergence result in the form of the following trichotomy: (a) Ω ≠ ∅ and xn → PΩ(x0); (b) Ω = ∅ and ‖xn‖ → +∞; (c) Ω = ∅ and the algorithm terminates.

Acknowledgements The authors would like to thank the Editor, the Associate Editor, and the two anonymous referees for their comments and suggestions, which significantly improved the presentation of an earlier version of the paper. This research is funded by the Department of Science and Technology at Ho Chi Minh City, Vietnam. Computing resources and support provided by the Institute for Computational Science and Technology at Ho Chi Minh City (ICST) are gratefully acknowledged. The work of Thi Thu Van Nguyen is also supported by the Vietnam National University at HCMC.

References

1. Kimura, Y., Nakajo, K.: Viscosity approximations by the shrinking projection method in Hilbert spaces. Comput. Math. Appl. 63, 1400–1408 (2012)
2. Takahashi, W., Takeuchi, Y., Kubota, R.: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276–286 (2008)
3. Takahashi, W.: Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 719–734 (2009)
4. Takahashi, W., Yao, J.-C.: Strong convergence theorems by hybrid methods for countable families of nonlinear operators in Banach spaces. J. Fixed Point Theory Appl. 11, 333–353 (2012)
5. Nakajo, K., Takahashi, W.: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372–379 (2003)
6. Nakajo, K., Shimoji, K., Takahashi, W.: Strong convergence to common fixed points of families of nonexpansive mappings in Banach spaces. J. Nonlinear Convex Anal. 8, 11–34 (2007)
7. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
8. Giannessi, F., Maugeri, A., Pardalos, P.: Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models. Kluwer Academic, Dordrecht (2001)
9. Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Daniele, P., Giannessi, F., Maugeri, A. (eds.) Equilibrium Problems and Variational Models, pp. 289–298. Kluwer Academic, Dordrecht (2003)
10. Tran, D.Q., Le Dung, M., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
11. Fan, K.: A minimax inequality and applications. In: Shisha, O. (ed.) Inequalities III, pp. 103–113. Academic Press, New York (1972)
12. Brézis, H., Nirenberg, L., Stampacchia, G.: A remark on Ky Fan's minimax principle. Boll. Unione Mat. Ital. VI, 129–132 (1972)
13. Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Existence and solution methods for equilibria. Eur. J. Oper. Res. 227, 1–11 (2013)
14. Iiduka, H.: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 59, 873–885 (2010)
15. Sahu, D.R., Wong, N.C., Yao, J.C.: A unified hybrid iterative method for solving variational inequalities involving generalized pseudocontractive mappings. SIAM J. Control Optim. 50, 2335–2354 (2012)
16. Iusem, A.N., Sosa, W.: On the proximal point method for equilibrium problems in Hilbert spaces. Optimization 59, 1259–1274 (2010)
17. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: A bundle method for solving equilibrium problems. Math. Program. 116, 529–552 (2009)
18. Nguyen, T.T.V., Strodiot, J.J., Nguyen, V.H.: The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 44, 175–192 (2009)
19. Ceng, L.C., Ansari, Q.H., Yao, J.C.: Hybrid pseudoviscosity approximation schemes for equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 4, 743–754 (2010)
20. Ceng, L.C., Guu, S.M., Hu, H.Y., Yao, J.C.: Hybrid shrinking projection method for a generalized equilibrium problem, a maximal monotone operator and a countable family of relatively nonexpansive mappings. Comput. Math. Appl. 61, 2468–2479 (2011)
21. Ceng, L.C., Petrusel, A., Lee, C., Wong, M.M.: Two extragradient approximation methods for variational inequalities and fixed point problems of strict pseudo-contractions. Taiwan. J. Math. 13, 607–632 (2009)
22. Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605–627 (2012)
23. Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization (2013). doi:10.1080/02331934.2012.759327
24. Strodiot, J.J., Nguyen, V.H., Vuong, P.T.: Strong convergence of two hybrid extragradient methods for solving equilibrium and fixed point problems. Vietnam J. Math. 40, 371–389 (2012)
25. Peng, J.-W., Yao, J.-C.: Some new iterative algorithms for generalized mixed equilibrium problems with strict pseudo-contractions and monotone mappings. Taiwan. J. Math. 13, 1537–1582 (2009)
26. Peng, J.-W., Yao, J.-C.: Strong convergence theorems of iterative scheme based on the extragradient method for mixed equilibrium problems and fixed point problems. Math. Comput. Model. 49, 1816–1828 (2009)
27. Shehu, Y.: Hybrid iterative scheme for fixed point problem, infinite systems of equilibrium problems and variational inequality problems. Comput. Math. Appl. 63, 1089–1103 (2012)
28. Yao, Y., Postolache, M.: Iterative methods for pseudomonotone variational inequalities and fixed-point problems. J. Optim. Theory Appl. 155, 273–287 (2012)
29. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)
30. Jaiboon, C., Kumam, P.: Strong convergence theorems for solving equilibrium problems and fixed point problems of ξ-strict pseudo-contraction mappings by two hybrid projection methods. J. Comput. Appl. Math. 230, 722–732 (2010)
31. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
32. Martinez-Yanes, C., Xu, H.K.: Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 64, 2400–2411 (2006)
33. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
34. Mordukhovich, B., Panicucci, B., Pappalardo, M., Passacantando, M.: Hybrid proximal methods for equilibrium problems. Optim. Lett. 6, 1535–1550 (2012)