# Convergence results for a common solution of a finite family of variational inequality problems for monotone mappings with Bregman distance function

## Abstract

In this paper, we introduce an iterative process which converges strongly to a common solution of a finite family of variational inequality problems for monotone mappings with Bregman distance function. Our convergence theorem is applied to the convex minimization problem. Our theorems extend and unify most of the results that have been proved for the class of monotone mappings.

MSC:47H05, 47J05, 47J25.

## 1 Introduction

Throughout this paper, E is a real reflexive Banach space with ${E}^{\ast }$ as its dual and $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ is a proper, lower semicontinuous and convex function. We denote by domf the domain of f, defined by $domf:=\left\{x\in E:f\left(x\right)<+\mathrm{\infty }\right\}$. For any $x\in int\left(domf\right)$ and $y\in E$, the right-hand derivative of f at x in the direction of y is defined by

${f}^{0}\left(x,y\right):=\underset{t\to {0}^{+}}{lim}\frac{f\left(x+ty\right)-f\left(x\right)}{t}.$
(1.1)

The function f is said to be Gâteaux differentiable at x if ${lim}_{t\to {0}^{+}}\left(f\left(x+ty\right)-f\left(x\right)\right)/t$ exists for any $y\in E$. In this case, ${f}^{0}\left(x,y\right)$ coincides with $〈\mathrm{\nabla }f\left(x\right),y〉$, where $\mathrm{\nabla }f\left(x\right)$ is the value of the gradient ∇f of f at x. The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at any $x\in int\left(domf\right)$. The function f is said to be Fréchet differentiable at x if this limit is attained uniformly in $\parallel y\parallel =1$. We say that f is uniformly Fréchet differentiable on a subset C of E if the limit is attained uniformly for $x\in C$ and $\parallel y\parallel =1$.

Let $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a Gâteaux differentiable function. The function ${D}_{f}:domf×int\left(domf\right)\to \left[0,+\mathrm{\infty }\right)$ defined by

${D}_{f}\left(x,y\right):=f\left(x\right)-f\left(y\right)-〈\mathrm{\nabla }f\left(y\right),x-y〉$

is called the Bregman distance with respect to f.

The Bregman projection with respect to f of $x\in int\left(domf\right)$ onto the nonempty, closed and convex set $C\subset int\left(domf\right)$ is the unique vector ${P}_{C}^{f}\left(x\right)\in C$ satisfying

${D}_{f}\left({P}_{C}^{f}\left(x\right),x\right)=inf\left\{{D}_{f}\left(y,x\right):y\in C\right\}.$

Remark 1.1 If E is a smooth and strictly convex Banach space and $f\left(x\right)={\parallel x\parallel }^{2}$ for all $x\in E$, then we have that $\mathrm{\nabla }f\left(x\right)=2J\left(x\right)$ for all $x\in E$, where J is the normalized duality mapping from E into ${E}^{\ast }$, and hence ${D}_{f}\left(x,y\right)$ becomes ${D}_{f}\left(x,y\right)={\parallel x\parallel }^{2}-2〈x,Jy〉+{\parallel y\parallel }^{2}=\varphi \left(x,y\right)$ for all $x,y\in E$, which is the Lyapunov function introduced by Alber and studied by many authors. In addition, under the same condition, the Bregman projection ${P}_{C}^{f}\left(x\right)$ reduces to the generalized projection ${\mathrm{\Pi }}_{C}\left(x\right)$, which is defined by

$\varphi \left({\mathrm{\Pi }}_{C}\left(x\right),x\right)=\underset{y\in C}{min}\varphi \left(y,x\right).$
(1.2)

If $E=H$, a Hilbert space, then J is the identity mapping, and hence the Bregman projection ${P}_{C}^{f}\left(x\right)$ reduces to the metric projection ${P}_{C}\left(x\right)$ of H onto C.
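To make the definition concrete, the following sketch (not part of the paper; NumPy is assumed) evaluates ${D}_{f}$ for two classical generating functions: $f\left(x\right)={\parallel x\parallel }^{2}$, which by Remark 1.1 gives ${D}_{f}\left(x,y\right)={\parallel x-y\parallel }^{2}$, and the negative entropy $f\left(x\right)={\sum }_{i}{x}_{i}log{x}_{i}$ on the positive orthant, which gives the Kullback-Leibler divergence when x and y carry equal total mass:

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

# f(x) = ||x||^2  ->  D_f(x, y) = ||x - y||^2 (Remark 1.1)
sq = lambda x: np.dot(x, x)
grad_sq = lambda x: 2.0 * x

# f(x) = sum x_i log x_i (negative entropy on the positive orthant)
negent = lambda x: np.sum(x * np.log(x))
grad_negent = lambda x: np.log(x) + 1.0

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])

d_sq = bregman(sq, grad_sq, x, y)          # equals ||x - y||^2 = 0.18
d_kl = bregman(negent, grad_negent, x, y)  # equals the KL divergence of x from y here
```

In both cases ${D}_{f}\left(x,y\right)\ge 0$ and ${D}_{f}\left(x,x\right)=0$, but ${D}_{f}$ is in general neither symmetric nor a metric.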

A mapping $A:D\left(A\right)\subset E\to {E}^{\ast }$ is said to be γ-inverse strongly monotone if there exists a positive real number γ such that

$〈Ax-Ay,x-y〉\ge \gamma {\parallel Ax-Ay\parallel }^{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in D\left(A\right).$
(1.3)

A is said to be monotone if, for each $x,y\in D\left(A\right)$, the following inequality holds:

$〈Ax-Ay,x-y〉\ge 0.$
(1.4)

Clearly, the class of monotone mappings includes the class of γ-inverse strongly monotone mappings.

Let C be a nonempty, closed and convex subset of E and $A:C\to {E}^{\ast }$ be a monotone mapping. The problem of finding a point $u\in C$ such that

$〈Au,v-u〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }v\in C,$
(1.5)

is called the variational inequality problem. The set of solutions of the variational inequality is denoted by $\mathit{VI}\left(C,A\right)$.

Variational inequality problems are closely related to the convex minimization problem, the problem of finding zeros of monotone mappings and the complementarity problem. Consequently, many researchers (see, e.g., [3, 5, 10–15]) have made efforts to obtain iterative methods for approximating solutions of variational inequality problems.

If $E=H$, a Hilbert space, Iiduka et al.  introduced the following projection algorithm:

${x}_{0}\in C\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{x}_{n+1}={P}_{C}\left({x}_{n}-{\alpha }_{n}A{x}_{n}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$
(1.6)

where ${P}_{C}$ is the metric projection from H onto C and $\left\{{\alpha }_{n}\right\}$ is a sequence of positive real numbers. They proved that the sequence $\left\{{x}_{n}\right\}$ generated by (1.6) converges weakly to some element of $\mathit{VI}\left(C,A\right)$ provided that A is a γ-inverse strongly monotone mapping.
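As a sanity check, scheme (1.6) can be run in the Hilbert space ${\mathbb{R}}^{2}$ with illustrative data that are not from the paper: C the closed unit ball, $A\left(x\right)=x-b$ (which is 1-inverse strongly monotone), and a constant step size $\alpha \in \left(0,2\gamma \right)$; the solution of $\mathit{VI}\left(C,A\right)$ is then ${P}_{C}\left(b\right)$:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball of the given radius centred at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def scheme_16(A, x0, alpha=0.5, n_iter=200):
    """x_{n+1} = P_C(x_n - alpha_n A x_n), here with a constant step alpha in (0, 2*gamma)."""
    x = x0
    for _ in range(n_iter):
        x = proj_ball(x - alpha * A(x))
    return x

b = np.array([2.0, 0.0])
A = lambda x: x - b              # 1-inverse strongly monotone
x_star = proj_ball(b)            # the solution of VI(C, A) is P_C(b) = (1, 0)
x = scheme_16(A, np.zeros(2))    # x ends up close to x_star
```

In a finite-dimensional space weak and strong convergence coincide, so this toy run cannot distinguish the two; it only illustrates the mechanics of the iteration.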

If E is a 2-uniformly convex and uniformly smooth Banach space, and A is γ-inverse strongly monotone, Iiduka and Takahashi  introduced the following iteration scheme for finding a solution of the variational inequality problem:

${x}_{0}\in C\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{x}_{n+1}={\mathrm{\Pi }}_{C}{J}^{-1}\left(J{x}_{n}-{\alpha }_{n}A{x}_{n}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$
(1.7)

where ${\mathrm{\Pi }}_{C}$ is the generalized projection from E onto C, J is the normalized duality mapping from E into ${E}^{\ast }$ and $\left\{{\alpha }_{n}\right\}$ is a sequence of positive real numbers. They proved that the sequence $\left\{{x}_{n}\right\}$ generated by (1.7) converges weakly to some element of $\mathit{VI}\left(C,A\right)$.

It is worth mentioning that the convergence obtained above is weak convergence. Our concern now is to look for an iteration scheme which converges strongly to a solution of the variational inequality problem for a monotone mapping A.

In this regard, when E is a 2-uniformly convex and uniformly smooth Banach space and A is a γ-inverse strongly monotone mapping satisfying $\parallel Au\parallel \le \parallel Ay-Au\parallel$ for all $y\in C$ and $u\in \mathit{VI}\left(C,A\right)$ for $\mathit{VI}\left(C,A\right)\ne \mathrm{\varnothing }$, Iiduka and Takahashi  studied the following iterative scheme for a solution of the variational inequality problem:

$\left\{\begin{array}{l}{x}_{0}\in C\phantom{\rule{1em}{0ex}}\text{chosen arbitrarily},\\ {y}_{n}={\mathrm{\Pi }}_{C}{J}^{-1}\left(J{x}_{n}-{\alpha }_{n}A{x}_{n}\right),\\ {C}_{n}=\left\{z\in E:\varphi \left(z,{y}_{n}\right)\le \varphi \left(z,{x}_{n}\right)\right\},\\ {Q}_{n}=\left\{z\in E:〈{x}_{n}-z,J{x}_{0}-J{x}_{n}〉\ge 0\right\},\\ {x}_{n+1}={\mathrm{\Pi }}_{{C}_{n}\cap {Q}_{n}}\left({x}_{0}\right),\phantom{\rule{1em}{0ex}}n\ge 1,\end{array}$
(1.8)

where $\left\{{\alpha }_{n}\right\}$ is a positive real sequence satisfying certain mild conditions and ${\mathrm{\Pi }}_{{C}_{n}\cap {Q}_{n}}$ is the generalized projection from E onto ${C}_{n}\cap {Q}_{n}$, J is the duality mapping from E into ${E}^{\ast }$. Then they proved that the sequence $\left\{{x}_{n}\right\}$ converges strongly to an element of $\mathit{VI}\left(C,A\right)$.

Recently, Zegeye and Shahzad  studied the following iterative scheme for a common solution of two variational inequality problems for continuous monotone mappings in a uniformly smooth and strictly convex real Banach space E which also enjoys the Kadec-Klee property:

$\left\{\begin{array}{l}{x}_{0}\in {C}_{0}=C\phantom{\rule{1em}{0ex}}\text{chosen arbitrarily},\\ {u}_{n}={T}_{1,{\gamma }_{n}}{x}_{n};\phantom{\rule{2em}{0ex}}{v}_{n}={T}_{2,{\gamma }_{n}}{x}_{n},\\ {w}_{n}={J}^{-1}\left(\beta J{u}_{n}+\left(1-\beta \right)J{v}_{n}\right),\\ {C}_{n+1}=\left\{z\in {C}_{n}:\varphi \left(z,{w}_{n}\right)\le \varphi \left(z,{x}_{n}\right)\right\},\\ {x}_{n+1}={\mathrm{\Pi }}_{{C}_{n+1}}\left({x}_{0}\right),\phantom{\rule{1em}{0ex}}n\ge 0,\end{array}$
(1.9)

where ${T}_{i,\gamma }x:=\left\{z\in C:〈{A}_{i}z,y-z〉+\frac{1}{\gamma }〈y-z,Jz-Jx〉\ge 0,\mathrm{\forall }y\in C\right\}$ for all $x\in E$, $i=1,2$, and $\beta ,{\gamma }_{n}\in \left(0,1\right)$ satisfy certain mild conditions. Then they proved that the sequence $\left\{{x}_{n}\right\}$ converges strongly to ${\mathrm{\Pi }}_{F}\left({x}_{0}\right)$, where ${\mathrm{\Pi }}_{F}$ is the generalized projection from E onto $F:={\bigcap }_{i=1}^{2}\mathit{VI}\left(C,{A}_{i}\right)\ne \mathrm{\varnothing }$.

In 1967, Bregman  discovered an elegant and effective technique for using the so-called Bregman distance function ${D}_{f}\left(\cdot ,\cdot \right)$ in the process of designing and analyzing feasibility and optimization algorithms. Using Bregman’s distance function and its properties, authors have opened a growing area of research not only for iterative algorithms of solving feasibility and optimization problems but also for algorithms of solving nonlinear, equilibrium, variational inequality, fixed point problems and others (see, e.g.,  and the references therein).

In 2010, Reich and Sabach  proposed an algorithm for finding a common zero point of a finite family of maximal monotone mappings ${A}_{i}:E\to {2}^{{E}^{\ast }}$ ($i=1,2,\dots ,N$) in a general reflexive Banach space E as follows:

$\left\{\begin{array}{l}{x}_{0}\in E\phantom{\rule{1em}{0ex}}\text{chosen arbitrarily},\\ {y}_{n}^{i}={Res}_{{\lambda }_{n}^{i}{A}_{i}}\left({x}_{n}+{e}_{n}^{i}\right),\\ {C}_{n}^{i}=\left\{z\in E:{D}_{f}\left(z,{y}_{n}^{i}\right)\le {D}_{f}\left(z,{x}_{n}+{e}_{n}^{i}\right)\right\},\\ {C}_{n}={\bigcap }_{i=1}^{N}{C}_{n}^{i},\\ {Q}_{n}=\left\{z\in E:〈\mathrm{\nabla }f\left({x}_{0}\right)-\mathrm{\nabla }f\left({x}_{n}\right),z-{x}_{n}〉\le 0\right\},\\ {x}_{n+1}={P}_{{C}_{n}\cap {Q}_{n}}^{f}\left({x}_{0}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,\end{array}$
(1.10)

where ${\left\{{\lambda }_{n}^{i}\right\}}_{i=1}^{N}\subset \left(0,\mathrm{\infty }\right)$, ${\left\{{e}_{n}^{i}\right\}}_{i=1}^{N}$ are error sequences in E with ${e}_{n}^{i}\to 0$ and ${P}_{C}^{f}$ is the Bregman projection with respect to f from E onto a closed and convex subset C of E. Those authors showed that the sequence $\left\{{x}_{n}\right\}$ defined by (1.10) converges strongly to a common element of ${\bigcap }_{i=1}^{N}{A}_{i}^{-1}\left({0}^{\ast }\right)={\bigcap }_{i=1}^{N}VI\left(E,{A}_{i}\right)$ under some mild conditions. Similar results are also available in [26, 27].

Remark 1.2 It is worth mentioning, however, that the iteration processes (1.8)-(1.10) are computationally demanding in the sense that at each stage of the iteration the set(s) ${C}_{n}$ and/or ${Q}_{n}$ must be computed, and the next iterate is taken as the Bregman projection of ${x}_{0}$ onto their intersection. This is difficult to carry out in applications.

It is our purpose in this paper to introduce an iterative scheme $\left\{{x}_{n}\right\}$ which converges strongly to a common solution of a finite family of variational inequality problems for monotone mappings in real reflexive Banach spaces. Our scheme does not involve computations of ${C}_{n}$ or ${Q}_{n}$ for each $n\ge 1$. Furthermore, we apply our convergence theorem to a convex minimization problem. Our theorems extend and unify most of the results that have been proved for this important class of nonlinear operators.

## 2 Preliminaries

Let $x\in int\left(domf\right)$. The subdifferential of f at x is the convex set defined by $\partial f\left(x\right)=\left\{{x}^{\ast }\in {E}^{\ast }:f\left(x\right)+〈{x}^{\ast },y-x〉\le f\left(y\right),\mathrm{\forall }y\in E\right\}$. The Fenchel conjugate of f is the function ${f}^{\ast }:{E}^{\ast }\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ defined by ${f}^{\ast }\left({x}^{\ast }\right)=sup\left\{〈{x}^{\ast },x〉-f\left(x\right):x\in E\right\}$.

The function f is said to be:

1. (i)

Essentially smooth if ∂f is both locally bounded and single-valued on its domain.

2. (ii)

Essentially strictly convex if ${\left(\partial f\right)}^{-1}$ is locally bounded on its domain and f is strictly convex on every convex subset of domf.

3. (iii)

Legendre if it is both essentially smooth and essentially strictly convex.

We remark that we have the following:

1. (i)

f is essentially smooth if and only if ${f}^{\ast }$ is essentially strictly convex (see , Theorem 5.4).

2. (ii)

${\left(\partial f\right)}^{-1}=\partial {f}^{\ast }$ (see ).

3. (iii)

f is Legendre if and only if ${f}^{\ast }$ is Legendre (see , Corollary 5.5).

4. (iv)

If f is Legendre, then $\mathrm{\nabla }f$ is a bijection satisfying $\mathrm{\nabla }f={\left(\mathrm{\nabla }{f}^{\ast }\right)}^{-1}$, $ran\mathrm{\nabla }f=dom\mathrm{\nabla }{f}^{\ast }=int\left(dom{f}^{\ast }\right)$ and $ran\mathrm{\nabla }{f}^{\ast }=dom\mathrm{\nabla }f=int\left(domf\right)$ (see , Theorem 5.10).

When the subdifferential of f is single-valued, it coincides with the gradient, i.e., $\partial f=\mathrm{\nabla }f$ (see ).

A function f on E is coercive if all of its sublevel sets are bounded; equivalently, ${lim}_{\parallel x\parallel \to \mathrm{\infty }}f\left(x\right)=\mathrm{\infty }$.

Let $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a convex and Gâteaux differentiable function. The modulus of total convexity of f at $x\in int\left(domf\right)$ is the function ${\nu }_{f}\left(x,\cdot \right):\left[0,+\mathrm{\infty }\right)\to \left[0,+\mathrm{\infty }\right]$ defined by

${\nu }_{f}\left(x,t\right):=inf\left\{{D}_{f}\left(y,x\right):y\in domf,\parallel y-x\parallel =t\right\}.$

The function f is called totally convex at x if ${\nu }_{f}\left(x,t\right)>0$, whenever $t>0$. The function f is called totally convex if it is totally convex at any point $x\in int\left(domf\right)$ and it is said to be totally convex on bounded sets if ${\nu }_{f}\left(B,t\right)>0$ for any nonempty bounded subset B of E and $t>0$, where the modulus of total convexity of the function f on the set B is the function ${\nu }_{f}:int\left(domf\right)×\left[0,+\mathrm{\infty }\right)\to \left[0,+\mathrm{\infty }\right]$ defined by

${\nu }_{f}\left(B,t\right):=inf\left\{{\nu }_{f}\left(x,t\right):x\in B\cap domf\right\}.$

We know that f is totally convex on bounded sets if and only if f is uniformly convex on bounded sets (see , Theorem 2.10). The following lemmas will be useful in the proof of our main result.

Lemma 2.1 

The function $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ is totally convex on bounded subsets of E if and only if for any two sequences $\left\{{x}_{n}\right\}$ and $\left\{{y}_{n}\right\}$ in $int\left(domf\right)$ and domf, respectively, such that the first one is bounded,

$\underset{n\to \mathrm{\infty }}{lim}{D}_{f}\left({y}_{n},{x}_{n}\right)=0\phantom{\rule{1em}{0ex}}⇒\phantom{\rule{1em}{0ex}}\underset{n\to \mathrm{\infty }}{lim}\parallel {y}_{n}-{x}_{n}\parallel =0.$

Lemma 2.2 

Let C be a nonempty, closed and convex subset of E. Let $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a Gâteaux differentiable and totally convex function, and let $x\in E$. Then:

1. (i)

$z={P}_{C}^{f}\left(x\right)$ if and only if $〈\mathrm{\nabla }f\left(x\right)-\mathrm{\nabla }f\left(z\right),y-z〉\le 0$, $\mathrm{\forall }y\in C$.

2. (ii)

${D}_{f}\left(y,{P}_{C}^{f}\left(x\right)\right)+{D}_{f}\left({P}_{C}^{f}\left(x\right),x\right)\le {D}_{f}\left(y,x\right)$, $\mathrm{\forall }y\in C$.

Lemma 2.3 

Let $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a proper, lower semicontinuous and convex function, then ${f}^{\ast }:{E}^{\ast }\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ is a proper, weak${}^{\ast }$ lower semicontinuous and convex function. Thus, for all $z\in E$, ${x}_{1},{x}_{2},\dots ,{x}_{N}\in E$ and ${t}_{1},{t}_{2},\dots ,{t}_{N}\in \left(0,1\right)$ with ${\sum }_{i=1}^{N}{t}_{i}=1$, we have

${D}_{f}\left(z,\mathrm{\nabla }{f}^{\ast }\left(\sum _{i=1}^{N}{t}_{i}\mathrm{\nabla }f\left({x}_{i}\right)\right)\right)\le \sum _{i=1}^{N}{t}_{i}{D}_{f}\left(z,{x}_{i}\right).$
(2.1)
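For $f=\frac{1}{2}{\parallel \cdot \parallel }^{2}$ (so that $\mathrm{\nabla }f=\mathrm{\nabla }{f}^{\ast }=I$ and ${D}_{f}\left(x,y\right)=\frac{1}{2}{\parallel x-y\parallel }^{2}$), inequality (2.1) is just Jensen's inequality for the convex map $y↦\frac{1}{2}{\parallel z-y\parallel }^{2}$. A quick numerical check with illustrative data (not from the paper; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=3)
xs = rng.normal(size=(4, 3))             # the points x_1, ..., x_N
t = np.array([0.1, 0.2, 0.3, 0.4])       # convex weights, sum to 1

D = lambda a, b: 0.5 * np.dot(a - b, a - b)   # D_f for f = 0.5 * ||.||^2

# With grad f the identity, grad f*(sum t_i grad f(x_i)) is just the average sum t_i x_i.
lhs = D(z, t @ xs)
rhs = sum(ti * D(z, xi) for ti, xi in zip(t, xs))
# lhs <= rhs, as (2.1) asserts
```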

Lemma 2.4 

Let $f:E\to \mathbb{R}$ be Gâteaux differentiable on $int\left(domf\right)$ such that $\mathrm{\nabla }{f}^{\ast }$ is bounded on bounded subsets of $dom{f}^{\ast }$. Let $x\in E$ and $\left\{{x}_{n}\right\}\subset E$. If $\left\{{D}_{f}\left(x,{x}_{n}\right)\right\}$ is bounded, so is the sequence $\left\{{x}_{n}\right\}$.

Let $f:E\to \mathbb{R}$ be a Legendre and Gâteaux differentiable function. Following  and , we make use of the function ${V}_{f}:E×{E}^{\ast }\to \left[0,+\mathrm{\infty }\right)$ associated with f, which is defined by

${V}_{f}\left(x,{x}^{\ast }\right)=f\left(x\right)-〈x,{x}^{\ast }〉+{f}^{\ast }\left({x}^{\ast }\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in E,{x}^{\ast }\in {E}^{\ast }.$
(2.2)

Then ${V}_{f}$ is nonnegative and

${V}_{f}\left(x,{x}^{\ast }\right)={D}_{f}\left(x,\mathrm{\nabla }{f}^{\ast }\left({x}^{\ast }\right)\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in E,{x}^{\ast }\in {E}^{\ast }.$
(2.3)

Moreover, by the subdifferential inequality,

${V}_{f}\left(x,{x}^{\ast }\right)+〈{y}^{\ast },\mathrm{\nabla }{f}^{\ast }\left({x}^{\ast }\right)-x〉\le {V}_{f}\left(x,{x}^{\ast }+{y}^{\ast }\right),$
(2.4)

$\mathrm{\forall }x\in E$ and ${x}^{\ast },{y}^{\ast }\in {E}^{\ast }$ (see ).

Lemma 2.5 

Let $\left\{{a}_{n}\right\}$ be a sequence of nonnegative real numbers satisfying the following relation:

${a}_{n+1}\le \left(1-{\alpha }_{n}\right){a}_{n}+{\alpha }_{n}{\delta }_{n},\phantom{\rule{1em}{0ex}}n\ge {n}_{0},$

where $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$ and $\left\{{\delta }_{n}\right\}\subset \mathbb{R}$ satisfy the following conditions: ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$, ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$, and ${lim sup}_{n\to \mathrm{\infty }}{\delta }_{n}\le 0$. Then ${lim}_{n\to \mathrm{\infty }}{a}_{n}=0$.
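Lemma 2.5 is the workhorse that turns the recursive estimate in the main proof into convergence. A quick numerical illustration with the illustrative choices ${\alpha }_{n}=1/\left(n+1\right)$ and ${\delta }_{n}=1/n$ (these sequences are not from the paper):

```python
# a_{n+1} = (1 - alpha_n) a_n + alpha_n delta_n with alpha_n -> 0,
# sum alpha_n = infinity and limsup delta_n <= 0 forces a_n -> 0.
a = 1.0
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)    # alpha_n -> 0 and sum alpha_n diverges
    delta = 1.0 / n          # here even delta_n -> 0
    a = (1 - alpha) * a + alpha * delta
# a is now of order (log n)/n, i.e. close to 0
```

Note that ${\sum }_{n}{\alpha }_{n}=\mathrm{\infty }$ is essential: with the summable choice ${\alpha }_{n}={2}^{-n}$ the product ${\prod }_{n}\left(1-{\alpha }_{n}\right)$ stays positive and ${a}_{n}$ need not vanish.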

Lemma 2.6 

Let $\left\{{a}_{n}\right\}$ be a sequence of real numbers such that there exists a subsequence $\left\{{n}_{i}\right\}$ of $\left\{n\right\}$ such that ${a}_{{n}_{i}}<{a}_{{n}_{i}+1}$ for all $i\in \mathbb{N}$. Then there exists an increasing sequence $\left\{{m}_{k}\right\}\subset \mathbb{N}$ such that ${m}_{k}\to \mathrm{\infty }$ and the following properties are satisfied by all (sufficiently large) numbers $k\in \mathbb{N}$:

${a}_{{m}_{k}}\le {a}_{{m}_{k}+1}\phantom{\rule{1em}{0ex}}\mathit{\text{and}}\phantom{\rule{1em}{0ex}}{a}_{k}\le {a}_{{m}_{k}+1}.$

In fact, ${m}_{k}$ is the largest number n in the set $\left\{1,2,\dots ,k\right\}$ such that the condition ${a}_{n}\le {a}_{n+1}$ holds.

Following the argument in , we have the following lemma.

Lemma 2.7 Let $f:E\to \left(-\mathrm{\infty },+\mathrm{\infty }\right]$ be a coercive Legendre function and C be a nonempty, closed and convex subset of E. Let $A:C\to {E}^{\ast }$ be a continuous monotone mapping. For $r>0$ and $x\in E$, define the mapping ${F}_{r}:E\to C$ as follows:

${F}_{r}x:=\left\{z\in C:〈Az,y-z〉+\frac{1}{r}〈\mathrm{\nabla }f\left(z\right)-\mathrm{\nabla }f\left(x\right),y-z〉\ge 0,\mathrm{\forall }y\in C\right\}$

for all $x\in E$. Then the following hold:

1. (1)

${F}_{r}$ is single-valued;

2. (2)

$F\left({F}_{r}\right)=\mathit{VI}\left(C,A\right)$, where $F\left({F}_{r}\right)$ denotes the set of fixed points of ${F}_{r}$;

3. (3)

${D}_{f}\left(p,{F}_{r}x\right)+{D}_{f}\left({F}_{r}x,x\right)\le {D}_{f}\left(p,x\right)$ for $p\in F\left({F}_{r}\right)$;

4. (4)

$\mathit{VI}\left(C,A\right)$ is closed and convex.
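For intuition, in the Hilbert-space case $f=\frac{1}{2}{\parallel \cdot \parallel }^{2}$ (so $\mathrm{\nabla }f=I$) with $C={\mathbb{R}}^{n}$, the point $z={F}_{r}x$ of Lemma 2.7 satisfies $Az+\left(z-x\right)/r=0$, i.e., ${F}_{r}$ is the classical resolvent ${\left(I+rA\right)}^{-1}$, and its fixed points are exactly the zeros of A, matching property (2). A sketch for an affine monotone mapping $A\left(x\right)=Mx-q$ with M positive semidefinite (an illustrative choice, not from the paper):

```python
import numpy as np

def resolvent_affine(M, q, x, r):
    """Solve A z + (z - x)/r = 0 for A(z) = M z - q, i.e. (r M + I) z = r q + x."""
    return np.linalg.solve(r * M + np.eye(len(x)), r * q + x)

M = np.array([[2.0, 0.0], [0.0, 1.0]])  # positive definite, so A is monotone
q = np.array([2.0, 1.0])
x = np.array([5.0, -3.0])
r = 1.0
z = resolvent_affine(M, q, x, r)

# With C = R^n the variational characterization of z = F_r x reduces to
# A z + (z - x)/r = 0, so this residual should vanish:
residual = (M @ z - q) + (z - x) / r
```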

## 3 Main result

Let C be a nonempty, closed and convex subset of E. Let ${A}_{i}:C\to {E}^{\ast }$, for $i=1,2,\dots ,N$, be continuous monotone mappings. For $r>0$, define ${T}_{i,r}x:=\left\{z\in C:〈{A}_{i}z,y-z〉+\frac{1}{r}〈\mathrm{\nabla }f\left(z\right)-\mathrm{\nabla }f\left(x\right),y-z〉\ge 0,\mathrm{\forall }y\in C\right\}$ for all $x\in E$ and $i\in \left\{1,2,\dots ,N\right\}$, where f is a Legendre and convex function from E into $\left(-\mathrm{\infty },+\mathrm{\infty }\right)$. Then, in what follows, we shall study the following iteration process:

$\left\{\begin{array}{l}{x}_{0}=u\in C\phantom{\rule{1em}{0ex}}\text{chosen arbitrarily},\\ {w}_{n}={T}_{N,{r}_{n}}\circ {T}_{N-1,{r}_{n}}\circ \cdots \circ {T}_{1,{r}_{n}}{x}_{n},\\ {x}_{n+1}={P}_{C}^{f}\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,\end{array}$
(3.1)

where $\left\{{\alpha }_{n}\right\}\subset \left(0,1\right)$ satisfies ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$, and $\left\{{r}_{n}\right\}\subset \left[{c}_{1},\mathrm{\infty }\right)$ for some ${c}_{1}>0$.
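To make the scheme concrete, consider the Hilbert-space special case $f=\frac{1}{2}{\parallel \cdot \parallel }^{2}$ (so $\mathrm{\nabla }f=\mathrm{\nabla }{f}^{\ast }=I$, ${D}_{f}\left(x,y\right)=\frac{1}{2}{\parallel x-y\parallel }^{2}$ and ${P}_{C}^{f}$ is the metric projection) with $C={\mathbb{R}}^{n}$, where each ${T}_{i,r}$ reduces to the resolvent ${\left(I+r{A}_{i}\right)}^{-1}$. The following is a minimal sketch under these simplifying assumptions, with affine monotone mappings chosen for illustration (none of this data is from the paper):

```python
import numpy as np

def resolvent(M, q, x, r):
    """(I + r A)^{-1} x for the affine monotone mapping A(z) = M z - q."""
    return np.linalg.solve(r * M + np.eye(len(x)), r * q + x)

# Two monotone mappings sharing the zero b = (1, 2): A_i(x) = M_i (x - b).
b = np.array([1.0, 2.0])
maps = [(np.eye(2), np.eye(2) @ b),
        (2 * np.eye(2), 2 * np.eye(2) @ b)]

u = np.zeros(2)          # anchor point x_0 = u
x = u.copy()
for n in range(1, 2001):
    alpha, r = 1.0 / (n + 1), 1.0    # alpha_n -> 0, sum alpha_n = inf; r_n >= c_1
    w = x
    for M, q in maps:                # w_n = T_{N,r_n} o ... o T_{1,r_n} x_n
        w = resolvent(M, q, w, r)
    x = alpha * u + (1 - alpha) * w  # Halpern-type step; P_C^f is the identity on C = R^n
# x is now close to the common solution b = (1, 2)
```

With $u=0$ the limit is the minimum-norm element of $\mathcal{F}$, which here is the single common zero b.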

Theorem 3.1 Let $f:E\to \mathbb{R}$ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of $int\left(domf\right)$ and ${A}_{i}:C\to {E}^{\ast }$, for $i=1,2,\dots ,N$, be a finite family of continuous monotone mappings with $\mathcal{F}:={\bigcap }_{i=1}^{N}\mathit{VI}\left(C,{A}_{i}\right)\ne \mathrm{\varnothing }$. Let ${\left\{{x}_{n}\right\}}_{n\ge 0}$ be a sequence defined by (3.1). Then $\left\{{x}_{n}\right\}$ converges strongly to ${x}^{\ast }={P}_{\mathcal{F}}^{f}\left(u\right)$.

Proof By Lemma 2.7 we have that $\mathit{VI}\left(C,{A}_{i}\right)$, for each $i\in \left\{1,2,\dots ,N\right\}$, is closed and convex, and hence so is $\mathcal{F}$. Thus, we can take ${x}^{\ast }:={P}_{\mathcal{F}}^{f}u$. Let ${u}_{n,1}={T}_{1,{r}_{n}}{x}_{n},{u}_{n,2}={T}_{2,{r}_{n}}{u}_{n,1},\dots ,{u}_{n,N-1}={T}_{N-1,{r}_{n}}{u}_{n,N-2}$ and ${u}_{n,N}={T}_{N,{r}_{n}}{u}_{n,N-1}={w}_{n}$. Then, from (3.1), Lemmas 2.2, 2.3 and the properties of ${D}_{f}$, we get that

$\begin{array}{rcl}{D}_{f}\left({x}^{\ast },{x}_{n+1}\right)& =& {D}_{f}\left({x}^{\ast },{P}_{C}^{f}\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right)\right)\\ \le & {D}_{f}\left({x}^{\ast },\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right)\right)\\ \le & {\alpha }_{n}{D}_{f}\left({x}^{\ast },u\right)+\left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{w}_{n}\right)\\ =& {\alpha }_{n}{D}_{f}\left({x}^{\ast },u\right)+\left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{T}_{N,{r}_{n}}\circ {T}_{N-1,{r}_{n}}\circ \cdots \circ {T}_{1,{r}_{n}}{x}_{n}\right)\\ \le & {\alpha }_{n}{D}_{f}\left({x}^{\ast },u\right)+\left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{x}_{n}\right).\end{array}$
(3.2)

Thus, by induction,

${D}_{f}\left({x}^{\ast },{x}_{n+1}\right)\le max\left\{{D}_{f}\left({x}^{\ast },{x}_{n}\right),{D}_{f}\left({x}^{\ast },u\right)\right\},\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,$

which implies by Lemma 2.4 that $\left\{{x}_{n}\right\}$ and hence $\left\{{w}_{n}\right\}$ are bounded. Now let ${z}_{n}=\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right)$. Then we have from (3.1) that ${x}_{n+1}={P}_{C}^{f}{z}_{n}$. Using Lemmas 2.2, 2.3, 2.7(3), (2.3) and (2.4), we obtain that

$\begin{array}{rcl}{D}_{f}\left({x}^{\ast },{x}_{n+1}\right)& =& {D}_{f}\left({x}^{\ast },{P}_{C}^{f}{z}_{n}\right)\le {D}_{f}\left({x}^{\ast },{z}_{n}\right)={V}_{f}\left({x}^{\ast },\mathrm{\nabla }f\left({z}_{n}\right)\right)\\ \le & {V}_{f}\left({x}^{\ast },\mathrm{\nabla }f\left({z}_{n}\right)-{\alpha }_{n}\left(\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right)\right)\right)+〈{\alpha }_{n}\left(\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right)\right),{z}_{n}-{x}^{\ast }〉\\ =& {D}_{f}\left({x}^{\ast },\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left({x}^{\ast }\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right)\right)\\ +{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉\\ \le & {\alpha }_{n}{D}_{f}\left({x}^{\ast },{x}^{\ast }\right)+\left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{w}_{n}\right)+{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉\\ \le & \left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{T}_{N,{r}_{n}}{u}_{n,N-1}\right)+{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉,\end{array}$

which implies that

$\begin{array}{rcl}{D}_{f}\left({x}^{\ast },{x}_{n+1}\right)& \le & \left(1-{\alpha }_{n}\right)\left[{D}_{f}\left({x}^{\ast },{u}_{n,N-1}\right)-{D}_{f}\left({w}_{n},{u}_{n,N-1}\right)\right]\\ +{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉\\ \le & \left(1-{\alpha }_{n}\right)\left[{D}_{f}\left({x}^{\ast },{u}_{n,N-2}\right)-{D}_{f}\left({u}_{n,N-1},{u}_{n,N-2}\right)\right]\\ -\left(1-{\alpha }_{n}\right){D}_{f}\left({w}_{n},{u}_{n,N-1}\right)+{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉\\ \cdots \\ \le & \left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{x}_{n}\right)-\left(1-{\alpha }_{n}\right)\left[{D}_{f}\left({u}_{n,1},{x}_{n}\right)+{D}_{f}\left({u}_{n,2},{u}_{n,1}\right)\\ +\cdots +{D}_{f}\left({u}_{n,N-1},{u}_{n,N-2}\right)+{D}_{f}\left({w}_{n},{u}_{n,N-1}\right)\right]\\ +{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉\\ \le & \left(1-{\alpha }_{n}\right){D}_{f}\left({x}^{\ast },{x}_{n}\right)+{\alpha }_{n}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉.\end{array}$
(3.3)

Now, we consider two possible cases.

Case 1. Suppose that there exists ${n}_{0}\in \mathbb{N}$ such that $\left\{{D}_{f}\left({x}^{\ast },{x}_{n}\right)\right\}$ is decreasing for all $n\ge {n}_{0}$. Then we obtain that $\left\{{D}_{f}\left({x}^{\ast },{x}_{n}\right)\right\}$ is convergent. Thus, from (3.3) we have that ${D}_{f}\left({u}_{n,1},{x}_{n}\right),{D}_{f}\left({u}_{n,2},{u}_{n,1}\right),\dots ,{D}_{f}\left({w}_{n},{u}_{n,N-1}\right)\to 0$ as $n\to \mathrm{\infty }$, and hence by Lemma 2.1 we get that

${u}_{n,1}-{x}_{n}\to 0,\phantom{\rule{2em}{0ex}}{u}_{n,2}-{u}_{n,1}\to 0,\phantom{\rule{2em}{0ex}}\dots ,\phantom{\rule{2em}{0ex}}{w}_{n}-{u}_{n,N-1}\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty }.$
(3.4)

Furthermore, from (2.1), the boundedness of $\left\{{D}_{f}\left({w}_{n},u\right)\right\}$ and the fact that ${\alpha }_{n}\to 0$ as $n\to \mathrm{\infty }$, we have that

${D}_{f}\left({w}_{n},{z}_{n}\right)\le {\alpha }_{n}{D}_{f}\left({w}_{n},u\right)+\left(1-{\alpha }_{n}\right){D}_{f}\left({w}_{n},{w}_{n}\right)={\alpha }_{n}{D}_{f}\left({w}_{n},u\right)\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty },$

and hence from Lemma 2.1 we have that ${w}_{n}-{z}_{n}\to 0$, and this with (3.4) implies that

${x}_{n}-{z}_{n}\to 0\phantom{\rule{1em}{0ex}}\text{as }n\to \mathrm{\infty }.$
(3.5)

Since $\left\{{z}_{n}\right\}$ is bounded and E is reflexive, we choose a subsequence $\left\{{z}_{{n}_{k}}\right\}$ of $\left\{{z}_{n}\right\}$ such that ${z}_{{n}_{k}}⇀z$ and ${lim sup}_{n\to \mathrm{\infty }}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉={lim}_{k\to \mathrm{\infty }}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{n}_{k}}-{x}^{\ast }〉$. Then, from (3.5) and (3.4), we get that ${u}_{{n}_{k},i}⇀z$ for each $i\in \left\{1,2,\dots ,N\right\}$.

Now, we show that $z\in \mathit{VI}\left(C,{A}_{i}\right)$ for each $i\in \left\{1,2,\dots ,N\right\}$. But from the definition of ${u}_{n,i}$, we have that

$〈{A}_{i}{u}_{n,i},y-{u}_{n,i}〉+〈\frac{\mathrm{\nabla }f\left({u}_{n,i}\right)-\mathrm{\nabla }f\left({x}_{n}\right)}{{r}_{n}},y-{u}_{n,i}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C,$

and hence

$〈{A}_{i}{u}_{{n}_{k},i},y-{u}_{{n}_{k},i}〉+〈\frac{\mathrm{\nabla }f\left({u}_{{n}_{k},i}\right)-\mathrm{\nabla }f\left({x}_{{n}_{k}}\right)}{{r}_{{n}_{k}}},y-{u}_{{n}_{k},i}〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C$
(3.6)

for each $i\in \left\{1,2,\dots ,N\right\}$. Set ${v}_{t}=ty+\left(1-t\right)z$ for all $t\in \left(0,1\right]$ and $y\in C$. Consequently, we get that ${v}_{t}\in C$. Now, from (3.6) it follows that

$\begin{array}{rcl}〈{A}_{i}{v}_{t},{v}_{t}-{u}_{{n}_{k},i}〉& \ge & 〈{A}_{i}{v}_{t},{v}_{t}-{u}_{{n}_{k},i}〉-〈{A}_{i}{u}_{{n}_{k},i},{v}_{t}-{u}_{{n}_{k},i}〉\\ -〈\frac{\mathrm{\nabla }f\left({u}_{{n}_{k},i}\right)-\mathrm{\nabla }f\left({x}_{{n}_{k}}\right)}{{r}_{{n}_{k}}},{v}_{t}-{u}_{{n}_{k},i}〉\\ =& 〈{A}_{i}{v}_{t}-{A}_{i}{u}_{{n}_{k},i},{v}_{t}-{u}_{{n}_{k},i}〉\\ -〈\frac{\mathrm{\nabla }f\left({u}_{{n}_{k},i}\right)-\mathrm{\nabla }f\left({x}_{{n}_{k}}\right)}{{r}_{{n}_{k}}},{v}_{t}-{u}_{{n}_{k},i}〉.\end{array}$

In addition, since f is uniformly Fréchet differentiable and bounded, we have that $\mathrm{\nabla }f$ is uniformly continuous on bounded subsets of E (see ). Thus, from (3.4), the uniform continuity of $\mathrm{\nabla }f$ and the fact that $\left\{{r}_{n}\right\}\subset \left[{c}_{1},\mathrm{\infty }\right)$, we obtain that

$\underset{k\to \mathrm{\infty }}{lim}\frac{\mathrm{\nabla }f\left({u}_{{n}_{k},i}\right)-\mathrm{\nabla }f\left({x}_{{n}_{k}}\right)}{{r}_{{n}_{k}}}=0,$
and since ${A}_{i}$ is monotone, we also have that $〈{A}_{i}{v}_{t}-{A}_{i}{u}_{{n}_{k},i},{v}_{t}-{u}_{{n}_{k},i}〉\ge 0$. Thus, it follows that

$0\le \underset{k\to \mathrm{\infty }}{lim}〈{A}_{i}{v}_{t},{v}_{t}-{u}_{{n}_{k},i}〉=〈{A}_{i}{v}_{t},{v}_{t}-z〉,$

and hence, since ${v}_{t}-z=t\left(y-z\right)$ with $t>0$,

$〈{A}_{i}{v}_{t},y-z〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

Letting $t\to 0$, the continuity of ${A}_{i}$ implies that

$〈{A}_{i}z,y-z〉\ge 0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in C.$

This implies that $z\in \mathit{VI}\left(C,{A}_{i}\right)$ for all $i\in \left\{1,2,\dots ,N\right\}$.

Therefore, we obtain that $z\in {\bigcap }_{i=1}^{N}\mathit{VI}\left(C,{A}_{i}\right)$. Thus, by Lemma 2.2, we immediately obtain that ${lim sup}_{n\to \mathrm{\infty }}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{n}-{x}^{\ast }〉=〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),z-{x}^{\ast }〉\le 0$. It follows from Lemma 2.5 and (3.3) that ${D}_{f}\left({x}^{\ast },{x}_{n}\right)\to 0$ as $n\to \mathrm{\infty }$. Consequently, ${x}_{n}\to {x}^{\ast }$.

Case 2. Suppose that there exists a subsequence $\left\{{n}_{j}\right\}$ of $\left\{n\right\}$ such that

${D}_{f}\left({x}^{\ast },{x}_{{n}_{j}}\right)<{D}_{f}\left({x}^{\ast },{x}_{{n}_{j}+1}\right)$

for all $j\in \mathbb{N}$. Then, by Lemma 2.6, there exists a nondecreasing sequence $\left\{{m}_{k}\right\}\subset \mathbb{N}$ such that ${m}_{k}\to \mathrm{\infty }$, ${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)\le {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)$ and ${D}_{f}\left({x}^{\ast },{x}_{k}\right)\le {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)$ for all $k\in \mathbb{N}$. From (3.3) and ${\alpha }_{n}\to 0$, we have

$\left(1-{\alpha }_{{m}_{k}}\right)\left[{D}_{f}\left({u}_{{m}_{k},1},{x}_{{m}_{k}}\right)+\cdots +{D}_{f}\left({w}_{{m}_{k}},{u}_{{m}_{k},N-1}\right)\right]\le {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)-{D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)+{\alpha }_{{m}_{k}}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉\to 0\phantom{\rule{1em}{0ex}}\text{as }k\to \mathrm{\infty },$

which implies that ${D}_{f}\left({u}_{{m}_{k},1},{x}_{{m}_{k}}\right),\dots ,{D}_{f}\left({w}_{{m}_{k}},{u}_{{m}_{k},N-1}\right)\to 0$, and hence ${u}_{{m}_{k},1}-{x}_{{m}_{k}}\to 0,\dots ,{w}_{{m}_{k}}-{u}_{{m}_{k},N-1}\to 0$ as $k\to \mathrm{\infty }$. Thus, as in Case 1, we obtain that

$\underset{k\to \mathrm{\infty }}{lim sup}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉\le 0.$
(3.7)

Furthermore, from (3.3) we have that

${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)\le \left(1-{\alpha }_{{m}_{k}}\right){D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)+{\alpha }_{{m}_{k}}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉.$
(3.8)

Thus, since ${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)\le {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)$, we get that

$\begin{array}{rcl}{\alpha }_{{m}_{k}}{D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)& \le & {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)-{D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)\\ +{\alpha }_{{m}_{k}}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉\\ \le & {\alpha }_{{m}_{k}}〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉.\end{array}$

Moreover, since ${\alpha }_{{m}_{k}}>0$, we obtain that

${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)\le 〈\mathrm{\nabla }f\left(u\right)-\mathrm{\nabla }f\left({x}^{\ast }\right),{z}_{{m}_{k}}-{x}^{\ast }〉.$

It follows from (3.7) that ${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}}\right)\to 0$ as $k\to \mathrm{\infty }$. This together with (3.8) implies that ${D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)\to 0$. Therefore, since ${D}_{f}\left({x}^{\ast },{x}_{k}\right)\le {D}_{f}\left({x}^{\ast },{x}_{{m}_{k}+1}\right)$ for all $k\in \mathbb{N}$, we conclude that ${x}_{k}\to {x}^{\ast }$ as $k\to \mathrm{\infty }$. Hence, both cases imply that $\left\{{x}_{n}\right\}$ converges strongly to ${x}^{\ast }={P}_{\mathcal{F}}^{f}u$ and the proof is complete. □

If in Theorem 3.1 $N=1$, then we get the following corollary.

Corollary 3.2 Let $f:E\to \mathbb{R}$ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of $int\left(domf\right)$, and let $A:C\to {E}^{\ast }$ be a continuous monotone mapping with $\mathit{VI}\left(C,A\right)\ne \mathrm{\varnothing }$. Let ${\left\{{x}_{n}\right\}}_{n\ge 0}$ be a sequence defined by

$\left\{\begin{array}{l}{x}_{0}=u\in C\phantom{\rule{1em}{0ex}}\mathit{\text{chosen arbitrarily}},\\ {w}_{n}={T}_{{r}_{n}}{x}_{n},\\ {x}_{n+1}={P}_{C}^{f}\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right),\end{array}$
(3.9)

where ${T}_{\gamma }x:=\left\{z\in C:〈Az,y-z〉+\frac{1}{\gamma }〈\mathrm{\nabla }f\left(z\right)-\mathrm{\nabla }f\left(x\right),y-z〉\ge 0,\mathrm{\forall }y\in C\right\}$ for all $x\in E$; ${\alpha }_{n}\in \left(0,1\right)$ satisfies ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$; and $\left\{{r}_{n}\right\}\subset \left[{c}_{1},\mathrm{\infty }\right)$ for some ${c}_{1}>0$. Then the sequence ${\left\{{x}_{n}\right\}}_{n\ge 0}$ converges strongly to the point ${x}^{\ast }={P}_{\mathit{VI}\left(C,A\right)}^{f}\left(u\right)$.
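To make scheme (3.9) concrete, consider the Hilbert-space choice $f\left(x\right)=\frac{1}{2}{\parallel x\parallel }^{2}$ on ${\mathbb{R}}^{2}$, under which $\mathrm{\nabla }f$ and $\mathrm{\nabla }{f}^{\ast }$ are the identity and the Bregman projection ${P}_{C}^{f}$ is the metric projection. The following sketch is purely illustrative: the box $C={\left[0,1\right]}^{2}$, the mapping $A\left(x\right)=x-b$, and the parameter choices ${r}_{n}\equiv 1$, ${\alpha }_{n}=1/\left(n+2\right)$ are assumptions made for this toy example, not part of the paper.

```python
import numpy as np

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # assumed box C = [0,1]^2

def proj_C(x):
    # Metric projection onto C; equals P_C^f when f(x) = (1/2)||x||^2.
    return np.clip(x, lo, hi)

b = np.array([2.0, 0.3])
# Monotone mapping A(x) = x - b, the gradient of (1/2)||x - b||^2.

def T(r, x):
    # Resolvent T_r x: the unique z in C with
    #   <Az, y - z> + (1/r)<z - x, y - z> >= 0  for all y in C.
    # Since A is the gradient of an isotropic quadratic, z is the
    # projection onto C of the unconstrained minimizer (r*b + x)/(1 + r).
    return proj_C((r * b + x) / (1.0 + r))

u = np.zeros(2)          # anchor point x_0 = u
x = u.copy()
r = 1.0                  # r_n >= c_1 > 0 (constant here)
for n in range(2000):
    alpha = 1.0 / (n + 2)                      # alpha_n -> 0, sum alpha_n = inf
    w = T(r, x)                                # w_n = T_{r_n} x_n
    x = proj_C(alpha * u + (1.0 - alpha) * w)  # Halpern-type Bregman step

# Here VI(C, A) = {P_C(b)} = {(1, 0.3)}, so the iterates approach (1, 0.3).
```

Because $\mathit{VI}\left(C,A\right)$ is a singleton in this example, the limit ${P}_{\mathit{VI}\left(C,A\right)}^{f}\left(u\right)$ does not depend on the anchor $u$.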

If $C=E$, then $\mathit{VI}\left(E,A\right)={A}^{-1}\left(0\right)$, and hence the following corollary holds.

Corollary 3.3 Let $f:E\to \mathbb{R}$ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let ${A}_{i}:E\to {E}^{\ast }$, for $i=1,2,\dots ,N$, be a finite family of continuous monotone mappings. Let $\mathcal{F}:={\bigcap }_{i=1}^{N}\mathit{VI}\left(E,{A}_{i}\right)={\bigcap }_{i=1}^{N}{A}_{i}^{-1}\left(0\right)\ne \mathrm{\varnothing }$. Let ${\left\{{x}_{n}\right\}}_{n\ge 0}$ be a sequence defined by (3.1) with $C=E$. Then $\left\{{x}_{n}\right\}$ converges strongly to ${x}^{\ast }={P}_{\mathcal{F}}^{f}\left(u\right)$.

If in Theorem 3.1 we assume $u=0$, then the scheme converges strongly to the common minimum-norm solution of a finite family of variational inequality problems for continuous monotone mappings. In fact, we have the following corollary.

Corollary 3.4 Let $f:E\to \mathbb{R}$ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let C be a nonempty, closed and convex subset of $int\left(domf\right)$, and let ${A}_{i}:C\to {E}^{\ast }$, for $i=1,2,\dots ,N$, be a finite family of continuous monotone mappings with $\mathcal{F}:={\bigcap }_{i=1}^{N}\mathit{VI}\left(C,{A}_{i}\right)\ne \mathrm{\varnothing }$. Let ${\left\{{x}_{n}\right\}}_{n\ge 0}$ be a sequence defined by (3.1) with $u=0$. Then $\left\{{x}_{n}\right\}$ converges strongly to ${x}^{\ast }={P}_{\mathcal{F}}^{f}\left(0\right)$, which is the common minimum-norm (with respect to the Bregman distance) solution of the variational inequalities.

## 4 Application

In this section, we study the problem of finding a minimizer of a continuously Fréchet differentiable convex functional in Banach spaces.

Let ${g}_{i}$, for $i=1,2,\dots ,N$, be continuously Fréchet differentiable convex functionals whose gradients restricted to C, $\left(\mathrm{\nabla }{g}_{i}\right){|}_{C}$, are continuous and monotone. For $r>0$, let ${K}_{i,r}x:=\left\{z\in C:〈\mathrm{\nabla }{g}_{i}\left(z\right),y-z〉+\frac{1}{r}〈\mathrm{\nabla }f\left(z\right)-\mathrm{\nabla }f\left(x\right),y-z〉\ge 0,\mathrm{\forall }y\in C\right\}$ for all $x\in E$ and for each $i\in \left\{1,2,\dots ,N\right\}$. Then the following theorem holds.

Theorem 4.1 Let $f:E\to \mathbb{R}$ be a strongly coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E. Let ${g}_{i}$, $i=1,2,\dots ,N$, be continuously Fréchet differentiable convex functionals whose gradients restricted to C, $\left(\mathrm{\nabla }{g}_{i}\right){|}_{C}$, are continuous and monotone, and let $\mathcal{F}:={\bigcap }_{i=1}^{N}arg{min}_{y\in C}{g}_{i}\left(y\right)\ne \mathrm{\varnothing }$, where $arg{min}_{y\in C}{g}_{i}\left(y\right):=\left\{z\in C:{g}_{i}\left(z\right)={min}_{y\in C}{g}_{i}\left(y\right)\right\}$. Let ${\left\{{x}_{n}\right\}}_{n\ge 0}$ be a sequence defined by

$\left\{\begin{array}{l}{x}_{0}=u\in C\phantom{\rule{1em}{0ex}}\mathit{\text{chosen arbitrarily}},\\ {w}_{n}={K}_{N,{r}_{n}}\circ {K}_{N-1,{r}_{n}}\circ \cdots \circ {K}_{1,{r}_{n}}{x}_{n},\\ {x}_{n+1}={P}_{C}^{f}\mathrm{\nabla }{f}^{\ast }\left({\alpha }_{n}\mathrm{\nabla }f\left(u\right)+\left(1-{\alpha }_{n}\right)\mathrm{\nabla }f\left({w}_{n}\right)\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }n\ge 0,\end{array}$
(4.1)

where ${\alpha }_{n}\in \left(0,1\right)$ satisfies ${lim}_{n\to \mathrm{\infty }}{\alpha }_{n}=0$ and ${\sum }_{n=1}^{\mathrm{\infty }}{\alpha }_{n}=\mathrm{\infty }$; and $\left\{{r}_{n}\right\}\subset \left[{c}_{1},\mathrm{\infty }\right)$ for some ${c}_{1}>0$. Then the sequence $\left\{{x}_{n}\right\}$ converges strongly to $p={P}_{\mathcal{F}}^{f}\left(u\right)$.

Proof We note that from the convexity and Fréchet differentiability of ${g}_{i}$, we have $\mathit{VI}\left(C,\left(\mathrm{\nabla }{g}_{i}\right){|}_{C}\right)=arg{min}_{y\in C}{g}_{i}\left(y\right)$ for each $i\in \left\{1,2,\dots ,N\right\}$. Thus, by Theorem 3.1, $\left\{{x}_{n}\right\}$ converges strongly to $p={P}_{\mathcal{F}}^{f}\left(u\right)$. □
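As a small numerical illustration of Theorem 4.1, take again the Hilbert-space choice $f\left(x\right)=\frac{1}{2}{\parallel x\parallel }^{2}$, so that ${P}_{C}^{f}$ is the metric projection and each ${K}_{i,r}$ reduces to a proximal step. The functionals ${g}_{i}\left(y\right)=\frac{1}{2}{\parallel y-{b}_{i}\parallel }^{2}$, the box $C$, and all parameters below are toy assumptions chosen only so that the two minimizers over $C$ coincide.

```python
import numpy as np

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # assumed box C = [0,1]^2

def proj_C(x):
    # Metric projection onto C; equals P_C^f for f(x) = (1/2)||x||^2.
    return np.clip(x, lo, hi)

# g_i(y) = (1/2)||y - b_i||^2; both attain their minimum over C at (1, 0.5).
b = [np.array([2.0, 0.5]), np.array([1.5, 0.5])]

def K(i, r, x):
    # K_{i,r} x = argmin_{y in C} g_i(y) + (1/(2r))||y - x||^2.
    # For these isotropic quadratics this is the projection onto C of the
    # unconstrained minimizer (r*b_i + x)/(1 + r).
    return proj_C((r * b[i] + x) / (1.0 + r))

u = np.array([0.2, 0.9])  # anchor point x_0 = u
x = u.copy()
r = 1.0                   # r_n >= c_1 > 0 (constant here)
for n in range(2000):
    alpha = 1.0 / (n + 2)
    w = K(1, r, K(0, r, x))                    # w_n = K_{2,r_n}(K_{1,r_n} x_n)
    x = proj_C(alpha * u + (1.0 - alpha) * w)  # scheme (4.1)

# F = argmin_C g_1  ∩  argmin_C g_2 = {(1, 0.5)}, so x_n approaches (1, 0.5).
```

Since $\mathcal{F}$ is a singleton here, the iterates approach ${P}_{\mathcal{F}}^{f}\left(u\right)=\left(1,0.5\right)$ regardless of the anchor $u$.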

Remark 4.2 Our results are new even if the convex function f is chosen to be $f\left(x\right)=\frac{1}{p}{\parallel x\parallel }^{p}$ ($1<p<\mathrm{\infty }$) in uniformly smooth and uniformly convex spaces.
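For instance, with $p=2$ the function $f\left(x\right)=\frac{1}{2}{\parallel x\parallel }^{2}$ has $\mathrm{\nabla }f=J$, the normalized duality mapping, and the Bregman distance reduces to the Lyapunov functional of Alber [2]:

```latex
D_f(x,y) = \tfrac{1}{2}\|x\|^2 - \langle Jy, x\rangle + \tfrac{1}{2}\|y\|^2
         = \tfrac{1}{2}\,\phi(x,y),
\qquad
\phi(x,y) := \|x\|^2 - 2\langle Jy, x\rangle + \|y\|^2,
```

so that the Bregman projection ${P}_{C}^{f}$ becomes the generalized projection, and in a Hilbert space ${D}_{f}\left(x,y\right)=\frac{1}{2}{\parallel x-y\parallel }^{2}$ with ${P}_{C}^{f}$ the metric projection.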

Remark 4.3 Our theorems extend and unify most of the results that have been proved for this important class of nonlinear operators. In particular, Theorem 3.1 extends Theorem 3.3 of , Theorem 3.1 of , Theorem 3.1 of , Theorem 3.3 of , and Theorem 4.2 of  either to a more general class of continuous monotone operators or to a more general Banach space E. Moreover, in all our theorems and corollaries, the computation of ${C}_{n}$ or ${Q}_{n}$ for each $n\ge 1$ is not required.

## References

1. Bregman LM: The relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7: 200–217.

2. Alber YI: Metric and generalized projection operators in Banach spaces: properties and applications. Lect. Notes Pure Appl. Math. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type 1996, 15–50.

3. Kamimura S, Takahashi W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13: 938–945. 10.1137/S105262340139611X

4. Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and non-strictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z

5. Zegeye H, Ofoedu EU, Shahzad N: Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings. Appl. Math. Comput. 2010, 216: 3439–3449. 10.1016/j.amc.2010.02.054

6. Zegeye H, Shahzad N: Strong convergence theorems for a finite family of nonexpansive mappings and semi-groups via the hybrid method. Nonlinear Anal. 2010, 72: 325–329. 10.1016/j.na.2009.06.056

7. Zegeye H, Shahzad N: A hybrid approximation method for equilibrium, variational inequality and fixed point problems. Nonlinear Anal. Hybrid Syst. 2010, 4: 619–630. 10.1016/j.nahs.2010.03.005

8. Zegeye H, Shahzad N: A hybrid scheme for finite families of equilibrium, variational inequality and fixed point problems. Nonlinear Anal. 2011, 74: 263–272. 10.1016/j.na.2010.08.040

9. Zegeye H, Shahzad N: Convergence theorems for a common point of solutions of equilibrium and fixed point of relatively nonexpansive multi-valued mapping problems. Abstr. Appl. Anal. 2012., 2012: Article ID 859598

10. Iiduka H, Takahashi W: Strong convergence studied by a hybrid type method for monotone operators in a Banach space. Nonlinear Anal. 2008, 68: 3679–3688. 10.1016/j.na.2007.04.010

11. Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1990.

12. Lions JL, Stampacchia G: Variational inequalities. Commun. Pure Appl. Math. 1967, 20: 493–517. 10.1002/cpa.3160200302

13. Zegeye H, Shahzad N: Strong convergence for monotone mappings and relatively weak nonexpansive mappings. Nonlinear Anal. 2009, 70: 2707–2716. 10.1016/j.na.2008.03.058

14. Zegeye H, Shahzad N, Alghamdi MA: Strong convergence theorems for a common point of solution of variational inequality, solutions of equilibrium and fixed point problems. Fixed Point Theory Appl. 2012., 2012: Article ID 119

15. Zegeye H, Shahzad N: An iteration to a common point of solution of variational inequality and fixed point problems in Banach spaces. J. Appl. Math. 2012., 2012: Article ID 504503

16. Iiduka H, Takahashi W, Toyoda M: Approximation of solutions of variational inequalities for monotone mappings. Panam. Math. J. 2004, 14: 49–61.

17. Iiduka H, Takahashi W: Weak convergence of projection algorithm for variational inequalities in Banach spaces. J. Math. Anal. Appl. 2008, 339: 668–679. 10.1016/j.jmaa.2007.07.019

18. Zegeye H, Shahzad N: Approximating common solution of variational inequality problems for two monotone mappings in Banach spaces. Optim. Lett. 2011, 5: 691–704. 10.1007/s11590-010-0235-5

19. Bauschke HH, Borwein JM, Combettes PL: Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3: 615–647. 10.1142/S0219199701000524

20. Bauschke HH, Borwein JM, Combettes PL: Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42: 596–636. 10.1137/S0363012902407120

21. Butnariu D, Iusem AN, Zalinescu C: On uniform convexity, total convexity and convergence of the proximal point and outer Bregman projection algorithms in Banach spaces. J. Convex Anal. 2003, 10: 35–61.

22. Butnariu D, Resmerita E: Bregman distances, totally convex functions and a method for solving operator equations in Banach spaces. Abstr. Appl. Anal. 2006., 2006: Article ID 84919

23. Reich S: A weak convergence theorem for the alternating method with Bregman distances. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Dekker, New York; 1996:313–318.

24. Reich S, Sabach S: A projection method for solving nonlinear problems in reflexive Banach spaces. J. Fixed Point Theory Appl. 2011, 9: 101–116. 10.1007/s11784-010-0037-5

25. Reich S, Sabach S: Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31: 22–44. 10.1080/01630560903499852

26. Reich S, Sabach S: Two strong convergence theorems for Bregman strongly nonexpansive operators in reflexive Banach spaces. Nonlinear Anal. TMA 2010, 73: 122–135. 10.1016/j.na.2010.03.005

27. Reich S, Sabach S: Existence and approximation of fixed points of Bregman firmly nonexpansive mappings in reflexive Banach spaces. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer, New York; 2011:301–316.

28. Bonnans JF, Shapiro A: Perturbation Analysis of Optimization Problems. Springer, New York; 2000.

29. Phelps RP Lecture Notes in Mathematics 1364. In Convex Functions, Monotone Operators, and Differentiability. 2nd edition. Springer, Berlin; 1993.

30. Hiriart-Urruty J-B, Lemaréchal C Grundlehren der Mathematischen Wissenschaften 306. In Convex Analysis and Minimization Algorithms II. Springer, Berlin; 1993.

31. Butnariu D, Iusem AN: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Kluwer Academic, Dordrecht; 2000.

32. Martin-Marquez V, Reich S, Sabach S: Bregman strongly nonexpansive operators in reflexive Banach spaces. J. Math. Anal. Appl. 2013, 400: 597–614. 10.1016/j.jmaa.2012.11.059

33. Censor Y, Lent A: An iterative row-action method for interval convex programming. J. Optim. Theory Appl. 1981, 34: 321–353. 10.1007/BF00934676

34. Kohsaka F, Takahashi W: Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6: 505–523.

35. Xu HK: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 2002, 65: 109–113. 10.1017/S0004972700020116

36. Reich S, Sabach S: A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10: 471–485.

## Acknowledgements

This article was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. The first and third authors acknowledge with thanks DSR for financial support. The second author undertook this work when he was visiting the Abdus Salam International Center for Theoretical Physics (ICTP), Trieste, Italy, as a regular associate.

## Author information


### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.


Shahzad, N., Zegeye, H. & Alotaibi, A. Convergence results for a common solution of a finite family of variational inequality problems for monotone mappings with Bregman distance function. Fixed Point Theory Appl 2013, 343 (2013). https://doi.org/10.1186/1687-1812-2013-343


### Keywords

• Bregman projection
• monotone mappings
• strong convergence
• variational inequality problems 