In this section we establish the controllability results for the problem (1.1)–(1.3). To prove the main theorem of this section, we list the following hypotheses.

(H1)
$A(t)$ is a family of linear operators $A(t):D(A)\to X$, where $D(A)$ is dense in $X$ and does not depend on $t$, generating an equicontinuous evolution system $\{U(t,s): 0\le s\le t\le b\}$; that is, $(t,s)\mapsto\{U(t,s)x : x\in B\}$ is equicontinuous for $t>0$ and for every bounded subset $B$, and $M_1=\sup\{\|U(t,s)\| : (t,s)\in T\}$.

(H2)
The function f:J\times \mathcal{D}\times X\times X\to X satisfies the following:

(i)
For each $t\in J$, the function $f(t,\cdot,\cdot,\cdot):\mathcal{D}\times X\times X\to X$ is continuous, and for all $(\varphi,x,y)\in\mathcal{D}\times X\times X$, the function $f(\cdot,\varphi,x,y):J\to X$ is strongly measurable.

(ii)
For every positive integer $k_1$, there exists $\alpha_{k_1}\in L^1([0,b];\mathbb{R}^+)$ such that
\[ \sup_{\|\varphi\|_{\mathcal{D}}\le k_1,\ \|x\|\le k_1,\ \|y\|\le k_1}\bigl\|f(t,\varphi,x,y)\bigr\|\le\alpha_{k_1}(t)\quad\text{for a.e. } t\in J, \]
and
\[ \liminf_{k_1\to\infty}\int_0^b\frac{\alpha_{k_1}(t)}{k_1}\,dt=\sigma<\infty. \]

(iii)
There exists an integrable function $\eta:[0,b]\to[0,\infty)$ such that
\[ \beta\bigl(f(t,D,D_1,D_2)\bigr)\le\eta(t)\Bigl(\sup_{-r\le\theta\le 0}\beta\bigl(D(\theta)\bigr)+\beta(D_1)+\beta(D_2)\Bigr)\quad\text{for a.e. } t\in J \]
and all bounded sets $D\subset\mathcal{D}$, $D_1,D_2\subset X$, where $D(\theta)=\{v(\theta):v\in D\}$.

(H3)
The function h:T\times \mathcal{D}\to X satisfies the following:

(i)
For each (t,s)\in T, the function h(t,s,\cdot ):\mathcal{D}\to X is continuous, and for each x\in \mathcal{D}, the function h(\cdot ,\cdot ,x):T\to X is strongly measurable.

(ii)
There exists a function m\in {L}^{1}(T,{\mathbb{R}}^{+}) such that
\parallel h(t,s,{x}_{s})\parallel \le m(t,s){\parallel {x}_{s}\parallel}_{\mathcal{D}}.

(iii)
There exists an integrable function \zeta :T\to [0,\mathrm{\infty}) such that
\[ \beta\bigl(h(t,s,H)\bigr)\le\zeta(t,s)\sup_{-r\le\theta\le 0}\beta\bigl(H(\theta)\bigr)\quad\text{for a.e. } (t,s)\in T \]
and every bounded $H\subset\mathcal{D}$, where $H(\theta)=\{w(\theta):w\in H\}$ and $\beta$ is the Hausdorff MNC.
For convenience, let us take $L_0=\max_{t\in J}\int_0^t m(t,s)\,ds$ and $\zeta^{\ast}=\max_{t\in J}\int_0^t\zeta(t,s)\,ds$.

(H4)
The function k:T\times \mathcal{D}\to X satisfies the following:

(i)
For each (t,s)\in T, the function k(t,s,\cdot ):\mathcal{D}\to X is continuous, and for each x\in \mathcal{D}, the function k(\cdot ,\cdot ,x):T\to X is strongly measurable.

(ii)
There exists a function $m^{\star}\in L^1(T,\mathbb{R}^+)$ such that
\[ \bigl\|k(t,s,x_s)\bigr\|\le m^{\star}(t,s)\|x_s\|_{\mathcal{D}}. \]

(iii)
There exists an integrable function \gamma :T\to [0,\mathrm{\infty}) such that
\[ \beta\bigl(k(t,s,H)\bigr)\le\gamma(t,s)\sup_{-r\le\theta\le 0}\beta\bigl(H(\theta)\bigr)\quad\text{for a.e. } (t,s)\in T \]
and every bounded $H\subset\mathcal{D}$, where $H(\theta)=\{w(\theta):w\in H\}$.
For convenience, let us take $L_1=\max_{t\in J}\int_0^t m^{\star}(t,s)\,ds$ and $\gamma^{\ast}=\max_{t\in J}\int_0^t\gamma(t,s)\,ds$.

(H5)
$g:\mathcal{PC}([0,b];X)\to X$ is a continuous compact operator such that
\[ \lim_{\|y\|_{\mathcal{PC}}\to\infty}\frac{\|g(y)\|}{\|y\|_{\mathcal{PC}}}=0. \]
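For intuition, one hypothetical operator satisfying (H5) (not taken from the problem (1.1)–(1.3) itself; $x_0\in X$ is a fixed vector introduced here for illustration) is the following one-dimensional-range nonlocal term:

```latex
% Hypothetical example of an operator satisfying (H5); x_0 \in X is fixed.
g(y) = \sqrt{1 + \|y\|_{\mathcal{PC}}}\; x_0,
\qquad
\frac{\|g(y)\|}{\|y\|_{\mathcal{PC}}}
  = \frac{\sqrt{1 + \|y\|_{\mathcal{PC}}}\,\|x_0\|}{\|y\|_{\mathcal{PC}}}
  \longrightarrow 0
  \quad\text{as } \|y\|_{\mathcal{PC}} \to \infty.
```

This $g$ is continuous and maps bounded subsets of $\mathcal{PC}([0,b];X)$ into bounded subsets of the one-dimensional subspace $\operatorname{span}\{x_0\}$, which are relatively compact, so $g$ is a compact operator with the required sublinear growth.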

(H6)
The linear operator $W:L^2(J,V)\to X$, defined by
\[ Wu=\int_0^b U(b,s)Bu(s)\,ds, \]
satisfies:

(i)
$W$ has an inverse operator $W^{-1}$ which takes values in $L^2(J,V)/\ker W$, and there exist positive constants $M_2$ and $M_3$ such that
\[ \|B\|\le M_2,\qquad\|W^{-1}\|\le M_3. \]

(ii)
There is {K}_{W}\in {L}^{1}(J,{\mathbb{R}}^{+}) such that, for every bounded set Q\subset X,
\[ \beta\bigl((W^{-1}Q)(t)\bigr)\le K_W(t)\beta(Q). \]

(H7)
$I_i:\mathcal{D}\to X$, $i=1,2,\dots,s$, are continuous operators such that

(i)
There are nondecreasing functions $L_i:\mathbb{R}^+\to\mathbb{R}^+$ such that
\[ \|I_i(x)\|\le L_i\bigl(\|x\|_{\mathcal{D}}\bigr),\quad i=1,2,\dots,s,\ x\in\mathcal{D}, \]
and
\[ \liminf_{\rho\to\infty}\frac{L_i(\rho)}{\rho}=\lambda_i<\infty,\quad i=1,2,\dots,s. \]

(ii)
There exist constants {K}_{i}\ge 0 such that
\[ \beta\bigl(I_i(S)\bigr)\le K_i\sup_{-r\le\theta\le 0}\beta\bigl(S(\theta)\bigr),\quad i=1,2,\dots,s, \]
for every bounded subset S of \mathcal{D}.

(H8)
The following estimation holds true:
\[ \begin{aligned} N={}&\bigl(M_1+2M_1^2M_2\|K_W\|_{L^1}\bigr)\sum_{i=1}^{s}K_i\\ &{}+\bigl[1+2(\zeta^{\ast}+\gamma^{\ast})\bigr]\bigl(2M_1+4M_1^2M_2\|K_W\|_{L^1}\bigr)\|\eta\|_{L^1}<1. \end{aligned} \]
Theorem 3.1 Assume that the hypotheses (H1)–(H8) are satisfied. Then the impulsive differential system (1.1)–(1.3) is controllable on $J$ provided that
\[ M_1\bigl(1+M_1M_2M_3b^{\frac{1}{2}}\bigr)\Bigl[\sigma(1+L_0+L_1)+\sum_{i=1}^{s}\lambda_i\Bigr]<1. \tag{3.1} \]
Proof Using the hypothesis (H6)(i), for every $x\in\mathcal{PC}([-r,b],X)$ define the control
\[ \begin{aligned} u_x(t)={}&W^{-1}\Bigl[x_1-U(b,0)\bigl[\varphi(0)+gx(0)\bigr]\\ &{}-\int_0^b U(b,s)f\Bigl(s,x_s,\int_0^s h(s,\tau,x_\tau)\,d\tau,\int_0^b k(s,\tau,x_\tau)\,d\tau\Bigr)ds-\sum_{0<t_i<b}U(b,t_i)I_i(x_{t_i})\Bigr](t). \end{aligned} \]
We shall now show that when using this control, the operator defined by
\[ (Fx)(t)=\begin{cases} \varphi(t), & t\in[-r,0],\\[4pt] \begin{aligned}[t] &U(t,0)\bigl[\varphi(0)+gx(0)\bigr]+\int_0^t U(t,s)\Bigl[f\Bigl(s,x_s,\int_0^s h(s,\tau,x_\tau)\,d\tau,\int_0^b k(s,\tau,x_\tau)\,d\tau\Bigr)+(Bu_x)(s)\Bigr]ds\\ &\quad{}+\sum_{0<t_i<t}U(t,t_i)I_i(x_{t_i}), \end{aligned} & t\in J, \end{cases} \]
has a fixed point. This fixed point is then a solution of (1.1)–(1.3). Clearly, $x(b)=(Fx)(b)=x_1$, which implies that the system (1.1)–(1.3) is controllable. We rewrite the problem (1.1)–(1.3) as follows.
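The role of the control can be made explicit by the standard cancellation argument (a sketch, assuming $u_x$ cancels the uncontrolled terminal data, as the definition of $u_x$ via $W^{-1}$ is designed to do):

```latex
% Evaluate F at t = b; by the definition of W,
%   \int_0^b U(b,s) B u_x(s)\,ds = W u_x.
(Fx)(b) = U(b,0)\bigl[\varphi(0)+gx(0)\bigr]
        + \int_0^b U(b,s)\, f(\cdots)\,ds
        + W u_x
        + \sum_{0<t_i<b} U(b,t_i)\, I_i(x_{t_i}).
% Since u_x is built so that
%   W u_x = x_1 - U(b,0)[\varphi(0)+gx(0)]
%           - \int_0^b U(b,s) f(\cdots)\,ds
%           - \sum_{0<t_i<b} U(b,t_i) I_i(x_{t_i}),
% every term except x_1 cancels, and hence
(Fx)(b) = x_1 .
```

Thus any fixed point of $F$ is steered exactly to the target state $x_1$ at time $b$.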
For $\varphi\in\mathcal{D}$, we define $\hat{\varphi}\in\mathcal{PC}$ by
\[ \hat{\varphi}(t)=\begin{cases} U(t,0)\bigl[\varphi(0)+gx(0)\bigr], & t\in J,\\ \varphi(t), & t\in[-r,0]. \end{cases} \]
Then $\hat{\varphi}\in\mathcal{PC}$. Let $x(t)=y(t)+\hat{\varphi}(t)$, $t\in[-r,b]$. It is easy to see that $y$ satisfies $y_0=0$ and
\[ \begin{aligned} y(t)={}&\int_0^t U(t,s)\Bigl[f\Bigl(s,y_s+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau\Bigr)+Bu_y(s)\Bigr]ds\\ &{}+\sum_{0<t_i<t}U(t,t_i)I_i(y_{t_i}+\hat{\varphi}_{t_i}), \end{aligned} \]
where
\[ \begin{aligned} u_y(s)={}&W^{-1}\Bigl[x_1-U(b,0)\bigl[\varphi(0)+gx(0)\bigr]\\ &{}-\int_0^b U(b,s)f\Bigl(s,y_s+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau\Bigr)ds\\ &{}-\sum_{i=1}^{s}U(b,t_i)I_i(y_{t_i}+\hat{\varphi}_{t_i})\Bigr](s) \end{aligned} \]
if and only if x satisfies
\[ \begin{aligned} x(t)={}&U(t,0)\bigl[\varphi(0)+gx(0)\bigr]+\int_0^t U(t,s)\Bigl[f\Bigl(s,x_s,\int_0^s h(s,\tau,x_\tau)\,d\tau,\int_0^b k(s,\tau,x_\tau)\,d\tau\Bigr)+Bu_x(s)\Bigr]ds\\ &{}+\sum_{0<t_i<t}U(t,t_i)I_i(x_{t_i}), \end{aligned} \]
and $x(t)=\varphi(t)+gx(t)$, $t\in[-r,0]$. Define $\mathcal{PC}_0=\{y\in\mathcal{PC}: y_0=0\}$. Let $G:\mathcal{PC}_0\to\mathcal{PC}_0$ be the operator defined by
\[ (Gy)(t)=\begin{cases} 0, & t\in[-r,0],\\[4pt] \begin{aligned}[t] &\int_0^t U(t,s)\Bigl[f\Bigl(s,y_s+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau\Bigr)+Bu_y(s)\Bigr]ds\\ &\quad{}+\sum_{0<t_i<t}U(t,t_i)I_i(y_{t_i}+\hat{\varphi}_{t_i}), \end{aligned} & t\in J. \end{cases} \]
(3.2)
Obviously, $F$ has a fixed point if and only if $G$ does, so it suffices to prove that $G$ has a fixed point. Write $G=G_1+G_2$, where $G_1$ collects the impulsive terms and $G_2$ the integral terms; that is, for $t\in J$,
\[ (G_1y)(t)=\sum_{0<t_i<t}U(t,t_i)I_i(y_{t_i}+\hat{\varphi}_{t_i}),\qquad (G_2y)(t)=(Gy)(t)-(G_1y)(t). \]
Step 1: There exists a positive number q\ge 1 such that G({B}_{q})\subseteq {B}_{q}, where {B}_{q}=\{y\in {\mathcal{PC}}_{0}:{\parallel y\parallel}_{\mathcal{PC}}\le q\}.
Suppose the contrary. Then for each positive integer $q$, there exists a function $y^q(\cdot)\in B_q$ with $G(y^q)\notin B_q$, i.e., $\|G(y^q)(t)\|>q$ for some $t\in J$.
We have from (H1)–(H7)
\[ \begin{aligned} q&<\bigl\|(Gy^q)(t)\bigr\|\\ &\le M_1\int_0^b\Bigl\|f\Bigl(s,y_s^q+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau^q+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau^q+\hat{\varphi}_\tau)\,d\tau\Bigr)+Bu_{y^q}(s)\Bigr\|\,ds\\ &\quad{}+M_1\sum_{i=1}^{s}L_i\bigl(\|y_{t_i}^q+\hat{\varphi}_{t_i}\|_{\mathcal{D}}\bigr). \end{aligned} \]
Since $\|y_s^q+\hat{\varphi}_s\|_{\mathcal{D}}\le q'$, the hypotheses (H3)(ii) and (H4)(ii) give
\[ \Bigl\|\int_0^s h(s,\tau,y_\tau^q+\hat{\varphi}_\tau)\,d\tau\Bigr\|\le L_0q',\qquad \Bigl\|\int_0^b k(s,\tau,y_\tau^q+\hat{\varphi}_\tau)\,d\tau\Bigr\|\le L_1q', \]
so that, by (H2)(ii), the integrand involving $f$ is bounded by $\alpha_{q^{\ast}}(s)$, where $q^{\ast}=(1+L_0+L_1)q'$ and $q'=q+\|\hat{\varphi}\|_{\mathcal{PC}}$; hence we have
\[ q\le M_1\int_0^b\alpha_{q^{\ast}}(s)\,ds+M_1M_2b^{\frac{1}{2}}\|u_{y^q}\|_{L^2}+M_1\sum_{i=1}^{s}L_i(q'), \]
(3.5)
where
\[ \|u_{y^q}\|_{L^2}\le M_3\Bigl[\|x_1\|+M_1\|\varphi\|_{\mathcal{D}}+M_1\int_0^b\alpha_{q^{\ast}}(s)\,ds+M_1\sum_{i=1}^{s}L_i(q')\Bigr]. \]
(3.6)
Hence by (3.5) and (3.6),
\[ \begin{aligned} q&<M_1\int_0^b\alpha_{q^{\ast}}(s)\,ds+M_1M_2b^{\frac{1}{2}}M_3\Bigl[\|x_1\|+M_1\|\varphi\|_{\mathcal{D}}+M_1\int_0^b\alpha_{q^{\ast}}(s)\,ds+M_1\sum_{i=1}^{s}L_i(q')\Bigr]+M_1\sum_{i=1}^{s}L_i(q')\\ &\le\bigl(1+M_1M_2M_3b^{\frac{1}{2}}\bigr)M_1\Bigl[\int_0^b\alpha_{q^{\ast}}(s)\,ds+\sum_{i=1}^{s}L_i(q')\Bigr]+M, \end{aligned} \]
where $M=M_1M_2M_3b^{\frac{1}{2}}\bigl(\|x_1\|+M_1\|\varphi\|_{\mathcal{D}}\bigr)$ is independent of $q$ and $q'=q+\|\hat{\varphi}\|_{\mathcal{PC}}$.
Dividing both sides by $q$, noting that $q'=q+\|\hat{\varphi}\|_{\mathcal{PC}}\to\infty$ as $q\to\infty$, and passing to the limit inferior, we obtain
\[ 1\le M_1\bigl(1+M_1M_2M_3b^{\frac{1}{2}}\bigr)\Bigl(\sigma(1+L_0+L_1)+\sum_{i=1}^{s}\lambda_i\Bigr). \]
This contradicts (3.1). Hence, for some positive number q, G({B}_{q})\subseteq {B}_{q}.
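The limit passage behind this contradiction can be spelled out term by term (a sketch, assuming $q^{\ast}=(1+L_0+L_1)q'$ with $q'=q+\|\hat{\varphi}\|_{\mathcal{PC}}$, and using (H2)(ii) and (H7)(i)):

```latex
% Divide the bound on q by q and let q -> infinity; each term behaves as:
\liminf_{q\to\infty} \frac{1}{q}\int_0^b \alpha_{q^{\ast}}(s)\,ds
  = \liminf_{q\to\infty} \frac{q^{\ast}}{q}\cdot
    \frac{1}{q^{\ast}}\int_0^b \alpha_{q^{\ast}}(s)\,ds
  = (1+L_0+L_1)\,\sigma,
\qquad
\liminf_{q\to\infty} \frac{L_i(q')}{q}
  = \liminf_{q'\to\infty} \frac{L_i(q')}{q'} = \lambda_i,
% since q'/q -> 1, q^*/q -> 1 + L_0 + L_1, and M/q -> 0.
```

Multiplying these limits by the common factor $M_1(1+M_1M_2M_3b^{\frac{1}{2}})$ produces exactly the left-hand side of (3.1) on the right of the inequality $1\le\cdots$, which is impossible when (3.1) holds.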
Step 2: G:{\mathcal{PC}}_{0}\to {\mathcal{PC}}_{0} is continuous.
Let {\{{y}^{(n)}(t)\}}_{n=1}^{\mathrm{\infty}}\subseteq {\mathcal{PC}}_{0} with {y}^{(n)}\to y in {\mathcal{PC}}_{0}. Then there is a number q>0 such that \parallel {y}^{(n)}(t)\parallel \le q for all n and t\in J, so {y}^{(n)}\in {B}_{q} and y\in {B}_{q}.
From (H2) and (H7) we have

(i)
$f\bigl(s,y_s^{(n)}+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau^{(n)}+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau^{(n)}+\hat{\varphi}_\tau)\,d\tau\bigr)\to f\bigl(s,y_s+\hat{\varphi}_s,\int_0^s h(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau,\int_0^b k(s,\tau,y_\tau+\hat{\varphi}_\tau)\,d\tau\bigr)$ for a.e. $s\in J$,
and
(ii) $I_i(y_{t_i}^{(n)}+\hat{\varphi}_{t_i})\to I_i(y_{t_i}+\hat{\varphi}_{t_i})$, $i=1,2,\dots,s$.
Then we have
\[ \|G_1y^{(n)}-G_1y\|_{\mathcal{PC}}\le M_1\sum_{i=1}^{s}\bigl\|I_i(y_{t_i}^{(n)}+\hat{\varphi}_{t_i})-I_i(y_{t_i}+\hat{\varphi}_{t_i})\bigr\| \]
(3.7)
and an analogous estimate (3.8) holds for $\|G_2y^{(n)}-G_2y\|_{\mathcal{PC}}$, where the corresponding bound (3.9) on $\|u_{y^{(n)}}-u_y\|_{L^2}$ follows from (H6)(i).
Observing (3.7)–(3.9), by the dominated convergence theorem we have
\[ \|Gy^{(n)}-Gy\|_{\mathcal{PC}}\le\|G_1y^{(n)}-G_1y\|_{\mathcal{PC}}+\|G_2y^{(n)}-G_2y\|_{\mathcal{PC}}\to 0\quad\text{as } n\to+\infty. \]
That is, G is continuous.
Step 3: $G$ is equicontinuous on every $J_i$, $i=1,2,\dots,s$; that is, $G(B_q)$ is piecewise equicontinuous on $J$.
Indeed, for $t_1,t_2\in J_i$ with $t_1<t_2$ and $y\in B_q$, we estimate $\|(Gy)(t_2)-(Gy)(t_1)\|$ as in (3.10). By the equicontinuity of $U(\cdot,s)$ and the absolute continuity of the Lebesgue integral, the right-hand side of (3.10) tends to zero, independently of $y$, as $t_2\to t_1$. Hence $G(B_q)$ is equicontinuous on $J_i$ ($i=1,2,\dots,s$).
Step 4: Mönch's condition holds.
Suppose W\subseteq {B}_{q} is countable and W\subseteq \overline{co}(\{0\}\cup G(W)). We shall show that \beta (W)=0, where β is the Hausdorff MNC.
Without loss of generality, we may assume that W={\{{y}^{(n)}\}}_{n=1}^{\mathrm{\infty}}. Since G maps {B}_{q} into an equicontinuous family, G(W) is equicontinuous on {J}_{i}. Hence W\subseteq \overline{co}(\{0\}\cup G(W)) is also equicontinuous on every {J}_{i}.
By (H7)(ii) we obtain the estimate (3.11) for the impulsive part $G_1W$. By Lemma 2.3 and from (H3)(iii), (H4)(iii), (H6)(ii) and (H7)(ii), we obtain the estimates (3.12) and (3.13) for the integral part $G_2W$. Combining (3.11) and (3.13) yields the pointwise estimate (3.14) for each $t\in J$.
Since W and G(W) are equicontinuous on every {J}_{i}, according to Lemma 2.2, the inequality (3.14) implies that
That is, $\beta(G(W))\le N\beta(W)$, where $N$ is defined in (H8). Thus, from Mönch's condition, we get
\[ \beta(W)\le\beta\bigl(\overline{co}(\{0\}\cup G(W))\bigr)=\beta\bigl(G(W)\bigr)\le N\beta(W). \]
Since $N<1$, this implies $\beta(W)=0$, so $W$ is relatively compact in $\mathcal{PC}_0$. In view of Lemma 2.5, i.e., Mönch's fixed point theorem, we conclude that $G$ has a fixed point $y$ in $W$. Then $x=y+\hat{\varphi}$ is a fixed point of $F$ in $\mathcal{PC}$, and thus the system (1.1)–(1.3) is nonlocally controllable on the interval $[0,b]$. This completes the proof. □
We remark that the conditions (H1)–(H8) above are sufficient but not known to be necessary: it remains an open problem either to prove that they are also necessary or to exhibit an example showing that the main result of this section holds without them.