Stability in distribution and stabilization of
switching jump diffusions

This paper aims to study stability in distribution of Markovian switching jump diffusions. The main motivation stems from the stability and stabilization of hybrid systems in which there is no trivial solution. An explicit criterion for stability in distribution is derived. The stabilizing effects of Markov chains, Brownian motions, and Poisson jumps are revealed. Based on these criteria, stabilization problems of stochastic differential equations with Markovian switching and Poisson jumps are developed.


Introduction
This work focuses on stability in distribution of a class of jump diffusions with Markovian switching. The underlying process is a two-component process (X(·), α(·)), where X(·) describes the jump diffusion behavior and α(·) is a continuous-time Markov chain having a finite state space. Recently, such a class of stochastic processes has received much attention in various settings for different domains of application; see [13,20] and references therein for comprehensive treatments of switching diffusions and [7,9,14-16,18] for more recent progress in the field.
Why is the consideration of stability in distribution important, and why is it necessary? It is well known that in deterministic systems of differential equations, an important starting point is the examination of equilibria. When one considers stochastic systems, in lieu of the equilibria, one often has to begin with stationary distributions. Thus, to some extent, stationary distributions are frequently the primary concern, especially when the systems have no equilibria. In addition, stability in distribution is closely related to the concept of weak stability, a term used by Wonham [17]. This weak stability concept implies the so-called recurrence under suitable conditions. That is, for a stochastic system given by the solutions of a differential equation starting from a point outside an open set with compact closure, one wishes to see whether the trajectories will return to the open set in finite time infinitely often.
Most of the work on stability of switching diffusions and switching jump diffusions to date is concerned with stability in probability, moment stability, or almost sure stability, in which x = 0 is a trivial solution (an equilibrium point) of the corresponding equations and any other solution converges to the trivial solution in probability, in the pth moment for some p > 0, or in the almost sure sense. In contrast, we are interested in the cases in which there is no equilibrium point of the differential equation, but there is still stability in the sense that all solutions converge in distribution to some probability measure. In [3], the authors considered stability in distribution of a semi-linear stochastic differential equation with Markovian switching of the form dX(t) = A(α(t))X(t)dt + σ(X(t), α(t))dw(t), where w(·) is a standard Brownian motion. In an important development [24], Yuan and Mao provided sufficient conditions guaranteeing stability in distribution for nonlinear Markovian switching diffusions of the form

dX(t) = b(X(t), α(t))dt + σ(X(t), α(t))dw(t), (1.1)

where α(·) is a finite-state Markov chain. Subsequently, in [8], Nguyen provided much weaker conditions by using localization arguments to further improve the criteria for stability in distribution. In [19], the authors considered stability in distribution of a switching jump diffusion

dX(t) = b(X(t), α(t))dt + σ(X(t), α(t))dw(t) + dJ(t), (1.2)

where b(·), σ(·), and g(·) are suitable functions and N(t, ·) is a Poisson measure. For existence and uniqueness of solutions as well as the related maximum principles and Harnack inequalities, we refer to [6]. Related works on stability in distribution of the aforementioned systems can be found in [2,4,5]. Some criteria for invariant measures and stability in distribution of equation (1.1) and its generalizations with path-dependent and path-independent switching can be found in [1,11,14].
We refer to [15,16,22,25,26] for related works on stability in probability and exponential stability of equation (1.2). Recent efforts on stabilization in distribution of hybrid systems by certain feedback controls can be found in [12,23]. Regarding equation (1.2), the criteria in [19] are given in terms of the existence of a set of Lyapunov functions V(x, i) for i ∈ M, where M is the state space of α(·). Consequently, it is nontrivial to apply these criteria. We are not aware of any work on explicit criteria for stability in distribution of equation (1.2). In this work, our first aim is to construct a general criterion for stability in distribution of equation (1.2). The novelty of our work lies in the fact that in order to apply our criterion, one need only construct at most two Lyapunov functions U(x) and V(x), and in the most common cases, it is sufficient to construct U(x) only. Moreover, we reveal the contribution of the Markov chain α(·) in the sense that our criterion is stated through averages with respect to ν = (ν_1, . . . , ν_m), the stationary distribution of α(·), applied to certain vectors η = (η_1, . . . , η_m) and ζ = (ζ_1, . . . , ζ_m). Another distinct feature of our work is the construction of an explicit and easily verifiable criterion for stability of switching jump diffusions.
In treating stability of hybrid systems, motivated by [3,12,21,23,26], the following question arises. Can we apply feedback controls (or perturbations using Brownian motions and/or Poisson jumps) to stabilize a given system? Moreover, if a given system is not regular (does not have global solutions), can we design certain feedback rules to regularize and stabilize it? To the best of our knowledge, these topics have not been well understood for stability in distribution. Using the criteria for stability in distribution developed in this work, we address these questions. We show that given any scalar switching differential equation, one can design feedback strategies so that the resulting switching jump diffusion is stable in distribution. Nevertheless, the multi-dimensional counterpart is rather challenging. By devising a novel approach, we are able to treat a wide class of such stochastic dynamic systems so that we can regularize and stabilize them in distribution.
The contributions of our work in this paper can be summarized as follows.
(1) We focus on nonlinear stochastic differential equations with jumps and Markovian switching, and provide sufficient conditions that are substantially weaker than the existing results and extend and further improve the results in [8].
(2) We give insight into how each of the components, namely, the Brownian motion, the switching, and the jump process, can contribute in a positive way to stability in distribution.
(3) We further obtain strategies to stabilize randomly switching ordinary differential equations.
(4) When the jumps disappear from the dynamic systems, our results cover those of switching diffusions; when the Brownian motion also disappears, our results cover those of switching differential equations.
The rest of the work is organized as follows. Section 2 presents the problem formulation. Section 3 proceeds with criteria for stability in distribution. Section 4 develops strategies for stabilization in the sense of stability in distribution for the stochastic dynamic systems that we are interested in. Section 5 provides some examples for illustration. Finally, Section 6 concludes the paper with a few more remarks.

Formulation
We begin this section with the following notation.

Notation. Let R_+ = [0, ∞) and let N be the set of positive integers. Let C^2(R^d, R_+) be the set of all twice continuously differentiable functions from R^d to R_+. Given a probability measure π defined on Γ, denote by Γ_b the family of all bounded positive functions h(γ) on Γ with ∫_Γ ln h(γ) π(dγ) < ∞.
We work with a complete filtered probability space (Ω, F, P, {F_t}) with the filtration {F_t} satisfying the usual conditions (i.e., it is right continuous and F_0 contains all the null sets). Assume that the Markov chain α(·) and the d-dimensional standard Brownian motion w(·) are defined on (Ω, F, P, {F_t}). Moreover, α(·) and w(·) are {F_t}-adapted and independent.
Suppose α(·) takes values in M = {1, . . . , m} with the generator Q = (q_ij) ∈ R^{m×m}, where m ∈ N. Hence, α(·) is described by a transition probability specification of the form

P{α(t + Δ) = j | α(t) = i} = q_ij Δ + o(Δ) if j ≠ i, and P{α(t + Δ) = i | α(t) = i} = 1 + q_ii Δ + o(Δ), as Δ → 0.

Note that q_ij ≥ 0 if i ≠ j and Σ_{j∈M} q_ij = 0 for any i ∈ M. Let Γ be a subset of R^d \ {0} that is the range space of the impulsive jumps. For any subset B of Γ, N(t, B) counts the number of impulses on [0, t] with values in B, and b(·, ·): R^d × M → R^d, σ(·, ·): R^d × M → R^{d×d}, and g(·, ·, ·): R^d × M × Γ → R^d are suitable Borel functions under some precise conditions to be specified later.
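As an aside on the transition mechanism just described, a path of α(·) can be simulated exactly from its generator: hold in state i for an exponential time with rate −q_ii, then jump to j ≠ i with probability q_ij/(−q_ii). A minimal Python sketch (the generator Q below is a hypothetical two-state example, not one taken from the paper):

```python
import numpy as np

def simulate_ctmc(Q, i0, T, rng=None):
    """Exact simulation of a continuous-time Markov chain on {0, ..., m-1}
    with generator Q: hold in state i for an Exp(-q_ii) time, then jump to
    j != i with probability q_ij / (-q_ii)."""
    rng = np.random.default_rng() if rng is None else rng
    m = Q.shape[0]
    t, i = 0.0, int(i0)
    times, states = [0.0], [int(i0)]
    while True:
        rate = -Q[i, i]                      # total jump rate out of state i
        if rate <= 0:                        # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)     # exponential holding time
        if t >= T:
            break
        p = np.maximum(Q[i], 0.0)            # off-diagonal rates out of i
        i = int(rng.choice(m, p=p / p.sum()))  # embedded-chain transition
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Hypothetical two-state generator (not from the paper).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
times, states = simulate_ctmc(Q, i0=0, T=10.0, rng=np.random.default_rng(1))
print(states[:5])
```

Over a long horizon, the fraction of time such a path spends in each state approaches the stationary distribution ν of Q, the quantity that enters the averaged criteria developed later.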
Consider the dynamic system given by

dX(t) = b(X(t), α(t))dt + σ(X(t), α(t))dw(t) + dJ(t), (2.2)

with initial condition X(0) = x_0, α(0) = i_0, where J(·) is the jump process associated with a Poisson measure N(t, B) such that N(·) is independent of the Brownian motion w(·) and the switching process α(·). The compensated Poisson measure is defined by Ñ(t, B) = N(t, B) − λtπ(B), where λ ∈ (0, ∞) is known as the jump rate and π(·) is the jump distribution with π(Γ) = 1. We have used the setup as in [22]. For V(·, i) ∈ C^2(R^d, R_+), i ∈ M, we define an operator G as follows:

GV(x, i) = ∇V(x, i)·b(x, i) + (1/2)tr(σ(x, i)σ^T(x, i)∇^2 V(x, i)) + λ ∫_Γ [V(x + g(x, i, γ), i) − V(x, i)] π(dγ) + QV(x, ·)(i),

where QV(x, ·)(i) = Σ_{j∈M} q_ij V(x, j). For notational simplicity, we also write (QV)(x, i) = Σ_{j∈M} q_ij V(x, j). Next, we introduce auxiliary functions built from the coefficients and define an analogous operator acting on functions U ∈ C^2(R^d, R_+). To proceed, we pose the following conditions.
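Before turning to the conditions, the dynamics of equation (2.2) can be made concrete with an Euler-Maruyama discretization: one Euler step for the diffusion part, an approximate one-step transition with probabilities q_ij Δ for the chain, and Poisson(λΔ) jump arrivals with marks drawn from π. The coefficients, generator, and mark distribution in the following Python sketch are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def simulate_sjd(b, sigma, g, Q, lam, sample_mark, x0, i0, T, dt, rng=None):
    """Euler-Maruyama sketch for the scalar version of
    dX = b(X, a) dt + sigma(X, a) dw + dJ  (switching jump diffusion)."""
    rng = np.random.default_rng() if rng is None else rng
    m = Q.shape[0]
    x, i = float(x0), int(i0)
    path = [x]
    for _ in range(int(T / dt)):
        # advance the chain: P(i -> j) ~ q_ij * dt for j != i (small dt)
        p = np.maximum(Q[i] * dt, 0.0)
        p[i] = 1.0 - p.sum()
        i = int(rng.choice(m, p=p))
        # diffusion step
        x += b(x, i) * dt + sigma(x, i) * np.sqrt(dt) * rng.standard_normal()
        # jump step: Poisson(lam * dt) arrivals with marks gamma ~ pi
        for _ in range(rng.poisson(lam * dt)):
            x += g(x, i, sample_mark(rng))
        path.append(x)
    return np.array(path)

# Hypothetical coefficients with no equilibrium at the origin:
# the drift pulls toward 1/(i+1), so no trivial solution exists.
path = simulate_sjd(
    b=lambda x, i: 1.0 - (i + 1) * x,
    sigma=lambda x, i: 0.2,
    g=lambda x, i, gam: 0.1 * gam,
    Q=np.array([[-1.0, 1.0], [2.0, -2.0]]),
    lam=0.5, sample_mark=lambda rng: rng.standard_normal(),
    x0=5.0, i0=0, T=10.0, dt=1e-3,
    rng=np.random.default_rng(0),
)
print(len(path))
```

Starting points far apart produce paths that settle into the same statistical regime, which is the behavior that stability in distribution formalizes.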
The regularity and stability in distribution of the process (X(·), α(·)) are defined as follows. A system being regular essentially means that it has no finite explosion time, whereas stability in distribution is a weaker notion of stability for a stochastic dynamic system.
(c) Equation (2.2) is said to have property (P3) if for any T > 0, any ε > 0, and any compact subset D of R^d, there exists a constant R > 0 such that P{|X^{x_0,i_0}(t)| ≤ R for all t ∈ [0, T]} ≥ 1 − ε for all (x_0, i_0) ∈ D × M.

Remark 2.6. Properties (P1) and (P2) are essentially those used in [24] and [19]. Nevertheless, in [24] and [19], the authors assume that the drift and diffusion coefficients satisfy the linear growth condition, which guarantees that property (P3) holds (see [24], Eq. (3.10), p. 282). Therefore, property (P3) is not stated explicitly in the aforementioned references. In this paper, we drop the linear growth condition; hence, we need property (P3).

Criteria for stability in distribution
As preparation for the subsequent study, we first state a lemma, which is more or less a restatement of the Fredholm alternative; see Lemma A.12 of [20] for a proof.

Proof. The proof is divided into three steps.
Step 2: Let (x_0, i_0) ∈ R^d × M and ε > 0. To establish property (P1), we use the same steps as in the proof of Lemma 4.1 of [24]. A sketch is given as follows. By using (3.4) and the Dynkin formula, we obtain the required moment estimate, which together with the Chebyshev inequality yields the desired bound. Thus, equation (2.2) has property (P1).
The following lemma indicates that if x 0 , y 0 ∈ R d , i 0 ∈ M and x 0 = y 0 , then almost all sample paths of X x0,i0 (t) and X y0,i0 (t) will never intersect.
Indeed, we have In view of (3.2), we have It follows from (3.6), (3.7), and condition (A4) that Using this representation in (3.8), we obtain Since Σ_{j∈M} ν_j ζ_j < 0, we can choose δ > 0 and β > 0 so that Let ε > 0 and let D be a compact subset of R^d. Let (x_0, y_0, i_0) ∈ D × D × M. For notational simplicity, we denote X(t) = X^{x_0,i_0}(t), Y(t) = X^{y_0,i_0}(t), and α(t) = α^{i_0}(t). By (2.3), there exists a constant r > 0 such that (3.10) Let {τ_n}_n be the sequence of stopping times defined by Since the solutions of equation (2.2) are regular, τ_n → ∞ almost surely as n → ∞. By Lemma 3.3 and the Dynkin formula, we obtain that for each t > 0, This together with (3.9) implies Let T > 0 be such that Then for any t ≥ T, we have from (3.10), (3.11), and (3.12) that

We are in a position to state our main results in this section.
Then there exists a constant ρ > 0 such that E|X^{x_0,i_0}(t) − X^{y_0,i_0}(t)|^ρ converges to zero exponentially fast as t → ∞ for any (x_0, y_0, i_0) ∈ R^d × R^d × M.

Proof. For the rest of the proof, we use essentially the same steps as in the proof of Theorem 3.1 in [24]; hence, we omit the details for brevity.
Remark 3.6. Under certain conditions, Theorem 3.5 states that there exists a constant ρ > 0 so that E|X x0,i0 (t) − X y0,i0 (t)| ρ converges to zero exponentially fast for any (x 0 , y 0 , i 0 ) ∈ R d × R d × M. In this case, it is said that equation (2.2) is asymptotically flat in the ρth mean (see [3]).
Now we apply the general criterion established above to derive an explicit and verifiable criterion for stability in distribution. This criterion is obtained by taking U(x) = V(x) = |x| in conditions (A3) and (A4). More criteria can be constructed if we choose U(x) = V(x) = (x^T Bx)^{1/2} for some positive definite matrix B ∈ R^{d×d}.

Theorem 3.7. Assume (A1)-(A2). Moreover, suppose that for each i ∈ M, there are constants K_b(i), K_σ(i), K_d(i), and a function K_g(i, ·) ∈ Γ_b such that the corresponding bounds hold for all x, y ∈ R^d, γ ∈ Γ, and i ∈ M. Define ζ = (ζ_1, . . . , ζ_m) in terms of these constants and suppose Σ_{i∈M} ν_i ζ_i < 0. Then the following assertions hold.
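Numerically, verifying the condition Σ_{i∈M} ν_i ζ_i < 0 of Theorem 3.7 amounts to solving the linear system νQ = 0, Σ_i ν_i = 1 (the Fredholm alternative mentioned earlier) and taking an inner product. A Python sketch, where the generator Q and the per-state rates ζ_i are hypothetical values chosen for illustration:

```python
import numpy as np

def stationary_distribution(Q):
    """Solve nu Q = 0, sum(nu) = 1 for an irreducible generator Q.
    Equivalent to the overdetermined linear system Q^T nu = 0 plus the
    normalization row, solved in the least-squares sense (exact here)."""
    m = Q.shape[0]
    A = np.vstack([Q.T, np.ones(m)])   # nu Q = 0  <=>  Q^T nu = 0
    rhs = np.zeros(m + 1)
    rhs[-1] = 1.0                       # normalization sum(nu) = 1
    nu, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return nu

# Hypothetical generator and hypothetical rates: state 0 is destabilizing
# (zeta > 0), state 1 is strongly stabilizing (zeta < 0).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
zeta = np.array([0.5, -2.0])
nu = stationary_distribution(Q)         # nu = (2/3, 1/3)
print(nu, float(nu @ zeta))             # average is negative
```

Even though this chain spends two thirds of its time in the unstable regime, the strongly stable regime dominates the average, so the criterion Σ_i ν_i ζ_i < 0 holds.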
(b) It is shown in part (a) that condition (A4) holds with U (x) = |x| for x ∈ R d . Hence, by virtue of Theorem 3.5, there exists a constant ρ > 0 such that E|X x0,i0 (t) − X y0,i0 (t)| ρ converges to zero exponentially fast as t → ∞ for any (x 0 , y 0 , i 0 ) ∈ R d × R d × M.

Stabilization of switching jump diffusions
Based on the criteria developed in the preceding sections, we proceed to investigate the stabilizing effects owing to Markov chains, Brownian motions, and Poisson jumps, respectively. Because our work in this paper focuses on stability in distribution, the stabilization is, in fact, in the sense of the so-called weak stabilization. Such a term was probably first coined in the early work of Wonham [17].
Note that the switching jump diffusion X(·) may be viewed as m jump diffusions that interact and switch back and forth due to the switching mechanism. These jump diffusions, denoted by X^(1)(·), X^(2)(·), . . . , X^(m)(·), are given by

dX^(i)(t) = b(X^(i)(t), i)dt + σ(X^(i)(t), i)dw(t) + dJ(t), i ∈ M.

By virtue of Theorem 3.7, the stability of the overall system (2.2) does not require all ζ_i < 0, but only their average: Σ_{i∈M} ν_i ζ_i < 0.
For each jump diffusion X (i) (·) above, we can apply Theorem 3.7 to verify the stability in distribution. In particular, suppose there exists i 0 ∈ M such that ζ i0 < 0. Then X (i0) (·) is stable in distribution. In such a case, we can design a suitable Markov switching α(·) so that the i 0 th subsystem is dominant and i∈M ν i ζ i < 0. Thus, the switching process α(·) can work as a stabilizing force.
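As a numerical illustration of this design principle (with hypothetical values, not taken from the paper): for m = 2, suppose the first subsystem is unstable (ζ > 0) and the second is stable (ζ < 0). Switching quickly into the stable state and slowly out of it makes that state dominant under the stationary distribution:

```python
import numpy as np

# Hypothetical per-state rates: state 0 unstable, state 1 stable.
zeta = np.array([1.0, -0.5])

# Two-state design: fast switching into state 1, slow switching out.
q01, q10 = 10.0, 1.0
Q = np.array([[-q01, q01], [q10, -q10]])

# For a two-state chain the stationary distribution is explicit.
nu = np.array([q10, q01]) / (q01 + q10)   # nu = (1/11, 10/11)
avg = float(nu @ zeta)                     # (1 - 5) / 11 = -4/11 < 0
print(nu, avg)
```

Making q01 larger (or q10 smaller) pushes more stationary mass onto the stable state, which is exactly the sense in which the switching process acts as a stabilizing force.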
Can we add suitable noise to equation (4.1) so that the resulting system given by equation (2.2) is stable in distribution? The answer is positive, since we can always choose σ(·, ·) and g(·, ·, ·) so that (3.16) holds. For simplicity, we can take σ and g in a linear form, where λ(i) ∈ R, µ(i) > 0, and C(i) ∈ R^{d×d} for each i ∈ M. Then σ(·, ·) and g(·, ·, ·) satisfy (3.16), and it remains to choose λ(i) and µ(i) so that the averaged criterion holds.

Consider a switching ordinary differential equation given by equation (4.1). In [21], the authors have shown that if the solutions of equation (4.1) are not regular, one can add a feedback control term of the form σ(X(t), α(t))dw(t) to suppress the finite explosion time. Then, to ensure stability (in the sense of almost sure exponential stability), one can add another feedback control σ̄(X(t), α(t))dw̄(t). Here, a question arises naturally: can we use the same strategy to regularize and stabilize a given system in the sense of stability in distribution? We proceed to provide an affirmative answer for any given scalar switching differential equation.
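For intuition, consider the simplest scalar linear case (an illustration of the noise-stabilization idea, not the paper's general construction): for dX(t) = aX(t)dt + λX(t)dw(t), Itô's formula gives d ln|X(t)| = (a − λ²/2)dt + λdw(t), so the multiplicative noise is stabilizing once λ² > 2a. A Monte Carlo check of the sample Lyapunov exponent in Python:

```python
import numpy as np

# Scalar linear SDE dX = a X dt + lam X dw.  Ito's formula gives
# d ln|X| = (a - lam**2 / 2) dt + lam dw, so the noise stabilizes the
# system once lam**2 > 2 a.  Estimate the sample exponent numerically.
a, lam = 1.0, 2.0                      # lam**2 = 4 > 2 a = 2
rng = np.random.default_rng(0)
dt, n, paths = 1e-3, 20_000, 200
x = np.ones(paths)
for _ in range(n):
    # multiplicative Euler step on all paths at once
    x *= 1.0 + a * dt + lam * np.sqrt(dt) * rng.standard_normal(paths)
exponent = float(np.log(np.abs(x)).mean() / (n * dt))
print(exponent)                        # close to a - lam**2 / 2 = -1
```

The estimated exponent is close to a − λ²/2 = −1: noise strong enough relative to the unstable drift drives the paths together, which is the mechanism exploited by the stochastic feedback controls above.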
Then equation (4.2) is stable in distribution.

(4.4)
Observe that for any a ∈ R, This together with (4.4) implies that We have By (4.3), condition (A4) holds.
We proceed to verify condition (A3).
Next, we address the question of stabilization of multi-dimensional systems. Using the idea in [21], we add two feedback terms: one of them regularizes the system and the other ensures stability. Nevertheless, when such stochastic feedback strategies are used, we cannot find a function U(·) to verify condition (A4). Hence, a new approach is needed to establish property (P2).
Consider equation (4.9). Suppose there exists a differentiable function b(·) satisfying the bounds below.

Remark 4.5. (a) If b(·, ·) has polynomial growth together with its left-side Lipschitz coefficient b_L(·), we can choose b(r) = c(r^n + 1) for some c > 0 and n > 0. In particular, if b_j(·, i) is differentiable and its gradient has polynomial growth for each (i, j), then we can choose b(r) = c(r^n + 1) for some c > 0 and n > 0.
(b) By slightly modifying the proofs below, we can, in fact, improve (B1) by using (B1') below.
The rest of the section is devoted to proving that equation (4.9) has property (P2) when µ M := max{µ i : i ∈ M} is sufficiently large. Because the drift coefficient b(·, ·) and its left-side Lipschitz coefficient b L (·) can have a highly nonlinear growth rate, it seems practically impossible to construct a function U (·) satisfying condition (A4) for a general nonlinear function b(·, ·). As a result, Lemma 3.4 is not applicable here. We overcome this technical difficulty by proving a number of lemmas below.
Lemma 4.7. Let µ_M = max{µ_i : i ∈ M}. Then there exist positive constants C_1, C_2, C_3, and λ, independent of µ_M and of the initial data (x_0, i_0) ∈ R^d × M, such that (4.14) and (4.15) hold.

Proof. In view of (4.13), there exist positive constants C_1 and C_2 independent of {µ_i}_{i∈M} and the initial data (x_0, i_0) ∈ R^d × M such that By Itô's formula, we obtain which leads to Then there exist positive constants C_1 and λ so that (4.14) holds. We proceed to prove (4.15). In view of (4.12), we have Then, substituting (4.14) into (4.16), we obtain (4.15) for some positive constants C_2 and C_3 independent of {µ_i}_{i∈M} and the initial data (x_0, i_0). This completes the proof.

Proof. Define the events Ω_n := {ω : sup_{0≤t≤n²} |M(t)| ≥ n²ε²/2}, n = 1, 2, . . .

By the Markov inequality and the Burkholder-Davis-Gundy inequality,
Now, for any ε_1 > 0, there exists n_0 > 3 such that Note that for ω ∈ Ω \ ∪_{n=n_0}^∞ Ω_n, we have sup_{0≤t≤n²} |M(t)| ≤ n²ε²/2 for all n ≥ n_0. As a result, if ω ∈ Ω \ ∪_{n=n_0}^∞ Ω_n, we have for any t > n_0² that where N_t is the greatest integer smaller than t. Thus, we have This completes the proof.
It is easy to see that Λ < 0 if we choose H and then µ_M sufficiently large. Indeed, we can first select H sufficiently large that C_5 C_6 / H^δ < min_{i∈M}{ν_i}/2 and then select one µ_{i*} satisfying the required inequality. Then we can verify that Λ < 0.
The conclusion follows.
The following theorem summarizes our results above.

Concluding remarks
This paper has been devoted to the study of Markovian switching jump diffusions. We have further explored the asymptotic behavior of switching diffusions with Poisson jumps in which there might be no equilibrium point. Criteria for stability in distribution have been established, and the stabilizing effects of Markov chains, Brownian motions, and Poisson jumps have been investigated. Our results offer new insight and effective treatments for the regularization and stabilization of switching jump diffusion systems. Although the paper is devoted to Markovian switching jump diffusions, when the jump part disappears, our results cover those of hybrid systems with Markovian switching.