key: cord-0283902-8moz4g71
title: Optimal harvesting for a stochastic regime-switching logistic diffusion system with jumps
authors: Zou, Xiaoling; Wang, Ke
date: 2014-08-31
journal: Nonlinear Analysis: Hybrid Systems
DOI: 10.1016/j.nahs.2014.01.001
sha: eb68173ed9ade0c27573673e9a45ef1be8191edc
doc_id: 283902
cord_uid: 8moz4g71

Abstract. The optimization problem of fishing for a stochastic logistic model is studied in this paper. Besides a standard geometric Brownian motion, two further driving processes are taken into account: a stationary Poisson point process and a continuous-time finite-state Markov chain. The classical harvesting problem for this model is difficult because the corresponding Fokker-Planck equations with three types of noise are very hard to solve. The main goal of this paper is to solve the optimization problem with respect to the stationary probability density. One of the main contributions is a new, equivalent approach to this problem. More precisely, an ergodic method is used to show the almost sure equivalence between the time-averaging yield and the sustainable yield. Results show that the optimal strategy changes with the environment. Interestingly, the optimal strategy for each state coincides with the globally optimal strategy.

The investigation of logistic-type systems is one of the most important themes in mathematical ecology, and the optimal harvesting problem is important and interesting from both the biological and the mathematical points of view. Optimal harvesting problems have received a lot of attention since Clark's classical works [1, 2] on the following deterministic logistic system under the catch-per-unit-effort (CPUE) hypothesis: dX(t) = X(t)(a − bX(t))dt − EX(t)dt. (1.1) In Eq. (1.1), X(t) represents the size of the population at time t, a denotes the intrinsic growth rate of X(t), a/b is the carrying capacity of the environment, and EX(t) is the harvesting function under the CPUE hypothesis.
Clark showed that the optimal harvesting effort for Eq. (1.1) is E* = a/2 and the maximum sustainable yield is a²/(4b). The fact that population systems are inevitably subject to various environmental noises is accepted by a large number of scholars; see [3-8] and the references cited therein. Beddington and May [9] studied, in Science, the optimal harvesting of natural populations in a randomly fluctuating environment. In their paper, they proved that the optimal harvesting effort for dX(t) = X(t)(a − bX(t))dt − EX(t)dt + σX(t)dB(t) (1.2) is (a − σ²/2)/2 and the maximum sustainable yield is (a − σ²/2)²/(4b). Here, B(t) is a standard Brownian motion defined on a complete probability space (Ω, F, {F_t}_{t≥0}, P), and σ²/2 is the intensity of the white noise. In addition, population systems may be affected by another type of environmental noise, namely colored noise, also called telegraph noise. Telegraph noise can be illustrated as a switching between two or more sub-regimes of different environments [10, 11]. For instance, the growth rate of some fish species in the dry season may differ greatly from that in the rainy season. Wang [12] incorporated colored noise into a stochastic logistic model with the CPUE hypothesis. He studied the following system: dX(t) = X(t)(a(r(t)) − bX(t))dt − EX(t)dt + σ(r(t))X(t)dB(t), (1.3) where r(t) is a continuous-time finite-state Markov chain with values in the finite state space S = {1, 2, ..., N}. For any state i ∈ S, a(i) and σ(i) are both positive constants. That is to say, Eq. (1.3) can be regarded as a switching between N regimes of Eq. (1.2), each regime i ∈ S having its own parameters. Results show that the optimal harvesting effort for Eq. (1.3) is E* = (1/2) Σ_{i=1}^{N} π_i (a(i) − σ²(i)/2), where π = (π₁, π₂, ..., π_N) is the stationary distribution of the colored noise r(t). For Eq.
(1.3), the author only obtained the optimization in the time-averaging sense; he did not solve the classical optimal harvesting problem with the sustainable yield function. Moreover, population dynamics may suffer sudden environmental shocks, such as earthquakes, epidemics, floods, toxic pollutants and hurricanes: e.g., the Tangshan earthquake (China) in 1976, the Three Mile Island nuclear accident (US) in 1979, the outbreak of Severe Acute Respiratory Syndrome in 2003, and the earthquake and subsequent nuclear threat in Japan in 2011. Such sudden environmental shocks cause jumps in population size, and model (1.3) cannot describe these phenomena exactly. To describe this type of environmental noise, scholars introduce a jump process into the underlying population dynamics and use stochastic differential equations (SDEs) driven by jump processes to explain these phenomena. We refer to this type of noise as jumping noise. Situ [13] (see page 32) noted that ''stochastic perturbation of jumps in a dynamical system usually can be modeled as a stochastic integral with respect to some point processes, i.e. its martingale measure or counting measure''. Motivated by this, we use a stationary Poisson point process as the driving jump process in this paper. Let p(t, ω) be a stationary F_t-adapted Poisson point process, let N be the Poisson counting measure generated by p(t, ω), and let λ be the intensity measure of N, defined on a finite measurable subset Y ⊂ (0, ∞). Ñ is the compensated random measure defined by Ñ(dt, du) = N(dt, du) − λ(du)dt. Consequently, the corresponding stochastic logistic hybrid jump-diffusion process driven by p(t, ω) has the following form: dX(t) = X(t⁻)[(a(r(t)) − E(r(t)) − b(r(t))X(t⁻))dt + σ(r(t))dB(t) + ∫_Y γ(r(t), u)Ñ(dt, du)]. (1.4) Here, X(t⁻) is the left limit of X(t), and γ(r(t), u) reflects, to some extent, the birth rate (γ > 0) or death rate (γ < 0) caused by the jumps. So, for each i ∈ S, we require γ(i, u) to be a bounded function such that γ(i, u) > −1, u ∈ Y.
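Before turning to the full hybrid model, the closed-form results of Clark and of Beddington and May quoted above are easy to check numerically. The sketch below is illustrative only: the function name and the parameter values are ours, not from the paper.

```python
def optimal_harvest_bm(a, b, sigma):
    """Optimal effort and maximum sustainable yield for the stochastic
    logistic model dX = X(a - bX)dt - EXdt + sigma*X dB (Eq. (1.2)).
    Requires a > sigma**2/2, otherwise the population goes extinct."""
    s = a - sigma ** 2 / 2.0        # effective growth rate under white noise
    if s <= 0:
        raise ValueError("extinction: a <= sigma^2/2")
    effort = s / 2.0                # E* = (a - sigma^2/2)/2
    msy = s ** 2 / (4.0 * b)        # maximum sustainable yield
    return effort, msy

# sigma = 0 recovers Clark's deterministic results E* = a/2, a^2/(4b)
print(optimal_harvest_bm(2.0, 1.0, 0.0))   # (1.0, 1.0)
print(optimal_harvest_bm(2.0, 1.0, 1.0))   # (0.75, 0.5625)
```

As expected, white noise of intensity σ²/2 lowers both the optimal effort and the achievable yield relative to the deterministic case.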
For more information, please see [14]. The other parameters have the same meanings as in models (1.1)-(1.3). The Brownian motion B(t), the Markov chain r(t) and the stationary Poisson point process p(t, ω) are always assumed to be mutually independent and defined on the same complete probability space (Ω, F, {F_t}_{t≥0}, P). The Markov chain r(t) is assumed to be irreducible; this standing hypothesis implies that the hybrid system can switch from any sub-state to any other sub-state. In addition, this assumption implies that the Markov chain has a unique stationary distribution [15] π = (π₁, π₂, ..., π_N) ∈ R^{1×N}. Eq. (1.4) can be regarded as switching from one sub-system to another according to the Markov chain; the switching between these N sub-regimes is governed by the Markov chain on S = {1, 2, ..., N}. The main contributions of this paper are as follows:
• the model includes three types of environmental noise, which makes it closer to reality; the explicit solution for a logistic model with three types of environmental noise is given in Lemma 3.1;
• we provide a technique to handle the case when all the parameters in Eq. (1.4) depend on the Markov chain r(t) (see the proof of Theorem 3.1). This assumption is more reasonable than that of model (1.3), since all the parameters change with the environment; taking the harvesting effort as an example, E(r(t)) means that the harvesting effort changes with the environment;
• a new approach to the optimal harvesting problem is given which avoids the difficulty of solving the complicated Fokker-Planck equations;
• we give numerical simulations for the stochastic logistic model with three types of noise, and briefly introduce their principle (see Section 6).
As far as we know, there are few results about Eq. (1.4), and this is the first attempt to study the optimal harvesting problem for Eq. (1.4).
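A minimal simulation sketch of a model in the spirit of Eq. (1.4) may help fix ideas. It is not the scheme used in the paper: here the jump mark space is collapsed so that each state i has a single relative jump size gamma[i] arriving at rate lam, switching is approximated by first-order thinning of the generator Q, and the diffusion part uses an Euler step on ln X so that X stays positive by construction. All parameter values are made up.

```python
import math
import random

def simulate_hybrid_path(a, b, sigma, gamma, lam, E, Q, x0, r0,
                         dt=0.01, T=80.0, seed=1):
    """One sample path of a regime-switching logistic jump-diffusion.
    a, b, sigma, gamma, E are per-state lists; Q is the chain generator."""
    rng = random.Random(seed)
    x, r = x0, r0
    n = round(T / dt)
    path = [x]
    for _ in range(n):
        # Markov switching: leave state r with probability -Q[r][r]*dt
        if rng.random() < -Q[r][r] * dt:
            others = [j for j in range(len(Q)) if j != r]
            weights = [Q[r][j] for j in others]
            r = rng.choices(others, weights=weights)[0]
        # Euler step on ln X for the continuous logistic-diffusion part
        drift = a[r] - E[r] - b[r] * x - sigma[r] ** 2 / 2.0
        x *= math.exp(drift * dt + sigma[r] * rng.gauss(0.0, math.sqrt(dt)))
        # Poisson jump: X jumps to X*(1 + gamma[r]), with gamma > -1
        if rng.random() < lam * dt:
            x *= 1.0 + gamma[r]
        path.append(x)
    return path

# Illustrative two-state parameters (not taken from the paper)
path = simulate_hybrid_path(a=[0.8, 0.6], b=[1.0, 1.0], sigma=[0.3, 0.2],
                            gamma=[-0.2, -0.1], lam=0.5, E=[0.3, 0.2],
                            Q=[[-7.0, 7.0], [5.0, -5.0]], x0=0.6, r0=1)
print(len(path), min(path) > 0.0)
```

The multiplicative update mirrors the structural requirement γ > −1: every jump rescales the population by a positive factor, so positivity is never lost.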
For any function V(·, i) that is twice continuously differentiable in x, the infinitesimal generator L_t for Eq. (1.4) is defined by
L_t V(x, i) = V_x(x, i)x[a(i) − E(i) − b(i)x] + (1/2)σ²(i)x²V_xx(x, i) + ∫_Y [V(x(1 + γ(i, u)), i) − V(x, i) − V_x(x, i)xγ(i, u)]λ(du) + Σ_{j=1}^{N} γ_ij V(x, j), (2.1)
where Γ = (γ_ij)_{N×N} is the generator matrix of the Markov chain r(t). The probability densities ρ_i(x, t), i = 1, 2, ..., N, of the solution process X(t) of Eq. (1.4) are defined by ρ_i(x, t)dx = P{X(t) ∈ [x, x + dx), r(t) = i}. From the literature [16-19] we know that the probability densities ρ_i(x, t), i = 1, 2, ..., N, satisfy the following N Fokker-Planck equations:
∂ρ_i(x, t)/∂t = L*_t ρ_i(x, t), i = 1, 2, ..., N, (2.2)
where L*_t is the adjoint operator of L_t. The initial condition lim_{t→0⁺} ρ_i(x, t) is determined by the initial distribution of X₀. Consider the stationary solutions ρ_i(x) of Eq. (2.2), i.e. the solutions which are independent of time t. Then the stationary probability densities ρ_i(x), i = 1, 2, ..., N, satisfy the N equations
L*_t ρ_i(x) = 0, i = 1, 2, ..., N. (2.3)
The classical sustainable yield function Y_S(E) = Σ_{i=1}^{N} ∫_0^∞ E(i)xρ_i(x)dx is an integral with respect to the stationary probability densities ρ_i(x). The purpose is to find the optimal strategies E*(i), i = 1, 2, ..., N, under the premise that the population can survive for a long time. However, as far as we know, it is very difficult to solve Eq. (2.3), or even to find an expression for the adjoint operator L*_t, and there is no effective method for this problem. So it is very difficult to solve the classical optimal harvesting problem. In this paper, we give a new approach to the optimal harvesting problem with the sustainable yield function. Instead of finding the explicit solution of Eq. (2.3), we use an ergodic method that avoids these difficulties. First, we study the optimal harvesting problem with the time-averaging yield function. Second, we show the almost sure equivalence between the time-averaging yield and the classical sustainable yield function. Third, we solve the classical optimal harvesting problem with the sustainable yield and make some interesting observations. First of all, we need to find an expression for the time-averaging yield function.
To this end, we give the explicit positive solution of Eq. (1.4) and study its asymptotic properties; the time-averaging yield function can then be obtained from these asymptotic properties. Bao et al. [20] gave an explicit solution for a logistic population dynamics with jumps of the form
dX(t) = X(t⁻)[(a − bX(t⁻))dt + σdB(t) + ∫_Y γ(u)Ñ(dt, du)].
Motivated by their investigations, we give an explicit solution for Eq. (1.4), with harvesting, driven by both the Poisson point process and the Markov chain. This explicit solution is used to prove some asymptotic properties, and it may be useful in studying multi-population models with jumps. It is also a generalization of Bao's work, since we take another type of noise (the Markov chain) into account. To keep things simple, we first introduce some abbreviated notation.
Lemma 3.1. Eq. (1.4) has a unique global positive solution X^{X₀,i}(t) for any initial data X₀ > 0 and r(0) = i ∈ S. Moreover, X^{X₀,i}(t) admits an explicit expression.
Proof. Recall that almost every sample path of r(t) is a right-continuous step function with a finite number of simple jumps on any finite sub-interval of R₊. So there exists a sequence of stopping times {τ_k}_{k≥0} such that 0 = τ₀ < τ₁ < ··· < τ_k < ··· and r(t) is constant on every random interval [τ_k, τ_{k+1}), where τ₁ is the first switching time, at which the Markov chain switches from state i to some state j. By the same procedure, the claim also holds on the other random intervals. This completes the proof.
In this section, we also give some asymptotic properties which will be used to find the time-averaging yield function. For simplicity, we write X(t) for X^{X₀,i}(t) in the following. The detailed proofs of these lemmas can be found in Appendix A.
Lemma 3.2. lim sup_{t→∞} [ln X(t)]/t ≤ 0, a.s.
Lemma 3.3. Suppose that for any i ∈ S and t ≥ 0 there exists a constant c₁ > 0 such that the required bound holds.
Suppose that we have a fish population resource in a lake whose size X(t) at time t is described by the stochastic hybrid population dynamics with jumps (1.4). First, we use the time average Y_T(E) := lim_{t→∞} (1/t)∫_0^t E(r(s))X(s)ds as the yield function.
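The interval-by-interval construction in the proof above can be visualized on its deterministic skeleton (σ = 0, no jumps): on each interval between switches, the equation is an autonomous logistic ODE with constant coefficients, and the pieces are pasted continuously at the switch times. The switch times and all parameter values below are illustrative, not from the paper.

```python
import math

def logistic_step(x0, alpha, b, t):
    """Closed-form solution at time t of dx/dt = x*(alpha - b*x), x(0) = x0,
    where alpha = a - E is the net growth rate on the current interval."""
    if alpha == 0.0:
        return x0 / (1.0 + b * x0 * t)
    e = math.exp(alpha * t)
    return alpha * x0 * e / (alpha + b * x0 * (e - 1.0))

# Paste solutions across fixed 'switch times', mimicking the proof of
# Lemma 3.1: per-interval constants, continuous initial values.
a, b, E = [0.8, 0.6], [1.0, 1.0], [0.3, 0.2]
x = 0.6
for state, duration in [(0, 1.0), (1, 1.5), (0, 2.0)]:
    x = logistic_step(x, a[state] - E[state], b[state], duration)
print(x > 0.0)
```

In the stochastic case the same pasting applies path by path: on each random interval [τ_k, τ_{k+1}) the parameters are frozen at r(τ_k), and the explicit jump-diffusion solution replaces the logistic formula used here.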
Our aim is to find an optimal harvesting effort which maximizes Y_T(E) in the almost sure sense.
Theorem 3.1. Under conditions (3.4)-(3.6), the optimal harvesting efforts are E*(i) = A(i)/2 and the maximum time-averaging yield is Y_T(E*) = Σ_{i=1}^{N} π_i A²(i)/(4b(i)), a.s., where π = (π₁, π₂, ..., π_N) is the stationary distribution of the Markov chain r(t).
Proof. The generalized Itô formula yields Eq. (3.7). Multiplying both sides of Eq. (3.7) by E(r(t))/b(r(t)) and integrating from 0 to t, we deduce the corresponding identity, where ⟨·⟩_t is the bracket process introduced in [21]. It follows from the strong law of large numbers for local martingales that the martingale terms vanish in the time average. Note that E(r(t))/b(r(t)) is constant on every random interval [τ_k, τ_{k+1}). For any t ≥ 0, define the lower and upper sums Q̌(t) and Q̂(t); then both Q̌(t) and Q̂(t) are bounded functions of t ≥ 0, and the stated relationship holds on every random interval. Dividing both sides by t, letting t → ∞, and applying the ergodic theory of the Markov chain r(·), the limit follows. To complete the proof, we only need to find the harvesting effort E* that maximizes the resulting expression; maximizing state by state gives E*(i) = A(i)/2 and Y_T(E*) = Σ_{i=1}^{N} π_i A²(i)/(4b(i)), a.s., i = 1, 2, ..., N.
In this section, we study the optimal harvesting problem with the sustainable yield function, i.e. the problem mentioned in Section 2. A new ergodic approach is proposed to solve this problem. To do this, we begin with the asymptotic stability in distribution of Eq. (1.4). The proof of Lemma 4.1 is given in Appendix A. For Eq. (1.4), the stationary probability density (ρ₁(x), ρ₂(x), ..., ρ_N(x))ᵀ is the solution of the corresponding Fokker-Planck equation (2.3), where L*_t is the adjoint operator of L_t in Eq. (2.1). Denote by (µ₁(x), µ₂(x), ..., µ_N(x))ᵀ the stationary distribution induced by (ρ₁(x), ρ₂(x), ..., ρ_N(x))ᵀ. That Eq. (1.4) is asymptotically stable in distribution implies that there is a unique invariant measure (ν₁(·), ν₂(·), ..., ν_N(·))ᵀ for Eq. (1.4). Since (µ₁(x), µ₂(x), ..., µ_N(x))ᵀ is also an invariant measure for Eq. (1.4), we obtain (ν₁(·), ν₂(·), ...
, ν_N(·))ᵀ = (µ₁(·), µ₂(·), ..., µ_N(·))ᵀ. Then Theorem 3.2.6 in [23] yields that this invariant measure is ergodic for any initial values X₀ > 0, r₀ ∈ S. Combining Eqs. (4.1) and (4.2), we can derive that the sustainable yield coincides with the time-averaging yield. Thereby, the optimal harvesting effort can be obtained as in Theorem 3.1.
Remark (Almost Sure Equivalence). From the proof of Theorem 4.1, we have the following almost sure equivalence between the time-averaging yield and the sustainable yield: Y_T(E) = Y_S(E), a.s. This almost sure equivalence provides a new way to solve the classical optimal harvesting problem, since it avoids the trouble of solving the complicated Fokker-Planck equations. In the following, we use the optimal harvesting strategy E* to denote both E*_T and E*_S, and the maximum yield Y(E*) to denote both the maximum time-averaging yield Y_T(E*) and the maximum sustainable yield Y_S(E*), since they are all equal almost surely.
Next, we discuss the effect of the harvesting hypothesis and of the environmental noises on the optimization problem. The models introduced in the introduction were all devoted to harvesting under the CPUE hypothesis; however, there are other types of harvesting functions. For example, Li et al. [24] studied the optimal harvesting for a stochastic logistic system with an increasing harvesting function h(E) satisfying h(0) = 0. We can see that the CPUE hypothesis is a special case of this harvesting hypothesis. By the same method as in this paper, we obtain the optimal harvesting results for the following system with an increasing harvesting function h(E(r(t))) (Eq. (5.1)). In particular, if all the sub-systems have the same harvesting hypothesis, Eq. (5.1) simplifies, and the optimal harvesting strategy and the maximum yield take the corresponding forms.
Remark. Conclusions in Section 4 of [12] are partly contained in this remark; these conclusions coincide with the results obtained by Li et al. in [24].
Remark. Suppose E(i) ≡ E and σ(i) ≡ σ for any i ∈ S.
Then the optimal harvesting strategy and the maximum yield reduce to those of Eq. (1.2); these claims coincide with the famous results given by Beddington and May [9] in Science. In the deterministic case, the optimal harvesting strategy is E* = a/2 and the maximum yield is Y(E*) = a²/(4b), which coincides with the classical results obtained by Clark [1, 2].
Remark 5.5. From Corollary 1 and these remarks we can see that the harvesting hypothesis does not influence the optimal yield, but does influence the optimal harvesting effort; the environmental noise, however, influences both. We have shown that the optimal harvesting strategy for Eq. (1.4) is E*(i) = A(i)/2, where A(i)/2 is the optimal harvesting strategy for the ith sub-state. Namely, the optimal harvesting strategy for every sub-state is also optimal for the hybrid switching system. Naturally, we want to know whether this conclusion is valid for all Markov-switching systems. From the proof, it holds exactly when the Markov chain is irreducible. Furthermore, the optimal yield for a hybrid system is equal to the weighted average of the optimal yields of the sub-systems, weighted according to the Markov chain's stationary distribution. This remark is illustrated by the examples in the next section.
Example 1. Choose a generator for r(t) such that the stationary distribution of r(t) is π = (π₁, π₂) = (5/12, 7/12). We can then calculate that A(1) = 0.6374 and A(2) = 0.4811, so the optimal harvesting strategy is E*(1) = A(1)/2 = 0.3187, E*(2) = A(2)/2 = 0.2406. It is easy to verify that conditions (3.4)-(3.6) are all satisfied. Therefore, the maximum yield is Y(E*) = Σ_{i=1}^{N} π_i A²(i)/(4b(i)) = 0.076. For the first sub-system considered on its own (Eq. (6.1)), the optimal harvesting strategy is E*₁ = 0.3187 and the optimal yield is Y₁(E*₁) = 0.1015. For the second sub-system considered on its own (Eq. (6.2)), the optimal harvesting strategy is E*₂ = 0.2406 and the optimal yield is Y₂(E*₂) = 0.0578. It is observed that Eq. (6.1) is the first sub-system of Example 1 and Eq. (6.2) is the second sub-system of Example 1.
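The arithmetic of Example 1 can be rechecked in a few lines. One caveat: b(1) = b(2) = 1 is our inference from the reported values Y₁ = A(1)²/4 and Y₂ = A(2)²/4; it is not preserved in the extracted text.

```python
pi = (5 / 12, 7 / 12)        # stationary distribution of r(t)
A = (0.6374, 0.4811)         # A(1), A(2) from Example 1
b = (1.0, 1.0)               # inferred, see the caveat above

E_star = [Ai / 2 for Ai in A]                        # per-state optimal efforts
Y_sub = [Ai ** 2 / (4 * bi) for Ai, bi in zip(A, b)] # sub-system optimal yields
Y = sum(p * y for p, y in zip(pi, Y_sub))            # hybrid yield, pi-weighted

print(E_star)   # ≈ [0.3187, 0.2406]
print(Y_sub)    # ≈ [0.1016, 0.0579] (reported as 0.1015 and 0.0578)
print(Y)        # ≈ 0.076
```

The hybrid optimum Y(E*) ≈ 0.076 is exactly the π-weighted average of the two stand-alone optima, as Remark 5.5 asserts.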
Remark 5.6 yields that the optimal yield for Example 1 always equals the weighted average π₁Y₁(E*₁) + π₂Y₂(E*₂) = (5/12)(0.1015) + (7/12)(0.0578) ≈ 0.076, which verifies the correctness of Remark 5.6. Now, we give some numerical simulations for Eq. (1.4) with the parameters of Example 1. Every sub-system of Eq. (1.4) is an SDE with jumps. We divide the simulation into two parts: the continuous part is based on existing numerical methods for SDEs driven by standard Brownian motion; the jumping part is based on the definition of the stochastic integral (Section 1.9 in [13]). The numerical simulations for each state are based on the method of [14]. Eq. (1.4) then follows different sub-equations, with continuous initial values, on every random interval generated by the Markov chain. Let the step size be 0.01, T_max = 80, and the initial values r(0) = 2, X₀ = 0.6. Fig. 1 shows sample paths of Eq. (1.4); Fig. 2 shows that Eq. (1.4) has a stationary distribution. The curves of the stationary probability density are shifted to the left, since we assume the jumping noise has a negative effect on the biological population. In this paper, we studied the optimization problem for a stochastic hybrid population dynamics with jumps. For this system, we gave an explicit solution and analyzed some asymptotic properties. Using the time-averaging yield and an equivalent approach, we obtained the optimal harvesting effort and the maximum sustainable yield for Eq. (1.4). Our results strictly contain the previous works as special cases. The effect of the Brownian motion on the optimal harvesting is σ²(i)/2, which has long been recognized as the intensity of white noise; the effect of the Markov chain is π = (π₁, π₂, ..., π_N), the stationary distribution of the colored noise; and, from our results, the effect of the jumping noise is lim_{t→∞} (1/t)∫_0^t ∫_Y γ(·, u)λ(du)ds, which we may regard as a characteristic of the jumping noise.
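The stationary distribution π used throughout (the solution of πΓ = 0 with Σπ_i = 1) has a simple closed form for a two-state chain. The generator below is a hypothetical choice consistent with Example 1's π = (5/12, 7/12); the actual matrix did not survive the text extraction.

```python
def stationary_distribution(Q):
    """Stationary distribution of an irreducible two-state Markov chain
    with generator Q: solve pi*Q = 0 subject to pi1 + pi2 = 1."""
    q12, q21 = Q[0][1], Q[1][0]   # off-diagonal switching rates
    total = q12 + q21
    return (q21 / total, q12 / total)

# Hypothetical generator reproducing Example 1's pi = (5/12, 7/12)
pi = stationary_distribution([[-7.0, 7.0], [5.0, -5.0]])
print(pi)   # (5/12, 7/12)
```

Any generator with off-diagonal rates in the ratio 5:7 (here 5 and 7) yields the same π, which is all the optimal-harvesting formulas depend on.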
We would like to mention that the results reported here are not exhaustive; for example, the optimal harvesting problem for an n-dimensional Lotka-Volterra stochastic hybrid population dynamics with jumps, and the optimization problem for a yield function with a discount rate, remain open. We will continue to study these interesting topics.
Appendix A. By the fact that ln x ≤ x − 1 for x > 0, and by the exponential martingale inequality with jumps (Theorem 5.2.9 in [16]), the corresponding exponential bound holds for any α, β, T. Let k ∈ N, 0 < ε < 1, δ > 0, θ > 1 and choose T = kδ, α = εe^{−kδ}, β = θe^{kδ} ln k/ε. Then e^{−αβ} = k^{−θ} and Σ_{k=1}^{∞} k^{−θ} < ∞. The Borel-Cantelli lemma implies that there is a set Ω₁ ⊆ Ω with P(Ω₁) = 1 such that for any ω ∈ Ω₁ there is an integer k(ω) beyond which the bound holds. Condition (3.6) and Proposition 2.4 of Kunita [17] imply that the bracket processes ⟨M₁⟩_t and ⟨M₂⟩_t satisfy the required growth bounds. It follows from the strong law of large numbers for local martingales [22] that there is a set Ω₂ ⊆ Ω with P(Ω₂) = 1 such that for any ω ∈ Ω₂, lim_{t→∞} M₁(t, ω)/t = 0 and lim_{t→∞} M₂(t, ω)/t = 0.
As a result, we have the desired conclusion.

References
[1] Mathematical Bioeconomics: The Optimal Management of Renewable Resources
[2] Mathematical Bioeconomics: The Optimal Management of Renewable Resources
[3] Stability and Complexity in Model Ecosystems
[4] Persistence in stochastic food web models
[5] Stability for multispecies population models in random environments
[6] Principal eigenvalues, topological pressure, and stochastic stability of equilibrium states
[7] The quasi-stationary distribution for small random perturbations of certain one-dimensional maps
[8] Optimal harvesting policy for general stochastic logistic population model
[9] Harvesting natural populations in a randomly fluctuating environment
[10] Dynamical behaviour of Lotka-Volterra competition systems: nonautonomous bistable case and the effect of telegraph noise
[11] The dynamics of a population in a Markovian environment
[12] Stochastic Biomathematics Models
[13] Theory of Stochastic Differential Equations with Jumps and Applications: Mathematical and Analytical Techniques with Applications to Engineering
[14] Numerical simulations and modeling for stochastic biological systems with jumps
[15] Continuous-Time Markov Chains
[16] Lévy Processes and Stochastic Calculus
[17] Itô's stochastic calculus: its surprising power for applications
[18] Stochastic Differential Equations and Diffusion Processes
[19] Degenerate irregular SDEs with jumps and application to integro-differential equations of Fokker-Planck type
[20] Competitive Lotka-Volterra population dynamics with jumps
[21] On a class of additive functionals of Markov processes
[22] A strong law of large numbers for local martingales
[23] Ergodicity for Infinite Dimensional Systems
[24] Optimal harvesting policy for stochastic logistic population model
[25] Systèmes d'équations différentielles d'oscillations non linéaires

The authors would like to thank the editor and the referees for their suggestions, which improved the presentation of this paper. This research was partially supported by grants from the National Natural Science Foundation of P.R. China.