title: Finding Provably Optimal Markov Chains
authors: Spel, Jip; Junges, Sebastian; Katoen, Joost-Pieter
date: 2021-03-01
journal: Tools and Algorithms for the Construction and Analysis of Systems
DOI: 10.1007/978-3-030-72016-2_10

Abstract. Parametric Markov chains (pMCs) are Markov chains with symbolic (aka: parametric) transition probabilities. They are a convenient operational model to treat robustness against uncertainties. A typical objective is to find the parameter values that maximize the reachability of some target states. In this paper, we consider automatically proving robustness, that is, an ε-close upper bound on the maximal reachability probability. The result of our procedure actually provides an almost-optimal parameter valuation along with this upper bound. We propose to tackle these ETR-hard problems by a tight combination of two significantly different techniques: monotonicity checking and parameter lifting. The former builds a partial order on states to check whether a pMC is (locally or globally) monotonic in a certain parameter, whereas parameter lifting is an abstraction technique based on the iterative evaluation of pMCs without parameter dependencies. We explain our novel algorithmic approach and experimentally show that we significantly improve the time to determine almost-optimal synthesis.

Background and problem setting. Probabilistic model checking [3, 20] is a well-established field and has various applications, but assumes probabilities to be fixed constants. To deal with uncertainties, symbolic parameters are used. Parametric Markov chains (pMCs, for short) define a family of Markov chains with uncountably many family members, called instantiations, by having symbolic (aka: parametric) transition probabilities [10, 22].
We are interested in determining optimal parameter settings: which instantiation best meets a given objective? The typical objective is to maximize the reachability probability of a set of target states. This question is inspired by practical applications such as: what are the optimal parameter settings in randomised controllers to minimise power consumption?, and what is the optimal bias of coins in a randomised distributed algorithm to maximise the chance of achieving mutual exclusion? For most applications, it suffices to find parameters that attain a given quality of service that is ε-close to the unknown optimal solution. More precisely, this paper concentrates on automatically proving ε-robustness, i.e., determining an upper bound which is ε-close to the maximal reachability probability. As a by-product, our procedure actually provides an almost-optimal parameter valuation too.

Existing parameter synthesis techniques. Efficient techniques have been developed in recent years for the feasibility problem: given a parametric Markov chain and a reachability objective, find an instantiation that reaches the target with at least a given probability. To solve this problem, it suffices to "guess" a correct family member, i.e., a correct parameter instantiation. Verifying the "guessed" instantiation against the reachability objective is readily done using off-the-shelf Markov chain model-checking algorithms. Most recent progress is based on advanced techniques that make informed guesses: this ranges from plain sampling techniques [14] and guided sampling such as particle swarm optimisation [7], over greedy search [24], to solving different variants of a convex optimisation problem around a sample [8, 9]. Sampling has been accelerated by reusing previous model-checking results [25], or by just-in-time compilation of the parameter function [12]. These methods are inherently inadequate for finding optimal parameter settings.
To the best of our knowledge, optimal parameter synthesis has received scant attention so far. A notable exception is the analysis (e.g., using SMT techniques) of rational functions, typically obtained by some form of state elimination [10, 12, 15], that symbolically represent reachability probabilities in terms of the parameters. These functions can be exponential in the number of parameters [16], and this approach becomes infeasible for more than two parameters. Parameter lifting [5, 6, 25] remedies this by using an abstraction technique but, due to an exponential blow-up caused by region splitting, is limited to a handful of parameters. The challenge is to solve optimal parameter synthesis problems with more parameters.

Approach. We propose to tackle the optimal synthesis problem by a deep integration of two seemingly unrelated techniques: monotonicity checking [27] and parameter lifting [25]. The former builds a partial order on the state space to check whether a pMC is (locally or globally) monotonic in a certain parameter, while the latter is an abstraction technique that "lifts" the parameter dependencies, obtaining interval MCs [17, 21], and solves these in an iterative manner. To construct an efficient combination, we extend both methods such that they profit from each other. This is done by combining them with a tailored divide-and-conquer component, see Fig. 1. To prove bounds on the induced reachability probability, parameter lifting has been the undisputed state of the art, despite the increased attention that parameter synthesis has received over recent years. This paper improves parameter lifting with more advanced reasoning capabilities that involve properties of the derivative rather than the actual probabilities. These reasoning methods enable reducing the exponent of the inherently exponential-time procedure. This conceptual advantage is joined with various engineering efforts.
Parameter lifting is accelerated by using side products of monotonicity analysis such as local monotonicity and shrunken parameter regions. Furthermore, bounds obtained by parameter lifting are used to obtain a cheap rule accelerating the monotonicity checker. The interplay between the two advanced techniques is tricky and requires careful treatment. Note that we are not the first to exploit monotonicity in the context of pMCs. Hutschenreiter et al. [16] showed that the complexity of model checking (a monotone fragment of) PCTL on monotonic pMCs is lower than on general pMCs. Pathak et al. [24] provided an efficient greedy approach to repair monotonic pMCs. Recently, Gouberman et al. [13] used monotonicity for hitting probabilities in perturbed continuous-time MCs.

Experimental results. We realised the integrated approach on top of the Storm [11] model checker. Experiments on several benchmarks show that optimal synthesis is possible: (1) on benchmarks with up to a few hundred parameters, (2) on benchmarks that cannot be handled without monotonicity, (3) while accelerating pure parameter lifting by up to two orders of magnitude. Our approach induces a bit of overhead on small instances for some benchmarks, and starts to pay off when increasing the number of parameters.

Main contribution. In summary, the main contribution of this paper is a tight integration of parameter lifting and monotonicity checking. Experiments indicate that this novel combination substantially improves upon the state of the art in optimal parameter synthesis.

Organisation of the paper. Section 2 provides the necessary technical background and formalises the problem. Section 3 explains the approach, in particular the meaning of the arrows in Fig. 1. Section 4 discusses how state bounds can be exploited in the monotonicity checker. Section 5 details how to exploit local monotonicity in parameter lifting. Section 6 then considers the tight interplay via the divide-and-conquer method.
Section 7 reports on the experimental results of our prototypical implementation in Storm, while Section 8 concludes the paper.

A probability distribution over a finite or countably infinite set X is a function μ : X → [0, 1] ⊆ R with Σ_{x∈X} μ(x) = 1. The set of all distributions on X is denoted by Distr(X). Let a ∈ R^n denote (a_1, ..., a_n). The set of multivariate polynomials over ordered variables x = (x_1, ..., x_n) is denoted Q[x]. For a polynomial f and variable x, we write x ∈ f if the variable occurs in the polynomial f. An instantiation for a finite set V of real-valued variables is a function u : V → R. We often denote u as a vector u ∈ R^n with u_i := u(x_i). A pMC M = (S, s_I, T, V, P) consists of a finite set S of states with initial state s_I ∈ S, target states T ⊆ S, a finite set V of parameters, and a transition function P : S × S → Q[x]. Applying an instantiation u to M yields the instantiated MC M[u] by replacing each polynomial f in M by f(u). An instantiation u is well-defined if M[u] is an MC, i.e., P(s, ·)(u) ∈ Distr(S) for every state s; it is graph-preserving if additionally the topology is preserved, i.e., P(s, s′) ≠ 0 implies P(s, s′)(u) ≠ 0 for all states s and s′. A set of instantiations is called a region. A region R is well-defined (graph-preserving) if u is well-defined (graph-preserving) for all u ∈ R. In this paper, we consider only graph-preserving regions. Let Pr^{s→T}(u) denote the probability to reach T from s in M[u]. The closed form of Pr^{s→T} on a graph-preserving region is a rational function over V, i.e., a fraction of two polynomials over V. On a graph-preserving region, the function Pr^{s→T} is continuously differentiable [25]. We call Pr^{s→T}_M the solution function, and for conciseness, we often omit the subscript M. Graph-preserving instantiations u, u′ preserve zero-one probabilities, i.e., Pr^{s→T}(u) = 0 implies Pr^{s→T}(u′) = 0, and analogously for =1. We then simply write Pr^{s→T} = 0 (or =1). Let ⊤ (⊥) denote the set of states s ∈ S with Pr^{s→T} = 1 (Pr^{s→T} = 0). By a standard preprocessing [4], we may safely assume a single ⊤ and ⊥ state.

Problem statement. This paper is concerned with the following questions for a given pMC M with target states T and region R:

Optimal synthesis. Find the instantiation u* such that Pr^{s_I→T}(u*) = max_{u∈R} Pr^{s_I→T}(u).

ε-Robustness. Given tolerance ε ≥ 0, find an instantiation u* and a bound λ such that max_{u∈R} Pr^{s_I→T}(u) ≤ λ ≤ Pr^{s_I→T}(u*) + ε.

The optimal synthesis problem is ETR-hard [28], i.e., as hard as finding a root of a multivariate polynomial.
It is thus NP-hard and in PSPACE. The same applies to ε-robustness. The value of λ can be viewed as the optimal reachability probability of T, up to the robustness tolerance ε, over all possible parameter values, while u* is the instantiation that maximises the probability to reach T. Like [28], we assume pMCs to be simple, i.e., P(s, s′) ∈ {x, 1−x | x ∈ V} ∪ Q for all s, s′ ∈ S and Σ_{s′} P(s, s′) = 1. Theoretically, the above problem for simple pMCs is as hard as for general pMCs, and practically, most pMCs are simple. For simple pMCs, the graph-preserving instantiations are in (0, 1)^{|V|}. Regions are assumed to be well-defined, rectangular and closed, i.e., a region is a Cartesian product of closed intervals, one for each parameter.

Example 1. Fig. 2(a) depicts a pMC. A region R is given by p ∈ [1/4, 1/2]. An instantiation u = {p → 1/3} ∈ R yields the MC in Fig. 2(b).

To solve the problem statement, we consider an iterative method which analyses regions and, if necessary, splits these regions. In particular, we combine two approaches, parameter lifting and monotonicity checking, as shown in Fig. 1. We consider local and global monotonicity. We start with defining the latter. A function f on region R is monotonic increasing in variable x, denoted f↑^R_x, if ∂f/∂x (u) ≥ 0 for all u ∈ R. The pMC M = (S, s_I, T, V, P) is monotonic increasing in parameter x ∈ V on graph-preserving region R, written M↑^R_x, if Pr^{s_I→T} ↑^R_x. Monotonic decreasing, written M↓^R_x, is defined analogously.

[Figure: (a) the pMC M3 with states s0, s1 and transition probabilities p, 1−q, 1−p, q; (b) M3 under the region R with p ∈ [1/3, 1/2] and q ∈ [2/5, 3/4], with transition intervals [1/3, 1/2], [1/4, 3/5], [1/2, 2/3], [2/5, 3/4].]

Let succ(s) = {s′ ∈ S | P(s, s′) ≠ 0} be the set of direct successors of s. Given the recursive equation Pr^{s→T}(u) = Σ_{s′∈succ(s)} P(s, s′)(u) · Pr^{s′→T}(u), parameter x is locally monotonic increasing at state s on R if Σ_{s′∈succ(s)} ∂P(s, s′)/∂x (u) · Pr^{s′→T}(u) ≥ 0 for all u ∈ R. Rather than checking global monotonicity, the monotonicity checker determines a subset of the locally monotone state-parameter pairs. Such pairs intuitively capture monotonicity of a parameter only locally at a state s.
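The derivative condition for (global) monotonicity above can be illustrated with a small sampling-based check. The sketch below uses a hypothetical two-parameter solution function f; sampling the sign of a numerical partial derivative on a grid is only a necessary test, not a proof of monotonicity.

```python
# Sketch: test the sign of df/dx_i on a grid over a rectangular region.
# The solution function f is hypothetical; in the tool it would be the
# reachability probability Pr^{s_I -> T} of the pMC.
import itertools

def partial(f, point, i, h=1e-6):
    """Central-difference approximation of df/dx_i at `point`."""
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(hi) - f(lo)) / (2 * h)

def monotone_incr_on_grid(f, region, i, n=5):
    """Necessary check: df/dx_i >= 0 at all points of an n^d grid."""
    grids = [[a + k * (b - a) / (n - 1) for k in range(n)] for a, b in region]
    return all(partial(f, p, i) >= -1e-9 for p in itertools.product(*grids))

# Hypothetical solution function of a two-parameter pMC:
f = lambda u: u[0] * (1 - u[1]) + (1 - u[0]) * u[1]
R = [(1/3, 1/2), (2/5, 3/4)]           # p in [1/3,1/2], q in [2/5,3/4]
print(monotone_incr_on_grid(f, R, 0))  # df/dp = 1 - 2q changes sign: False
print(monotone_incr_on_grid(f, R, 1))  # df/dq = 1 - 2p >= 0 on R: True
```

A failed grid check refutes monotonicity on R; a passed check merely makes it a candidate for the sound rules discussed in Section 4.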
Thus, while global monotonicity considers the derivative of the entire solution function, local monotonicity (in s) only considers the derivative of the first transition (emanating from s). Local monotonicity of parameter x in every state implies global monotonicity of x, as shown in [27]. As checking global monotonicity is co-ETR-hard [27], a practical approach is to check sufficient conditions for monotonicity. These conditions are based on constructing a pre-order on the states of the pMC; this is explained in detail in Section 4.

Example 2. The pMC M1 in Fig. 2(a) is locally monotonic increasing in p at s0 and locally monotonic decreasing in p at s1. From this, we cannot conclude anything about global monotonicity of p on R. Indeed, the pMC is not globally monotonic on R. M1 is globally monotonic on R′ = {u(p) ∈ [1/10, 1/2]}, but this cannot be concluded from the statement above. Contrarily, the pMC M2 in Fig. 2(c) is locally monotonic increasing in p at both s0 and s1, and is therefore globally monotonic increasing in p.

The key idea of parameter lifting [25] is to drop all parameter dependencies (parameters that occur at multiple states in a pMC) by introducing fresh parameters. The outcome is an interval Markov chain [17, 21], which can be considered a special case of pMCs in which no parameter occurs at multiple states. Formally, a pMC is an interval MC (iMC) if occur(s) ∩ occur(s′) = ∅ for all states s ≠ s′, where occur(s) denotes the parameters occurring in the transitions emanating from s. All iMCs in this paper are simple. We typically label the transitions emanating from state s in an iMC with the interval of values they can take on R.

Example 3. The pMC in Fig. 3(a) is an iMC. For a fixed R, the typical notation is given in Fig. 3(b). For the pMC M1 in Fig. 2(a), the parameter p occurs at states s0 and s1, so this pMC is not an iMC.

Extremal reachability probabilities on iMCs are attained at the extremal values of a region. Formally [25], for each state s and region R in pMC M:

max_{u∈R} Pr^{s→T}_M(u) ≤ max_{u∈relax(R)} Pr^{s→T}_{relax(M)}(u), (1)

where relax(M) denotes the iMC obtained by dropping the parameter dependencies of M, and relax(R) the correspondingly relaxed region. This result is a direct consequence of local monotonicity at all states implying global monotonicity.
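The relaxation step can be sketched on a toy encoding of a simple pMC. The dictionary format (state → list of (probability, successor) pairs, with probabilities given as constants or as the strings 'x' / '1-x') and all state and parameter names are hypothetical; the point is only that every state receives its own fresh copy of its parameter, so no parameter occurs at two states any longer.

```python
# Sketch of relax(M): drop parameter dependencies by renaming the parameter
# used at each state into a fresh, state-indexed copy.

def relax(pmc, region):
    relaxed, relaxed_region = {}, {}
    for s, transitions in pmc.items():
        relaxed[s] = []
        for prob, succ in transitions:
            if isinstance(prob, str):                 # parametric edge
                base = prob[2:] if prob.startswith('1-') else prob
                fresh = f'{base}_{s}'                 # fresh copy for state s
                relaxed_region[fresh] = region[base]  # same interval as base
                prob = f'1-{fresh}' if prob.startswith('1-') else fresh
            relaxed[s].append((prob, succ))
    return relaxed, relaxed_region

# An M1-like toy pMC: the parameter p occurs at both s0 and s1.
pmc = {'s0': [('p', 's1'), ('1-p', 'goal')],
       's1': [('p', 'goal'), ('1-p', 'sink')]}
rel, rel_region = relax(pmc, {'p': (0.1, 0.5)})
print(rel['s0'])           # [('p_s0', 's1'), ('1-p_s0', 'goal')]
print(sorted(rel_region))  # ['p_s0', 'p_s1']
```

The relaxed region assigns each fresh copy the interval of its original parameter, matching the definition of relax(R).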
The extremal values for the reachability probabilities in the obtained iMCs are computed by interpreting the iMCs as MDPs and applying off-the-shelf MDP model checking. We denote the right-hand side of (1) as the upper bound on R, denoted U_R(s). Analogously, we define a lower bound L_R(s).

Figure 4 shows how the extremal value for region R_ι, pMC M, reachability property ϕ and precision ε can be computed using only parameter lifting [25]. This paper extends this iterative approach to include monotonicity checking. The main idea is to analyse regions and split them if the result is inconclusive. The approach uses a queue Q of regions that need to be checked and the current extremal value CurMax found so far. In particular, we maintain a lower bound on the maximum via CurMax and know a (potentially trivial) upper bound: (CurMax + ε) ≥ max_{R′∈Q} U_{R′}(s_I). We iteratively check regions and improve both bounds until a satisfactory solution is found. Initially, the queue only contains R_ι. For a selected R from the queue, we compute an upper bound U_R with parameter lifting. If U_R at the initial state does not exceed CurMax, the region R is discarded. In particular, the maximum so far is bounded by max_{R′∈Q∪{R}} U_{R′}(s_I). If this upper bound is below CurMax + ε, we are done and return CurMax together with the instantiation u associated with CurMax. Otherwise, we continue and split R into smaller regions. By default, parameter lifting splits R along all dimensions. This algorithm converges in the limit [25].

As discussed in Section 3.1, we aim to analyse whether, for a given region R, parameter x is locally monotonic at state s. The key ingredient is a pre-order on the states of the pMC at hand that is used for checking sufficient conditions for being locally monotonic. We define the pre-order and recap the "cheap" rules for efficiently determining the pre-order as adopted from [27]. We add a new, simple rule to this repertoire that lets us avoid the computationally "expensive" rules using assumptions from [27].
The information needed to apply this new rule readily comes from parameter lifting, as we will see.

Ordering states for local monotonicity. Let us consider a conceptual example showing how a pre-order on states can be used for determining local monotonicity.

Example 6. Consider the pMC M2 in Fig. 2(c). We reason backwards that both states are locally monotone increasing in p. First, observe that ⊤ has a higher probability to reach the target (namely 1) than ⊥ (namely 0). Now, in s1, increasing p will move more probability mass to ⊤, and hence p is locally monotone at s1. Furthermore, we know that the reachability probability from s1 lies between those of ⊥ and ⊤. Now, for s0 we can use that increasing p moves more probability mass to s1, which we know has a higher probability to reach the target than ⊥.

As in [27], we determine local monotonicity by ordering states according to their reachability probability. A reachability order ⪯_{R,T} is a relation on the states such that s ⪯_{R,T} s′ implies Pr^{s→T}(u) ≤ Pr^{s′→T}(u) for all u ∈ R. The order ⪯_{R,T} is called exhaustive if the reverse implication also holds. The relation ⪯_{R,T} is a reflexive (aka: non-strict) pre-order. The exhaustive reachability order is the union of all reachability orders, and always exists. Unless stated differently, let ⪯ denote the exhaustive reachability order. If the successor states of a state s are ordered, we can conclude local monotonicity in s. This result suggests looking for a so-called "sufficient" reachability order: a reachability order is sufficient for parameter x if for all states s with occur(s) = {x} and s1, s2 ∈ succ(s) it holds that s1 ⪯ s2 ∨ s2 ⪯ s1. Phrased differently, the reachability order is sufficient for x ∈ V if (succ(s), ⪯) is a total order for all s that have transitions labelled with x. Observe that, in contrast to an exhaustive order, a sufficient order need not exist.

Ordering states efficiently. Def. 6 provides a conceptually simple scheme to order states s1 and s2: compute the rational functions Pr^{s1→T} and Pr^{s2→T}, and compare them.
As the size of these multivariate rational functions can be exponential in the number of parameters [16], this is not practically viable. To avoid this, [27] has identified a set of rules that provide sufficient criteria to order states. Some of these rules are conceptually based on the underlying graph of a pMC and are computationally cheap; other rules reason about (a partial representation of) the full rational function and are computationally expensive.

Example 7. Using bounds avoids expensive rules: consider M4 in Fig. 5(a). Using the solution functions p² + (1−p) · q and q · (1−q) for s1 and s2 yields s2 ⪯ s1 on R. Such a rule is expensive, but the cheaper graph-based rules analogous to Ex. 6 are not applicable. However, when we use bounds from parameter lifting, we obtain U_R(s2) = 3/8 and L_R(s1) = 1/2; we observe U_R(s2) ≤ L_R(s1) and thus s2 ⪯ s1 on R. Bounds also simplify graph-based reasoning, in particular in the presence of cycles. Consider M5: as L_R(s3) ≥ U_R(s4), with reasoning similar to Ex. 6, it follows that s2 ⪯ s1, and we immediately get results about monotonicity.

Our aim is to avoid applying the expensive rules from [27] by imposing a new, and thanks to parameter lifting cheap, rule. To obtain this rule, we assume for state s and region R to have bounds L_R(s) and U_R(s) at our disposal satisfying L_R(s) ≤ Pr^{s→T}(u) ≤ U_R(s) for all u ∈ R. Such bounds can trivially be assumed to be 0 and 1, respectively, but the idea is to obtain tighter bounds by exploiting the parameter lifter. This will be further detailed in Section 5. A simple observation on these bounds yields a cheap rule (provided these bounds can be easily obtained): for s1, s2 ∈ S and region R, U_R(s2) ≤ L_R(s1) implies s2 ⪯_{R,T} s1. In the remainder of this section, we elaborate some technical details.

Algorithmic reasoning. The pre-order is stored by a representation of its Hasse diagram, referred to as the RO-graph. Evaluating whether two states are ordered amounts to a graph search in the RO-graph.
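The new cheap rule amounts to a constant-time comparison of bounds. A minimal sketch, using the bound values from Ex. 7 (U_R(s2) = 3/8, L_R(s1) = 1/2); the dictionary representation of the bounds is hypothetical:

```python
# Sketch of the cheap rule: order two states whenever their parameter-
# lifting bounds do not overlap.

def bound_rule(L, U, s1, s2):
    """Return an ordered pair (lower, higher) or None if inconclusive."""
    if U[s2] <= L[s1]:
        return (s2, s1)   # s2 never exceeds s1 anywhere on R
    if U[s1] <= L[s2]:
        return (s1, s2)
    return None           # bounds overlap: fall back to the other rules

L = {'s1': 1/2, 's2': 0.0}
U = {'s1': 1.0, 's2': 3/8}
print(bound_rule(L, U, 's1', 's2'))   # ('s2', 's1'): s2 precedes s1
```

If the rule returns an ordered pair, an edge can be added to the RO-graph without touching any rational function; if it returns None, the cheap graph-based or expensive rules of [27] still apply.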
We start off with the initial order ⊥ ⪯ ⊤. Then we attempt to apply one of the cheap rules to a state s. Lemma 2 provides us with more potential to apply a cheap rule. The typical approach is to do this in a reverse topological order over the RO-graph, such that the successors of s are already ordered as much as possible. If the successor states of s are ordered, then s can be added as a vertex, and directed edges can be added between s and its successors. Otherwise, state s is added between ⊥ and ⊤. This often allows for reasoning analogous to the example. To deal with strongly connected components, rules exist [27] that add states to the order even when not all successors are in the graph. If no cheap rule can be applied, more expensive rules using the rational functions from above or SMT solvers are applied.

Recall that our aim is to compute some λ ≥ max_{u∈R} Pr^{s→T}_M(u) − ε for some fixed region R. In order to do so, we compute λ := max_{u∈relax(R)} Pr^{s→T}_{relax(M)}(u) on the iMC relax(M) obtained by relaxing the pMC M. We discuss how to speed up this computation using local monotonicity information. In the remainder, let D denote relax(M) and I denote relax(R). To maximise the probability to reach T, in every state s either the lower or the upper bound of its parameter x_s has to be chosen. This induces O(2^{|S|}) choices. They can be efficiently navigated by interpreting these choices as nondeterministic choices, i.e., by interpreting the iMC as a Markov decision process (MDP) [25].

Local monotonicity helps. Assume local monotonicity establishes s1 ⪯ s2, i.e., the reachability probability from s2 is at least as high as from s1. To maximise the reachability probability from s, the parameter x_s should be minimised. Conversely, if s2 ⪯ s1, parameter x_s should be maximised. Thus, every local monotonicity result halves the number of vertices that we are maximising over.

Consider the iMC in Fig. 3(a), which is the relaxation of the pMC M1 in Fig. 2(a).
There are four combinations of lower and upper bounds that need to be investigated to compute the upper bound. Using local monotonicity, we deduce that q should be as low as possible and p as high as possible. Rather than evaluating an MDP, we thus obtain the same upper bound on the reachability probability in M1 by evaluating a single parameter-free Markov chain.

Accelerating value iteration. Parameter lifting [25] creates a single MDP (a comparatively expensive operation) and instantiates this MDP based on the region R to be checked. For computing the bound λ, specifically, it uses value iteration. Roughly, this means that for each state we start with either its lower or upper bound. The instantiated MC is then checked. Then, all bounds that can be improved by switching from lower to upper bound, or vice versa, are swapped. This procedure terminates with the optimal assignment of all bounds. We exploit local monotonicity in this value-iteration procedure by fixing the chosen bounds at locally monotonic states.

In this section, we give a more detailed account of our approach, i.e., we zoom in on Fig. 1, resulting in Fig. 6. In particular, we detail the divide-and-conquer block. This loop is a refinement (indicated in red in Fig. 6) of Fig. 4. We first give an overview, before discussing some aspects in greater detail.

Overall algorithm. The approach considers extended regions, i.e., a region R is equipped with state bounds L_R(s) and U_R(s) such that L_R(s) ≤ Pr^{s→T}_M(u) ≤ U_R(s) for every state s, and with monotonicity information about the monotonically increasing (and decreasing) parameters on R. Initially, the input region R is extended with L_R(s) = 0 and U_R(s) = 1 for every s, and empty monotonicity information. Additionally, we initialise the conservative approximation CurMax of the maximum probability so far as 0. Extended regions are stored in the priority queue Q, where U_R(s_I) is used as priority. We discuss details below.
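The value-iteration acceleration just described can be sketched as a small preprocessing step that fixes choices before the MDP solver runs. The '+'/'-' encoding of local monotonicity and the state names below are hypothetical:

```python
# Sketch: fix the interval endpoint ('upper'/'lower') for every state whose
# parameter is locally monotonic; value iteration then only optimises over
# the remaining states.

def fix_choices(local_mono, maximize=True):
    """local_mono: state -> '+' (locally incr), '-' (locally decr), None."""
    choice = {}
    for s, m in local_mono.items():
        if m == '+':
            choice[s] = 'upper' if maximize else 'lower'
        elif m == '-':
            choice[s] = 'lower' if maximize else 'upper'
        # m is None: the choice stays with the MDP solver
    return choice

# Two of three states are locally monotonic: only s2 is left to the solver.
print(fix_choices({'s0': '+', 's1': '-', 's2': None}))
# {'s0': 'upper', 's1': 'lower'}
```

Every fixed state removes one binary choice, so with many locally monotonic states the remaining optimisation is over far fewer than 2^{|S|} combinations.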
Once initialised, we start an iterative process to update the conservative approximations L_R and U_R. First, (1) a region R and the associated reachability order, stored as an RO-graph, is taken from the queue Q, and (2) its monotonicity is computed while using the annotated bounds L_R and U_R. Let X↑^R denote the globally monotonically increasing parameters on R and, similarly, X↓^R the decreasing ones. For brevity, we omit the superscript R in the following. As a next step, we (3) shrink the region based on global monotonicity. We define the region Shrink_{X↑,X↓}(R) as follows: Shrink(R)(x) is the upper bound of R(x) if x ∈ X↑, the lower bound of R(x) if x ∈ X↓, and Shrink(R)(x) = R(x) otherwise. In the remainder of this section, let R′ denote Shrink_{X↑,X↓}(R). Observe that we can safely discard the instantiations in R \ R′, as max_{u∈R} Pr^{s→T}_M(u) = max_{u∈R′} Pr^{s→T}_M(u). Next, we (4) analyse the region R′ to get bounds L_{R′}, U_{R′} using parameter lifting together with the local monotonicity information from the monotonicity check. We make two observations. First, it holds that L_R(s) ≤ L_{R′}(s) and U_{R′}(s) ≤ U_R(s) for every s: thus, there is no regret in analysing R′ rather than R. Second, if all parameters are globally monotone, the region R′ is a singleton and straightforward to analyse. If (5) U_{R′}(s_I) ≤ CurMax, then we discard R′ altogether and go to (1). Otherwise, we (6) guess a candidate u ∈ R′, and set CurMax to max(CurMax, Pr^{s_I→T}_M(u)). If (7) CurMax + ε ≥ max_{R′′∈Q∪{R′}} U_{R′′}(s_I), then we have solved our problem statement and return CurMax. Otherwise, we cannot yet give a conclusive answer and need to refine our analysis. To that end, we (8) split the region R′ into smaller (rectangular) regions R1, ..., Rn. Note that these sub-regions first inherit the bounds of the region R′; their bounds are refined in a subsequent iteration (if any).
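Step (3) above is a one-pass transformation of the region. A minimal sketch, with hypothetical parameter names and intervals:

```python
# Sketch of Shrink: collapse every globally monotone parameter to the
# endpoint at which the maximum is attained, so those dimensions never
# have to be split again.

def shrink(region, incr, decr):
    """region: param -> (lo, hi); incr/decr: globally monotone parameters."""
    return {x: ((hi, hi) if x in incr else
                (lo, lo) if x in decr else (lo, hi))
            for x, (lo, hi) in region.items()}

R = {'p': (0.25, 0.5), 'q': (0.4, 0.75)}
print(shrink(R, incr={'p'}, decr=set()))
# {'p': (0.5, 0.5), 'q': (0.4, 0.75)}
```

If every parameter lands in incr or decr, the shrunken region is a single point, matching the observation that a fully monotone region is straightforward to analyse.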
Termination in the limit (i.e., convergence of the lower and upper bound to the limit) follows from the termination of monotonicity checking and the termination of the loop in Fig. 4.

Incrementality. A key aspect in tuning iterative approaches is the concept of incrementality, i.e., reusing previously computed information in later computation steps. Parameter lifting is already incremental [25], reusing the MDP structure in an efficient manner. Let us address incrementality for the monotonicity checker. Notice that all monotonicity information and all bounds that are computed for region R carry over to any R′ ⊆ R. In particular, s ⪯_{R,T} s′ implies s ⪯_{R′,T} s′. Furthermore, our monotonicity checker may give up in an iteration if no cheap rules to determine monotonicity can be applied. In that case, we annotate the current reachability order such that, after refining bounds in a subsequent iteration, we can quickly check where we gave up in the last iteration and whether the refined bounds allow progress in constructing the reachability order. Notice that, in principle, we have to duplicate the order for each region. However, we do this only as long as the monotonicity checker has not stabilized. The checker stabilizes, e.g., if an order is sufficient. Once the checker has stabilized, we do not duplicate the order anymore (as no more local or global monotonicity can be deduced).

Heuristics. Our approach allows for several choices in the implementation. Whereas the correctness of the approach does not depend on how these choices are resolved, they have a significant influence on the performance. We discuss (what we believe to be) the most important choices, and how we resolved them in the current implementation.

Initialising CurMax. Previously, Storm was applicable only to few parameters and generously initialized CurMax by sampling all vertices V(R), which is exponential in the number of parameters. To scale to more parameters, we discard this sampling.
Instead, we sample for each parameter independently to find out which parameters are definitely not monotone. Naturally, we skip parameters already known to be monotone. We select sample points as follows. We distribute 50 points evenly along the dimension of the parameter. All other parameter values are fixed: non-monotone parameters are set to the middle point of their interval (as described by the region), and monotone parameters are set to the upper (lower) bound when possibly monotone increasing (decreasing).

Updating CurMax. To prove that CurMax is close to the maximum, it is essential to find a large value for CurMax fast. In our experience, sampling at too many places within regions yields significant overhead, but taking L(s_I) is too pessimistic a way to update CurMax. To update CurMax, we select a single u ∈ R′ in the middle of region R′. As we may have shrunk the region R, the middle of R′ need not coincide with the middle of R, which yields behavior different from the vanilla refinement loop.

How and where to split? There are two important splitting decisions to be made. First, we need to select the dimensions (aka: parameters) in which we split. Second, we need to decide where to split along these dimensions. We had little success with trivial attempts to split at better places, so the least informative split, in the middle, remains our choice. However, we have changed in which dimensions to split. Naturally, we do not (need to) split in monotonic parameters. Previously, parameter lifting split in every dimension at once. Let us illustrate that this quickly becomes infeasible: assume 10 parameters. Splitting the initial region once yields 1024 regions. Splitting half of them again yields more than 500,000 regions. Instead, we use region estimates, heuristic values for every parameter, based on the implementation of [19].
These estimates, provided by the parameter lifter, essentially consider how well the policy on the MDP (selecting upper or lower bounds in the iMC) agrees with the dependencies induced by a parameter: the more it agrees, the lower the value. The key idea is that one obtains tighter bounds if the policy adheres to the dependencies induced by the parameters. We split in the dimension with the largest estimate. If the region estimate is smaller than 10^-4, then we split in the dimension of R′ with the widest interval.

Priorities in the region queue. Contrary to [25], we want to find the extremal value within the complete region, rather than partitioning the parameter space. The standard technique splits based on the size of the region, de facto a breadth-first search. When we split a region, we prioritize the sub-regions R′ ⊆ R with U_R(s_I), as U_{R′}(s_I) ≤ U_R(s_I). We use the age of a region to break ties. Here, a wide range of exploration strategies is possible. To avoid overfitting, we refrain in the experiments from weighting different aspects of the region, but the current choice is likely not the final answer.

Setup. We investigate the performance of the extended divide-and-conquer approach presented in Fig. 6. We have implemented the algorithm explained above in the probabilistic model checker Storm [11]. We compare its performance with vanilla parameter lifting, outlined in Fig. 4, as baseline. Both versions use the same underlying data structures and version of Storm. All experiments were executed on a single core of an Intel Xeon Platinum 8160 CPU. We used neither parallel processing nor randomization. We used a time-out of 1800s and a memory limit of 32GB. We exclude model-building times from all experiments and emphasize that they coincide for the vanilla and new implementations.

Benchmarks and results. The common benchmarks Crowds, BRP, and Zeroconf have only globally monotonic parameters (and only two).
Using monotonicity, they become trivial. The structure of NAND and Consensus makes them not amenable to monotonicity checking, and the performance mostly resembles the baseline. We selected additional benchmarks from [2], [23], and [18], see below. The models from the latter two sources are originally formulated as partially observable MDPs and were translated into pMCs using the approach in [19]. Table 1 summarizes the results for benchmarks identified by their name and instance. We list the number of states, transitions and parameters of the pMC. For each benchmark, we consider two values for ε: ε=0.05 and ε=0.1. For each ε, we report the time t required and the number (i) of iterations that the integrated loop and the baseline require. For the integrated loop, we additionally provide the number (i_b) of extra (lower-bound) parameter lifting invocations needed to assist the monotonicity checker.

Discussion of the results. We make the following observations.
- NRP: this model is globally monotonic in all its parameters. Our monotonicity checker finds this monotonicity. The integrated approach is an order of magnitude faster on all instances, scaling to more parameters.
- Evade: this model is globally monotonic in some of its parameters. Our monotonicity checker can find this monotonicity for a subset. The integrated approach is faster on all instances, as a better initial CurMax is guessed based on the results from the monotonicity checker.
- Herman's protocol: this is a less favourable benchmark for the integrated approach, as only one parameter is not globally monotonic. The calculation of the bounds for the monotonicity checking yields significant overhead.
- Maze: this model is globally monotonic in all its parameters. This can be found directly by the monotonicity checker, so we are left to check a single valuation. This valuation is also provably the optimal valuation.
In general, for ε=0.1, the number of regions that need to be considered is relatively small, and guessing an (almost) optimal value is not that important. This means that the results are less sensitive to changes in the heuristics. For ε=0.05, it is significantly trickier to get this right. Monotonicity helps us in guessing a good initial point. Furthermore, it tells us in which parameters we should and should not split, and thereby prevents unnecessary splitting in some of the parameters.

This paper has presented a new technique for tackling the optimal synthesis problem: which instance of a parametric Markov chain satisfies a reachability objective in an optimal manner? The key concept is a deep interplay between parameter lifting, so far the favourable technique for this problem, and monotonicity checking. Experiments showed encouraging results: speed-ups of up to two orders of magnitude on various benchmarks, and the ability to handle an increased number of parameters. Future work includes integrating advanced sampling techniques and applying this approach to other application areas, such as optimal synthesis and monotonicity in probabilistic graphical models [26] and hyper-properties in security [1].
References

[1] Parameter synthesis for probabilistic hyperproperties
[2] Automated fine tuning of probabilistic self-stabilizing algorithms
[3] Model checking probabilistic systems
[4] Principles of Model Checking
[5] Precise parameter synthesis for stochastic biochemical systems
[6] Shepherding hordes of Markov chains
[7] Model repair for Markov decision processes
[8] Sequential convex programming for the efficient verification of parametric MDPs
[9] Synthesis in pMDPs: A tale of 1001 parameters
[10] Symbolic and parametric model checking of discrete-time Markov chains
[11] A Storm is coming: A modern probabilistic model checker
[12] Accelerated model checking of parametric Markov chains
[13] Markov chains with perturbed rates to absorption: Theory and application to model repair
[14] Synthesis for PCTL in parametric Markov decision processes
[15] Probabilistic reachability for parametric Markov models
[16] Parametric Markov chains: PCTL complexity and fraction-free Gaussian elimination
[17] Specification and refinement of probabilistic processes
[18] Enforcing almost-sure reachability in POMDPs
[19] Finite-state controllers of POMDPs using parameter synthesis
[20] The probabilistic model checking landscape
[21] Interval-valued finite Markov chains. Reliab. Comput.
[22] Parametric probabilistic transition systems for system design and analysis
[23] Verification and control of partially observable probabilistic systems
[24] A greedy approach for the efficient repair of stochastic models
[25] Parameter synthesis for Markov models: Faster than ever
[26] Attaining monotonicity for Bayesian networks
[27] Are parametric Markov chains monotonic?
[28] On the complexity of reachability in parametric Markov decision processes

Footnote: We currently run these bounds until the monotonicity checker has stabilized.
We observe that, mostly due to numerical computations, the time taken by the lower-bound computations can be significant, but the overhead and the merits of obtaining larger lower bounds are hard to forecast.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.