key: cord-026513-3myuf5q2 authors: Feo-Arenis, Sergio; Vujinović, Milan; Westphal, Bernd title: On Implementable Timed Automata date: 2020-05-13 journal: Formal Techniques for Distributed Objects, Components, and Systems DOI: 10.1007/978-3-030-50086-3_5 sha: doc_id: 26513 cord_uid: 3myuf5q2 Generating code from networks of timed automata is a well-researched topic with many proposed approaches, which have in common that they not only generate code for the processes in the network, but necessarily generate additional code for a global scheduler which implements the timed automata semantics. For distributed systems without shared memory, this additional component is, in general, undesired. In this work, we present a new approach to the generation of correct code (without global scheduler) for distributed systems without shared memory yet with (almost) synchronous clocks if the source model does not depend on a global scheduler. We characterise a set of implementable timed automata models and provide a translation to a timed while language. We show that each computation of the generated program has a network computation path with the same observable behaviour. Automatic code generation from real-time system models promises to avoid human implementation errors and to be cost and time efficient, so there is a need to automatically derive (at least parts of) an implementation from a model. In this work, we consider a particular class of distributed real-time systems consisting of multiple components with (almost) synchronous clocks, yet without shared memory, a shared clock, or a global scheduler. Prominent examples of such systems are distributed data acquisition systems such as data aggregation in satellite constellations [16, 18] , the wireless fire alarm system [15] , IoT sensors [30] , or distributed database systems (e.g. [12] ). 
For these systems, a common notion of time is important (to meet real-time requirements or for energy efficiency) and is maintained up to a certain precision by clock synchronisation protocols, e.g., [17, 23, 24]. Global scheduling is undesirable because schedulers are expensive in terms of network bandwidth and computational power, and because the number of components in the system may change dynamically, so keeping track of all components requires large computational resources. Timed automata, in particular in the flavour of Uppaal [7], are widely used to model real-time systems (see, for example, [14, 32]) and to reason about the correctness of systems such as the ones named above. Modelling assumptions of timed automata such as instantaneous updates of variables and zero-time message exchange are often convenient for the analysis of timed system models, yet they, in general, inhibit direct implementations of model behaviour on real-world platforms where, e.g., updating variables takes time. In this work, we aim for the generation of distributed code from networks of timed automata with exactly one program per network component (and no other programs, in particular no implicit global scheduler), where all execution times are considered and modelled (including the selection of subsequent edges), and that comes with a comprehensible notion of correctness. Our work can be seen as the first of two steps towards bridging the gap between timed automata models and code. We propose to firstly consider a simple, iterative programming language with an exact real-time semantics (cf. Sect. 4) as the target for code generation. In this step, which we consider to be the harder one of the two, we deal with the discrepancy between the atomicity of the timed automaton semantics and the non-atomic execution on real platforms. The second step will then be to deal with imprecise timing on real-world platforms. Our approach is based on the following ideas.
We define a short-hand notation (called implementable timed automata) for a sub-language of the well-known timed automata (cf. Sect. 3). We assume independency from a global scheduler [5] as a sufficient criterion for the existence of a distributed implementation. For the timing aspect, we propose not to use platform clocks directly in, e.g., edge guards (see related work below) but to turn model clocks into program variables and to assume a "sleep" operation with absolute deadlines on the target platform (cf. Sect. 4). In Sect. 5, we establish the strong and concrete notion of correctness that for each time-safe computation of a program obtained by our translation scheme there is a computation path in the network with the same observable behaviour. Section 6 shows that our short-hand notation is sufficiently expressive to support industrial case studies and discusses the remaining gap towards real-world programming languages like C, and Sect. 7 concludes. Generating code for timed systems from timed automata models has been approached before [3, 4, 20, 25, 29]. All these works also generate code for a scheduler (as an additional, explicit component) that corresponds to the implicit, global scheduler introduced by the timed automata semantics [5]. Thus, these approaches do not yield the distributed programs that we aim for. A different approach in the context of timed automata is to investigate discrete sampling of the behaviour [28] and so-called robust semantics [28, 33]. A timed automaton model is then called implementable wrt. certain robustness parameters. Bouyer et al. [11] have shown that each timed automaton (not a network, as in our case) can be sampled and made implementable at the price of a potentially exponential increase in size. A different line of work is [1, 2, 31]. There, timed automata (in the form of RT-BIP components [6]) are used as an abstract model of the scheduling of tasks.
Considering execution times for tasks, a so-called physical model (in a slightly different formalism) is obtained for which an interpreter has been implemented (the real-time execution engine) that then realises a scheduling of the tasks. The computation time necessary to choose the subsequent task (including the evaluation of guards) is "hidden" in the execution engine (which at least warns if the available time is exceeded), and they state the unfortunate observation that time-safety does not imply time-robustness with their approach. There is an enormous amount of work on so-called synchronous languages like Esterel [10], SIGNAL [8], Lustre [19] and time-triggered architectures such as Giotto/HTL [21]. These approaches provide an abstract programming or modelling language such that for each program, a deployable implementation, in particular for signal processing applications, can be generated. As modelling formalism (and input to code generation), we consider timed automata as introduced in [7]. In the following, we recall the definition of timed automata for self-containedness. Our presentation follows [26] and is standard with the single exception that we exclude strict inequalities in clock constraints. A timed automaton A = (L, A, X, V, I, E, ℓini) consists of a finite set L of locations (including the initial location ℓini) and sets A, X, and V of channels, clocks, and (data) variables. A location invariant I : L → Φ(X) assigns a clock constraint over X from Φ(X) to each location. Finitely many edges in E are of the form (ℓ, α, ϕ, r, ℓ′), where the action α consists of an input or output action on a channel or the internal action τ, the guard ϕ ∈ Φ(X, V) is a conjunction of clock constraints from Φ(X) and data constraints from Φ(V), and r ∈ R(X, V)* is a finite sequence of updates; an update either resets a clock or updates a data variable.
For clock constraints, we exclude strict inequalities as we do not yet support their semantics (of reaching the upper or lower bound arbitrarily close but not inclusive) in the code generation. In the following, we may write ℓ(e) etc. to denote the source location of edge e. The operational semantics of a network N = A1 ‖ · · · ‖ An of timed automata as components -- and with pairwise disjoint sets of clocks and variables -- is the (labelled) transition system T(N) = (C, Λ, {--λ→ | λ ∈ Λ}, Cini) over configurations. A configuration c = ⟨ℓ⃗, ν⟩ ∈ C = {⟨ℓ⃗, ν⟩ | ν |= I(ℓ⃗)} consists of a location vector ℓ⃗ (an n-tuple whose i-th component is a location of Ai) and a valuation ν : X(N) ∪ V(N) → R≥0 ∪ D of clocks and variables. The location vector has the invariant I(ℓ⃗) = ⋀ni=1 I(ℓi), and we assume a satisfaction relation between valuations and clock and data constraints as usual. Labels are Λ = {τ} ∪ R≥0, and the set Cini of initial configurations consists of the vector of initial locations together with the valuation assigning 0 to all clocks and variables. There is an internal transition ⟨ℓ⃗, ν⟩ --τ→ ⟨ℓ⃗′, ν′⟩ if and only if there is an edge e = (ℓ, τ, ϕ, r, ℓ′) enabled in ⟨ℓ⃗, ν⟩ and ν′ is the result of applying e's update vector to ν. An edge is enabled in ⟨ℓ⃗, ν⟩ if and only if its source location occurs in the location vector, its guard is satisfied by ν, and ν′ satisfies the destination location's invariant. There is a rendezvous transition ⟨ℓ⃗, ν⟩ --τ→ ⟨ℓ⃗′, ν′⟩ if and only if there are edges e0 = (ℓ0, a!, ϕ0, r0, ℓ′0) and e1 = (ℓ1, a?, ϕ1, r1, ℓ′1) in two different automata enabled in ⟨ℓ⃗, ν⟩ and ν′ is the result of first applying e0's and then e1's update vector to ν. A transition sequence of N is any finite or infinite, initial and consecutive sequence of the form ⟨ℓ⃗0, ν0⟩ --λ1→ ⟨ℓ⃗1, ν1⟩ --λ2→ · · ·. N is called deadlock-free if no transition sequence of N ends in a configuration c such that there are no λ, c′ with c --λ→ c′. Next, Deadline, Boundary.
Given an edge e with source location ℓ and clock constraint ϕclk, and a configuration c = ⟨ℓ⃗, ν⟩, we define next(c, ϕclk) = min{d ∈ R≥0 | ν + d |= I(ℓ) ∧ ϕclk} and deadline(c, ϕclk) = max{d ∈ R≥0 | ν + next(c, ϕclk) + d |= I(ℓ) ∧ ϕclk} if the minimum/maximum exist, and ∞ otherwise. That is, next gives the smallest delay after which e is enabled from c, and deadline gives the largest delay for which e remains enabled after next. The boundary of a location invariant ϕclk is a clock constraint ∂ϕclk s.t. ν + d |= ∂ϕclk if and only if d = next(c, ϕclk) + deadline(c, ϕclk). A simple sufficient criterion to ensure the existence of boundaries is to use location invariants of the form ϕclk = x ≤ q; then ∂ϕclk = x ≥ q. In the following, we introduce implementable timed automata, which can be seen as a definition of a sub-language of the timed automata recalled in Sect. 2. As briefly discussed in the introduction, a major obstacle to implementing timed automata models is the assumption that actions are instantaneous. The goal of considering the sub-language defined below is to make the execution time of resets and the duration of message transmissions explicit. Other works like, e.g., [13], propose higher-dimensional timed automata where actions take time. We propose to make action times explicit within the timed automata formalism. Implementable timed automata distinguish internal, send, and receive edges by action and update, in contrast to timed automata. An internal edge models (only) updates of data variables or sleeping idle (which takes time on the platform), a send edge models (only) the sending of a message (which takes time), and a receive edge models (only) the ability to receive a message with a timeout. All kinds of edges may reset clocks. Figure 1 shows an example implementable timed automaton using double-outline edges to distinguish the graphical representation from timed automata.
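For invariants and guards of the simple bounded shapes used above (invariant x ≤ q, guard of the form p ≤ x ≤ q′), next and deadline reduce to interval arithmetic over a single clock. The following C sketch is our own illustration, not part of the formal development; the function names, the double-valued clock, and the −1 sentinel (for the "never enabled" case, formally ∞/undefined) are assumptions of this sketch.

```c
#include <assert.h>

/* next: smallest delay d such that x+d satisfies both the invariant
 * x <= inv_ub and the guard grd_lb <= x <= grd_ub.
 * Returns -1.0 if the edge can never become enabled from x. */
static double next_time(double x, double inv_ub, double grd_lb, double grd_ub) {
    double d = (x < grd_lb) ? grd_lb - x : 0.0;   /* wait for guard lower bound */
    double ub = (inv_ub < grd_ub) ? inv_ub : grd_ub;
    if (x + d > ub) return -1.0;                  /* enabling window already missed */
    return d;
}

/* deadline: largest further delay after next for which the edge
 * stays enabled, i.e., the remaining width of the enabling window. */
static double deadline_time(double x, double inv_ub, double grd_lb, double grd_ub) {
    double d = next_time(x, inv_ub, grd_lb, grd_ub);
    if (d < 0.0) return -1.0;
    double ub = (inv_ub < grd_ub) ? inv_ub : grd_ub;
    return ub - (x + d);
}
```

For example, with invariant x ≤ 5 and guard 2 ≤ x ≤ 4, a configuration with x = 0 yields next = 2 and deadline = 2, matching the definitions above.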
The edge from ℓ0 to ℓ1, for example, models that message 'LZ[id]' may be transmitted between time s0 + g (including guard time g and operating time) and s0 + g + m, i.e., the maximal transmission duration here is m. The time nℓ1 would be the operating time budgeted for location ℓ1. The semantics of the implementable network N consisting of implementable timed automata I1, . . . , In is the labelled transition system T(AI1 ‖ · · · ‖ AIn). The timed automata AIi are obtained from Ii by applying the translation scheme in Fig. 2 edge-wise. The construction introduces fresh ℓ×-locations. Intuitively, a discrete transition to an ℓ×-location marks the completion of a data update or message transmission in I that started at the next time of the considered configuration. After completion of the update or transmission, implementable timed automata always wait up to the deadline. If the update or transmission has a certain time budget, then we need to expect that the time budget may be completely used in some cases. Using the time budget, possibly with a subsequent wait, yields a certain independence from platform speed: if one platform is fast enough to execute the update or transmission within the time budget, then all faster platforms are. Note that the duration of an action may be zero in implementable timed automata (exactly as in timed automata), yet then there will be no time-safe execution of any corresponding program on a real-world platform. In [5], the concept of not depending on a global scheduler is introduced. Intuitively, independency requires that sending edges are never blocked because no matching receive edge is enabled or because another send edge in a different component is enabled. That is, the schedule of the network behaviour ensures that at each point in time at most one automaton is ready to send, and that each automaton that is ready to send finds an automaton that is ready for the matching receive.
Similar restrictions have been imposed on timed automaton models in [9] to verify the ZeroConf protocol. Whether a network depends on a global scheduler is decidable; for details, we refer the reader to [5]. Figure 3 shows an artificial network of implementable timed automata whose independency from a global scheduler depends on the parameters s1,0 + w1 and s2,0 + w2. If the location ℓ1,1 is reached, then the standard semantics of timed automata would (using the implicit global scheduler) block the sending edge until ℓ2,1 is reached. Yet in a distributed system, the sender should not be assumed to know the current location of the receiver. By choosing the parameters accordingly (i.e., by protocol design), we can ensure that the receiver is always ready before the sender so that the sender is never blocked. In this case, we can offer a distributed implementation. In the following sections, we only consider networks of implementable timed automata that are deadlock-free, closed (no shared clocks or variables, no committed locations (cf. [7])), and do not depend on a global scheduler. In this section, we introduce a timed programming language that provides the necessary expressions and statements to implement networks of implementable timed automata as detailed in Sect. 5. The semantics is defined as a structural operational semantics (SOS) [27] that is tailored towards proving the correctness of the implementations obtained by our translation scheme from Sect. 5. We use a dedicated time component in configurations of a program to track the execution times of statements and support a snapshot operator to measure the time that has passed since the execution of a particular statement. Due to lack of space, we introduce expressions on a strict as-needed basis, including message, location, edge, and time expressions.
In a general-purpose programming language, the former kinds of expressions can usually be realised using integers (or enumerations), and time expressions can be realised using platform-specific representations of the current system time. Syntax. Expressions of our programming language are defined wrt. given network variables V and X. We assume that each constraint from Φ(X, V) or expression from Ψ(V) over V and X has a corresponding (basic type) program expression and thus that each variable v ∈ V and each clock x ∈ X have corresponding (basic type) program variables vv, vx ∈ Vb. In addition, we assume typed variables for locations, edges, and messages, and for times (on the target platform). We additionally consider location variables Vl to store the current location, edge variables Ve to store the edge currently worked on, message variables Vm to store the outcome of a receive operation, and time variables Vt to store platform time. Message expressions are of the form mexpr ::= m | a, m ∈ Vm, a ∈ A; location expressions are of the form lexpr ::= l | ℓ | nextlocI(mexpr), l ∈ Vl, ℓ ∈ L; and edge expressions are of the form eexpr ::= e | e, e ∈ Ve, e ∈ E. A time expression has the form texpr ::= ⊙ | t | t + expr, where ⊙ denotes the current platform time and t ∈ Vt. Note that time variables are different from clock variables. The values of a clock variable vx are used to compute a new next time, which is then stored in a time variable, which can be compared to the platform time. Clock variables can be represented by platform integers (given their range is sufficient for the model) while time variables will be represented by platform-specific data types like timespec with C [22] and POSIX. In this way, model clocks are only indirectly connected (and compared) to the platform clock. Table 1. Statements S, statement sequences S, and programs P, including the conditional if e = eexpr1 : S1 . . . e = eexprn : Sn fi and the loop while expr do S od; statement sequences are built from the empty sequence ε, statements S, snapshot statements ⌈S⌉, and sequential composition S; S (with ε; S ≡ S; ε ≡ S), and programs are parallel compositions P ::= S1 ‖ · · · ‖ Sn. The set of statements, statement sequences, and timed programs are given by the grammar in Table 1. The term nextedgeI([mexpr]) represents an implementation of the edge selection in an implementable timed automaton that can optionally be called with a message expression. We denote the empty statement sequence by ε and introduce ⌈·⌉ as an artificial snapshot operator on statements (see below). The particular syntax with snapshot and non-snapshot statements allows us to simplify the semantics definition below. We use StmSeq to denote the set of all statement sequences. A component configuration π = ⟨S, (β, γ, w, u), σ⟩ consists of a statement sequence S ∈ StmSeq, the operating time of the current statement β ∈ R≥0 (i.e., the time passed since starting to work on the current statement), the time to completion of the current statement γ ∈ R≥0 ∪ {∞} (i.e., the time it will take to complete the work on the current statement), the snapshot time w ∈ R≥0 (i.e., the time since the last snapshot), the platform clock value u ∈ R≥0, and a type-consistent valuation σ of the program variables. We will use the operating time and time to completion to define computations of timed while programs (with discrete transitions when the time to completion is 0), and we will use the snapshot time w as an auxiliary variable in the construction of predicates by which we relate program and network computations. The valuation σ maps basic type variables from Vb to values from a domain Db that includes all values of data variables from D as used in the implementable timed automaton and all values needed to evaluate clock constraints (see below), i.e. σ(Vb) ⊆ Db.
Time variables from Vt are mapped to non-negative real numbers, i.e., σ(Vt) ⊆ R≥0; message variables from Vm are mapped to channels or the dedicated value ⊥ representing 'no message', i.e., σ(Vm) ⊆ A ∪ {⊥}; location variables from Vl are mapped to locations, i.e., σ(Vl) ⊆ L; and edge variables from Ve are mapped to edges, i.e., σ(Ve) ⊆ E. For the interpretation of expressions in a component configuration we assume that, if the valuation σ of the program variables corresponds to the valuation ν of the data variables, then the interpretation expr(π) of a basic type expression expr corresponds to the value of expr under ν. Other variables obtain their values from σ, too, i.e. t(π) = σ(t), m(π) = σ(m), l(π) = σ(l), and e(π) = σ(e); constant symbols are interpreted by their corresponding value, i.e. a(π) = a, ℓ(π) = ℓ, and e(π) = e, and we have (t + expr)(π) = t(π) + expr(π). There are two non-standard cases. The ⊙-symbol denotes the platform clock value of π, i.e. ⊙(π) = u, and we assume that nextlocI([mexpr])(π) yields the destination location of the edge that is currently processed (as given by e), possibly depending on a message name given by mexpr. If e(π) denotes an internal action or send edge e, this is just the destination location ℓ′(e); for receive edges it is ℓ′(e) if mexpr evaluates to the special value ⊥, and an ℓi from an (ai?, ℓi) pair in the edge otherwise. If the receive edge is non-deterministic, we assume that the semantics of nextlocI resolves the non-determinism. Program Computations. Table 2 gives an SOS-style semantics with discrete reduction steps of a statement sequence (or component). Note that the rules in Table 2 (with the exception of receive) apply when the time to completion is 0, that is, at the point in time where the current statement completes. Each rule then yields a configuration with the operating time γ for the new current statement.
The new snapshot time w′ is 0 if the first statement in S is a snapshot statement ⌈S⌉, and w otherwise. Rule (R7) updates m to a, which is a channel or, in case of timeout, the 'no message' indicator ⊥. Rule (R8) is special in that it is supposed to represent the transition relation of an implementable timed automaton. Depending on the program valuation σ, (R8) is supposed to yield a triple of the next edge to work on, this edge's next, and its deadline. For simplicity, we assume that the interpretation of nextedgeI([mexpr]) is deterministic for a given valuation of program variables. A configuration of program P = S1 ‖ · · · ‖ Sn is an n-tuple Π = (⟨S1, (β1, γ1, w1, u1), σ1⟩, . . . , ⟨Sn, (βn, γn, wn, un), σn⟩) of component configurations; C(P) denotes the set of all configurations of P. The operational semantics of a program P is the labelled transition system on system configurations defined as follows. There is a delay transition by δ if no current statement completes strictly before δ. There is an internal transition if, for some i, 1 ≤ i ≤ n, a discrete reduction rule from Table 2 applies to the i-th component configuration. There is a synchronisation transition if some component i completes a send statement on a channel a while component j updates σj by (R7), and βj ≥ βi, i.e. if component j has been listening at least as long as component i has been sending. Note that this definition of synchronisation allows multiple components to send at the same time (which may cause message collision on a shared medium) and that, similar to the rendezvous communication of timed automata, out of multiple receivers, only one takes the message. In our application domain these cases do not happen because we assume that implementable networks do not depend on a global scheduler. That is, the program of an implementable network never exhibits either of these two behaviours. A program configuration is called initial if and only if the k-th component configuration, 1 ≤ k ≤ n, is at Sk, with any βk, γk = 0, wk = 0, uk = 0, and any σk with σk(Vb) = 0.
We use Cini(P) to denote the set of initial configurations of program P. A computation of P is an initial and consecutive sequence of program configurations ζ = Π0, Π1, . . . , i.e. Π0 ∈ Cini(P) and for all i ∈ N0 there exists λ ∈ R≥0 ∪ {τ} such that Πi --λ→ Πi+1 as defined above. We need not consider terminating computations of programs here because we assume networks of implementable timed automata without deadlocks. The program of the network of implementable timed automata N = I1 ‖ · · · ‖ In is P(N) = S(I1) ‖ · · · ‖ S(In) (cf. Table 3c). The edges' work is implemented in the corresponding Line 2 of the statement sequences in Tables 3a and 3b. The remaining Lines 3 to 8 include the evaluation of guards to choose the edge to be executed next. The result of choosing the edge is stored in program variable e, which (by the while loop and the if-statement) moves the control to Line 1 of the implementation of that edge. The program's timing behaviour is controlled by variable t and is thus decoupled from the clocks in the timed automata model. After Line 8, the value of t denotes the absolute time at which the execution of the next edge is due. That is, clocks in the program are not directly compared to the platform time (which would raise issues with the precision of platform clocks) but are used to determine points in time that the target platform is supposed to sleep to. By doing so, we also lower the risk of accumulating imprecisions in the sleep operation of the target platform compared to sleeping for many relative durations. The idea of scheduling work and operating time is illustrated by the timing diagram in Fig. 4.
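The control structure described above can be pictured as a small C sketch. This is our own simplified rendering, not the exact code of Table 3c: the round-robin edge selection stands in for the guard-based nextedge, the Edge fields are assumptions, and the platform sleep is left as a comment.

```c
#include <assert.h>

/* Hypothetical per-edge timing data (model time units). */
typedef struct { int next; int deadline; } Edge;

/* Sketch of one component's loop: after the work of edge e,
 * immediately choose the next edge, advance the absolute due
 * time t by e's deadline plus the next edge's next time, sleep
 * to t, and start over.  Returns t after `rounds` iterations
 * so the timing arithmetic can be checked in isolation. */
static long run(const Edge *edges, int n_edges, int rounds) {
    long t = 0;   /* absolute due time of the current work phase */
    int e = 0;    /* current edge index */
    for (int i = 0; i < rounds; i++) {
        /* Line 2: execute e's work (update/send/receive), omitted here */
        int next = (e + 1) % n_edges;               /* Lines 3-7: edge selection */
        t += edges[e].deadline + edges[next].next;  /* Line 8: new due time */
        e = next;
        /* Line 1 of the next edge: sleepto(t) on the real platform */
    }
    return t;
}
```

For two hypothetical edges with (next, deadline) = (1, 2) and (3, 4), two rounds advance t by (2 + 3) + (4 + 1) = 10 model time units, i.e., each round spends exactly the current edge's deadline plus the following edge's next time, as required by the time-safety condition on Line 1.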
Row (a) shows a naïve schedule for comparison: from time ti−1, decide on the next edge to execute and determine this edge's next time at ti (light grey phase: operating time, must complete within the next edge's next time ne), then sleep up to the next time (dashed grey line), then execute the edge's actions (dark grey phase: work time, must complete within the edge's deadline de), then sleep up to the edge's deadline at ti+1, and start over. The program obtained by our translation scheme implements the schedule shown in Row (b). The program begins with determining the next edge right after the work phase and then has only one sleep phase up to, e.g., ti+2 where the next work phase begins. In this manner, we require only one interaction with the execution platform that implements the sleep phases. Row (c) illustrates a possible extension of our approach where operating time is needed right before the work phase, e.g., to prepare the platform's transceiver for sending a message. We call the program P(N) a correct implementation of network N if and only if for each observable behaviour of a time-safe execution of P(N) there is a corresponding computation path of N. In the following, we provide our notion of time-safety and then elaborate on the above-mentioned correspondence between program and network computations. Intuitively, a computation of P(N) is not time-safe if either the execution of an edge's statement sequence takes longer than the admitted deadline or if the next time of the subsequent edge is missed, e.g., by an execution platform that is too slow. Note that in a given program computation, the performance of the platform is visible in the operating time β and the time to completion γ. We write Πk : L^e_n to denote that the program counter of component k is at Line n of the statement sequence of edge e. We use σ|X∪V to denote the (network) configuration encoded by the values of the corresponding program variables.
We assume that for each program variable v, the old value, i.e., the value before the last assignment in the computation, is available as @v. Definition 2 (Time-Safety). A computation ζ of P(N) is time-safe if and only if, whenever the i-th configuration completes (γi,k = 0) Line 2 of an edge's statement sequence, not more time than admitted by its deadline has been used (wk), and the sleepto statement in Line 1 completes exactly after the deadline of the previously worked-on edge plus the current edge's next time. ♦ Note that, by Definition 2, operating times may be larger than the subsequent edge's next time in a time-safe computation (if the execution of the current edge completes before its deadline). Stronger notions of time-safety are possible. For correctness of P(N), recall that we introduced timed while programs to consider the computation time that is needed to compute the transition relation of an implementable network on the fly. In addition, program computations have a finer granularity than network computations: in network computations, the current location and the valuation of clocks and variables are updated atomically in a transition. In the program P(N), these updates are spread over three lines. We show that, for each time-safe computation ζ of program P(N), there is a computation of network N that is related to ζ in a well-defined way. The relation between program and network configurations decouples both computations in the sense that at some times (given by the respective timestamp) the clock values in the program configuration, e.g., are "behind" the network clocks (i.e., correspond to an earlier network configuration), at some times they are "ahead", and there are points where they coincide. Figure 5 illustrates the relation for one edge e. The top row of Fig. 5 gives a timing diagram of the execution of the program for edge e of one component. The rows below show the values over time for each program variable v up to e, n, and d.
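The time-safety condition can also be read operationally, as a predicate over a recorded per-component trace of line completions. The following C sketch is our own illustration; the trace encoding and field names are assumptions, not part of the paper's formalism.

```c
#include <assert.h>

/* One recorded completion event of a component: which line of an
 * edge's statement sequence completed, the snapshot time w at that
 * point, and the relevant bounds (hypothetical encoding). */
typedef struct {
    int line;        /* program line that completed (1 or 2) */
    double w;        /* snapshot time at completion */
    double deadline; /* admitted deadline of the edge (for line 2) */
    double bound;    /* previous deadline + current next (for line 1) */
} Event;

/* A trace is time-safe if every Line-2 completion stays within the
 * edge's deadline and every Line-1 sleepto completes exactly at the
 * previous deadline plus the current edge's next time. */
static int time_safe(const Event *tr, int n) {
    for (int i = 0; i < n; i++) {
        if (tr[i].line == 2 && tr[i].w > tr[i].deadline) return 0;  /* work overran */
        if (tr[i].line == 1 && tr[i].w != tr[i].bound) return 0;    /* sleepto missed */
    }
    return 1;
}
```

Such a check mirrors the warning emitted by the sleepto implementation mentioned in Sect. 6: a violated condition indicates a non-time-safe run, for which Theorem 1 gives no guarantee.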
For example, the value of l will denote the source location ℓ of e until Line 3 is completed, and then denotes the destination location ℓ′. Similarly, v and x denote the effects of the update vector of e on data variables and clocks. Note that, during the execution of Line 3, we may observe combinations of values for v and l that are never observed in a network computation due to the atomic semantics of networks. The two bottom lines of Fig. 5 show related network configurations aligned with their corresponding program lines. Note that the execution of each line except for Line 1 may be related to two network configurations depending on whether the program timestamp is before or after the current edge's deadline. Figure 6 illustrates the three possible cases: the execution of program Line 2 (work time, dark grey) is related to network configurations with the source location of the current edge. Right after the work time, the network location ℓ× is related, and at the current edge's deadline the destination location ℓ′ is related. In the related network computation, the transition from ℓ× to ℓ′ always takes place at the current edge's deadline. This point in time may, in the program computation, be right after the work time (Fig. 6a, no delay in ℓ×), in the operating time (Fig. 6b), or in the sleep time (Fig. 6c). The relation between program and network configurations as illustrated in Fig. 5 can be formalised by predicates over program and network configurations, one predicate per edge and program line. The following lemma states the described existence of a network computation for each time-safe program computation. The relation gives a precise, component-wise and phase-wise relation of program computations to network computations. In other words, we obtain a precise accounting of which phases of a time-safe program computation correspond to a network computation and how. We can argue component-wise by the closed-component assumption from Sect. 3.
Table 3c reach the Line 2 of a send or receive edge (cf . Table 3a and 3b) and establish a related network configuration. For the induction step, we need to consider delays and discrete steps of the program. From time-safety of ζ we can conclude to possible delays in N for the related configurations with a case-split wrt. the deadline (cf. Fig. 6 ). When the program time is at the current edge's deadline, the network may delay up to the deadline in an intermediate location × , take a transition to the successor location , and possibly delay further. For discrete program steps, we can verify that N has enabled discrete transitions that reach a network configuration that is related to the next program line. Here, we use our assumptions from the program semantics that update vectors have the same effect in the program and the network. And we use the convenient property of our program semantics that the effects of statements only become visible with the discrete transitions. For synchronisation transitions of the program, we use the assumption that the considered network of implementable timed automata does not depend on a global scheduler, in particular that send actions are never blocked, or, in other words, that whenever a component has a send edge locally enabled, then there is a receiving edge enabled on the same channel. Our main result in Theorem 1 is obtained from Lemma 1 by a projection onto observable behaviour (cf. Definition 3). Intuitively, the theorem states that at each point in time with a discrete transition to Line 2, the program configuration exactly encodes a configuration of network P (N ) right before taking an internal, send, or receive edge. . . be the projection of a computation path ξ of the implementable network N onto component k, 1 ≤ k ≤ n, labelled such that each configuration k i,0 , ν k i,0 is initial or reached by a discrete transition to a source location of an internal, send, or receive edge. 
The sequence ξ_k, labelled such that i_j is the largest index for which between c := ⟨ℓ^k_{j,0}, ν^k_{j,0}⟩ and ⟨ℓ^k_{j,i_j}, ν^k_{j,i_j} + d_j⟩ exactly next(c) time units have passed, is called the observable behaviour of component k in ξ. ♦ Theorem 1. Let N be an implementable network and ζ_k = π_{0,0}, …, π_{0,n_0}, π_{1,0}, … the projection onto the k-th component of a time-safe computation ζ of P(N), labelled such that π_{i,n_i}, π_{i+1,0} are exactly those transitions in ζ from a Line 1 to the subsequent Line 2. Then (⟨σ_{i,0}(l), σ_{i,0}|_{X∪V}⟩, u_{i,0})_{i∈ℕ0} is an observable behaviour of component k on some computation path of N. ♦ Fig. 7. Timed automaton of the implementable timed automaton (after applying the scheme from Fig. 2) for the LZ-protocol of sensors [15]. The work presented here was motivated by a project to support the development of a new communication protocol for a distributed wireless fire alarm system [15], without shared memory, only assuming clock synchronisation and message exchange. We provided modelling and analysis of the protocol a priori, that is, before the first line of code had been written. In the project, the engineers manually implemented the model and appreciated how the model indicates exactly which action is due in which situation. Later, we were able to study the handwritten code and observed (with little surprise) striking regularities and similarities to the model. So we conjectured that there exists a significant sublanguage of timed automata that is implementable. In our previous work [5], we identified independency from a global scheduler as a useful precondition for the existence of a distributed implementation (cf. Sect. 2). For this work, we have modelled the LZ-protocol of sensors in the wireless fire alarm system from [15] as an implementable timed automaton (cf. Fig. 1; Fig. 7 shows the timed automaton obtained by applying the scheme from Fig. 2). Hence our modelling language supports real-world, industrial case studies. 
Implementable timed automata also subsume some models of time-triggered, periodic tasks, which we would model by internal edges only. From the program obtained by the translation scheme given in Table 3, we have derived an implementation of the protocol in C. Clock, data, location, edge, and message variables become enumerations or integers; time variables use the POSIX data structure timespec. The implementation has been running with correct timing for multiple days. Although our approach of sleeping until absolute times reduces the risk of drift, there is jitter on real-world platforms. The impact of timing imprecision needs to be investigated per application and platform when refining the program of a network to code, e.g., following [11]. In our case study, the jitter is much smaller than the model's time unit. Another strong assumption that we use is the synchrony of the platform clocks and synchronised starting times of programs, which can in general not be achieved on real-world platforms. In the wireless fire alarm system, component clocks are synchronised in an initialisation phase and kept (sufficiently) synchronised using system time information in messages. Robustness against limited clock drift is obtained by including so-called guard times [23, 24] in the protocol design. In the model, this is the constant g: components are ready to receive g time units before message transmission starts in another component. Note that Theorem 1 only applies to time-safe computations. Whether an implementation is time-safe needs to be analysed separately, e.g., by conducting worst-case execution time (WCET) analyses of the work code and the code that implements the timed automata semantics. The C code for the LZ-model mentioned above actually implements a sleepto function that issues a warning if the target time has already passed (thus indicating non-time-safety). 
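Sleeping until absolute times with timespec can be sketched as follows. This is an assumption-laden illustration, not the paper's sleepto function: the helper names (timespec_add_ms, sleep_until) are made up here, and it uses the POSIX call clock_nanosleep with the TIMER_ABSTIME flag, which sleeps to an absolute point on the given clock and thereby avoids accumulating drift across edges.

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>   /* for the usage example */
#include <errno.h>
#include <time.h>

/* Sketch (helper names are assumptions) of sleeping until an *absolute*
 * deadline with the POSIX timespec structure, as opposed to sleeping for
 * a relative duration. */

static void timespec_add_ms(struct timespec *t, long ms)
{
    t->tv_sec  += ms / 1000;
    t->tv_nsec += (ms % 1000) * 1000000L;
    if (t->tv_nsec >= 1000000000L) {   /* normalise nanoseconds */
        t->tv_sec  += 1;
        t->tv_nsec -= 1000000000L;
    }
}

/* Sleep until the given absolute deadline on CLOCK_MONOTONIC; retry
 * transparently if interrupted by a signal.  Returns 0 on success. */
static int sleep_until(const struct timespec *deadline)
{
    int r;
    do {
        r = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, deadline, NULL);
    } while (r == EINTR);
    return r;
}
```

Because the deadline is absolute, a late wake-up on one edge does not shift the deadlines of subsequent edges, which matches the drift argument made above.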
The translation scheme could easily be extended by a statement between Lines 2 and 3 that checks whether the deadline was kept and issues a warning if not. Then, Theorem 1 would strengthen to the statement that all computations of P(N) either correspond to observable behaviour of N or issue a warning. Note that, in contrast to [1, 2, 31], our approach has the practically important property that time-safety implies time-robustness, i.e., if a program is time-safe on one platform then it is time-safe on any 'faster' platform. Furthermore, we have assumed a deterministic choice of the next edge to be executed, for simplicity and brevity of the presentation. Non-deterministic models can be supported by giving a non-deterministic semantics to the nextedge_I function in the programming language and the correctness proof. We have presented a shorthand notation that defines a subset of timed automata that we call implementable. For networks of implementable timed automata that do not depend on a global scheduler, we have given a translation scheme to a simple, exact-time programming language. We obtain a distributed implementation with one program for each network component; the programs are supposed to be executed concurrently, possibly on different computers. We propose not to substitute (imprecise) platform clocks for (model) clocks in guards and invariants, but to rely on a sleep function with absolute deadlines. The generated programs do not include any "hidden" execution times: all updates, actions, and the time needed to select subsequent edges are taken into account. For the generated programs, we have established a notion of correctness that closely relates program computations to computation paths of the network. This close relation lowers the mental burden for developers that is induced by other approaches, which switch to a slightly different, e.g., robust, semantics for the implementation. 
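A deadline check of the kind suggested above can be sketched as a small helper. The names (timespec_before_eq, check_deadline) and the warning text are assumptions of this sketch; the idea is only that, before committing an edge, the program compares the current clock reading against the edge's absolute deadline and reports a miss, which would flag a non-time-safe run.

```c
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <stdio.h>
#include <time.h>

/* Sketch of a deadline check between the update and the commit: warn if
 * the current edge's absolute deadline has already passed (indicating a
 * non-time-safe computation).  Names are assumptions of this sketch. */

static int timespec_before_eq(const struct timespec *a,
                              const struct timespec *b)
{
    return a->tv_sec < b->tv_sec ||
           (a->tv_sec == b->tv_sec && a->tv_nsec <= b->tv_nsec);
}

/* Returns 1 (and warns) if the deadline was missed, 0 if it was kept. */
static int check_deadline(const struct timespec *deadline)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    if (!timespec_before_eq(&now, deadline)) {
        fprintf(stderr, "warning: deadline missed (non-time-safe run)\n");
        return 1;
    }
    return 0;
}
```

Such a check costs one clock read per edge, so it could plausibly remain enabled in production builds to detect non-time-safety at run time rather than only in a priori WCET analyses.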
Our work decomposes the translation from timed automata models to code into a first step that deals with the discrepancy between the atomicity of the timed automaton semantics and the non-atomic execution on real platforms. The second step, relating the exact-time program to real platforms with imprecise timing, is the subject of future work.

References

1. Model-based implementation of real-time applications
2. Rigorous implementation of real-time systems - from theory to application
3. Synthesis of Ada code from graph-based task models
4. Code synthesis for timed automata
5. On global scheduling independency in networks of timed automata
6. Modeling heterogeneous real-time components in BIP
7. A tutorial on Uppaal
8. Synchronous programming with events and relations: the SIGNAL language and its semantics
9. Compositional abstraction in real-time model checking
10. The Esterel synchronous programming language: design, semantics, implementation
11. Timed automata can always be made implementable
12. Spanner: Google's globally distributed database
13. Higher-dimensional timed automata
14. Automated analysis of AODV using UPPAAL
15. Ready for testing: ensuring conformance to industrial standards through formal verification
16. Parameterized verification of track topology aggregation protocols
17. Clock synchronization of distributed, real-time, industrial data acquisition systems
18. Ridesharing: fault tolerant aggregation in sensor networks using corrective actions
19. The synchronous data flow programming language LUSTRE
20. Translating Uppaal to not quite C
21. Giotto: a time-triggered language for embedded programming
22. Programming Languages - C
23. Formal approach to guard time optimization for TDMA
24. Optimizing guard time for TDMA in a wireless sensor network - case study
25. Automatic translation from Uppaal to C
26. Real-Time Systems - Formal Specification and Automatic Verification
27. A structural approach to operational semantics
28. Dynamical properties of timed automata
29. On generating soft real-time programs for non-realtime environments
30. A methodology for choosing time synchronization strategies for wireless IoT networks
31. Model-based implementation of parallel real-time systems
32. Ad Hoc routing protocol verification through broadcast abstraction
33. Almost ASAP semantics: from timed models to timed implementations