Session-based Concurrency, Reactively (Extended Version)

Mauricio Cano1, Jaime Arias2, and Jorge A. Pérez1

1 University of Groningen
2 Inria Grenoble Rhône-Alpes
Abstract. This paper concerns formal models for the analysis of communication-centric software systems that feature declarative and reactive behaviors. We focus on session-based concurrency, the interaction model induced by session types, which uses (variants of) the π-calculus as specification languages. While well-established, such process models are not expressive enough to specify declarative and reactive behaviors common in emerging communication-centric software systems. Here we propose the synchronous reactive programming paradigm as a uniform foundation for session-based concurrency. We present correct encodings of session-based calculi into ReactiveML, a synchronous reactive programming language. Our encodings bridge the gap between process specifications and concurrent programs in which session-based concurrency seamlessly coexists with declarative, reactive, timed, and contextual behaviors.
1 Introduction
In this paper, we introduce the synchronous reactive programming paradigm as a practical foundation for communication-centric software systems. Our motivation is twofold. First, synchronous reactive programming allows us to uniformly integrate point-to-point communications (as in the π-calculus) with declarative, reactive, timed, and contextual behaviors; this is an elusive combination for process models such as the π-calculus. Second, by relying on ReactiveML (a synchronous reactive programming language with a formal semantics), we may bridge the gap between π-calculus processes and actual concurrent programs, thus bringing a rigorous communication model to programmers.

Large software systems are deployed as aggregations of distributed interacting components, which are built using a myriad of different programming platforms and/or made available as black-boxes that expose minimal interaction interfaces. In these complex, heterogeneous systems, communication emerges as the key unifying glue. Certifying that interacting components conform to their prescribed protocols is thus an important but challenging task, and is essential in ensuring overall system correctness. Besides protocol conformance, analyzing communication-centric software systems entails addressing additional challenges, which can be seen as related to the increasing ubiquity of these systems. Indeed, communication-centric software appears in emerging trends (e.g., collective adaptive systems) and as such is subject to various classes of requirements that are orthogonal to communication correctness.

We focus on communication-centric software systems featuring declarative, reactive, timed, and contextual behaviors. (In §2 we illustrate these intended systems, using a transactional protocol
subject to failures.) By stipulating governing conditions (rather than how to implement such conditions), declarative approaches naturally specify, e.g., security policies. Closely intertwined, constructs modeling reactivity, time, and context-awareness are at the heart of mechanisms that enforce, e.g., self-adaptation and fault-tolerance in dependable systems. Therefore, while not directly connected to protocol specifications, declarative, reactive, timed, and contextual behaviors (and their interplay) do influence communication and should be integrated into the analysis of protocol conformance.

Process calculi (such as the π-calculus [18]) have long offered a principled basis for the compositional analysis of message-passing programs. Within these approaches, our work concerns session-based concurrency, the interaction model induced by session types [12], which organize protocols as sessions between two or more participants. In session-based concurrency, a session type describes the contribution of each partner to the protocol. Interactions are structured, and always occur in matching pairs; e.g., when one partner sends, the other receives; when one partner offers a selection, the other chooses. Different session type theories for binary (two-party) and multiparty protocols have been developed [13]; here we focus on binary sessions.

Binary and multiparty session types rely on π-calculi with session constructs. These session calculi have been extended with declarative, reactive, timed, and contextual behaviors, but none of these extensions captures all these features. For instance, session calculi with assertions (logical predicates) [5,3] may describe certain declarative requirements, but do not account for reactive and contextual behaviors. Frameworks with time-related conditions, such as [4,1], have similar limitations. The framework in [14] supports contextual information through events, but does not represent reactive, declarative behaviors. Integrating these extensions into a single process framework seems rather difficult, for they rely on different languages and often conflicting assumptions.

Here we pursue a different approach: we embed session-based concurrency within the synchronous reactive programming (SRP) model for reactive, timed systems [2,11]. Hence, rather than extending session π-calculi with declarative, reactive, timed, and contextual features, we encode session-based communication into a setting where these features (and their interplay) are already well understood. We consider ReactiveML, a programming language based on SRP [17,16], as target language in our developments. ReactiveML is a general purpose functional language with a well-defined formal semantics.

Our technical contributions are two correct encodings of session π-calculi into ReactiveML. In a nutshell, we use signals in ReactiveML to mimic names in session π-calculi. Our encodings enable us to integrate, in a seamless and uniform way, session-based constructs as “macros” in ReactiveML programs with declarative and reactive constructs. Moreover, since our encodings are executable (well-typed) ReactiveML programs, our results have a direct practical character, which serves to bridge the gap between specifications in process models and actual concurrent programs.

This paper is structured as follows. §2 illustrates our approach via an example. §3 summarizes the syntax and semantics of a session π-calculus and of ReactiveML.
In both cases, we consider languages with synchronous and asynchronous (queue-based) communication. §4 presents our two encodings and states their correctness. §5 collects closing remarks. The appendix includes further examples and technical details (omitted definitions and proofs).
2 A Motivating Example
We use a toy example to illustrate (i) the limitations of session π-calculi in representing structured communications with declarative/reactive behaviors, and (ii) how our approach, based on encodings into ReactiveML, can neatly overcome such limitations.

A Ride Protocol. Consider a conference attendee who finds himself in a foreign airport. To arrive in time for his presentation, he uses a mobile app on his phone to request a ride to the conference venue. The intended protocol may be intuitively described as follows:
1. Attendee sends his current location and destination to a neighbouring Driver.
2. Driver receives these two pieces of information and offers three options to Attendee: a ride right now, a ride at a later time, or to abort the transaction.
3. Attendee is in a hurry, and so he selects to be picked up right now.
4. Driver replies by sending an estimated arrival time at Attendee's location.

Using session π-calculus processes (as in, e.g., [19]), this protocol may be implemented as a process S = (νxy)(A(x) | D(y)), where processes A(x) and D(y) abstract the behavior of Attendee and Driver as follows:

A(x) = x⟨loc⟩.x⟨des⟩.x ◁ now.x(e).0
D(y) = y(l).y(d).y ▷ {now : y⟨eta⟩.0 , later : y(t).y⟨ok⟩.0 , quit : Closey}

where process Closey denotes an unspecified sub-protocol for closing the transaction. Above, we write x⟨z⟩.P (resp. x(w).P) to denote the output (resp. input) along name x with continuation P. Processes x ◁ l.P and x ▷ {li : Pi}i∈I denote internal and external labeled choices, respectively; now, later, and quit denote labels. Process 0 denotes inaction. Process (νxy)P declares x and y as dual session endpoints in P. This way, S says that A(x) and D(y) play complementary roles in the session protocol.

The Need for Richer Behaviors. Session-based concurrency assumes that once a session is established, communication may proceed without interruptions. This is unrealistic in most real-life scenarios, where established sessions are prone to failures or interruptions. For instance, a connectivity issue in the middle of the protocol with Driver may leave Attendee stuck in the airport. In such cases, notions of contextual information, reactivity, and time become essential:
– Contextual information, such as external events signalling a malfunction, allows relating the system with its environment. For instance, we may like to relate A(x) and D(y) with a connectivity manager that triggers warning events.
– Reactivity serves to detect unforeseen circumstances (e.g., failures) and to define appropriate system behaviors to run in such cases. For instance, we may like to define A(x) so that another driver is requested if a failure in a protocol with D(y) arises.
– Time allows tracking the instant in which a failure occurred, and also establishing a deadline within which the failure should be resolved. For instance, in case of failure, A(x) may try contacting alternative drivers only until k instants after the failure.

As mentioned above, the session π-calculus does not support these features, and proposed extensions do not comprehensively address them. We rely on synchronous reactive programming (SRP) and ReactiveML, which already have the ingredients for seamlessly integrating declarative, reactive behavior into session-based concurrency.
ReactiveML. ReactiveML extends OCaml with reactive, timed behavior. Time is modelled as discrete units, called instants; reactivity arises through signals, which may carry values. In ReactiveML, expression signal x in e declares a new signal x. We use constructs emit s v and await s(x) in e to emit and await a signal s, respectively. Preemption based on signals is obtained by the expression do (e1) until s → (e2), which executes e1 until signal s is detected, and runs e2 in the next instant. Moreover, ReactiveML can encode the parallel composition of expressions e1 and e2, denoted e1 ∥ e2.

Embedding Sessions in ReactiveML. Our first encoding, denoted ⟦·⟧f (cf. Def. 14), translates session π-calculus processes into ReactiveML expressions; we use the substitution f to represent names of the session π-calculus using (fresh) signals in ReactiveML. Our second encoding, denoted ([·]) (cf. Def. 17), supports an asynchronous semantics. We illustrate ⟦·⟧f by revisiting our example above. Let us define a concurrent reactive program in which ⟦A(x)⟧f, ⟦D(y)⟧f, and ⟦D′(w)⟧f represent ReactiveML snippets that implement session-based communication. We consider a simple possibility for failure: Driver (D(y)) may cancel a ride at any time, or communication with Attendee (A(x)) fails and cannot be recovered. Ideally, we would like a new driver D′(w), whose implementation may be the same as D(y), to continue with the protocol, without disrupting it from the perspective of A(x). This can be easily expressed in ReactiveML as the expression S′ = signal w1, w2 in (RA ∥ RD), where:

RA = do (⟦A(x)⟧{x←w1}) until fail → (await w2(z) in ⟦A(x)⟧{x←z})
RD = do (⟦D(y)⟧{y←w1}) until fail → (BD)
BD = signal w3 in (emit w2 w3 ; ⟦D′(w)⟧{w←w3})
S′ declares two signals: while signal w1 connects the reactive attendee RA and the reactive driver RD, signal w2 connects RA with the backup driver BD. If no failure arises, RA and RD run their expected session protocol. Otherwise, the presence of signal fail will be detected by both RA and RD: as a result, the attendee will await a new signal for restarting the session; process ⟦D(y)⟧ stops and BD becomes active in the next instant. After emitting a fresh signal w3, BD can execute the protocol with RA.
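The timed behavior mentioned above can also be made explicit. The following sketch (written in the RML notation introduced in §3, and not part of our formal development) bounds the attendee's recovery to at most k instants after a failure; the process abort, the integer bound k, and the arithmetic on n are hypothetical ingredients borrowed from the OCaml substrate of ReactiveML:

RA″ = let rec process wait n =
          if n = 0 then run abort
          else present w2 ? (await w2(z) in ⟦A(x)⟧{x←z}) : (run wait (n − 1))
      in do (⟦A(x)⟧{x←w1}) until fail → (run wait k)

Since present runs its else-branch in the next instant, each unsuccessful check consumes exactly one instant; hence the attendee gives up (running abort) after k instants without an offer on w2.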
3 Preliminaries
A Session π-calculus. Our presentation closely follows that of [19]. We assume a countably infinite set of variables Vs, ranged over by x, y, .... A variable represents one of the two endpoints of a session. We use v, v′, ... to range over values, which include variables and the boolean constants tt, ff. Also, we use l, l′, ... to range over labels. We write x̃ to denote a finite sequence of variables (and similarly for other elements).

Definition 1 (π). The set π of session processes is defined as:

P, Q ::= x⟨v⟩.P | x(y).P | x ◁ l.P | (νxy)P | ∗x(y).P
       | x ▷ {li : Pi}i∈I | v? (P) : (Q) | P | Q | 0
Process x⟨v⟩.P sends value v over x and then continues as P; dually, process x(y).Q expects a value v on x that will replace all free occurrences of y in Q. Processes x ◁ lj.P and x ▷ {li : Qi}i∈I define a labeled choice mechanism, with labels indexed by the finite set I: given j ∈ I, process x ◁ lj.P uses x to select lj and trigger process Qj.
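For instance, the first exchange of the ride protocol of §2 is captured by Rule ⌊Com⌋ in Fig. 1 (below):

(νxy)(x⟨loc⟩.A′(x) | y(l).D′(y)) −→ (νxy)(A′(x) | D′(y){loc/l})

where A′(x) = x⟨des⟩.x ◁ now.x(e).0 and D′(y) = y(d).y ▷ {now : y⟨eta⟩.0, later : y(t).y⟨ok⟩.0, quit : Closey} denote the respective continuations.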
⌊Com⌋  (νxy)(x⟨v⟩.P | y(z).Q) −→ (νxy)(P | Q{v/z})
⌊Sel⌋  (νxy)(x ◁ lj.P | y ▷ {li : Qi}i∈I) −→ (νxy)(P | Qj)   (j ∈ I)
⌊Rep⌋  (νxy)(x⟨v⟩.P | ∗y(z).Q) −→ (νxy)(P | Q{v/z} | ∗y(z).Q)
⌊IfT⌋  tt? (P) : (Q) −→ P        ⌊IfF⌋  ff? (P) : (Q) −→ Q

Fig. 1. Reduction relation for π processes (contextual congruence rules omitted).
We assume pairwise distinct labels. The conditional process v? (P) : (Q) behaves as P if v evaluates to tt; otherwise it behaves as Q. Parallel composition and inaction are standard. We often write ∏_{i=1}^{n} Pi to stand for P1 | · · · | Pn. The double restriction (νxy)P binds together x and y in P, thus indicating that they are the two endpoints of a session. Process ∗x(y).P denotes a replicated input process, which allows us to express infinite server behaviors. In x(y).P (resp. (νyz)P) occurrences of y (resp. y, z) are bound with scope P. The set of free variables of P, denoted fv(P), is as expected.

The operational semantics for π is given as a reduction relation −→, the smallest relation generated by the rules in Fig. 1. Reduction expresses the computation steps that a process performs on its own. It relies on a structural congruence on processes, denoted ≡S, which identifies processes up to consistent renaming of bound variables, denoted ≡α. Formally, ≡S is the smallest congruence that satisfies the axioms:

P | 0 ≡S P    P | Q ≡S Q | P    (P | Q) | R ≡S P | (Q | R)    P ≡S Q if P ≡α Q
(νxy)(νwz)P ≡S (νwz)(νxy)P    (νxy)0 ≡S 0    (νxy)P | Q ≡S (νxy)(P | Q) if x, y ∉ fv(Q)

We briefly comment on the rules in Fig. 1. Reduction requires an enclosing restriction (νxy)(· · ·); this represents the fact that a session connecting endpoints x and y has already been established. Rule ⌊Com⌋ represents the synchronous communication of value v through endpoint x to endpoint y. Rule ⌊Sel⌋ formalizes a labeled choice mechanism, in which the communication of a label lj is used to choose which of the Qi will be executed. Rule ⌊Rep⌋ is similar to Rule ⌊Com⌋, and is used to spawn a new copy of Q, available as a replicated server. Rules ⌊IfT⌋ and ⌊IfF⌋ are self-explanatory. Rules for reduction within parallel composition, restriction, and up to ≡S (not given in Fig. 1) are as expected. The following notion will be useful in stating properties of our translations.

Definition 2 (Contexts for π). The syntax of (evaluation) contexts in π is given by the following grammar: E ::= [·] | E | P | P | E | (νxy)(E), where P is a π process and ‘[·]’ represents a ‘hole’. We write C[·] to range over contexts (ν x̃ỹ)([·] | P1 | . . . | Pn), with n ≥ 1. E[P] (resp. C[P]) will denote the process obtained by filling [·] with P.

An Asynchronous Session π-calculus (aπ). Following [14], we now define aπ, a variant of π with an asynchronous (queue-based) semantics. The syntax of aπ includes variables x, y, ... and co-variables, denoted x̄, ȳ. Intuitively, x and x̄ denote the two endpoints of a session, and taking the co-variable of x̄ gives back x. We write Va to denote the set of variables and co-variables; k, k′ will be used to range over Va. As before, values include booleans and variables. The syntax of processes is as follows:
⌊Send⌋  x⟨v⟩.P | x[i : m̃1, o : m̃2] −→A P | x[i : m̃1, o : m̃2 · v]
⌊Sel⌋   x ◁ l.P | x[i : m̃1, o : m̃2] −→A P | x[i : m̃1, o : m̃2 · l]
⌊Com⌋   x[i : m̃1, o : m · m̃2] | x̄[i : m̃3, o : m̃4] −→A x[i : m̃1, o : m̃2] | x̄[i : m̃3 · m, o : m̃4]
⌊Recv⌋  x(y).P | x[i : v · m̃1, o : m̃2] −→A P{v/y} | x[i : m̃1, o : m̃2]
⌊Bra⌋   x ▷ {li : Pi}i∈I | x[i : lj · m̃1, o : m̃2] −→A Pj | x[i : m̃1, o : m̃2]   (j ∈ I)
⌊IfT⌋   tt? (P) : (Q) −→A P        ⌊IfF⌋   ff? (P) : (Q) −→A Q

Fig. 2. Reduction relation for aπ processes (contextual congruence rules omitted).
Definition 3 (aπ and aπ?). The set aπ of asynchronous session processes is defined as:

P, Q ::= k⟨v⟩.P | k(y).P | k ◁ l.P | k ▷ {li : Pi}i∈I | v? (P) : (Q) | P | Q | 0
       | (νx)P | µX.P | X | k[i : m̃; o : m̃]

We write aπ? to denote the sub-language of aπ without queues.

Differences with respect to Def. 1 appear in the second line of the above grammar. The usual (single) restriction (νx)P is convenient in a queue-based setting; it binds both x and x̄ in P. We consider recursion µX.P rather than input-guarded replication. Communication in aπ is mediated by queues of messages m (values v or labels l), one for each endpoint k; these queues, denoted k[i : m̃; o : m̃], have an output and an input part. Synchronization proceeds as follows: the sending endpoint first enqueues the message m in its own output queue; then, m is moved to the input queue of the receiving endpoint; finally, the receiving endpoint retrieves m from its input queue. We write ε to denote the empty queue. Notions of free/bound (recursive) variables are as expected.

The operational semantics of aπ is defined as a reduction relation coupled with a structural congruence relation ≡A. The former is defined by the rules in Fig. 2, which either follow the above intuitions for queue-based message passing or are exactly as for π; the latter is defined as the smallest congruence on processes that considers standard principles for parallel composition and inaction, together with the axioms:

(νx)(νy)P ≡A (νy)(νx)P    (νx)0 ≡A 0    µX.P ≡A P{µX.P/X}
k[i : ε; o : ε] ≡A 0    (νx)P | Q ≡A (νx)(P | Q) if x ∉ fv(Q)

The notion of contexts for aπ includes unary contexts E and binary contexts C:

Definition 4 (Contexts for aπ). The syntax of contexts in aπ is given by the following grammar: E ::= [·] | E | P | P | E | (νx)E, where P is an aπ process and ‘[·]’ represents a ‘hole’. We write C[·1, ·2] to denote binary contexts (ν x̃)([·1] | [·2] | ∏_{i=1}^{n} Pi), with n ≥ 1. We will write E[P] (resp. C[P, Q]) to denote the aπ process obtained by filling the hole in E[·] (resp. C[·1, ·2]) with P (resp. P and Q).

Both π and aπ abstract from an explicit phase of session initiation in which endpoints are bound together. We thus find it useful to identify aπ processes which are properly initialized (PI): intuitively, processes that contain all the queues required to reduce.

Definition 5 (Properly Initialized Processes). Let P ≡A (ν x̃)(P1 | P2) be an aπ process such that P1 is in aπ? (i.e., it does not include queues) and fv(P1) = {k1, . . . , kn}. We say P is properly initialized (PI) if P2 contains a queue for each session declared in P1, i.e., if P2 = k1[i : ε, o : ε] | · · · | kn[i : ε, o : ε].
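To illustrate the queue-based semantics just described, consider a properly initialized process (in the sense of Def. 5) consisting of a single send over endpoint x, with both queues initially empty. Using the rules in Fig. 2, synchronization unfolds in three steps:

x⟨v⟩.P | x̄(y).Q | x[i : ε, o : ε] | x̄[i : ε, o : ε]
  −→A  P | x̄(y).Q | x[i : ε, o : v] | x̄[i : ε, o : ε]     (⌊Send⌋: v is enqueued in x's output queue)
  −→A  P | x̄(y).Q | x[i : ε, o : ε] | x̄[i : v, o : ε]     (⌊Com⌋: v moves to the input queue of x̄)
  −→A  P | Q{v/y} | x[i : ε, o : ε] | x̄[i : ε, o : ε]      (⌊Recv⌋: x̄ dequeues v)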
ReactiveML: A synchronous reactive programming language. Based on the reactive model given in [6], ReactiveML [17] is an extension of OCaml that allows unbounded time response from processes, avoiding the causality issues present in other SRP approaches. ReactiveML extends OCaml with processes: state machines whose behavior can be executed through several instants. Processes are the reactive counterpart of OCaml functions, which ReactiveML executes instantaneously. In ReactiveML, synchronization is based on signals: events that occur in one instant. Signals can trigger reactions in processes; these reactions can be run instantaneously or in the next instant. Signals carry values and can be emitted from different processes in the same instant.

We present the syntax of ReactiveML following [15], together with two semantics, with synchronous and asynchronous communication. We will assume countably infinite sets of variables Vr and names Nr (ranged over by x1, x2 and n1, n2, respectively).

Definition 6 (RML). The set RML of ReactiveML expressions is defined as:

v, v′ ::= c | (v, v) | n | λx.e | process e
e, e′ ::= x | c | (e, e) | λx.e | e e | rec x = v | match e with {ci → ei}i∈I
       | let x = e and x = e in e | run e | loop e | signal_e x : e in e | emit e e
       | pause | process e | present e ? (e) : (e) | do e when e | do (e) until e(x) → (e)

Values v, v′, ... include constants c (booleans and the unit value ()), pairs, names, abstractions, and also processes, which are made of expressions. The syntax of expressions e, e′ extends a standard functional substrate with match and let expressions and with process- and signal-related constructs. Expressions run e and loop e follow the expected intuitions. Expression signal_g x : d in e declares a signal x with default value d, bound in e; here g denotes a gathering function that collects the values produced by x in one instant. When d and g are unimportant (e.g., when the signal will only be emitted once), we simply write signal x in e. We also write signal x1, . . . , xn in e when declaring n > 1 distinct signals in e. If expression e1 transitions to the name of a signal, then emit e1 e2 emits that signal carrying the value obtained from the instantaneous execution of e2. Expression pause postpones execution to the next instant. The conditional expression present e1 ? (e2) : (e3) checks the presence of a signal: if e1 transitions to the name of a signal present in the current instant, then e2 is run in the same instant; otherwise, e3 is run in the next instant. Expression do e when e1 executes e only when e1 transitions to the name of a signal present in the current instant, and suspends its execution otherwise. Expression do (e1) until e(x) → (e2) executes e1 until e transitions into the name of a currently present signal that carries a value, which will substitute x; if this occurs, the execution of e1 stops at the end of the instant and e2 is executed in the next one.

Using these basic constructs, we may obtain the useful derived expressions reported in Fig. 3, which include the parallel composition e1 ∥ e2. We will say that an expression with no parallel composition operator at top level is a thread. We write ≡R to denote the smallest equivalence that satisfies the following axioms: (i) e ∥ () ≡R e; (ii) e1 ∥ e2 ≡R e2 ∥ e1; (iii) (e1 ∥ e2) ∥ e3 ≡R e1 ∥ (e2 ∥ e3).

A Synchronous Semantics for RML. Following [15], we define a big-step operational semantics for RML. We require some auxiliary definitions for signal environments and events.
Below, ⊎ and ⊑ denote the usual multiset union and inclusion, respectively.
e1 ; e2 ≜ let _ = () and _ = e1 in e2        e1 ∥ e2 ≜ let _ = e1 and _ = e2 in ()
await e1(x) in e2 ≜ do (loop pause) until e1(x) → (e2)
let rec process f x1 . . . xn = e1 in e2 ≜ let f = (rec f = λx1 . . . xn.process e1) in e2   (n ≥ 1)
if e1 then e2 else e3 ≜ match e1 with {tt → e2 | ff → e3}

Fig. 3. Derived RML expressions.
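As a small example of these constructs and of the derived forms of Fig. 3 (a sketch with arbitrary boolean payloads, not part of our encodings), consider the expression below, which connects two threads through a signal s:

signal s in ((emit s tt; pause; emit s ff) ∥ (await s(x) in if x then pause else ()))

In the first instant, the left thread emits tt on s and the right thread detects the emission; in the second instant, the left thread emits ff (now unobserved) while the right thread runs its body with x bound to the value gathered in the first instant, i.e., x = tt.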
Definition 7 (Signal Environment). Let D, G, M be sets of default values, gathering functions, and multisets, respectively. A signal environment is a function S : Nr → (D × G × M), denoted S = [(d1, g1, m1)/n1, . . . , (dk, gk, mk)/nk], with k ≥ 1. We use the following notations: S^d(ni) = di, S^g(ni) = gi, and S^m(ni) = mi. Also, S^v(ni) = fold gi mi di, where fold recursively gathers the multiple emissions of different values on the same signal; see [17,15] for details.

An event E associates a signal ni to a multiset mi that represents the values emitted during an instant:

Definition 8 (Events). An event is defined as a function E : Nr → M, i.e., E = [m1/n1, . . . , mk/nk], with k ≥ 1. Given events E1 and E2, we say that E1 is included in E2 (written E1 ⊑E E2) if and only if E1(n) ⊑ E2(n) for all n ∈ Dom(E1) ∪ Dom(E2). The union of E1 and E2 (written E1 ⊔E E2) is defined for all n ∈ Dom(E1) ∪ Dom(E2) as (E1 ⊔E E2)(n) = E1(n) ⊎ E2(n).

We now define the semantics of RML expressions. A big-step transition in RML captures reactions within a single instant, and is of the form e −[E, b]→_S e′, where S stands for the smallest signal environment (w.r.t. ⊑E and S^m) containing input, output, and local signals; E is the event made of the signals emitted during the reaction; and b ∈ {tt, ff} is a boolean value that indicates termination: b is false if e is stuck during that instant, and true otherwise. At each instant i, the program reads an input Ii and produces an output Oi. The reaction of an expression obeys four conditions: (C1) (Ii ⊔E Ei) ⊑E S^m_i (i.e., S must contain the inputs and the emitted signals); (C2) Oi ⊑E Ei (i.e., the output signals are included in the emitted signals); (C3) S^d_i ⊆ S^d_{i+1}; and (C4) S^g_i ⊆ S^g_{i+1} (i.e., default values and gathering functions are preserved throughout instants).

Fig. 4 gives selected transition rules; see [7] for a full account. Rules ⌊L-Par⌋ and ⌊L-Done⌋ handle let expressions, distinguishing when (a) at least one of the parallel branches has not yet terminated, and (b) both branches have terminated and their resulting values can be used. Rule ⌊Run⌋ ensures that declared processes can only be executed when they are preceded by run. Rules ⌊Lp-Stu⌋ and ⌊Lp-Un⌋ handle loop expressions: the former decrees that a loop stops executing when the termination boolean of its body becomes ff; the latter executes a loop until Rule ⌊Lp-Stu⌋ is applied. Rule ⌊Sig-Dec⌋ declares a signal by instantiating it with a fresh name in the continuation; its default value and gathering function must be instantaneous expressions. Rule ⌊Emit⌋ governs signal emission. Rule ⌊Pause⌋ suspends the process for an instant. Rules ⌊Sig-P⌋ and ⌊Sig-NP⌋ check for the presence of a signal n: when n is currently present, the body e2 is run in the same instant; otherwise, e3 is executed in the next instant. Rules ⌊DU-End⌋, ⌊DU-P⌋, and ⌊DU-NP⌋ handle expressions do (e1) until e2(x) → (e3).
(Each rule below is written linearly, with its premises to the left of ⟹ and its conclusion to the right; e −[E, b]→_S e′ denotes a big-step transition with emitted event E, termination boolean b, and signal environment S.)

⌊L-Par⌋   e1 −[E1, b1]→_S e1′   e2 −[E2, b2]→_S e2′   b1 ∧ b2 = ff
          ⟹  let x1 = e1 and x2 = e2 in e3 −[E1 ⊔E E2, ff]→_S let x1 = e1′ and x2 = e2′ in e3

⌊L-Done⌋  e1 −[E1, tt]→_S v1   e2 −[E2, tt]→_S v2   e3{v1, v2/x1, x2} −[E3, b]→_S e3′
          ⟹  let x1 = e1 and x2 = e2 in e3 −[E1 ⊔E E2 ⊔E E3, b]→_S e3′

⌊Run⌋     e −[E1, tt]→_S process e′   e′ −[E2, b]→_S e″   ⟹   run e −[E1 ⊔E E2, b]→_S e″

⌊Lp-Stu⌋  e −[E, ff]→_S e′   ⟹   loop e −[E, ff]→_S e′ ; loop e

⌊Lp-Un⌋   e −[E1, tt]→_S v   loop e −[E2, b]→_S e′   ⟹   loop e −[E1 ⊔E E2, b]→_S e′

⌊Sig-Dec⌋ e1 −[E1, tt]→_S v1   e2 −[E2, tt]→_S v2   e3{n/x} −[E3, b]→_S e3′   n fresh   S(n) = (v1, v2, m)
          ⟹  signal_{e2} x : e1 in e3 −[E1 ⊔E E2 ⊔E E3, b]→_S e3′

⌊Emit⌋    e1 −[E1, tt]→_S n   e2 −[E2, tt]→_S v   ⟹   emit e1 e2 −[E1 ⊔E E2 ⊔E [{v}/n], tt]→_S ()

⌊Pause⌋   pause −[∅, ff]→_S ()

⌊Sig-P⌋   e1 −[E1, tt]→_S n   n ∈ S   e2 −[E2, b]→_S e2′   ⟹   present e1 ? (e2) : (e3) −[E1 ⊔E E2, b]→_S e2′

⌊Sig-NP⌋  e1 −[E, tt]→_S n   n ∉ S   ⟹   present e1 ? (e2) : (e3) −[E, ff]→_S e3

⌊DU-End⌋  e2 −[E2, tt]→_S n   e1 −[E1, tt]→_S v   ⟹   do (e1) until e2(x) → (e3) −[E1 ⊔E E2, tt]→_S v

⌊DU-P⌋    e2 −[E2, tt]→_S n   n ∈ S   e1 −[E1, ff]→_S e1′   ⟹   do (e1) until e2(x) → (e3) −[E1 ⊔E E2, ff]→_S e3{S^v(n)/x}

⌊DU-NP⌋   e2 −[E2, tt]→_S n   n ∉ S   e1 −[E1, ff]→_S e1′   ⟹   do (e1) until e2(x) → (e3) −[E1 ⊔E E2, ff]→_S do (e1′) until e2(x) → (e3)

Fig. 4. Big-step semantics for RML expressions (selection).
Rule ⌊DU-End⌋ says that if e1 terminates instantaneously, then the whole expression terminates. Rule ⌊DU-P⌋ says that if e2 transitions to a currently present signal n, then e3 is executed in the next instant, substituting x with the values gathered in n. Rule ⌊DU-NP⌋ executes e1 as long as e2 does not reduce to a currently present signal. We shall rely on a simple notion of equality:

Definition 9 (Equality with case normalization). Let ↪R denote the extension of ≡R with the axiom match cj with {ci → Pi}i∈I ↪R Pj, where cj is a constant and j ∈ I.

RMLq: ReactiveML with a Queue-based Semantics. We extend RML with an explicit store of queues that keeps the state of the executed program. Unlike signals, the store of queues is preserved throughout time. The syntax of RML is extended with constructs that modify the queues located in the store; the resulting language is called RMLq:

Definition 10 (RMLq). RMLq expressions are obtained by extending the grammar of values in Def. 6 with the following forms: v ::= · · · | pop | put | isEmpty.
⌊Put-Q⌋   ⟨put q v ; Σ, q : h̃⟩ ⇢[∅, tt]_S ⟨() ; Σ, q : h̃ · v⟩
⌊Pop-Q⌋   ⟨pop q ; Σ, q : v · h̃⟩ ⇢[∅, tt]_S ⟨v ; Σ, q : h̃⟩
⌊NEmpty⌋  ⟨isEmpty q ; Σ, q : h̃⟩ ⇢[∅, tt]_S ⟨() ; Σ, q : h̃⟩   (h̃ not empty)
⌊Pop-Qε⌋  ⟨pop q ; Σ, q : ε⟩ ⇢[∅, ff]_S ⟨pop q ; Σ, q : ε⟩
⌊Empty⌋   ⟨isEmpty q ; Σ, q : ε⟩ ⇢[∅, ff]_S ⟨isEmpty q ; Σ, q : ε⟩

Fig. 5. Big-step semantics for RMLq: Queue-related operations.
The new constructs allow RMLq programs to modify queues, which are ranged over by q, q′, .... Construct put receives a queue and an element as parameters and pushes the element onto the end of the queue. Construct pop takes a queue and dequeues its first element; if the queue is empty in the current instant, it blocks the current thread until an element is obtained. Construct isEmpty blocks a thread until the instant in which a queue stops being empty.

The semantics of RMLq includes a state Σ, Σ′ ::= ∅ | Σ, q : ṽ (i.e., a possibly empty collection of queues) and configurations K, K′ ::= ⟨e ; Σ⟩. The big-step semantics then has transitions of the form ⟨e ; Σ⟩ ⇢[E, b]_S ⟨e′ ; Σ′⟩, where S is a signal environment, b is a termination boolean, and E is an event. The corresponding transition system is generated by rules including those in Fig. 5 (see also [7]). Most transition rules for RMLq are interpreted as for RML; we briefly discuss the queue-related rules in Fig. 5. Rule ⌊Put-Q⌋ pushes an element into a queue and terminates instantaneously. Rule ⌊Pop-Q⌋ takes the first element from the queue (if not empty) and terminates instantaneously. Rule ⌊NEmpty⌋ enables isEmpty to terminate instantaneously if the queue is not empty. Rule ⌊Pop-Qε⌋ keeps the thread execution stuck for at least one instant if the queue is empty; Rule ⌊Empty⌋ is similar. We rule out programs with parallel pop/put operations along the same session in the same instant.
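As an illustration of these operations, consider a single queue q (we keep the signal environment implicit; the boolean payload is arbitrary):

⟨put q tt ; q : ε⟩ ⇢ ⟨() ; q : tt⟩      ⟨pop q ; q : tt⟩ ⇢ ⟨tt ; q : ε⟩      ⟨pop q ; q : ε⟩ ⇢ ⟨pop q ; q : ε⟩

The first two steps terminate instantaneously (Rules ⌊Put-Q⌋ and ⌊Pop-Q⌋), whereas the last configuration remains blocked for at least one instant, until some parallel thread puts an element into q (Rule ⌊Pop-Qε⌋).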
4 Expressiveness Results
We present our main results: correct translations of π into RML and of aπ into RMLq.

The Formal Notion of Encoding. We define notions of language, translation, and encoding by adapting those from Gorla's framework for relative expressiveness [10].

Definition 11 (Languages & Translations). A language L is a tuple ⟨P, →, ≈⟩, where P is a set of processes, → denotes an operational semantics, and ≈ is a behavioral equality on P. A translation from Ls = ⟨Ps, →s, ≈s⟩ into Lt = ⟨Pt, →t, ≈t⟩ (each with countably infinite sets of variables Vs and Vt, respectively) is a pair ⟨⟦·⟧, ψ⟦·⟧⟩, where ⟦·⟧ : Ps → Pt is a mapping, and ψ⟦·⟧ : Vs → Vt is a renaming policy for ⟦·⟧.

We are interested in encodings: translations that satisfy certain correctness criteria.

Definition 12 (Encoding). Let Ls = ⟨Ps, →s, ≈s⟩ and Lt = ⟨Pt, →t, ≈t⟩ be languages; also let ⟨⟦·⟧, ψ⟦·⟧⟩ be a translation between them (cf. Def. 11). We say that such a translation is an encoding if it satisfies the following criteria:
1. Name invariance: For all S ∈ Ps and substitution σ, there exists σ′ such that ⟦Sσ⟧ = ⟦S⟧σ′, with ψ⟦·⟧(σ(x)) = σ′(ψ⟦·⟧(x)), for any x ∈ Vs.
2. Compositionality: Let res_s(·, ·) and par_s(·, ·) (resp. res_t(·, ·) and par_t(·, ·)) denote restriction and parallel composition operators in Ps (resp. Pt). Then, we define ⟦res_s(x̃, P)⟧ = res_t(ψ⟦·⟧(x̃), ⟦P⟧) and ⟦par_s(P, Q)⟧ = par_t(⟦P⟧, ⟦Q⟧).
3. Operational correspondence, i.e., it is sound and complete:
   (1) Soundness: For all S ∈ Ps, if S →s S′, there exists T ∈ Pt such that ⟦S⟧ ⟹t T and T ≈t ⟦S′⟧.
   (2) Completeness: For all S ∈ Ps and T ∈ Pt, if ⟦S⟧ ⟹t T, there exists S′ such that S ⟹s S′ and T ≈t ⟦S′⟧.

While name invariance and compositionality are static correctness criteria, operational correspondence is a dynamic correctness criterion. Notice that our notion of compositionality is less general than that in [10]: this is due to several important differences in the structure of the languages under comparison (π vs. RML and aπ vs. RMLq). We shall present translations of π into RML and of aπ into RMLq, which we will show to be encodings. We instantiate Def. 11 with the following languages:

Definition 13 (Concrete Languages). We shall consider:
- Lπ will denote the tuple ⟨π, −→, ≡S⟩, where π is as in Def. 1; −→ is the reduction semantics in Fig. 1; and ≡S is the structural congruence relation for π.
- LRML will denote the tuple ⟨RML, −[E,b]→_S, ↪R⟩, where RML is as in Def. 6; −[E,b]→_S is the big-step semantics for RML; and ↪R is the equivalence in Def. 9.
- Laπ will denote the tuple ⟨aπ, −→A, ≡A⟩, where aπ is as in Def. 3; −→A is the reduction semantics in Fig. 2; and ≡A is the structural congruence relation for aπ.
- LRMLq will denote the tuple ⟨RMLq, ⇢[E,b]_S, ≡R⟩, where RMLq is as in Def. 10; ⇢[E,b]_S is the big-step semantics for RMLq; and ≡R is the equivalence for RML.

When events, termination booleans, and signal environments are unimportant, we write P ↦ Q instead of P −[E,b]→_S Q, and K ⇢ K′ instead of K ⇢[E,b]_S K′.

Encoding Lπ into LRML. Key aspects in our translation of Lπ into LRML are: (i) the use of value-carrying signals to model communication channels; and (ii) the use of a continuation-passing style (following [8]) to model variables in π using RML signals.

Definition 14 (Translating Lπ into LRML). Let ⟨⟦·⟧f, ψ⟦·⟧f⟩ be a translation where:
(1) ψ⟦·⟧f(x) = x, i.e., every variable in π is mapped to the same variable in RML.
(2) ⟦·⟧f : Lπ → LRML is as in Fig. 6, where f is a substitution function.

Function f in ⟦·⟧f ensures that fresh signal identifiers are used in each protocol action. The translation of x⟨v⟩.P declares a new signal x′ which is sent, paired with value v, through signal x; process ⟦P⟧f,{x←x′} is executed in the next instant. Dually, the translation of x(y).P awaits a signal carrying a pair, composed of a value and the signal name to be used in the continuation, which is executed in the next instant. Translations for selection and branching are special cases of those for output and input. Restriction (νxy)P is translated by declaring a fresh signal w, which replaces both x and y in ⟦P⟧f.
⟦x⟨v⟩.P⟧f ≜ signal x′ in (emit fx (v, x′); pause; ⟦P⟧f,{x←x′})
⟦x(y).P⟧f ≜ await fx (y, w) in ⟦P⟧f,{x←w}
⟦x ◁ l.P⟧f ≜ signal x′ in (emit fx (l, x′); pause; ⟦P⟧f,{x←x′})
⟦x ▷ {li : Pi}i∈I⟧f ≜ await fx (l, w) in match l with {li → ⟦Pi⟧f,{x←w}}
⟦v? (P) : (Q)⟧f ≜ if v then (pause; ⟦P⟧f) else (pause; ⟦Q⟧f)
⟦(νxy)P⟧f ≜ signal w in ⟦P⟧f,{x←w,y←w}
⟦∗x(y).P⟧f ≜ let rec process repl α β =
                 signal x′ in ( do (loop present fα ? (emit x′; pause) : (())) until fα(y, w) → (run β{α←w})
                                ∥ await x′ in run (repl α β) )
               in run repl x (process ⟦P⟧f)
⟦P | Q⟧f ≜ ⟦P⟧f ∥ ⟦Q⟧f        ⟦0⟧f ≜ ()

Fig. 6. Translation from Lπ to LRML (Def. 14). Notice that fx is a shorthand for f(x).
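To illustrate Def. 14, consider a session consisting of a single exchange; following Fig. 6, and writing x1 for the fresh signal declared by the encoded output and w′ for the continuation signal bound by the encoded input, we obtain (for any f):

⟦(νxy)(x⟨v⟩.0 | y(z).0)⟧f = signal w in ( (signal x1 in (emit w (v, x1); pause; ())) ∥ (await w (z, w′) in ()) )

In the first instant, the encoded output emits the pair (v, x1) on the shared signal w; in the next instant, the encoded input continues with z bound to v and w′ bound to x1, the signal on which any further actions of the session would be encoded.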
Conditionals, parallel composition, and inaction are translated homomorphically. Input-guarded replication is a special case of recursion, enabling at most one copy of the spawned process in the same instant; such a copy will be blocked until the process that spawned it interacts with some process. In Fig. 6, α and β denote variables internal to the declaration of the recursive process, distinct from any other variables.

We state our first technical result: the translation of Lπ into LRML is an encoding. In the proof, we identify a class of well-formed π processes that have at most one output and selection per endpoint in the same instant; see [7] for details.

Theorem 1. Translation ⟨⟦·⟧f, ψ⟦·⟧f⟩ is an encoding, in the sense of Def. 12.

Encoding Laπ into LRMLq. The main intuition in translating aπ into RMLq is to use the queues of RMLq coupled with a handler process that implements the output-input transmission between queues. We start by introducing some auxiliary notions.

Notation 1. Let P ≡A (ν x̃)(∏_{i∈{1,...,n}} Qi | ∏_{kj∈k̃} kj[i : ε, o : ε]) be PI (cf. Def. 5), with variables k̃. We will write P as Cl[Ql, K(k̃)], where l ∈ {1, . . . , n}, Cl[·1, ·2] = ∏_{j∈{1,...,n}\{l}} Qj | [·1] | [·2], and K(k̃) = ∏_{kj∈k̃} kj[i : ε, o : ε].
This notation allows us to distinguish two parts in a PI process: the non-queue processes and the queue processes K(k̃). We now define the key notion of handler process:

Definition 15 (Handler process). Given k̃ = {k1, . . . , kn}, the handler process H(k̃) is defined as ∏_{i∈{1,...,n}} I(ki) ∥ O(ki), where I(k) and O(k) are as in Fig. 7.

Given an endpoint k, a handler defines parallel processes I(k) and O(k) to handle its input and output queues. Transmission is a handshake in which both O(k) and I(k̄) (or vice versa) must be ready to communicate. If ready, O(k) sends a pair containing the message (pop ko) and a fresh signal for further actions (α′). Once the pair is received, it is enqueued in k̄i (i.e., by the dual handler I(k̄)). The process is recursively called in the next instant with the new endpoints. The translation of aπ? into RMLq requires a final auxiliary definition:
I(k) ≜ let rec process I α =
         present ackα ? (emit ackα; await α(x, α′) in (put x ki); run I α′) : (run I α)
       in run I k

O(k) ≜ let rec process O α = signal α′ in
         (isEmpty αo; emit ackα;
          present ackα ? (emit α ((pop ko), α′); pause; run O α′) : (run O α))
       in run O k

Fig. 7. Components of handler processes (Def. 15).
{[x⟨v⟩.P]} ≜ put xo v; {[P]}
{[x(y).P]} ≜ let y = pop xi in {[P]}
{[x ◁ l.P]} ≜ put xo l; {[P]}
{[x ▷ {li : Pi}i∈I]} ≜ let l = pop xi in match l with {li → {[Pi]}}i∈I
{[b? (P) : (Q)]} ≜ if b then {[P]} else {[Q]}
{[P | Q]} ≜ {[P]} ∥ {[Q]}
{[(νx)P]} ≜ signal x, x̄ in {[P]}
{[µX.P]} ≜ let rec process αX = {[P]} in run αX
{[X]} ≜ pause; run αX
{[0]} ≜ ()

Fig. 8. Auxiliary translation from aπ? into RMLq (Def. 17).
Definition 16. We define δ(·) as a function that maps aπ processes into RMLq states:

δ(k[i : h̃; o : m̃]) = {ki : h̃, ko : m̃}    δ(P | Q) = δ(P) ∪ δ(Q)    δ((νx)P) = δ(P)

and δ(P) = ∅ for every other aπ process.

Definition 17 (Translating Laπ into LRMLq). Let ⟨([·]), ψ([·])⟩ be a translation where:
- ψ([·])(k) = k, i.e., every variable in aπ is mapped to the same variable in RMLq.
- ([·]) : Laπ → LRMLq is defined for properly initialized aπ processes C[Q, K(k̃)], which are translated into RMLq configurations as follows:

([C[Q, K(k̃)]]) = ⟨{[C[Q, 0]]} ∥ H(k̃) ; δ(K(k̃))⟩

where {[·]} : Laπ? → LRMLq is given in Fig. 8; H(k̃) is in Def. 15; and δ(·) is in Def. 16.

Two key ideas in translation ([·]) are: queues local to processes and compositional (queue) handlers. Indeed, communication between an endpoint k and its queues ki, ko proceeds instantaneously, for such queues should be local to the process implementing session k. Queue handlers effectively separate processes/behavior from data/state. As such, it is conceivable to have handlers with more functionality than H(k̃); in [7] we provide an example of a handler more sophisticated than H(k̃).

Translation ([·]) is in two parts. First, {[·]} translates non-queue processes: output and input are translated into queuing and dequeuing operations, respectively. Selection and branching are modeled similarly. Translations for the conditional, inaction, parallel composition, and recursion are as expected. Recursion is limited to pause-guarded tail recursion in {[·]} to avoid loops of instantaneous expressions and nondeterminism when accessing queues. Second, ([·]) creates an RMLq configuration by composing the RMLq process obtained via {[·]} with appropriate handlers and with the state obtained from the information in the aπ queues.
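As a small example of the first part, {[·]}, consider a single exchange over the endpoints x and x̄ (the boolean payload is arbitrary); applying Fig. 8:

{[(νx)(x⟨tt⟩.0 | x̄(z).0)]} = signal x, x̄ in ((put xo tt; ()) ∥ (let z = pop x̄i in ()))

Here put enqueues tt in the output queue xo within one instant; in later instants, the handlers O(x) and I(x̄) of H({x, x̄}) move tt from xo to the input queue x̄i, from which pop finally retrieves it (this description elides the fresh signals exchanged by the handlers).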
Because of this two-part structure, static correctness properties are established for {[·]} (for this is the actual translation of source processes), whereas operational correspondence is established for ([·]) (which generates an RMLq configuration).

Theorem 2 (Name invariance and compositionality for {[·]}). Let P, σ, x, and E[·] be an aπ? process, a substitution, a variable in aπ?, and an evaluation context (cf. Def. 4), respectively. Then: (1) {[Pσ]} = {[P]}σ, and (2) {[E[P]]} = {[E]}[{[P]}].

Theorem 3 (Operational correspondence for ([·])). Given a properly initialized aπ process C[Q, K(k̃)], it holds that:
1. Soundness: If C[Q, K(k̃)] −→A C[Q′, K′(k̃)] then ([C[Q, K(k̃)]]) ⇢ ([C′[Q″, K″(k̃)]]), for some Q″, K″(k̃), C′ such that C[Q, K(k̃)] −→A C[Q′, K′(k̃)] −→∗A (ν x̃)C′[Q″, K″(k̃)].
2. Completeness: If ([C[Q, K(k̃)]]) ⇢ R then there exist Q′, C′, K′(k̃) such that C[Q, K(k̃)] −→∗A (ν x̃)C′[Q′, K′(k̃)] and R = ([C′[Q′, K′(k̃)]]).

In soundness, a single RMLq step mimics one or more steps in aπ, i.e., several source computations can be grouped into the same instant. This way, e.g., the interaction of several outputs along the same session with their queue (cf. Rule ⌊Send⌋) will take place in the same instant. In contrast, several queue synchronizations in the same session (cf. Rule ⌊Com⌋) will be sliced over different instants. Conversely, completeness ensures that our encoding does not introduce extraneous behaviors: for every RMLq transition of a translated process there exist one or more corresponding aπ reductions.
5 Closing Remarks
We have shown that ReactiveML can correctly encode session-based concurrency, covering both synchronous and asynchronous (queue-based) communications.1 Our encodings are executable: as such, they enable us to integrate session-based concurrency in actual RML programs featuring declarative, reactive, timed, and contextual behavior. This is an improvement with respect to previous works, which extend the π-calculus with some (but not all) of these features and/or lack programming support. Interestingly, since ReactiveML has a well-defined semantics, it already offers a firm basis for both foundational and practical studies on session-based concurrency. Indeed, ongoing work concerns the principled extension of our approach to the case of multiparty sessions.

We have not considered types in source/target languages, but we do not foresee major obstacles. In fact, we have already shown that our encoding ⟦·⟧f supports a large class of well-typed π processes in the system of [19], covering a typed form of operational correspondence (cf. Corollary 3) but also type soundness: if P is a well-typed π process, then ⟦P⟧f is a well-typed RML expression (see Thm. 11). We conjecture a similar result for ([·]), under an extension of [19] with queues. On the ReactiveML side, we can exploit the type-and-effect system in [15] to enforce cooperative programs (roughly, programs without infinite loops). Since ⟦·⟧f and ([·]) already produce well-typed, executable ReactiveML expressions, we further conjecture that they are also cooperative, in the sense of [15].

1 Synchronous communication as in the (session) π-calculus should not be confused with the synchronous programming model of ReactiveML.

Acknowledgements. We thank Ilaria Castellani, Cinzia Di Giusto, and the anonymous reviewers for useful remarks and suggestions. This work has been partially sponsored by CNRS PICS project 07313 (SuCCeSS) and EU COST Actions IC1201 (BETTY), IC1402 (ARVI), and IC1405 (Reversible Computation).
References

1. M. Bartoletti, T. Cimoli, M. Murgia, A. S. Podda, and L. Pompianu. Compliance and subtyping in timed session types. In FORTE, volume 9039 of LNCS, pages 161–177. Springer, 2015.
2. A. Benveniste, P. Caspi, S. A. Edwards, N. Halbwachs, P. L. Guernic, and R. de Simone. The synchronous languages 12 years later. Proceedings of the IEEE, 91(1):64–83, 2003.
3. L. Bocchi, K. Honda, E. Tuosto, and N. Yoshida. A theory of design-by-contract for distributed multiparty interactions. In CONCUR 2010, volume 6269 of LNCS, pages 162–176. Springer-Verlag, 2010.
4. L. Bocchi, W. Yang, and N. Yoshida. Timed multiparty session types. In Proc. of CONCUR'14, volume 8704, pages 419–434. Springer, 2014.
5. E. Bonelli, A. B. Compagnoni, and E. L. Gunter. Correspondence assertions for process synchronization in concurrent communications. J. Funct. Program., 15(2):219–247, 2005.
6. F. Boussinot and R. de Simone. The SL synchronous language. IEEE Trans. Software Eng., 22(4):256–266, 1996.
7. M. Cano, J. Arias, and J. A. Pérez. Session-based Concurrency, Reactively (Extended Version), 2017. Available at http://www.jperez.nl/publications.
8. O. Dardha, E. Giachino, and D. Sangiorgi. Session types revisited. In Proc. of PPDP'12, pages 139–150, 2012.
9. X. Fu, T. Bultan, and J. Su. Conversation protocols: a formalism for specification and verification of reactive electronic services. Theor. Comput. Sci., 328(1-2):19–37, 2004.
10. D. Gorla. Towards a unified approach to encodability and separation results for process calculi. Inf. Comput., 208(9):1031–1053, 2010.
11. N. Halbwachs, F. Lagnier, and C. Ratel. Programming and verifying real-time systems by means of the synchronous data-flow language LUSTRE. IEEE Trans. Software Eng., 18(9):785–793, 1992.
12. K. Honda, V. T. Vasconcelos, and M. Kubo. Language Primitives and Type Discipline for Structured Communication-Based Programming. In Proc. of ESOP'98, volume 1381, pages 122–138. Springer, 1998.
13. H. Hüttel, I. Lanese, V. T. Vasconcelos, L. Caires, M. Carbone, P.-M. Deniélou, D. Mostrous, L. Padovani, A. Ravara, E. Tuosto, H. T. Vieira, and G. Zavattaro. Foundations of session types and behavioural contracts. ACM Comput. Surv., 49(1):3:1–3:36, Apr. 2016.
14. D. Kouzapas, N. Yoshida, R. Hu, and K. Honda. On asynchronous eventful session semantics. Mathematical Structures in Computer Science, 26(2):303–364, 2016.
15. L. Mandel and C. Pasteur. Reactivity of Cooperative Systems - Application to ReactiveML. In 21st International Symposium, SAS 2014, Munich, Germany, 2014, pages 219–236, 2014.
16. L. Mandel, C. Pasteur, and M. Pouzet. ReactiveML, ten years later. In Proc. of PPDP 2015, pages 6–17, 2015.
17. L. Mandel and M. Pouzet. ReactiveML: a reactive extension to ML. In Proc. of PPDP'05, pages 82–93. ACM, 2005.
18. R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, I. Inf. Comput., 100(1):1–40, 1992.
19. V. T. Vasconcelos. Fundamentals of session types. Inf. Comput., 217:52–70, 2012.
(Qualifiers)  q ::= lin (linear) | un (unrestricted)
(Pretypes)    p ::= ?T.T (receive) | !T.T (send) | ⊕{li : Ti}i∈I (select) | &{li : Ti}i∈I (branching)
(Types)       T ::= bool (boolean) | end (termination) | q p (qualified pretype) | a (type variable) | µa.T (recursive type)
(Contexts)    Γ, ∆ ::= ∅ (empty context) | Γ, x : T (assumption)

Fig. 9. Session Types: Qualifiers, Pretypes, Types, Contexts.
A Appendix for Section 3

A.1 Type System for π
Here we present the type system for the language π given in §3 and show an example of typing.

Definition 18 (Session Types: Syntax). The syntax of session types is given in Fig. 9. We use q to range over qualifiers, p to range over pretypes, and T to range over types.

The type syntax includes pretypes and types. Pretype !T1.T2 denotes output, and types a channel that sends a value of type T1 and continues according to type T2. Dually, pretype ?T1.T2 denotes input, and types a channel that receives a value of type T1 and then proceeds according to type T2. Pretypes ⊕{li : Ti}i∈I and &{li : Ti}i∈I denote labeled selection (internal choice) and branching (external choice), respectively. Types annotate channels and can be either (1) bool, used for constants and variables, (2) end, which types a channel endpoint that can no longer be used, (3) qualified pretypes, which type the actions executed by a channel, or (4) recursive types, for disciplining potentially infinite communication patterns. The approach to recursion taken for π in [19] is equi-recursive, i.e., a recursive type is assumed to be equal to its unfolding.

Qualifiers refer to linear or unrestricted behaviors. Intuitively, linearly qualified types are assigned to endpoints occurring in exactly one thread (a process not comprising parallel composition); the unrestricted qualifier allows an endpoint to occur in multiple threads. For each qualifier q, there are predicates q(T) and q(Γ), defined as follows:
– un(T) if and only if T = bool or T = end or T = un p.
– lin(T) if and only if true.
– q(Γ) if and only if (x : T) ∈ Γ implies q(T).

Session type systems depend on type duality to relate session types with complementary (or opposite) behaviors: e.g., the dual of input is output (and vice versa); branching is the dual of selection (and vice versa). For the purposes of this paper, we only provide an inductive definition of duality; for a formal definition and a more detailed study of duality see, e.g., [App4]. We write T̄ to denote the dual of type T.
Definition 19 (Duality of session types). For every type T, except bool, we define duality as follows: the dual of end is end; the dual of !S.U is ?S.Ū; the dual of ?S.U is !S.Ū; the dual of &{li : Si}i∈I is ⊕{li : S̄i}i∈I; the dual of ⊕{li : Si}i∈I is &{li : S̄i}i∈I; the dual of a is a; and the dual of µa.S is µa.S̄.
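For instance, the recursive types µa.un !bool.a and µa.un ?bool.a used in Example 1 below are dual: the output !bool becomes an input ?bool (and vice versa), while the payload type bool and the recursive structure are preserved.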
Typing uses a context splitting operator, denoted ◦, which maintains the linearity invariant for channels. It is defined below.

Definition 20 (Context splitting). Let Γ1 and Γ2 be two contexts; we write Γ1, Γ2 to denote their concatenation. The context splitting of Γ1 and Γ2, written Γ1 ◦ Γ2, is defined by the following rules:

∅ = ∅ ◦ ∅
If Γ = Γ1 ◦ Γ2 and un(T), then Γ, x : T = (Γ1, x : T) ◦ (Γ2, x : T).
If Γ = Γ1 ◦ Γ2 and lin(T), then Γ, x : T = (Γ1, x : T) ◦ Γ2.
If Γ = Γ1 ◦ Γ2 and lin(T), then Γ, x : T = Γ1 ◦ (Γ2, x : T).
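For instance, taking Γ = x : lin !bool.end, y : bool, the rules above give Γ = (x : lin !bool.end, y : bool) ◦ (y : bool): the unrestricted assumption y : bool is copied to both contexts, whereas the linear assumption on x must go to exactly one of them.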
Given a context Γ and a process P, typing judgments are of the form Γ ⊢ P. Fig. 10 gives the typing rules for π processes; we now give some intuitions (see [19] for full details). Rules (T:Bool) and (T:Var) are for variables; in both cases, we check that all linear variables have been consumed, using predicate un(·). Rule (T:Nil) types the inactive process 0; it also checks that the context only contains unrestricted variables. Rule (T:If) type-checks the conditional process. Rule (T:Repl) checks replicated processes, making sure that the associated context is unrestricted. Rule (T:Par) types parallel composition, using context splitting to divide resources among the two sub-processes. Rule (T:Res) types the restriction operator: it performs a duality check on the types of the co-variables. Rule (T:In) types an input process: it checks whether x has the right type and checks the continuation; it also adds variable y with type T and x with the type of the continuation to the context. To type-check a process x⟨v⟩.P, Rule (T:Out) splits the context in three parts: the first is used to check the type of the sent object v; the second is used to check the type of the subject x; the third is used to check the continuation P. Rules (T:Bra) and (T:Sel) type-check label branching and label selection processes, and they do so in a fashion similar to (T:In) and (T:Out), respectively. We state the subject reduction property for this type system:

Theorem 4 ([19]). If Γ ⊢ P and P −→ Q then Γ ⊢ Q.

We now collect some results that concern the structure of processes; they all follow [19]. Some auxiliary notions are needed.

Notation 2 ((Typable) Programs). A process P such that fv(P) = ∅ is called a program. Therefore, program P is typable if it is well-typed under the empty environment (⊢ P).

Example 1. The π process

x⟨v1⟩.0 | x⟨v2⟩.0 | y(z).0    (1)

can be well-typed in the above system under the context Γ = {x : µa.un!bool.a, y : lin?bool.end}. For the sake of presentation, assume Γ′ = {x : µa.un!bool.a}. The derivation is as follows: by Rule (T:Par), the judgment Γ ⊢ x⟨v1⟩.0 | x⟨v2⟩.0 | y(z).0 follows from Γ′ ⊢ x⟨v1⟩.0 | x⟨v2⟩.0 and from a derivation D3 of Γ ⊢ y(z).0; the former follows, again by (T:Par), from derivations D1 and D2 of Γ′ ⊢ x⟨v1⟩.0 and Γ′ ⊢ x⟨v2⟩.0, each concluded with Rule (T:Out).
(Each rule is written linearly, with its premises to the left of ⟹ and its conclusion to the right.)

(T:Bool)  un(Γ)  ⟹  Γ ⊢ ff, tt : bool
(T:Var)   un(Γ1, Γ2)  ⟹  Γ1, x : T, Γ2 ⊢ x : T
(T:If)    Γ1 ⊢ v : bool   Γ2 ⊢ P   Γ2 ⊢ Q  ⟹  Γ1 ◦ Γ2 ⊢ v? (P) : (Q)
(T:Par)   Γ1 ⊢ P   Γ2 ⊢ Q  ⟹  Γ1 ◦ Γ2 ⊢ P | Q
(T:Nil)   un(Γ)  ⟹  Γ ⊢ 0
(T:Repl)  Γ ⊢ P   un(Γ)  ⟹  Γ ⊢ ∗P
(T:Res)   Γ, x : T, y : T̄ ⊢ P  ⟹  Γ ⊢ (νxy)P
(T:In)    Γ1 ⊢ x : q ?T.U   (Γ2, y : T) ◦ x : U ⊢ P  ⟹  Γ1 ◦ Γ2 ⊢ x(y).P
(T:Out)   Γ1 ⊢ x : q !T.U   Γ2 ⊢ v : T   Γ3 ◦ x : U ⊢ P  ⟹  Γ1 ◦ Γ2 ◦ Γ3 ⊢ x⟨v⟩.P
(T:Sel)   Γ1 ⊢ x : q ⊕{li : Ti}i∈I   Γ2 ◦ x : Tj ⊢ P   j ∈ I  ⟹  Γ1 ◦ Γ2 ⊢ x ◁ lj.P
(T:Bra)   Γ1 ⊢ x : q &{li : Ti}i∈I   ∀i ∈ I. Γ2 ◦ x : Ti ⊢ Pi  ⟹  Γ1 ◦ Γ2 ⊢ x ▷ {li : Pi}i∈I

Fig. 10. Session types: Typing rules for π processes.
un(Γ 0 ) un(Γ 0 ) (T:B OOL ) Γ 0 ` x : un!bool.a{S/a} Γ 0 ` v1 : bool Γ 0 ` xhv1 i.0
(T:N IL )
un(Γ 0 ) Γ0 ` 0
where S = µa.un!bool.a. Note that the leftmost premise of the rule: Γ 0 ` x : un!bool.T {S/a} can conclude thanks to the equi-recursive treatment of recursive types in [19]. This means that a type µa.T is equivalent to its unfolding. Derivation D2 is represented by the following tree: (T:VAR ) (T:O UT )
un(Γ 0 ) un(Γ 0 ) (T:B OOL ) Γ 0 ` x : un!bool.a{S/a} Γ 0 ` v2 : bool Γ 0 ` xhv2 i.0
(T:N IL )
un(Γ 0 ) Γ0 ` 0
Lastly, D3 represents the following derivation: (T:VAR ) (T:I N )
un(Γ 0 ) Γ 0 , y : lin?bool.end ` y : lin?bool.end Γ ` y(z).0
(T:N IL )
un(Γ 0 ) Γ 0 , z : bool, y : U ` 0
Notice that processes may be typed under different environments. For example, the process in (1) could be typed under environment Γ 00 = {x : µa.un!bool.a, y : µa.un?bool.a}, with a derivation tree similar to the one above. Notice that µa.un!bool.a and µa.un?bool.a are dual types. 19
⌊Val⌋    v −[∅, tt]→_S v
⌊Pair⌋   e1 −[E1, tt]→_S v1   e2 −[E2, tt]→_S v2   ⟹   (e1, e2) −[E1 ⊔E E2, tt]→_S (v1, v2)
⌊Recur⌋  v{rec x = v/x} −[E, tt]→_S v′   ⟹   rec x = v −[E, tt]→_S v′
⌊Appl⌋   e1 −[E1, tt]→_S λx.e3   e2 −[E2, tt]→_S v′   e3{v′/x} −[E3, tt]→_S v   ⟹   e1 e2 −[E1 ⊔E E2 ⊔E E3, tt]→_S v
⌊Case⌋   e −[E1, tt]→_S cj   ej −[E2, b]→_S e′j   j ∈ I   ⟹   match e with {ci → ei}i∈I −[E1 ⊔E E2, b]→_S e′j

The remaining rules (⌊L-Par⌋, ⌊L-Done⌋, ⌊Run⌋, ⌊Lp-Stu⌋, ⌊Lp-Un⌋, ⌊Sig-Dec⌋, ⌊Emit⌋, ⌊Pause⌋, ⌊Sig-P⌋, and ⌊Sig-NP⌋) are as given in Fig. 4.

Fig. 11. Big-step semantics for RML expressions (Part 1).
A.2 Full transition rules for RML
Fig. 11 and Fig. 12 show a full account of all the transition rules in RML. They are explained as follows:
– Rule ⌊Val⌋ indicates that values can be kept during subsequent instants; however, their execution always terminates (i.e., they do not suspend the thread).
– Rule ⌊Pair⌋ shows that the internal elements of each pair must be instantaneous.
– Rule ⌊Recur⌋ refers to a single unfolding of a recursive function, which must be done instantaneously.
– Rule ⌊Appl⌋ is the usual rule for function application; the involved expressions must terminate instantaneously.
– Rule ⌊Case⌋ is the usual ML match operator; we require that the matched expression is resolved instantaneously.
⌊DW-NS⌋   e2 −[E, tt]→_S n   n ∉ S   ⟹   do e1 when e2 −[E, ff]→_S do e1 when n
⌊DW-Int⌋  e2 −[E2, tt]→_S n   n ∈ S   e1 −[E1, ff]→_S e1′   ⟹   do e1 when e2 −[E1 ⊔E E2, ff]→_S do e1′ when n
⌊DW-End⌋  e2 −[E2, tt]→_S n   n ∈ S   e1 −[E1, tt]→_S v   ⟹   do e1 when e2 −[E1 ⊔E E2, tt]→_S v

The rules ⌊DU-End⌋, ⌊DU-P⌋, and ⌊DU-NP⌋ for do/until expressions are as given in Fig. 4.

Fig. 12. Big-step semantics for RML expressions (Part 2).
– Rules bL-PARc and bL-DONEc handle let expressions, distinguishing the cases in which (a) at least one of the parallel branches has not yet terminated, and (b) both branches have terminated and their resulting values can be used in the body of the let.
– Rule bRUNc indicates that declared processes can only be executed when preceded by a run operator.
– Rules bLP-STUc and bLP-UNc handle loop expressions: the former decrees that a loop stops executing for the current instant when the termination boolean of its body becomes ff; the latter keeps executing the loop until Rule bLP-STUc applies. If the body of a loop were instantaneous, the execution would require an infinite derivation tree, breaking cooperativity [15]. This is why expressions whose body is instantaneous (e.g., loop ()) have no semantics.
– Rule bSIG-DECc declares a signal by instantiating it with a fresh name in the continuation; its default value and its gathering function must be instantaneous expressions.
– Rule bEMITc governs signal emission.
– Rule bPAUSEc suspends the thread for a single instant.
– Rules bSIG-Pc and bSIG-NPc check for the presence of a signal: when the signal is currently present, the body e2 is executed in the same instant; otherwise, e3 is executed in the next instant.
– Rules bDW-NSc, bDW-INTc, and bDW-ENDc govern expressions do e1 when e2. Rule bDW-NSc suspends the execution of e1 when e2 evaluates to a signal that is not currently present. Rule bDW-INTc decrees that when e2 evaluates to a currently present signal, e1 is executed for as long as the signal is present. Rule bDW-ENDc says that if e2 evaluates to a currently present signal and e1 terminates instantaneously, then the whole expression terminates.
– Rules bDU-ENDc, bDU-Pc, and bDU-NPc handle expressions do (e1) until e2(x) → (e3). Rule bDU-ENDc says that if e1 terminates instantaneously, then the whole expression terminates. Rule bDU-Pc says that if e2 evaluates to a currently present signal n, then e3 is executed in the next instant, substituting x with the values gathered in n. Rule bDU-NPc executes e1 as long as e2 does not evaluate to a currently present signal.
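To make the instant-based reading of rules bVALc, bPAUSEc, bEMITc, bSIG-Pc and bSIG-NPc concrete, the following OCaml sketch evaluates one instant of a toy fragment over an explicit set of present signals. It is an illustrative abstraction written for this appendix (the datatype and the names step, Emit, etc. are ours); it is neither the formal semantics above nor the ReactiveML implementation.

    (* Toy fragment: unit value, pause, valueless emit, present, sequencing. *)
    module S = Set.Make (String)

    type exp =
      | Unit
      | Pause
      | Emit of string                      (* emit a (valueless) signal      *)
      | Present of string * exp * exp       (* present s ? (e2) : (e3)        *)
      | Seq of exp * exp

    (* One instant: given the present signals [s], return the residual
       expression, the emitted signals, and the termination boolean [b].     *)
    let rec step (s : S.t) : exp -> exp * S.t * bool = function
      | Unit -> (Unit, S.empty, true)                    (* cf. rule bVALc    *)
      | Pause -> (Unit, S.empty, false)                  (* cf. rule bPAUSEc  *)
      | Emit n -> (Unit, S.singleton n, true)            (* cf. rule bEMITc   *)
      | Present (n, e2, e3) ->
          if S.mem n s then step s e2                    (* cf. rule bSIG-Pc  *)
          else (e3, S.empty, false)                      (* cf. rule bSIG-NPc *)
      | Seq (e1, e2) ->
          let e1', em1, b1 = step s e1 in
          if b1 then
            let e2', em2, b2 = step s e2 in (e2', S.union em1 em2, b2)
          else (Seq (e1', e2), em1, false)

    let () =
      (* With "go" present, e2 runs in the same instant; otherwise e3 is kept
         for the next instant.                                                *)
      let p = Present ("go", Emit "done", Pause) in
      let _, emitted, terminated = step (S.singleton "go") p in
      Printf.printf "emitted done? %b, terminated? %b\n"
        (S.mem "done" emitted) terminated

Running the sketch with "go" present prints that "done" was emitted and that the expression terminated within the instant, mirroring the behavior prescribed by bSIG-Pc.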
A.3
Behaviors and reactivity for RML
Following [15], we present a type-and-effect system for RML. First, we define behaviors, a simple language that completely abstracts values and the presence of signals. Behaviors will be used to analyze reactivity, understood as the absence of infinite instantaneous loops. Following [15], we keep an abstraction of the structure of the process, so as to retain reasonable precision.

The behaviors. Assume a set K of behaviors ranged over by κ, κ1, . . ., and behavior variables ranged over by φ, φ1, . . .. Free behavior variables are defined as usual and collected by fbv(·).

Definition 21 (Behaviors). The set K of behaviors is defined as:

κ, κ1, . . . ::= • | 0 | φ | κ1 ∥ κ2 | κ1 ; κ2 | κ1 + κ2 | µφ.κ | run κ

Non-instantaneous actions (i.e., actions that take more than one instant to execute) are represented by •; instantaneous ones are represented by 0. The language of behaviors also includes variables (φ), since we need to represent processes that take processes as parameters, given that RML allows higher-order definitions. Behaviors must reflect the structure of the process; therefore, we include a parallel composition operator on behaviors, κ1 ∥ κ2. Observe that the parallel composition of two RML processes is instantaneous only if each of the processes is instantaneous; hence it is non-instantaneous as soon as one of them is. The same logic applies to the sequential and recursive operators. As an example, consider the following processes:

let rec process goodRec = pause ; run goodRec
let rec process badRec = run badRec; pause

Process goodRec is non-instantaneous, as the instants are clearly marked by the pause operator, whereas badRec is an instantaneous loop, as the recursive call comes before the pause operator. The behavior κ1 + κ2 allows us to represent choice, e.g., the conditional operator. The recursion behavior µφ.κ allows us to model recursive behaviors, which are unfolded as usual:

µφ.κ = κ{µφ.κ/φ}        µφ.κ = κ if φ ∉ fbv(κ)
Notice that there is no operator representing the behavior of a loop. Indeed, loops are just a special case of recursion; hence we define κ∞ = µφ.(κ ; run φ) to be the behavior of a loop.

Reactive behaviors. In this section we define the notion of reactivity for behaviors. We start by defining non-instantaneous behaviors:

Definition 22 (Non-instantaneous behavior). The predicate ↓(κ), read "κ is non-instantaneous", is the least predicate closed under the following rules:

↓(•)    ↓(φ)
↓(κ1) ⟹ ↓(κ1 ; κ2)        ↓(κ2) ⟹ ↓(κ1 ; κ2)
↓(κ1) ⟹ ↓(κ1 ∥ κ2)        ↓(κ2) ⟹ ↓(κ1 ∥ κ2)
↓(κ1) and ↓(κ2) ⟹ ↓(κ1 + κ2)
↓(κ) ⟹ ↓(µφ.κ)            ↓(κ) ⟹ ↓(run κ)

It is important to note that function calls are not non-instantaneous. Behavior •, as expected, is non-instantaneous. Variables are also taken to be non-instantaneous; reactivity is only checked once the variable has been instantiated. The sequential and parallel composition operators are non-instantaneous if at least one of the two behaviors κ1, κ2 is. In the case of the (non-deterministic) choice, both behaviors have to be non-instantaneous, as only one of them will be executed. Finally, a recursive behavior is non-instantaneous only if its body is non-instantaneous.
Definition 23 (Reactive behavior). We define the judgment R ⊢ κ, where R is a set of behavior variables, by the following rules:

R ⊢ 0        R ⊢ •
φ ∉ R ⟹ R ⊢ φ
R ⊢ κ1,  ¬↓(κ1),  R ⊢ κ2 ⟹ R ⊢ κ1 ; κ2
R ⊢ κ1,  ↓(κ1),  ∅ ⊢ κ2 ⟹ R ⊢ κ1 ; κ2
R ⊢ κ1,  R ⊢ κ2 ⟹ R ⊢ κ1 ∥ κ2
R ⊢ κ1,  R ⊢ κ2 ⟹ R ⊢ κ1 + κ2
R ∪ {φ} ⊢ κ ⟹ R ⊢ µφ.κ
R ⊢ κ ⟹ R ⊢ run κ

We say that a behavior κ is reactive if ∅ ⊢ κ. Intuitively, R ⊢ κ means that behavior κ is reactive with respect to the set of variables R: these variables do not appear in the first instant of κ, and all the recursions inside the behavior are non-instantaneous. The rule for µφ.κ checks that variable φ is not used in the current instant (otherwise we would have an instantaneous loop); the recursion variable is added to R. In the case of κ1 ; κ2 we reset the set of variables to ∅ for κ2 whenever κ1 is non-instantaneous. Behaviors 0 and • are always reactive; for the other operators, reactivity is checked on the sub-expressions.

Equivalence on behaviors. We define an equivalence relation ≡κ that helps to simplify behaviors:

Definition 24 (Equivalence on behaviors). Let ≡κ be the least relation satisfying the following rules:
– Operators ∥, +, and ; are idempotent and associative.
– Operators ∥ and + are commutative.
– 0 is the neutral element of ; and ∥.
– • is the neutral element of +.
The relation also satisfies the following rules, where op stands for ∥, ; or +:

κ1 ≡κ κ2 ⟹ µφ.κ1 ≡κ µφ.κ2        κ1 ≡κ κ2 ⟹ run κ1 ≡κ run κ2
•∞ ≡κ •        κ1 ≡κ κ′1, κ2 ≡κ κ′2 ⟹ κ1 op κ2 ≡κ κ′1 op κ′2
In [15] it is also proven that ≡κ preserves reactivity:

Theorem 5 (Reactivity preservation [15]). If κ1 ≡κ κ2 and R ⊢ κ1, then R ⊢ κ2.
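The definitions above are directly executable. The following OCaml sketch (our own illustrative code; the type beh and the function names are not taken from [15]) implements behaviors, the non-instantaneous predicate of Def. 22, and the reactivity check of Def. 23.

    module V = Set.Make (String)

    type beh =
      | Star                      (* •  : non-instantaneous action            *)
      | Zero                      (* 0  : instantaneous action                *)
      | Var of string             (* φ                                        *)
      | Par of beh * beh          (* κ1 ∥ κ2                                  *)
      | Seq of beh * beh          (* κ1 ; κ2                                  *)
      | Choice of beh * beh       (* κ1 + κ2                                  *)
      | Mu of string * beh        (* µφ.κ                                     *)
      | Run of beh                (* run κ                                    *)

    (* Definition 22: ↓(κ), "κ is non-instantaneous". *)
    let rec non_inst = function
      | Star | Var _ -> true
      | Zero -> false
      | Seq (k1, k2) | Par (k1, k2) -> non_inst k1 || non_inst k2
      | Choice (k1, k2) -> non_inst k1 && non_inst k2
      | Mu (_, k) | Run k -> non_inst k

    (* Definition 23: R ⊢ κ; a behavior is reactive when [reactive V.empty k]. *)
    let rec reactive r = function
      | Zero | Star -> true
      | Var phi -> not (V.mem phi r)
      | Seq (k1, k2) ->
          reactive r k1 && reactive (if non_inst k1 then V.empty else r) k2
      | Par (k1, k2) | Choice (k1, k2) -> reactive r k1 && reactive r k2
      | Mu (phi, k) -> reactive (V.add phi r) k
      | Run k -> reactive r k

    (* κ∞ = µφ.(κ ; run φ), the behavior of a loop (assuming "phi" not free in k). *)
    let loop_beh k = Mu ("phi", Seq (k, Run (Var "phi")))

    let () =
      let good = loop_beh Star in     (* e.g. loop (pause): reactive          *)
      let bad  = loop_beh Zero in     (* e.g. loop ():      not reactive      *)
      Printf.printf "good: %b, bad: %b\n"
        (reactive V.empty good) (reactive V.empty bad)

The two test behaviors mirror goodRec and loop () above: the first is accepted (true), the second rejected (false), as expected.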
A.4 Type-and-effect system for RML
In RML, the link between processes and behaviors is given by a type-and-effect system: the behavior of a process is the effect computed by the type system. Notice that in the implementation of RML, after typechecking, all behaviors are tested for reactivity: if all behaviors are reactive then the program is reactive; otherwise a warning is printed. We present the type system as follows.

Types. We assume a set of type variables ranged over by α, α1, . . . and a set of types ranged over by τ, τ1, . . .. We also define type schemes, ranged over by σ, σ1, . . ., which universally quantify over type variables and behavior variables. As with behaviors, we write ftv(·) for the set of free type variables. Furthermore, Γ denotes a context assigning type schemes to variables. Lastly, ftbv(·) denotes the set of free type and behavior variables of a given type.
Definition 25 (Types). Types are defined by the following grammar:

τ, τ1, . . . ::= α | T | τ1 × τ2 | τ1 → τ2 | τ process[κ] | (τ1, τ2) event | ⟨li : τi⟩i∈I
σ, σ1, . . . ::= τ | ∀α.σ | ∀φ.σ
Γ ::= ∅ | Γ, x : σ

Intuitively, a type τ is either a type variable, a base type (e.g., bool, unit), a product, a function, a process, or a signal. The type of a process is parametric on its return type and its behavior. The type (τ1, τ2) event of a signal is parametric on the type τ1 of emitted values and the type τ2 of the received value; the latter arises because the gathering function, of type τ1 → τ2 → τ2, is applied to combine emissions. We also allow variant types, representing a tagged sum in which each label carries a type. Type schemes universally quantify over type variables and behavior variables. Instantiation and generalization are defined as expected:

σ{τ/α} ≤ ∀α.σ        σ{κ/φ} ≤ ∀φ.σ        (Instantiation)

gen(τ, e, Γ) = τ, if e is expansive; otherwise gen(τ, e, Γ) = ∀α̃.∀φ̃.τ with α̃, φ̃ = ftbv(τ) \ ftbv(Γ)        (Generalization)
In [15], expressions that allocate signals cannot be generalized. The syntactic criterion for discriminating these expressions distinguishes expansive from non-expansive ones: an expression e is expansive if it can allocate a signal or a reference, and non-expansive otherwise. The notions of reactivity, behaviors, and behavior equivalence remain the same.

Type system. Following [15], typing judgments are of the form Γ ⊢r e : τ | κ. Intuitively, a judgment means that under environment Γ, expression e has type τ and behavior κ. We write Γ ⊢r e : τ | _ ≡κ 0 when the behavior of expression e is equivalent to 0.

Definition 26 (Initial typing environment). The initial typing environment Γ0 contains the types of the primitives:

Γ0 = {tt : bool, ff : bool, fst : ∀α1, α2. α1 × α2 → α1, . . .}

Intuitively, Γ0 contains all the ground types and function types. The typing rules are given in Fig. 13. We consider bool to be a variant type with two labels, i.e., tt : bool = tt : ⟨tt : unit, ff : unit⟩ and ff : bool = ff : ⟨tt : unit, ff : unit⟩. We now give some intuitions on the rules:
– Rules (T-VAR), (T-CON) and (T-PAIR) deal with variables, constants, and pairs. Variables instantiate the type scheme found in the environment, while constants instantiate base types. Pairs may only contain instantaneous expressions.
– Rules (T-ABS), (T-APP), and (T-REC) deal with abstraction, function application, and recursion. They are as expected; we require functions and applications to be instantaneous.
– Rule (T-PROC) stores the behavior of the body in the type of the process. Behavior κ′ accounts for other possible behaviors: the type system of [15] ensures that a process has at least the behavior of its body, but it may have others, which is what κ′ represents. This is related to subeffecting and is not needed within the scope of this paper (see [15] for details).
– Rule (T-RUN) types the execution of a declared process; the behavior stored in the process type is released as run κ.
(T-VAR):   τ ≤ Γ(x)  ⟹  Γ ⊢r x : τ | 0
(T-CON):   τ ≤ Γ0(c)  ⟹  Γ ⊢r c : τ | 0
(T-PAIR):  Γ ⊢r e1 : τ1 | _ ≡κ 0,  Γ ⊢r e2 : τ2 | _ ≡κ 0  ⟹  Γ ⊢r (e1, e2) : τ1 × τ2 | 0
(T-ABS):   Γ, x : τ1 ⊢r e : τ2 | 0  ⟹  Γ ⊢r λx.e : τ1 → τ2 | 0
(T-APP):   Γ ⊢r e1 : τ1 → τ2 | _ ≡κ 0,  Γ ⊢r e2 : τ1 | _ ≡κ 0  ⟹  Γ ⊢r e1 e2 : τ2 | 0
(T-REC):   Γ, x : τ ⊢r v : τ | 0  ⟹  Γ ⊢r rec x = v : τ | 0
(T-PROC):  Γ ⊢r e : τ | κ  ⟹  Γ ⊢r process e : τ process[κ + κ′] | 0
(T-RUN):   Γ ⊢r e : τ process[κ] | _ ≡κ 0  ⟹  Γ ⊢r run e : τ | run κ
(T-PAUSE): Γ ⊢r pause : unit | •
(T-LET):   Γ ⊢r e1 : τ1 | κ1,  Γ ⊢r e2 : τ2 | κ2,  Γ, x1 : gen(τ1, e1, Γ), x2 : gen(τ2, e2, Γ) ⊢r e3 : τ | κ3  ⟹  Γ ⊢r let x1 = e1 and x2 = e2 in e3 : τ | (κ1 ∥ κ2); κ3
(T-SIG):   Γ ⊢r e1 : τ2 | _ ≡κ 0,  Γ ⊢r e2 : τ1 → τ2 → τ2 | _ ≡κ 0,  Γ, x : (τ1, τ2) event ⊢r e3 : τ | κ  ⟹  Γ ⊢r signal_{e2} x : e1 in e3 : τ | κ
(T-PRE):   Γ ⊢r e1 : (τ1, τ2) event | _ ≡κ 0,  Γ ⊢r e2 : τ | κ1,  Γ ⊢r e3 : τ | κ2  ⟹  Γ ⊢r present e1 ? (e2) : (e3) : τ | κ1 + (•; κ2)
(T-EMIT):  Γ ⊢r e1 : (τ1, τ2) event | _ ≡κ 0,  Γ ⊢r e2 : τ1 | _ ≡κ 0  ⟹  Γ ⊢r emit e1 e2 : unit | 0
(T-LOOP):  Γ ⊢r e : τ | κ  ⟹  Γ ⊢r loop e : unit | (0; κ)∞
(T-DOU):   Γ ⊢r e1 : τ1 | κ1,  Γ ⊢r e2 : (τ1, τ2) event | _ ≡κ 0,  Γ, x : τ2 ⊢r e3 : τ | κ2  ⟹  Γ ⊢r do (e1) until e2(x) → (e3) : τ | κ1 + (•; κ2)
(T-DOW):   Γ ⊢r e1 : τ | κ,  Γ ⊢r e2 : (τ1, τ2) event | _ ≡κ 0  ⟹  Γ ⊢r do e1 when e2 : τ | κ + •∞
(T-MASK):  Γ ⊢r e : τ | κ,  φ ∉ fbv(Γ, τ)  ⟹  Γ ⊢r e : τ | κ{•/φ}
(T-MATCH): Γ ⊢r c : ⟨li : τi⟩i∈I | 0,  Γ ⊢r ei : τ | κi (for each i ∈ I)  ⟹  Γ ⊢r match c with {ci → ei}i∈I : τ | Σi∈I κi

Fig. 13. Typing rules for RML processes.
– Rule (T-PAUSE) types the pause operator with type unit and non-instantaneous behavior •.
– Rule (T-LET) types let expressions as expected, typing each branch and then the body.
– Rule (T-SIG) types the signal declaration operator. Notice the type τ1 → τ2 → τ2 of the gathering function, and that variable x is added to the environment with a signal type.
– In Rule (T-PRE) there are two possibilities: either the behavior of e2 is executed right away, or the behavior of e3 is executed in the next instant. Hence the behavior is κ1 + (•; κ2).
– Rule (T-EMIT) is as expected, considering that emission is instantaneous.
– Rule (T-LOOP) types loops by making their behavior infinite.
– Rule (T-DOU) follows the same reasoning as Rule (T-PRE).
– Rule (T-DOW) types a process that either executes its body instantaneously or does nothing while waiting for the given signal.
– Rule (T-MASK) allows effect expressions to be simplified, using what is called effect masking. If a behavior variable appearing in some behavior κ is free in the environment, it is not constrained; therefore, we can give it any value. In particular, in [15], the authors choose to replace it with •, the neutral element of +, which allows it to be simplified away.
– Rule (T-MATCH) verifies that the constant c used for matching has a variant type, and proceeds to type each possible continuation.

Lastly, we introduce a form of weakening for our type system:

Lemma 1 (Weakening for ⊢r). If Γ, x : τ ⊢r e : τ′ | κ and x ∉ fv(e), then Γ ⊢r e : τ′ | κ.

Proof. The proof proceeds by induction on the typing derivation. We show one case in which variables are added to the environment; all other cases proceed similarly. The base case is Rule (T-VAR): by the hypothesis x ∉ fv(e), the typed variable e is distinct from x, so its typing is unaffected by removing x : τ from the environment.
1. (T-ABS): Suppose that:
Γ, y : τ 0 , x : τ `r e : τ2 | 0 Γ, y : τ 0 `r λx.e0 : τ1 → τ2 | 0
where y 6∈ fv(λx.e0 ). There are two cases: (a) y = x: We use renaming of bound variables in λx.e0 to change x into some z 6∈ dom(Γ ) and we apply the Rule (T-A BS). By the inductive hypothesis, conclude. (b) y 6= x: Conclude by the inductive hypothesis.
A.5
Additional transition rules for RMLq
Fig. 14, Fig. 15 and Fig. 16 present the transition rules for RMLq. They are based on the rules presented in Fig. 11 and Fig. 12. However, the rules in this setting are applied to configurations rather than expressions; a configuration is a pair ⟨P ; Σ⟩ of a process P and a set of queues (i.e., a state) Σ. States contain information about the queues and the elements that populate them. Queues are collected through the derivation tree using a pointwise union on states, Σ1 ∪ Σ2. To avoid nondeterminism in the order of queues, we do not allow two put/pop operations (or a combination of both) on the same queue to execute in parallel within an instant. This effectively means that only one thread controls a given queue (i.e., holds a lock on it) at any given instant. In constructs with continuations, such as the let operator, it is reasonable to let the continuation use the queues modified by the expressions evaluated before it. Thus, let x = e1 and y = e2 in e3 first executes e1 and e2 with states Σ1 and Σ2 in parallel, and the continuation e3 then uses Σ1 ∪ Σ2 in its execution.
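To fix intuitions about states Σ, the following OCaml sketch models a state as a map from queue names to FIFO lists, with purely functional put/pop operations and pointwise union. This is our own illustrative abstraction of the queue discipline described above (the actual RMLq queues are encoded with signals, cf. App. C.1); the names put, pop and union are hypothetical.

    module M = Map.Make (String)

    (* A state Σ maps queue names to the list of values they hold, oldest first. *)
    type 'a state = 'a list M.t

    let empty : 'a state = M.empty

    (* put q v Σ : enqueue v at the back of queue q. *)
    let put q v (sigma : 'a state) : 'a state =
      let old = try M.find q sigma with Not_found -> [] in
      M.add q (old @ [v]) sigma

    (* pop q Σ : dequeue the front of q, if any, returning the value and Σ'. *)
    let pop q (sigma : 'a state) : ('a * 'a state) option =
      match (try M.find q sigma with Not_found -> []) with
      | [] -> None
      | v :: rest -> Some (v, M.add q rest sigma)

    (* Pointwise union of two states, as used when joining parallel branches.
       It is only meaningful when, as required above, at most one branch used
       any given queue during the instant.                                    *)
    let union (s1 : 'a state) (s2 : 'a state) : 'a state =
      M.union (fun _q q1 q2 -> Some (q1 @ q2)) s1 s2

    let () =
      let s = put "k_o" 1 (put "k_o" 2 empty) in
      ignore (union s empty);
      match pop "k_o" s with
      | Some (v, _) -> Printf.printf "popped %d\n" v
      | None -> print_string "empty queue\n"

The single-writer restriction on queues is what makes the union well defined: two branches never contribute conflicting orderings for the same queue within an instant.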
B Appendix for Section 4

B.1 Proof of Thm. 1 in Page 12

The statement is a corollary of proving the correctness properties in Def. 12 for J·Kf (namely, name invariance, compositionality, and operational correspondence).
The rules of Fig. 14 are those of Fig. 11 lifted to configurations: every transition e —[E, b]→_S e′ becomes ⟨e ; Σ⟩ —[E, b]→_S ⟨e′ ; Σ′⟩, and the states produced by the premises are joined by pointwise union (e.g., Σ1 ∪ Σ2 in bAPPLc and bL-DONEc). Fig. 14 covers rules bPAUSEc, bRECURc, bVALc, bRUNc, bAPPLc, bL-PARc, bL-DONEc, bSIG-Pc, and bSIG-NPc.

Fig. 14. Big-step semantics for RMLq expressions.
Name invariance Theorem 6 (Name invariance for J·Kf ). For every π process P , substitution σ and renaming function f such that dom(f ) ⊆ ran(σ) it holds that JP σKf = JP Kg σ where dom(g) ⊆ dom(σ) and ran(f ) = ran(g). In this property, we should consider the substitutions made by the renaming function f . Notice that some of the free names of P in the LHS of JP σKf = JP Kg σ will be first modified by σ and then by f , whereas in the RHS they will be first modified by g and then by σ. This is why g must have the same range as f and its domain should be a subset of dom(σ), ensuring that after both substitutions the terms remain equal. We proceed with the proof of the statement. Proof. The proof proceeds by induction on the structure of P : 27
Fig. 15 similarly lifts rules bPAIRc, bEMITc, bDW-NSc, bDW-INTc, bDW-ENDc, bLP-STUc, and bLP-UNc of Fig. 11 and Fig. 12 to configurations ⟨e ; Σ⟩.

Fig. 15. Big-step semantics for RMLq expressions.
Case 1 (P = 0).
J0Kf = (), for every f                                           (by Fig. 6)    (1)
0σ = 0, for every σ                                              (fn(0) = ∅)    (2)
J0σKf = J0Kf = () = J0Kf σ, for every f                          (by Fig. 6)    (3)

Case 2 (P = xhvi.Q). Let us consider the set of free names of P. We have two cases, depending on whether v is a constant or not:

Subcase 1 (v is a constant). We proceed with a direct proof.
fn(P ) = {x} ∪ fn(Q)                                             (by fn(·))     (1)
Let ũ = {u1 , . . . , x, . . . , un }, n ≥ 1, and let σ be a substitution. Then x ∈ dom(σ) ∨ x ∉ dom(σ).

Subsubcase 1 (x ∈ dom(σ)). Assume, w.l.o.g., σ(x) = x̂:
σ = {y1 , . . . , x̂, . . . , yn / u1 , . . . , x, . . . , un }              (assumption)   (2)
f = {u1 ← z1 , . . . , x̂ ← zi , . . . , um ← zm }, 1 ≤ i ≤ m ≤ |ũ|     (assumption)   (3)
P σ = (xhvi.Q)σ = x̂hvi.(Qσ)                                      (applying σ)   (4)
Fig. 16 lifts the remaining rules bDU-ENDc, bDU-Pc, bDU-NPc, bSIG-DECc, and bCASEc of Fig. 11 and Fig. 12 to configurations ⟨e ; Σ⟩.

Fig. 16. Transition rules for RMLq expressions (cont.)
By Def. 6 consider:
JP σKf = signal x′ in emit zi (v, x′ ); pause ; JQσKf,{zi ←x′}                         (5)
Now, let g = {y1 ← z1 , . . . , x ← zi , . . . , ym ← zm }, 1 ≤ i ≤ m ≤ |ũ|, and consider:
JP Kg σ = signal x′ in emit zi (v, x′ ); pause ; (JQKg,{zi ←x′} )σ                     (6)
JQσKf,{zi ←x′} = (JQKg,{zi ←x′} )σ                                        (I.H.)       (7)
JP σKf = JP Kg σ                                                    (by (5),(6),(7))   (8)
Subsubcase 2 (x ∉ dom(σ)). This case is straightforward by applying the I.H.

Subcase 2 (v is a variable). The proof proceeds as above, considering v among the possible substitutions.

Case 3 (All other cases for P ). They proceed as above, taking into account the possible free variables of each process.  t u

Compositionality

Theorem 7 (Compositionality of J·Kf ). Let P and E[·] be a π process and an evaluation context (cf. Def. 2), respectively. Then JE[P ]Kf = JEKf JP Kg , for some f, g.
Proof. By case analysis on E[·], and for each case by induction on the structure of P , using Def. 2:

Case 1 (E = · | R). By induction on the structure of P ; we only analyze the cases for inaction, output and input:

Subcase 1 (P = 0). As follows:
E[P ] = 0 | R                                        (Assumption)   (1)
JE[P ]Kf = () ∥ JRKf , for any f                     (Fig. 6)       (2)
JEKf JP Kf = () ∥ JRKf , for any f                   (Fig. 6)       (3)
JEKf JP Kf = JE[P ]Kf                                (by (2),(3))   (4)

Subcase 2 (P = xhvi.P ′ ). As follows:
E[P ] = xhvi.P ′ | R                                 (Assumption)   (1)
JE[P ]Kf = Jxhvi.P ′Kf ∥ JRKf , for any f            (Fig. 6)       (2)
JEKf JP Kf = Jxhvi.P ′Kf ∥ JRKf , for any f          (Fig. 6)       (3)
JEKf JP Kf = JE[P ]Kf                                (by (2),(3))   (4)

Subcase 3 (P = x(y).P ′ ). As follows:
E[P ] = x(y).P ′ | R                                 (Assumption)   (1)
JE[P ]Kf = Jx(y).P ′Kf ∥ JRKf , for any f            (Fig. 6)       (2)
JEKf JP Kf = Jx(y).P ′Kf ∥ JRKf , for any f          (Fig. 6)       (3)
JEKf JP Kf = JE[P ]Kf                                (by (2),(3))   (4)
Subcase 4 (All other processes). The proof is analogous to the previous ones.

Case 2 (E = R | ·). As above.

Case 3 (E = (νxy)·). By induction on the structure of P ; we show the cases for inaction, output and input:

Subcase 1 (P = 0). As follows:
E[P ] = (νxy)0 ≡S 0                                  (Assumption, ≡S)  (1)
JE[P ]Kf = signal c in J0Kf,{x←c,y←c}                 (Fig. 6)          (2)
Let g = f, {x ← c, y ← c}:
JEKf JP Kg = signal c in J0Kg                         (Fig. 6)          (3)
JEKf JP Kg = JE[P ]Kf                                 (by (2),(3))      (4)

Subcase 2 (P = xhvi.P ′ ). As follows:
E[P ] = (νzw)xhvi.P ′                                (Assumption)      (1)
JE[P ]Kf = signal c in Jxhvi.P ′Kf,{z←c,w←c}          (Fig. 6)          (2)
Let g = f, {z ← c, w ← c}:
JEKf JP Kg = signal c in Jxhvi.P ′Kg                  (Fig. 6)          (3)
JEKf JP Kg = JE[P ]Kf                                 (by (2),(3))      (4)
Subcase 3 (P = x(y).P ′ ). As follows:
E[P ] = (νzw)x(y).P ′                                (Assumption)      (1)
JE[P ]Kf = signal c in Jx(y).P ′Kf,{z←c,w←c}          (Fig. 6)          (2)
Let g = f, {z ← c, w ← c}:
JEKf JP Kg = signal c in Jx(y).P ′Kg                  (Fig. 6)          (3)
JEKf JP Kg = JE[P ]Kf                                 (by (2),(3))      (4)
t u

Operational correspondence

Before stating the operational correspondence result, we provide some auxiliary definitions:

Definition 27 (Prefixed Processes and Redexes). We say that processes xhvi.P , x(y).P , x / l.P , x . {li : Pi }i∈I , and ∗x(y).P are pre-redexes prefixed at x. Furthermore, we call processes x(y).P , x . {li : Pi }i∈I and ∗x(y).P input-like. Redexes are processes of the form xhvi.P | y(z).Q, xhvi.P | ∗y(z).Q, or x / lj .P | y . {li : Qi }i∈I , with j ∈ I.
Definition 28 (Well-formed π processes). Process P is well-formed if for each of its structurally congruent processes of the form (ν x̃ỹ)(Q | R | S) the following conditions hold:
1. If Q = v? (Q1 ) : (Q2 ) then v ∈ {tt, ff}.
2. If Q and R are prefixed on the same variable then they are input-like of the same nature.
3. If Q is prefixed on x1 ∈ x̃ and R is prefixed on y1 ∈ ỹ then Q | R is a redex.

Theorem 8 (Operational Correspondence). Let C[P ] be a well-formed π process (cf. Def. 28) with C[·] = (ν x̃ỹ)(· | S). Then:
1. Soundness: If C[P ] −→ C[Q] then JC[P ]Kf 7−→ ,→R JC ′[Q]Kf and C[Q] −→∗ (ν x̃ỹ)C ′[Q], for some C ′, f .
2. Completeness: If JC[P ]Kf 7−→ R then there exist Q and C ′[·] such that R ,→R JC ′[Q]Kf and C[P ] −→∗ (ν x̃ỹ)C ′[Q].

Proof. We prove each item:
1. Soundness: The proof proceeds by induction on the reduction and a case analysis on the last applied rule:

Case 1 (Rule bIFTc). Assume C[P ] −→ C[Q] with Rule bIFTc.
C[P ] ≡S (ν x̃ỹ)(tt? (Q) : (R) | S)                   (Assumption)   (1)
C[P ] −→ (ν x̃ỹ)(Q | S)                              (Fig. 1)       (2)
By applying J·Kf to (1): JC[P ]Kf = signalh e c : (_, _) in JP | SKg
= signalh e c : (_, _) in (if tt then pause ; JQKf else pause ; JRKf ) k JSKg ∅,tt
Using Rule bC ASEc from Fig. 4 we see that JC[P ]Kf −−→ JQKf . G
We now analyze JSKg . By Def. 28, we know that either: (a) S is a redex (or) (b) S ≡S x(y).S 0 (or) (c) S ≡S x . {li : Si0 }i∈I (or) (d) S ≡S ∗ x(y).S 0 . We analyze each of these cases:
E,ff
(a) Subsubcase 1 (S is a redex). Then there exist E, G, S 0 such that JSKf −−−→ JS 0 Kg , such that G
S −→ S 0 . Since S is a redex, we apply a case analysis on the rules in Fig. 1: – Subsubsubcase 1 (Rule bI F Tc). We build a derivation as above. – Subsubsubcase 2 (Rule bI F Fc). We build a derivation as above, but instead of tt we use ff and apply Rule bC ASEc. – Subsubsubcase 3 (Rule bC OMc). We build a derivation tree as follows; we only observe process: (νxy)(xhvi.S 0 | y(z).S 00 ) assume a signal environment G with all the necessary signals initialized. Then, we will show that E,b J(νxy)(xhvi.S 0 | y(z).S 00 )Kf −−→ JS 0 Kg0 | JS 00 Kg00 {v/z } G
for some E, b, g. Applying J·Kf (Fig. 6): J(νxy)(xhvi.S 0 | y(z).S 00 )Kf =signalh c : (_, _) in
signalh x0 : (_, _) in emit c (v, x0 ); pause ; JS 0 Kg0 k await c(z, α) in JS 00 Kg00
where g 0 = f, {x ← c, y ← c}, {x ← x0 } and g 00 = f, {x ← c, y ← c}, {y ← α}. For the derivation tree we apply Rule bS IG -D ECLc and then Rule bL-PARc (cf. Fig. 4) to split the encoded process into: <1 = signalh x0 : (_, _) in emit c1 (v, x0 ); pause ; JS 0 Kg0 <2 = await c1 (z, α) in JS 00 Kg00
We show the subtrees for each
E,tt
emit d (v, c0 ) −−−→ () G
bS IG -D ECc 0
E,tt
signalh x : (_, _) in emit c (v, x0 ) −−−→ () G
32
where E = {*(v, x0 ) + /x0 }
∅,ff
pause −−→ () G
bL-PARc
E,ff
pause ; JS 0 Kg0 −−−→ JS 0 Kg0 G
• The tree for <2 is as follows:
∅,ff
pause −−→ () G
bL P -S TUc
∅,ff
loop pause −−→ (); loop pause G
bDU-Pc
∅,ff
await c(z, α) in JS 00 Kg00 −−→ JS 00 Kg00 {v/z } G
E,ff
We have shown that if S is a redex, then JSKf −−−→ JS 0 Kg , such that S −→ S 0 . G
– Subsubsubcase 4 (Rule bS ELc). We build a derivation tree, similarly as above. – Subsubsubcase 5 (Rule bR EPLc). The derivation tree can be obtained as follows, we only observe process: (νxy)(xhvi.S 0 | ∗ y(z).S 00 ) assume a signal environment G with all the necessary signals initialized. Then, we will show that E,b
J(νxy)(xhvi.S 0 | ∗ y(z).S 00 )Kf −−→JS 0 Kf k run process JS 00 Kg {v/z } k G
run Λ y (process JS 00 Kf )
for some E, b, g, where Λ is the unfolding of the recursive call to repl. Applying J·Kf (Fig. 6): J(νxy)(xhvi.S 0 | ∗ y(z).S 00 )Kf = signalh c : (_, _) in
signalh x0 : (_, _) in emit c (v, x0 ); pause ; JS 0 Kg0 k J∗ y(z).S 00 Kf
For the derivation tree we start using Rule bS IG -D ECLc to split the encoded process into: <1 = signalh x0 : (_, _) in emit c (v, x0 ); pause ; JS 0 Kg0
<2 = let rec process repl α β =
signal s in do (loop present fα ? (emit s ; pause ) : (())) until fα (z, w) → (run β{α←w} ) k await s(_) in run (repl α β) in run repl y (process JS 00 Kf ) we then proceed to analyze each process individually. First, by using a similar derivation to {*(v,x0 )+/x0 },tt
Subsubsubcase 3, we have that <1 −−−−−−−−−−→ JS 0 Kf . G
Then, we now can show the derivation for <2 : bVALc
∅,tt
<4 −−→ <4 bR ECURc
∅,tt
rec repl = <3 −−→ <4 bL-D ONEc
D1
G
G
E,ff
run <4 y (process JS 00 Kf ) −−−→ <5 G
E,ff
<2 −−−→ <5 S
33
where: <02 = process signal s in do (loop present fα ? (emit s ; pause ) : (())) until fα (z, w) → (run β{α←w} ) k await s(_) in run (repl α β) <3 = λβλα.<02 <4 = λβλα.<02 {rec repl = <3/repl} <5 = run process JS 00 Kf {v/z } k run rec repl = (<3 y (process JS 00 Kf )) and the derivation tree D1 is as follows: bVALc
bVALc
∅,tt G
bA PPc
∅,tt
λβ.<02 {y/α} −−→ λβ.<6 {y/α}
<4 y −−→ <4
G
∅,tt
<4 y −−→ λβ.<6 {y/α} bA PPc bRUNc
D2
G
∅,tt 00 <4 y (process JS 00 Kf ) −−→ <6 {process JS Kf , y/β, α} D3 G
E,ff
run <4 y (process JS 00 Kf ) −−−→ <5 G
00 where <6 = <4 {rec repl = <3 , process JS Kf , y/repl, β, α} and D2 is
bVALc
∅,tt 00 00 <02 {process JS Kf , y/β, α} −−→ <02 {process JS Kf , y/β, α} G
and D3 is presented in the sequel. First, consider <7 =signal s in do (loop present fy ? (emit s ; pause ) : (())) until fy (z, w) → (run process JS 00 Kf,{y←w} ) k await s(_) in run rec repl = (<3 y (process JS 00 Kf ) E,ff
D3 will show that <7 −−−→ <5 as follows. First apply Rule bS IG -D ECLc to add the fresh signal S
to the signal environment G, now apply Rule bL-PARc to obtain the following processes: <8 = do (loop present fy ? (emit s ; pause ) : (())) until fy (z, w) → (run process JS 00 Kf,{y←w} ) <9 = await s(_) in run rec repl = (<3 y (process JS 00 Kf ) Now we will show that E,ff
<8 −−−→ run process JS 00 Kf {v/z } G
and that ∅,ff
<9 −−→ run rec repl = (<3 y (process JS 00 Kf ) G
The derivation corresponding to <8 is as follows: 34
∅,ff
E,tt
emit s −−−→ () pause −−→ () G
bL-D ONEc
S
E1 ,ff
emit s ; pause −−−→ () G
bS IG -Pc
E1 ,ff
present fy ? (emit s ; pause ) : (()) −−−→ () G
bL P -S TUc
E1 ,ff
loop present fy ? (emit s ; pause ) : (()) −−−→ (); loop present fy ? (emit s ; pause ) : (()) G
bDU-Pc
E,ff
<8 −−−→ run process JS 00 Kf {v/z } G
where E = {*_ + /s}. Notice that since signal s carries no value, the multiset corresponding to the values of s is empty. With these derivations we have shown that E,b
J(νxy)(xhvi.S 0 | ∗ y(z).S 00 )Kf −−→JS 0 Kf k run process JS 00 Kf,{y←w} {v/z } k G
run rec repl = (<3 y (process JS 00 Kf )
(b) Subsubcase 2 (S ≡S x(y).S 0 ). In this case the only possible Rule in RML is bDU-NPc: ∅,ff
pause −−→ () G
bL P -S TUc
∅,ff
loop pause −−→ (); loop pause G
bDU-NPc
∅,ff
JSKg0 −−→ JSKg0 G
(c) Subsubcase 3 (S ≡S x . {li : Si0 }i∈I ). As above. (d) Subsubcase 4 (S ≡S ∗ x(z).S 0 ). Observe that since signal fx is not emitted, the unfolded process will be stuck in the loop loop present fα ? (emit s ; pause ) : (()). Hence, indicating that S 6−→. The proof concludes by applying the I.H on JQKg . Case 2 (Rule bI F Fc). As above. Case 3 (Rule bC OMc). Assume C[P ] −→ C[Q] with Rule bC OMc. Assume S = S1 , . . . , Sn , n ≥ 1, for some arbitrary processes S 1 ≤ i ≤ n: C[P ] ≡S (ν we eu)(νxy)(xhvi.Q | y(z).R | S)
(Assumption)
(3)
C[P ] −→ (ν we eu)(νxy)(Q | R{v/z } | S)
(Fig. 1)
(4)
By applying J·Kf (Fig. 6) to (1): JC[P ]Kf = signalh e c : (_, _) in JP | SKg
= signalh e c : (_, _) in signalh x0 : (_, _) in emit c1 (v, x0 ); pause ; JQKg0 k await c1 (z, α) in JRKg00 k JSKg
where signalh e c : (_, _) in JP | SKg is a shortcut for the signal declaration: signalh c1 : (_, _) in signalh c2 : (_, _) in . . . signalh cn+1 : (_, _) in . . . 35
and g = f, {x ← c1 , y ← c1 . . . , w1 ← c2 , u1 ← c2 , . . . , wn ← cn+1 , un ← cn+1 }, g 0 = g, {x ← x0 } and g 00 = g, {y ← α}. Assume G: G = {((v, x0 ), g, *(v, x0 )+)/c1 , ((_, _), h, *+)/x0 , (p1 , h, m1 )/c1 , . . . , (pn , h, mn )/cn+1 } where p1 , . . . , pn and m1 , . . . , mn are pairs and multisets, respectively. They contain information pertaining all the signals used in the translation. Using the rules in Def. 4 we can show that: E,b
JC[P ]Kf −−→ JQKg0 k JRKg00 {v/z } k U G
First, apply Rule bS IG -D ECLc n times to obtain process: signalh x0 : (_, _) in emit c1 (v, x0 ); pause ; JQKg0 k await c1 (z, α) in JRKg00 k JSKg
Then, apply Rule bL-PARc to split into processes: <1 = signalh x0 : (_, _) in emit c1 (v, x0 ); pause ; JQKg0 <2 = await c1 (z, α) in JRKg00 <3 = JSKg
Finally, we show the derivation for each one of the processes as subcases: – <1 : Apply Rule bL-PARc to split the sequential composition operator ; and obtain the following trees: bE MITc
E,tt
emit d (v, c0 ) −−−→ () G
bS IG -D ECc
E,tt
0
signalh x : (_, _) in emit c1 (v, x0 ) −−−→ () G
where E = {*(v, x0 ) + /x0 }
∅,ff
pause −−→ () G
bL-PARc
– <2 : The tree is shown as follows:
E,ff
pause ; JQKg0 −−−→ JQKg0 G
∅,ff
pause −−→ () S
bL P -S TUc
∅,ff
loop pause −−→ (); loop pause S
bDU-Pc
∅,ff
await c1 (z, α) in JRKg00 −−→ JRKg00 {v/z }
– <3 : By Def. 28, we know that for every 1 ≤ i ≤ n either: (a) S is a redex (or) (b) S ≡S x(y).S 0 (or) (c) S ≡S x . {li : S 0 }i∈I (or) (d) S ≡S ∗ x(y).S 0 . We analyze each of these cases: 36
S
(a) Subsubcase 1 (S is a redex). If S is a redex then S −→ S 0 . We now need to show that E,tt JSKg −−−→ JS 0 Kg : We proceed by a case analysis on the possible reduction rule (Fig. 1): G
• Subsubsubcase 1 (Rule bI F Tc). We build a derivation as above. • Subsubsubcase 2 (Rule bI F Fc). We build a derivation as above. • Subsubsubcase 3 (Rule bC OMc). We build a derivation as above. • Subsubsubcase 4 (Rule bS ELc). We build a derivation as above. • Subsubsubcase 5 (Rule bR EPLc). We build a derivation as above. (b) Subsubcase 2 (S ≡S x(y).S 0 ). As above. (c) Subsubcase 3 (S ≡S x . {li : Si0 }i∈I ). As above. (d) Subsubcase 4 (S ≡S ∗ x(y).S 0 ). As above. The proof concludes by applying the I.H to JQKg and JRKg . Case 4 (Rule bR EPLc). : We can build a derivation tree as above. Case 5 (Rule bS ELc). : We can build a derivation tree as above. Case 6 (Rule bR ESc). : Straightforward by the I.H. Case 7 (Rule bPARc). : Straightforward by the I.H. Case 8 (Rule bS TRc). : Can be reduced to any of the above cases.
2. Completeness: By assumption, P is well-formed (cf. Def. 28). Then, we apply a case analysis for the three possibilities concerning well formed processes: P is either a conditional, a redex or an input-like process. By Def. 28, we know that: C[P ] ≡S (ν x eye)(P | S1 | . . . | Sn )
(Assumption)
(1)
and the cases for P are as follows: (a) P ≡ v? (Q) : (R) with v ∈ {tt, ff}. (b) P is a redex (or) (c) P ≡S x(y).S 0 (or) (d) P ≡S x . {li : Si0 }i∈I (or) (e) P ≡S ∗ x(y).S 0 . We discuss the proof for each case. (a) P ≡ v? (Q) : (R): We proceed by a case analysis on v and build the derivations as in the proof for Soundness. (b) P is a redex: We proceed by induction and a case analysis on the last applied rule in π. Notice that all redexes can reduce. All the derivation trees are as in the proof for Soundness. (c) P ≡S x(y).S 0 : We proceed by a case analysis on S 0 and build the derivations as in the proof for Soundness. (d) P ≡S x . {li : Si0 }i∈I : We proceed by a case analysis on S 0 and build the derivations as in the proof for Soundness. (e) P ≡S ∗ x(y).S 0 : We proceed by a case analysis on S 0 and build the derivations as in the proof for Soundness. t u 37
Notice that the type system for π does not ensure that every well-typed process is well-formed. For example, consider the process x? (0) : (0). This process is well-typed under environment Γ = x : bool, but it is not well-formed, since x is not a boolean value (i.e., tt or ff). Although the same logic applies to programs (i.e., (ν x̃ỹ)P ), the fact that every variable is bound in a program enforces, via Rule (T:RES), that every pair of variables x, y introduced in the typing environment have dual types. However, even with this restriction, not every well-typed program is well-formed. In fact, consider the following program:

(νxy)(xhvi.P | y(z).Q | xhv ′i.R)

It is well-typed, but not well-formed, as there is more than one output process prefixed on the same variable.

Remark 1 (Typing derivation). The program above can be typed by applying Rule (T:RES) and then Rule (T:PAR) twice; the outputs xhvi.P and xhv ′i.R are typed with Rule (T:OUT) (using (T:VAR) for x and (T:BOOL) for v and v ′), and the input y(z).Q is typed with Rule (T:IN) (using (T:VAR) for y), where Γ = {x : µa. un!bool.T, y : µb. un?bool.T }, assuming that Γ ⊢ P , Γ ⊢ Q and Γ ⊢ R.

Considering the previous situation, we would like to characterize the kind of typable programs that do not fit our notion of well-formedness. For this, we first recall the definition of well-formedness given in [19]:

Definition 29 (Well-formed π processes (Vasconcelos)). Process P is well-formed if for each of its structurally congruent processes of the form (ν x̃ỹ)(Q | R | S) the following conditions hold:
1. If Q = v? (Q1 ) : (Q2 ) then v ∈ {tt, ff}.
2. If Q and R are prefixed on the same variable then they are of the same nature.
3. If Q is prefixed on x1 ∈ x̃ and R is prefixed on y1 ∈ ỹ then Q | R is a redex.

Let us call this class of well-formed processes V. Also, consider our own notion of well-formedness as a class R:

Definition 30 (Well-formed π processes). Process P is well-formed if for each of its structurally congruent processes of the form (ν x̃ỹ)(Q | R | S) the following conditions hold:
1. If Q = v? (Q1 ) : (Q2 ) then v ∈ {tt, ff}.
2. If Q and R are prefixed on the same variable then they are input-like of the same nature.
3. If Q is prefixed on x1 ∈ x̃ and R is prefixed on y1 ∈ ỹ then Q | R is a redex.

Intuitively, one can argue that R ⊂ V. We formalize this argument in the following statements:

Lemma 2. For every P ∈ R, it holds that P ∈ V.

Proof. We proceed by a direct proof. Assume P ∈ R; then by Def. 30 we have that for each of its structurally congruent processes of the form (ν x̃ỹ)(Q | R | S) the following conditions hold:
1. If Q = v? (Q1 ) : (Q2 ) then v ∈ {tt, ff}.
2. If Q and R are prefixed on the same variable then they are input-like of the same nature.
3. If Q is prefixed on x1 ∈ x̃ and R is prefixed on y1 ∈ ỹ then Q | R is a redex.

Since conditions (1) and (3) are the same as in Def. 29, we only need to reconcile condition (2). Hence, consider an arbitrary process satisfying condition (2): (ν x̃ỹ)(Q | R | S), where Q and R are prefixed on the same variable and are therefore input-like of the same nature. Clearly, two input-like processes of the same nature satisfy condition (2) of Def. 29 and therefore belong to class V. Thus, every process P ∈ R is also contained in V.  t u

Lemma 3. There exists a P such that P ∈ V and P ∉ R.

Proof. Consider the following process:

(νxy)(xhvi.P | y(x).Q | xhv ′i.R)

It is easy to see that it does not belong to R, since it violates Def. 28 (i.e., there are two pre-redexes prefixed on the same variable that are not input-like). The typing derivation for this process can be found in Rem. 1.  t u

Corollary 1. R ⊂ V.

Proof. This follows directly from Lem. 2 and Lem. 3.
t u
We have then proven that R ⊂ V. Although V strictly contains R, the two classes are strongly dependent on the type system for π. In fact, we can show that under linear types R = V. Intuitively, this claim holds because under linear types processes are obliged to use a given endpoint only once, without allowing further parallel copies to execute. Formally:

Theorem 9. For every context Γ and π process P such that Γ ⊢ P and lin(Γ ), we have that P ∈ V if and only if P ∈ R.

Proof. We prove each direction of the biconditional:
(⇒) It suffices to show that, under linear types, no two parallel processes can be prefixed on the same variable, so condition (2) of the well-formedness of [19] holds only vacuously: Rule (T:PAR) splits the context, and one of the branches would otherwise not be typable.
(⇐) Apply Lem. 2.  t u

Now we would like to characterize the difference between these two classes, V \ R. By comparing the two definitions (Def. 29 and Def. 28), one can see that the only difference is in condition (2). Therefore, the new class S can be described as follows:

Definition 31 (Class S). Process P ∈ S if for one or more of its structurally congruent processes of the form (ν x̃ỹ)(Q | R | S) the following condition holds:
1. If Q and R are prefixed on the same variable then they are not input-like and are of the same nature (i.e., output-like).

Intuitively, the previous description corresponds to a process P ∈ S as follows: if P ≡S (ν x̃ỹ)(Q | R | S) and processes Q and R are prefixed on the same variable but are not input-like, then, being of the same nature, they must be output-like. This means that S describes processes such as:

(νxy)(xhvi.P | y(x).Q | xhv ′i.R)   or   (νxy)(x / l.P | y . {li : Qi } | x / l ′.R)

Notice that while processes such as (νxy)(xhvi.P ) belong to S, they are not of interest in an operational correspondence statement, for they have no dynamic behavior. We now state a corollary of our operational correspondence statement (cf. Thm. 8) restricted to a class of typed π processes:

Corollary 2 (Operational Correspondence (Typed and Well-formed)). Let C[P ] ∈ R be a well-typed π program (cf. Def. 28, Not. 2) with C[·] = (ν x̃ỹ)(· | S). Then:
1. Soundness: If C[P ] −→ C[Q] then JC[P ]Kf 7−→ ,→R JC ′[Q]Kf and C[Q] −→∗ (ν x̃ỹ)C ′[Q], for some C ′, f .
2. Completeness: If JC[P ]Kf 7−→ R then there exist Q and C ′[·] such that R ,→R JC ′[Q]Kf and C[P ] −→∗ (ν x̃ỹ)C ′[Q].

Proof. The proof is the same as that of Thm. 8.  t u
Notice that in the corollary there are two new restrictions: the π process must be a program (cf. Not. 2) and well-typed (i.e., ⊢ P ). These restrictions make sense as we are trying to identify a meaningful class of well-typed π processes satisfying the operational correspondence property of the encoding. Now, one may argue that well-formed, well-typed processes are not very interesting in themselves, as the operational correspondence statement (cf. Thm. 8) already considers well-formed processes. Therefore, we would like to study the case in which the program is well-typed but not well-formed. Hence, we will show that for every program P ∉ R it is possible to find a context C[·] such that C[P ] ∈ R (cf. Def. 28). First, we state some auxiliary results:

Lemma 4 (Safety [19]). If ⊢ P then P ∈ V.

The previous lemma ensures that well-typed programs are well-formed in the sense of [19]. Therefore, we know that a well-typed program P ∉ R will belong to S, the class described above. Hence, it is possible to prove the following statement:

Theorem 10. For every well-typed program P such that P ∉ R there exists a context C[·] such that ⊢ C[P ] and C[P ] ∈ R.

Proof. We proceed by a direct proof looking at the structure of a program P ∈ S, and then apply a case analysis on the possible output-like processes that compose P .
(i) Since P is a program, P ≡S (ν x̃ỹ)Q.
(ii) Since P ∉ R and P is well-typed, by Lem. 4 and Def. 31 we have that P ∈ S.
(iii) From (ii) notice that, without loss of generality, P has at least one output-like process without a partner. We distinguish cases depending on whether that process is an output or a selection, and show how to build context C[·]:
(a) P ≡S (ν x̃ỹ)(xhvi.R | Q′ ):
(1) First, take the outermost restrictions (ν x̃ỹ). Context C[·] = (ν x̃ỹ)(· | O | Q′ ).
(2) We now need to build process O. From the assumptions, ⊢ P ; therefore, there must exist a context Γ, x : T such that Γ, x : T, y : T ⊢ xhvi.R | Q′ .
(3) From (iii) we also know that there is no process in Q′ that can reduce with xhvi.R. Hence, we need to add a new process to the context.
(4) From (ii) and (2) we can conclude that any process O = y(z).R′ such that Γ, x : T, y : T ⊢ xhvi.R | y(z).R′ | Q′ makes C[P ] ∈ R, by introducing a complementary process that makes xhvi.R | y(z).R′ a redex. Then, context C[·] = (ν x̃ỹ)(· | O | Q′ ) = (ν x̃ỹ)(· | y(z).R′ | Q′ ), where Γ, x : T, y : T ⊢ xhvi.R | Q′ and Γ, x : T, y : T ⊢ xhvi.R | y(z).R′ | Q′ .
(5) Notice that, by construction, C[P ] is typable under the empty environment. This procedure extends to any number of output processes without a partner, possibly on different channels.
(b) P ≡S (ν x̃ỹ)(x / l.R | Q′ ):
(1) First, take the outermost restrictions (ν x̃ỹ). Context C[·] = (ν x̃ỹ)(· | O | Q′ ).
(2) We now need to build process O. From the assumptions, ⊢ P ; therefore, there must exist a context Γ, x : T such that Γ, x : T, y : T ⊢ x / l.R | Q′ .
(3) From (iii) we also know that there is no process in Q′ that can reduce with x / l.R. Hence, we need to add a new process to the context.
(4) From (ii) and (2) we can conclude that any process O = y . {li : Ri′ }i∈I such that Γ, x : T, y : T ⊢ x / l.R | y . {li : Ri′ }i∈I | Q′ makes C[P ] ∈ R, by introducing a complementary process that makes x / l.R | y . {li : Ri′ }i∈I a redex. Then, context C[·] = (ν x̃ỹ)(· | O | Q′ ) = (ν x̃ỹ)(· | y . {li : Ri′ }i∈I | Q′ ), where Γ, x : T, y : T ⊢ x / l.R | Q′ and Γ, x : T, y : T ⊢ x / l.R | y . {li : Ri′ }i∈I | Q′ .
(5) Notice that, by construction, C[P ] is typable under the empty environment. This procedure extends to any number of selection processes without a partner, possibly on different channels.  t u

Using the previous theorem we can generalize Corollary 2 by dropping the well-formedness requirement (P ∈ R) as follows:

Corollary 3 (Operational Correspondence (Typed)). Let P be a well-typed π program (cf. Not. 2). Then there exists C[·] such that C[P ] is a well-typed program and:
1. Soundness: If C[P ] −→ C[Q] then JC[P ]Kf 7−→ ,→R JC ′[Q]Kf and C[Q] −→∗ (ν x̃ỹ)C ′[Q], for some C ′, f .
2. Completeness: If JC[P ]Kf 7−→ R then there exist Q and C ′[·] such that R ,→R JC ′[Q]Kf and C[P ] −→∗ (ν x̃ỹ)C ′[Q].
Proof. By applying Thm. 10 and following the same proof as Thm. 8.
t u
We have then shown that there exists a meaningful class of well-typed π processes that satisfy an operational correspondence statement for our encoding. Furthermore, we have shown that this class of processes is that of programs, which has strong ties to the safety properties studied in [19]. As a final result regarding typing properties, we would like to relate session types with the typing of RML (§ A.3). First, we show some admissible rules for the derived constructs of our type system:

Proposition 1 (Admissible rules for RML). For the derived expressions in Fig. 3, namely e1 ∥ e2 , e1 ; e2 and await e1 (x) in e2 , the following rules are admissible:
(T-SEQ):   Γ ⊢r e1 : τ1 | κ1,  Γ ⊢r e2 : τ2 | κ2  ⟹  Γ ⊢r e1 ; e2 : τ2 | κ1 ; κ2
(T-PAR):   Γ ⊢r e1 : τ1 | κ1,  Γ ⊢r e2 : τ2 | κ2  ⟹  Γ ⊢r e1 ∥ e2 : unit | κ1 ∥ κ2
(T-AWAIT): Γ ⊢r e1 : (τ1 , τ2 ) event | _ ≡κ 0,  Γ, x : τ2 ⊢r e2 : unit | κ2  ⟹  Γ ⊢r await e1 (x) in e2 : unit | •∞ + (•; κ2 )

Proof. We show the typing derivation for each expression:

1. e1 ; e2 stands for let _ = () and _ = e1 in e2 . By Rule (T-LET), from Γ ⊢r () : unit | 0, Γ ⊢r e1 : τ1 | κ1 and Γ ⊢r e2 : τ2 | κ2 we obtain
Γ ⊢r let _ = () and _ = e1 in e2 : τ2 | (0 ∥ κ1 ); κ2
and (0 ∥ κ1 ); κ2 ≡κ κ1 ; κ2 . Therefore, Rule (T-SEQ) is admissible.

2. e1 ∥ e2 stands for let _ = e1 and _ = e2 in (). By Rule (T-LET), from Γ ⊢r e1 : τ1 | κ1 , Γ ⊢r e2 : τ2 | κ2 and Γ ⊢r () : unit | 0 we obtain
Γ ⊢r let _ = e1 and _ = e2 in () : unit | (κ1 ∥ κ2 ); 0
and (κ1 ∥ κ2 ); 0 ≡κ κ1 ∥ κ2 . Therefore, Rule (T-PAR) is admissible.

3. await e1 (x) in e2 stands for do (loop pause ) until e1 (x) → (e2 ). By Rule (T-DOU), from Γ ⊢r loop pause : unit | (0; •)∞ , Γ ⊢r e1 : (τ1 , τ2 ) event | _ ≡κ 0 and Γ, x : τ2 ⊢r e2 : unit | κ2 we obtain
Γ ⊢r do (loop pause ) until e1 (x) → (e2 ) : unit | (0; •)∞ + (•; κ2 )
and (0; •)∞ + (•; κ2 ) ≡κ •∞ + (•; κ2 ). Therefore, Rule (T-AWAIT) is admissible.  t u
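For reference, the desugarings used in the proof above can be written down directly. The following OCaml sketch (our own toy abstract syntax, not the RML implementation; the constructor names are hypothetical) expresses the three derived constructs of Prop. 1 as macros over the core constructs of Fig. 3.

    (* Minimal core syntax: only what the derived forms of Prop. 1 need. *)
    type exp =
      | Unit
      | Var of string
      | Pause
      | Loop of exp
      | Let of string * exp * string * exp * exp  (* let x1 = e1 and x2 = e2 in e3 *)
      | DoUntil of exp * exp * string * exp       (* do (e1) until e2(x) -> (e3)   *)
      | Other of string                           (* any other expression          *)

    (* e1 ; e2  =  let _ = () and _ = e1 in e2 *)
    let seq e1 e2 = Let ("_", Unit, "_", e1, e2)

    (* e1 || e2  =  let _ = e1 and _ = e2 in () *)
    let par e1 e2 = Let ("_", e1, "_", e2, Unit)

    (* await e1(x) in e2  =  do (loop pause) until e1(x) -> (e2) *)
    let await e1 x e2 = DoUntil (Loop Pause, e1, x, e2)

    let () =
      ignore (seq (Other "a") (Other "b"));
      ignore (par (Other "a") (Other "b"));
      match await (Var "c") "z" (Other "continuation") with
      | DoUntil (Loop Pause, Var "c", "z", Other _) ->
          print_endline "await desugars as expected"
      | _ -> print_endline "unexpected shape"

The behaviors assigned by (T-SEQ), (T-PAR) and (T-AWAIT) are exactly the behaviors (T-LET) and (T-DOU) assign to these desugared forms, up to ≡κ.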
Before proceeding to state the main theorem about the typing of encoded programs, we will need some auxiliary definitions and results: 42
Definition 32 (Subjects and continuations of a session type). For every type we define the set of subjects of a type, written sub(T ), as follows:

sub(bool) = {bool}        sub(end) = ∅
sub(!T.S) = sub(T ) ∪ sub(S)        sub(?T.S) = sub(T ) ∪ sub(S)
sub(⊕{li : Ti }i∈I ) = ∪i∈I sub(Ti )        sub(&{li : Ti }i∈I ) = ∪i∈I sub(Ti )
sub(a) = {a}        sub(µa.T ) = sub(T )

We also define sub1 (T ), the set of outermost subjects of a protocol, as follows:

sub1 (bool) = {bool}        sub1 (end) = ∅
sub1 (!T.S) = sub1 (T )        sub1 (?T.S) = sub1 (T )
sub1 (⊕{li : Ti }i∈I ) = ∪i∈I sub1 (Ti )        sub1 (&{li : Ti }i∈I ) = ∪i∈I sub1 (Ti )
sub1 (a) = {a}        sub1 (µa.T ) = sub1 (T )

Lastly, we define cont(T ), the set of continuations of a type, as follows:

cont(bool) = {bool}        cont(end) = ∅
cont(!T.S) = {S}        cont(?T.S) = {S}
cont(⊕{li : Ti }i∈I ) = {Ti }i∈I        cont(&{li : Ti }i∈I ) = {Ti }i∈I
cont(a) = ∅        cont(µa.T ) = cont(T )

Intuitively, the definition above gives the set of types of all messages in a given protocol, as well as the set of possible continuations the protocol may have. It will be useful to characterize the kind of protocols treated in further results.
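As a quick sanity check of Def. 32, the following OCaml sketch (our own datatype for the session types of Fig. 9; qualifiers are omitted since sub and cont ignore them) computes sub, sub1 and cont.

    type styp =
      | Bool
      | End
      | Out of styp * styp                 (* !T.S          *)
      | In of styp * styp                  (* ?T.S          *)
      | Select of (string * styp) list     (* ⊕{li : Ti}    *)
      | Branch of (string * styp) list     (* &{li : Ti}    *)
      | TVar of string                     (* a             *)
      | Mu of string * styp                (* µa.T          *)

    (* sub(T): the subjects occurring anywhere in the protocol. *)
    let rec sub = function
      | Bool -> [Bool]
      | End -> []
      | Out (t, s) | In (t, s) -> sub t @ sub s
      | Select bs | Branch bs -> List.concat_map (fun (_, t) -> sub t) bs
      | TVar a -> [TVar a]
      | Mu (_, t) -> sub t

    (* sub1(T): only the outermost subject(s). *)
    let rec sub1 = function
      | Bool -> [Bool]
      | End -> []
      | Out (t, _) | In (t, _) -> sub1 t
      | Select bs | Branch bs -> List.concat_map (fun (_, t) -> sub1 t) bs
      | TVar a -> [TVar a]
      | Mu (_, t) -> sub1 t

    (* cont(T): the possible continuations after the first exchange. *)
    let rec cont = function
      | Bool -> [Bool]
      | End -> []
      | Out (_, s) | In (_, s) -> [s]
      | Select bs | Branch bs -> List.map snd bs
      | TVar _ -> []
      | Mu (_, t) -> cont t

    let () =
      (* S = µa.!bool.a, the protocol used earlier in this appendix. *)
      let s = Mu ("a", Out (Bool, TVar "a")) in
      Printf.printf "|sub S| = %d, |sub1 S| = %d, |cont S| = %d\n"
        (List.length (sub s)) (List.length (sub1 s)) (List.length (cont s))

On S = µa.!bool.a this reports two subjects (bool and the recursion variable a), one outermost subject, and one continuation, matching the equations above.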
Definition 33 (Translation set of type continuations). We define ⦃·⦄ as a mapping from π types and pretypes (cf. Fig. 9) to sets of RML types (cf. Fig. 13) as follows:

⦃bool⦄ = {bool}
⦃end⦄ = {(unit, unit) event}
⦃q ?T.S⦄ = {(τ1 × τ2 , τ1 × τ2 ) event | τ1 ∈ ⦃T ⦄ ∧ τ2 ∈ ⦃S⦄}
⦃q !T.S⦄ = {(τ1 × τ2 , τ1 × τ2 ) event | τ1 ∈ ⦃T ⦄ ∧ τ2 ∈ ⦃S⦄}
⦃q ⊕{li : Ti }i∈I ⦄ = {(⟨li : τi ⟩i∈I × τ, ⟨li : τi ⟩i∈I × τ ) event | ∀i. (τi ∈ ⦃Ti ⦄ ∧ τ ∈ ⦃Ti ⦄)}
⦃q &{li : Ti }i∈I ⦄ = {(⟨li : τi ⟩i∈I × τ, ⟨li : τi ⟩i∈I × τ ) event | ∀i. (τi ∈ ⦃Ti ⦄ ∧ τ ∈ ⦃Ti ⦄)}
⦃a⦄ = {(unit, unit) event}
⦃µa.T ⦄ = ⦃T ⦄

Intuitively, we use the translation set of type continuations to keep track of the types that the variables carrying the continuation signal may have. The translation of base types is standard. Pretypes generate sets disregarding the qualifier and depending on types T and S. Note that if the type is neither a selection nor a branching, the set is a singleton containing a signal type. If the pretype is a selection or a branching, the set consists of types of the form (⟨li : τi ⟩i∈I × τ, ⟨li : τi ⟩i∈I × τ ) event, where τ ranges over each possible continuation ⦃Ti ⦄; this allows us to keep track of all possible continuations when branching occurs. We extend ⦃·⦄ to sets by pointwise application. We now prove a result that allows us to determine which kinds of types a continuation may have.

Lemma 5 (Type inversion on processes). For every pre-redex P (cf. Def. 27) in π such that Γ ⊢ P for some Γ , one of the following holds:
1. If P = xhvi.Q then there exist an environment Γ ′, types T, S and a qualifier q such that Γ = Γ ′, v : T, x : q !T.S and either: (a) S = end, (b) S = q p, or (c) S = µa.T ′.
2. If P = x(y).Q then there exist an environment Γ ′, types T, S and a qualifier q such that Γ = Γ ′, x : q ?T.S and either: (a) S = end, (b) S = q p, or (c) S = µa.T ′.
3. If P = x / l.Q then there exist an environment Γ ′, types Ti with i ∈ I and a qualifier q such that Γ = Γ ′, x : q ⊕{li : Ti } and, for every Ti : (a) Ti = end, (b) Ti = q p, or (c) Ti = µa.T ′.
4. If P = x . {li : Pi }i∈I then there exist an environment Γ ′, types Ti with i ∈ I and a qualifier q such that Γ = Γ ′, x : q &{li : Ti } and, for every Ti : (a) Ti = end, (b) Ti = q p, or (c) Ti = µa.T ′.
5. If P = ∗x(y).Q then there exist an environment Γ ′, types T, S and a qualifier q such that Γ = Γ ′, x : q ?T.S and either: (a) S = end, (b) S = q p, or (c) S = µa.T ′.

Proof. The proof proceeds by structural induction on P . We only show the case of output, since all other cases are analogous.

Case 1 (P = xhvi.Q). By assumption there exists Γ such that Γ ⊢ P , and by inspection of the typing rules for π (cf. Fig. 10) only Rule (T:OUT) can have been applied. Therefore, Γ = Γ1 ◦ Γ2 ◦ Γ3 and:

(T:OUT)  Γ1 ⊢ x : q !T.S,  Γ2 ⊢ v : T,  Γ3 ◦ x : S ⊢ Q  ⟹  Γ1 ◦ Γ2 ◦ Γ3 ⊢ xhvi.Q

From inversion on this rule application we have:
1. Γ1 ⊢ x : q !T.S implies that x : q !T.S ∈ Γ , since that derivation must end with Rule (T:VAR).
2. Γ2 ⊢ v : T implies that v : T ∈ Γ or that v ∈ {tt, ff}.
3. Γ3 , x : S ⊢ Q implies that S can only be one of the following, by a case analysis on the type grammar in Fig. 18:
(a) S = bool: impossible, since by assumption Γ3 , x : S ⊢ Q and a boolean type cannot appear outside the subject of a type.
(b) S = end: possible; occurs when the protocol ends.
(c) S = q p: possible; occurs when the protocol has not yet terminated.
(d) S = a: impossible, since recursive variables alone are not typable in the system; this case contradicts the typability assumption.
(e) S = µa.T ′: possible; occurs when S is the unfolding of a recursive type.  t u

Definition 34 (Linear environment in π). Let L(·) be a function on π typing environments defined as follows:

L(∅) = ∅
L(Γ, x : bool) = L(Γ )        L(Γ, x : end) = L(Γ )
L(Γ, x : lin !T.S) = L(Γ ), x : lin !T.S        L(Γ, x : un !T.S) = L(Γ )
L(Γ, x : lin ?T.S) = L(Γ ), x : lin ?T.S        L(Γ, x : un ?T.S) = L(Γ )
L(Γ, x : lin ⊕{l : Ti }i∈I ) = L(Γ ), x : lin ⊕{l : Ti }i∈I        L(Γ, x : un ⊕{l : Ti }i∈I ) = L(Γ )
L(Γ, x : lin &{l : Ti }i∈I ) = L(Γ ), x : lin &{l : Ti }i∈I        L(Γ, x : un &{l : Ti }i∈I ) = L(Γ )
L(Γ, x : µa.T ) = L(Γ ), x : µa.T if lin(T )        L(Γ, x : µa.T ) = L(Γ ) if un(T )
The previous function extracts only the linearly qualified part of a π typing environment. It will be useful for the applications of weakening (Lem. 1) in the proof of the following statement.

Theorem 11 (Type preservation). For every π process P such that Γ ⊢ P , there exists κ such that ∆ ⊢r JP Kf : unit | κ, where ∆ = Γ0 ∪ ⋃x∈dom(Γ ) fx : ⦃Γ (x)⦄ (cf. Def. 26).

Proof. We proceed by induction on the typing derivation Γ ⊢ P and a case analysis on the last applied rule. Assume Γ0 is as in Def. 26, and let L(Γ ) be the function over π typing environments that returns only the linearly qualified types (cf. Def. 34).

1. Rule (T:IF): By assumption:

(T:IF)  Γ1 ⊢ v : bool,  Γ2 ⊢ Q,  Γ2 ⊢ R  ⟹  Γ1 ◦ Γ2 ⊢ v? (Q) : (R)

We show the derivation of ∆ ⊢r if v then (pause ; JQKf ) else (pause ; JRKf ) : unit | •; κ1 + •; κ2 , with ∆ = Γ0 ∪ ⋃x∈dom(Γ ) fx : ⦃Γ (x)⦄. Recall that bool = ⟨tt : unit, ff : unit⟩ and that the conditional is simulated by a match operator in RML. The derivation applies Rule (T-MATCH) to ∆ ⊢r v : bool | 0 (by (T-CON)) and, for each branch, weakening (Lem. 1) followed by Rule (T-SEQ) and the inductive hypothesis, yielding ∆ \ Γ̂1 ⊢r pause ; JQKf : unit | •; κ1 and ∆ \ Γ̂1 ⊢r pause ; JRKf : unit | •; κ2 , where Γ̂1 = L(⋃x∈dom(Γ1 ) fx : ⦃Γ1 (x)⦄). Notice that weakening can be applied because linear variables appear in a single part of the splitting, which in turn means that they cannot appear in the other processes, as they must be consumed exactly once.

2. Rule (T:PAR): By assumption:

(T:PAR)  Γ1 ⊢ P,  Γ2 ⊢ Q  ⟹  Γ1 ◦ Γ2 ⊢ P | Q

The derivation of ∆ ⊢r JP Kf ∥ JQKf : unit | κ1 ∥ κ2 proceeds by applying Rule (T-PAR) and using weakening before applying the inductive hypothesis to each component. Weakening is possible by the same argument as in the previous case: linear variables can appear free in only one of the two processes (cf. Def. 20); if a variable is unrestricted, it appears in both and there is no need to weaken it.

3. Rule (T:RES): By assumption:

(T:RES)  Γ, x : T, y : T ⊢ Q  ⟹  Γ ⊢ (νxy)Q
Fig. 17 displays in full the RML typing derivation used in the three subcases of the output case (Case 4 below): it derives ∆ ⊢r JP Kf : unit | 0; •; κ by typing emit fx (v, x′ ) with (T-EMIT) (via (T-PAIR) and (T-VAR)), typing pause ; JQKg with (T-SEQ), (T-PAUSE), weakening and the inductive hypothesis, and closing the derivation under the signal declaration with (T-SIG).

Fig. 17. Typing derivation for Cases P = xhvi.Q (a), (b) and (c).
The derivation of ∆ ⊢r signal^fst c : (err, nc) in ⟦Q⟧g : unit | κ, with g = f, {x ← c, y ← c}, fst = λx.λy.x, err : τ and nc : ζ, where τ ∈ ⦃sub1(T)⦄ and ζ ∈ ⦃cont(T)⦄, proceeds as follows. The last applied rule is (T-SIG), with conclusion ∆ ⊢r signal^fst c : (err, nc) in ⟦Q⟧g : unit | κ and premises:
– ∆ ⊢r (err, nc) : τ1 | _ ≡ 0, by Rules (T-PAIR), (T-VAR), (T-VAR);
– ∆ ⊢r fst : τ2 | _ ≡ 0, by Rule (T-VAR);
– ∆, c : τ3 ⊢r ⟦Q⟧g : unit | κ, by the IH;
where τ1 = τ × ζ, τ2 = τ × ζ → τ × ζ → τ × ζ and τ3 = (τ × ζ, τ × ζ) event.

4. Rule (T:OUT): By assumption:

   Γ1 ⊢ x : q !T.S    Γ2 ⊢ v : T    Γ3 ◦ x : S ⊢ Q
   ─────────────────────────────────────────────────── (T:OUT)
              Γ1 ◦ Γ2 ◦ Γ3 ⊢ x⟨v⟩.Q

The derivation proceeds as follows:
(a) ⟦P⟧f = signal^fst x' : (err, nc) in emit fx (v, x'); pause ; ⟦Q⟧f,{x←x'}, by Def. 14.
(b) Assume fx : τ, where τ = (τ1 × τ2, τ1 × τ2) event with τ1 ∈ ⦃T⦄ and τ2 ∈ ⦃S⦄. Also assume fst = λx.λy.x (i.e., a function that always returns its first argument) as the gathering function. By Lem. 5 we distinguish the following cases, depending on S:
   i. Case S = end. We have to show ∆ ⊢r ⟦P⟧f : unit | 0; •; κ, where ∆ = Γ0, fx : (τ1 × τ2, τ1 × τ2) event, err : unit, nc : unit, v : τ1. This is done in Fig. 17, with τ3 = (unit, unit) event, g = f, {x ← x'} and ∆' = ∆ \ L(⋃x∈dom(Γ1) fx : ⦃Γ1(x)⦄ ∪ ⋃x∈dom(Γ2) fx : ⦃Γ2(x)⦄). Weakening is possible by the previous arguments.
   ii. Case S = q p'. We have to show ∆ ⊢r ⟦P⟧f : unit | 0; •; κ, where ∆ = Γ0, fx : (τ1 × τ2, τ1 × τ2) event, err : τ4, nc : τ5, v : τ1 with τ4 ∈ ⦃sub1(q p')⦄ and τ5 ∈ ⦃cont(q p')⦄. The type derivation proceeds as in Fig. 17, now taking τ3 = (τ4 × τ5, τ4 × τ5) event.
   iii. Case S = µa.S'. This case is analogous to the previous one, as recursive types are treated as in the finite case.
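To make the shape analyzed in this case concrete, the output encoding can be rendered as an executable ReactiveML-style expression roughly as follows. This is a sketch only, for the S = end case (where both components of the signal's default value are of type unit); the names out_enc and enc_q are hypothetical stand-ins and are not part of the formal encoding.

  (* Sketch of the output case of Def. 14 for S = end: declare the fresh
     continuation signal x', emit the pair (v, x') on the signal fx that
     implements x, pause, and continue with the encoding of Q (represented
     here by the stand-in enc_q, already remapped to x'). *)
  let process out_enc fx v enc_q =
    signal x' default ((), ()) gather (fun a _ -> a) in   (* gathering function fst *)
    emit fx (v, x');
    pause;
    run (enc_q x')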
5. Rule (T:IN): By assumption:

   Γ1 ⊢ x : q ?T.S    (Γ2, y : T) ◦ x : S ⊢ Q
   ───────────────────────────────────────────── (T:IN)
             Γ1 ◦ Γ2 ⊢ x(y).Q

The derivation ends with Rule (T-AWAIT), with conclusion ∆ ⊢r await fx (y, w) in ⟦Q⟧g : unit | •^∞ + (•; κ2) and premises:
– ∆ ⊢r fx : (τ, τ) event | _ ≡ 0, by Rule (T-VAR);
– ∆, y : τ1, w : τ2 ⊢r ⟦Q⟧g : unit | κ2, by weakening (W) from ∆', y : τ1, w : τ2 ⊢r ⟦Q⟧g : unit | κ2, which follows by the IH;

where g = f, {x ← w}, τ = τ1 × τ2 with τ1 ∈ ⦃T⦄ and τ2 ∈ ⦃S⦄, and ∆' = ∆ \ L(⋃x∈dom(Γ1) fx : ⦃Γ1(x)⦄).
All the other cases proceed in a similar way. ⊓⊔
The previous result gives a form of type soundness: it ensures that encoded programs are accepted by the RML compiler.
B.2 Proof of Thm. 2 (Page 14)
We proceed by proving each item.

Name invariance: First, note that k_i and k_o are shorthands for the functions inputQ k and outputQ k (cf. App. C.1), which return the name of the input and output queue of endpoint k, respectively. Therefore fv(k_o) = fv(k_i) = {k}.

Statement: Let P, σ, x, and E[·] be an aπ? process, a substitution, a variable in aπ?, and an evaluation context as in Def. 4, respectively. Then {[Pσ]} = {[P]}σ.

Proof. We proceed by induction on the structure of P. We show the cases for inaction and output; all the other cases follow the same pattern.

Case 1. P = 0:
  P = 0                                (Assumption)                       (1)
  Pσ = 0                               (fv(0) = ∅)                        (2)
  {[P]} = ()                           (Fig. 8)                           (3)
  {[P]}σ = ()                          (fv(()) = ∅)                       (4)
  {[Pσ]} = {[P]}σ                      (By (3), (4))                      (5)

Case 2. P = x⟨v⟩.Q:
  P = x⟨v⟩.Q                           (Assumption)                       (1)
  Pσ = xσ⟨vσ⟩.Qσ                       (Def. of substitution)             (2)
  {[P]} = put x_o v; {[Q]}             (Fig. 8)                           (3)
  {[P]}σ = put x_oσ vσ; {[Q]}σ         (Def. of substitution)             (4)
  {[Pσ]} = {[P]}σ                      (By (3), (4), IH, fv(x_o) = {x})   (5)
Statement: Let P, σ, x, and E[·] be an aπ? process, a substitution, a variable in aπ?, and an evaluation context as in Def. 4, respectively. Then {[E[P]]} = {[E]}[{[P]}].

Proof. The proof proceeds by induction on the structure of P, as in Thm. 7.
B.3 Proof of Thm. 3 (Page 14)
We start by stating some auxiliary lemmas.

Lemma 6. For every aπ? process Q, if ⟨{[Q]} ; Σ⟩ ⇝ ⟨R ; Σ'⟩ for some Σ', then one of the following holds:
1. R = let x = pop y in S, for some x, y, S with y ∈ Σ; or
2. R = run S, for some S; or
3. R = (); or
4. R = match c_i with {l_i → S_i}_{i∈I}, for some S_i and some l_i; or
5. R = S1 ∥ S2, where S1 and S2 are any of the above.
Proof. The proof proceeds by induction on the structure of Q.

Case 1. Q = z⟨v⟩.Q':
  {[Q]} = put z_o v; {[Q']}                          (Def. 17)                      (1)
  ⟨put z_o v; {[Q']} ; z_o : m̃, Σ⟩ ⇝ ⟨S ; Σ'⟩       (Rules ⌊L-DONE⌋, ⌊PUT-Q⌋)      (2)
  ⟨{[Q']} ; z_o : m̃ · v, Σ⟩ ⇝ ⟨S ; Σ'⟩              (Inversion on ⌊L-DONE⌋)        (3)
Conclude by applying the IH.

Case 2. Q = z ◁ l.Q': As above.

Case 3. Q = z(w).Q': There are two subcases, depending on whether the reduction occurs with Rule ⌊Q-POP⌋ or with its variant for the empty queue. The former proceeds as above; the latter is trivial, since that rule does not reduce the process in any way.

Case 4. Q = z ◁ l.Q': As above.

Case 5. Q = x ▷ {l_i : Q_i}_{i∈I}: There are two subcases, depending on whether the reduction occurs with Rule ⌊Q-POP⌋ or with its variant for the empty queue. The latter is trivial, since that rule does not reduce the process in any way. The former proceeds as above, assuming that queue x_i contains a valid label; if it contains an invalid label, the proof is also trivial, as the match operator will not be resolved.

Case 6. Q = b ? (P) : (P'): Proceeds depending on whether the boolean b corresponds to tt, to ff, or to neither. The last case is trivial, since there is no reduction of the encoded process; the other two proceed as above.

Case 7. Q = (νx)Q': This case proceeds by inversion on Rule ⌊SIG-DEC⌋ and applying the IH.

Case 8. Q = µX.Q': The case proceeds by the following reduction. We first apply Rule ⌊L-DONE⌋; the left branch concludes by using Rule ⌊RECUR⌋. We then proceed on the right branch as follows:
  ⟨process R ; Σ⟩ ⇝_G^{∅,tt} ⟨process R ; Σ⟩          ⟨R ; Σ⟩ ⇝_G^{E,b} ⟨S ; Σ'⟩
  ──────────────────────────────────────────────────────────────────────────────────
  ⟨run process {[Q']}{process {[Q']}{rec αX = {[Q']}/αX}/αX} ; Σ⟩ ⇝_G^{E,b} ⟨S ; Σ'⟩

where R = run process {[Q']}{process {[Q']}{rec αX = {[Q']}/αX}/αX}.
We proceed by a case analysis on b: if b = tt, then the expression terminates, meaning that S = (); otherwise, if b = ff, we do a case analysis on Q', and for each case we apply a reasoning analogous to the above.

Case 9. Q = 0: Trivial by Def. 17.

Case 10. Q = Q' | Q'': Can be reduced to any of the cases above.

Case 11. Q = X: Trivial, since {[X]} = pause ; run αX. ⊓⊔

Lemma 7. For every properly initialized process C[Q, K(k̃)], if ⟨{[Q]} ; δ(K(k̃))⟩ ⇝ ⟨S ; Σ⟩ for some S, Σ, then Q | K(k̃) −→∗A Q' | K'(k̃), with S = {[Q']} and Σ = δ(K'(k̃)).

Proof. This proceeds by induction on the structure of Q, as follows.

Case 1. Q = k⟨v⟩.Q':
  Q = k⟨v⟩.Q'                                        (Assumption)       (1)
  {[Q]} = put k_o v; {[Q']}                          (Fig. 8)           (2)

By using Rule ⌊L-DONE⌋ we have that:

  ⟨put k_o v ; δ(K(k̃))⟩ ⇝ ⟨() ; δ(K'(k̃))⟩     ⟨{[Q']} ; δ(K'(k̃))⟩ ⇝ ⟨S ; δ(K''(k̃))⟩
  ─────────────────────────────────────────────────────────────────────────────────────
  ⟨put k_o v; {[Q']} ; δ(K(k̃))⟩ ⇝ ⟨S ; Σ⟩                                                (3)

where the left branch concludes by applying Rule ⌊PUT-Q⌋. For the right branch we then prove:

  ⟨{[Q']} ; δ(K'(k̃))⟩ ⇝ ⟨S ; δ(K''(k̃))⟩                             (Inversion on (3))      (4)
  Q' | K'(k̃) −→∗A R | K''(k̃), S = {[R]}                             (IH)                    (5)
  K(k̃) ≡A k[i : m̃1, o : m̃2] | K(k̃ \ k)                             (Def. 5)                (6)
  k⟨v⟩.Q' | k[i : m̃1, o : m̃2] | K(k̃ \ k) −→A
      Q' | k[i : m̃1, o : m̃2 · v] | K(k̃ \ k)                        (Fig. 2)                (7)
  k⟨v⟩.Q' | K(k̃) −→∗A R, S = {[R]}                                  ((6), trans. of −→A)    (8)
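Throughout these cases, put and pop are the RMLq queue primitives. As a reading aid only (not the formal semantics of Fig. 4), their effect over a list representation of queues, with the head being the oldest message, can be sketched as follows:

  (* Sketch: put appends a value at the tail; pop removes the head, if any. *)
  let put q v = q @ [v]

  let pop = function
    | []      -> None          (* empty queue: the corresponding rule leaves the process unchanged, cf. Lem. 6 *)
    | m :: q' -> Some (m, q')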
Case 2. Q = k(y).Q':
  Q = k(y).Q'                                        (Assumption)       (1)
  {[Q]} = let y = pop k_i in {[Q']}                  (Fig. 8)           (2)

We distinguish two cases, depending on whether Rule ⌊L-DONE⌋ or Rule ⌊L-PAR⌋ is applied; this is determined by the state of the queue k[i : m̃1, o : m̃2]. If m̃1 is empty, then we proceed with Rule ⌊L-PAR⌋ and the proof is trivial, since Q does not reduce. Otherwise, δ(K(k̃)) = k_i : m · m̃1, k_o : m̃2, δ(K(k̃ \ k)) for some m and, by using Rule ⌊L-DONE⌋:

  ⟨pop k_i ; δ(K(k̃))⟩ ⇝ ⟨() ; δ(K'(k̃))⟩     ⟨{[Q']}{m/y} ; δ(K'(k̃))⟩ ⇝ ⟨S ; δ(K''(k̃))⟩
  ─────────────────────────────────────────────────────────────────────────────────────
  ⟨let y = pop k_i in {[Q']} ; δ(K(k̃))⟩ ⇝ ⟨S ; Σ⟩                                        (3)

where the left branch concludes by applying Rule ⌊POP-Q⌋. For the right branch we then prove:

  ⟨{[Q']}{m/y} ; δ(K'(k̃))⟩ ⇝ ⟨S ; δ(K''(k̃))⟩                        (Inversion on (3))      (4)
  Q'{m/y} | K'(k̃) −→∗A R | K''(k̃), S = {[R]}                        (IH)                    (5)
  K(k̃) ≡A k[i : m · m̃1, o : m̃2] | K(k̃ \ k)                         (Def. 5)                (6)
  k(y).Q' | k[i : m · m̃1, o : m̃2] | K(k̃ \ k) −→A
      Q'{m/y} | k[i : m̃1, o : m̃2] | K(k̃ \ k)                       (Fig. 2)                (7)
  k(y).Q' | K(k̃) −→∗A R, S = {[R]}                                  ((6), trans. of −→A)    (8)

Case 3. Q = µX.Q': By applying Lem. 6 we can distinguish two cases for the reduction: either S = run αX or S ≠ run αX. The former case is trivial, since once unfolded, the recursive process can reduce until the first occurrence of the process variable X. The latter proceeds by a case analysis on Q' and applying the IH.

Case 4. Others: All the other cases proceed as above. ⊓⊔

Lemma 8. For every K(k, k̄) ≡A k[i : m̃1, o : m · m̃2] | k̄[i : m̃3, o : m' · m̃4], if ⟨H(k, k̄) ; δ(K(k, k̄))⟩ ⇝ ⟨S ; Σ⟩ then:
1. k[i : m̃1, o : m · m̃2] | k̄[i : m̃3, o : m' · m̃4] −→A k[i : m̃1 · m', o : m̃2] | k̄[i : m̃3 · m, o : m̃4]; and
2. S = O(k') ∥ I(k'') ∥ O(k'') ∥ I(k'), for fresh names k', k''; and
3. Σ = δ(K'(k, k̄)).
Proof. We proceed by a direct proof. W.l.o.g. we show only the transitions for I(k̄) ∥ O(k); the transitions for O(k̄) ∥ I(k) are obtained in the same way. By Def. 15:

  I(k) = let rec process I α =
           (present ack_α ? (emit ack_ᾱ ; await α(x, α') in (put x α_i); run I α') : (run I α))
         in run I k                                                              (Def. 15)   (1)

  O(k) = let rec process O α =
           signal α' in isEmpty α_o ; emit ack_ᾱ ;
           (present ack_α ? (emit α ((pop α_o), α'); pause ; run O α') : (run O α))
         in run O k                                                              (Def. 15)   (2)
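For readability, the two forwarders above can be rendered in concrete ReactiveML-like syntax, in the style of the enhanced handlers of App. B.4 (ack, comp, put, pop, isEmpty and the queue names are the helpers assumed there). This is only a reading aid for the derivation below, not a new definition.

  (* Sketch: input-side forwarder of Def. 15. Wait for the acknowledgement,
     acknowledge the complementary endpoint, receive a message together with
     the next continuation signal, store it in the input queue, and recur. *)
  let rec process i_fwd k =
    present (ack k) then begin
      emit (ack (comp k));
      await k (x, k') in
      (put x k_i);
      run (i_fwd k')
    end
    else run (i_fwd k)

  (* Sketch: output-side forwarder of Def. 15. Perform the isEmpty check on the
     output queue and the acknowledgement handshake, emit the queued message
     paired with a fresh continuation signal, and recur. *)
  let rec process o_fwd k =
    signal k' in
    isEmpty k_o;
    emit (ack (comp k));
    present (ack k) then begin
      emit k ((pop k_o), k');
      pause;
      run (o_fwd k')
    end
    else run (o_fwd k)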
First, apply Rule ⌊L-PAR⌋ to split the parallel process into two branches. We proceed for each branch.

– Branch O(k): The transition ⟨O(k) ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩ is derived with Rule ⌊LET-DONE⌋ from the premise ⟨R ; δ(K(k, k̄))⟩ ⇝ ⟨R' ; δ(K(k, k̄))⟩, which follows by Rule ⌊RECUR⌋, and from the derivation D below, where

    R  = rec O = λα.process (signal α' in isEmpty α_o ; emit ack_ᾱ ;
           (present ack_α ? (emit α ((pop α_o), α'); pause ; run O α') : (run O α)))
    R' = λα.process (signal α' in isEmpty α_o ; emit ack_ᾱ ;
           (present ack_α ? (emit α ((pop α_o), α'); pause ; run O α') : (run O α))){R/O}

  The derivation D concludes ⟨run O k {R'/O} ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩ and is built, bottom-up, as follows:
  • by Rule ⌊APPL⌋, from ⟨R'{k/α} ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩;
  • the latter from ⟨R'' ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩;
  • which, by Rule ⌊SIG-DEC⌋ and then Rule ⌊L-DONE⌋, follows from ⟨R''' ; δ(K(k, k̄))⟩ ⇝ ⟨() ; δ(K(k, k̄))⟩ and ⟨R^iv ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩;

  where
    R''  = signal k' in isEmpty k_o ; emit ack_k̄ ; (present ack_k ? (emit k ((pop k_o), k'); pause ; run O k') : (run O k))
    R''' = isEmpty k_o
    R^iv = emit ack_k̄ ; (present ack_k ? (emit k ((pop k_o), k'); pause ; run O k') : (run O k))

  We continue with the rightmost branch: ⟨R^iv ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩ follows, again by Rule ⌊L-DONE⌋, from ⟨R^v ; δ(K(k, k̄))⟩ ⇝ ⟨() ; δ(K(k, k̄))⟩ and ⟨R^vi ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩, the latter being obtained from ⟨R^vii ; δ(K(k, k̄))⟩ ⇝ ⟨O(k') ; δ(K'(k, k̄))⟩ by Rule ⌊SIG-P⌋, where
    R^v   = emit ack_k̄
    R^vi  = (present ack_k ? (emit k ((pop k_o), k'); pause ; run O k') : (run O k))
    R^vii = emit k ((pop k_o), k'); pause ; run O k'
  and R^vii concludes by using Rules ⌊L-DONE⌋, ⌊EMIT⌋, ⌊L-PAR⌋ and ⌊PAUSE⌋.

– Branch I(k̄): Analogously to the previous branch, we can build a derivation tree for I(k̄), obtaining the desired expression I(k''). The signals ack_k and ack_k̄ are assumed to be present, since they are emitted by I(k̄) and O(k), respectively.

We now prove our operational correspondence statement.

Statement: Given a properly initialized aπ process C[Q, K(k̃)], it holds that:
1. Soundness: if C[Q, K(k̃)] −→A C[Q', K'(k̃)] then ([C[Q, K(k̃)]]) ⇝ ([C'[Q'', K''(k̃)]]), for some Q'', K''(k̃), C', where C[Q, K(k̃)] −→A C[Q', K'(k̃)] −→∗A (νx̃)C'[Q'', K''(k̃)].
2. Completeness: if ([C[Q, K(k̃)]]) ⇝ R then there exist Q', C', K'(k̃) such that C[Q, K(k̃)] −→∗A (νx̃)C'[Q', K'(k̃)] and R = ([C'[Q', K'(k̃)]]).

Proof. We proceed by showing each item.

1. Soundness: The proof proceeds by induction on the reduction and a case analysis on the last applied rule.

Case 1. Rule ⌊SEND⌋:
  C[Q, K(k̃)] −→A C[Q', K'(k̃)], with Rule ⌊SEND⌋                         (Assumption)    (1)
  Q = k⟨v⟩.Q' ∧ C[Q] = ∏_{j∈{1,...,n}} Q_j | Q | K(k̃)                    ((1), Def. 1)   (2)
  ⟨{[C[Q]]} ; δ(K(k̃))⟩ =
      ⟨signal k̃ in ∏_{j∈{1,...,n}} {[Q_j]} ∥ {[Q]} ∥ H(k̃) ; δ(K(k̃))⟩    (Def. 17)       (3)

By applying Rule ⌊SIG-DEC⌋ for every k ∈ k̃ and Rule ⌊L-PAR⌋ n + 2 times we see that:

  ⟨signal k̃ in ∏_{j∈{1,...,n}} {[Q_j]} ∥ {[Q]} ∥ H(k̃) ; δ(K(k̃))⟩ ⇝
      ⟨∏_{j∈{1,...,n}} S_j ∥ S ∥ H(k̃1) ; δ(K'(k̃))⟩                      (Fig. 4)        (4)

for some k̃1, S_j, S. We conclude by applying Lem. 7 and the IH to each of the processes {[Q_j]}, {[Q]} and S_j, S, taking into account that, by Def. 15, H(k̃1) does not interact with any of the {[Q_j]}, {[Q]}.
Case 2. Rules ⌊RECV⌋, ⌊SEL⌋, ⌊BRA⌋, ⌊IF-T⌋, ⌊IF-F⌋, ⌊RES⌋, ⌊PAR⌋, ⌊STR⌋: These proceed analogously to the case above.

Case 3. Rule ⌊COM⌋: Proceeds by a direct proof, applying Lem. 8.

2. Completeness: The proof is direct, applying Lem. 7 and Lem. 8 to each process composing C[P, K(k̃)].

B.4 Enhanced handlers
In aπ, messages inside queues are consumed. Therefore, one of the challenges in building an adaptable service that can react while preserving the messages already communicated in a given protocol is finding a way to back up the data being exchanged. This can be achieved in ReactiveML by using an enriched version of the handlers presented in §4. We provide an implementation of the new handlers that is closer to actual ReactiveML code; the syntactic and semantic intuitions are preserved. We define a handler as a process let process handler k = (hI k) || (hO k), where hI k and hO k are defined below.

1   let rec process hI k =
2     present (ack k) then
3     begin
4       emit (ack (comp k));
5       await k(x,y) in
6       (put x k_i);
7       (put x c_k_i);
8       (run hI y);
9     end
10    else run (hI k)

1   let rec process hO k =
2     signal k1 in isEmpty k_o;
3     emit (ack (comp k));
4     present (ack k) then
5     begin
6       let x = (pop k_o) in
7       emit (comp k) (x, k1);
8       (put x c_k_o); pause;
9       run (hO k1);
10    end
11    else run (hO k)
We assume standard functions comp and ack that receive a signal k and return the complementary signal k̄ and the acknowledgement signal ack_k, respectively. For simplicity, in {[·]} we assume that k_i, k_o, c_k_o, c_k_i are shorthands for functions inputQ, outputQ, copyOutQ and copyInQ, which return the names of the queues k_i, k_o, c_k_o and c_k_i associated to endpoint k. The handlers are extended versions of the ones presented in Def. 15. Their novelty appears in line 7 of hI and line 8 of hO: every message that is received or sent is also saved in the backup input/output queues c_k_i and c_k_o. These copied queues are then used to keep track of the state of the protocol at any given instant.
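One possible concrete reading of these assumed helpers, given purely as a sketch (the record and field names below are ours, not part of the development), is a per-endpoint record grouping the complementary signal, the acknowledgement signal, and the four queue names:

  (* Sketch: data associated to an endpoint k, as returned by the assumed
     helpers comp, ack, inputQ, outputQ, copyInQ and copyOutQ. *)
  type ('s, 'q) endpoint_info = {
    comp  : 's;   (* complementary endpoint signal           *)
    ack   : 's;   (* acknowledgement signal ack_k            *)
    k_i   : 'q;   (* input queue              (inputQ)       *)
    k_o   : 'q;   (* output queue             (outputQ)      *)
    c_k_i : 'q;   (* backup of received msgs  (copyInQ)      *)
    c_k_o : 'q;   (* backup of sent msgs      (copyOutQ)     *)
  }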
C Appendix to Section 2

C.1 Revisiting the riding protocol
In §2 we used an example to show the advantages of using SRP in a real-life scenario. That solution was not ideal, however, in the sense that switching between services should not require a full restart of the system: if certain information was already sent, it should not be necessary to repeat the whole exchange. In this section we show that it is possible, using ([·]) and RMLq, to model a failure-resistant pickup application. We build upon the example presented in §2. First, we rewrite S as an aπ process:

  S_aπ = (νx)(A(x) | D(x) | x[i : ε, o : ε] | x̄[i : ε, o : ε])

The main differences with S in §2 are: (i) the hiding operator establishes a single session on channel x (with complementary endpoint x̄), and (ii) the respective queues for each endpoint are composed in parallel with the complete system implementation. We then make use of {[·]} to model a recoverable riding protocol. Assume that rA x = {[A(x)]} ∥ H(x) and that rD x = driver x || handler x, where driver x = {[D(x)]}. We also assume a function qSel which receives two queues and returns the second one if the first is empty, and the first one otherwise (a sketch of qSel is given after the listing below). Furthermore, assume that now, later, quit are label keywords. Lastly, we assume that c_x_i and c_x_o are queues that copy the messages received and sent by endpoint x. We now show the implementation of the application that a secondary (backup) driver should run:

1   let process system =
2     signal x, xc in (run rA xc) || (run frD x)
3   let process bDriver x =
4     match c_x_i with
5     | [] -> let l = pop (qSel c_x_i x_i) in
6             let d = pop (qSel c_x_i x_i) in
7             let lab = pop (qSel c_x_i x_i) in
8             match lab with
9             | now -> put x_o
10            | later -> let w = pop (qSel c_x_i x_i) in
11                       put x_o "Ok"
12            | quit -> closeSession x
13    | q -> let l = pop (qSel c_x_i x_i) in
14           let d = pop (qSel c_x_i x_i) in
15           let lab = pop (qSel c_x_i x_i) in
16           match lab with
17           | now -> let w = pop c_x_o in
18                    match w with
19                    | "eta" -> ()
20                    | _ -> put x_o
21           | later -> let w = pop (qSel c_x_i x_i) in
22                      let z = pop c_x_o in
23                      match w with
24                      | "Ok" -> ()
25                      | _ -> put x_o "Ok"
26           | quit -> closeSession x
27  let process frD x = do rD x until fail(x) -> (bDriver x)
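As promised above, the assumed helper qSel can be sketched directly over the list representation of queues used in the listing (this is our sketch, not code from the development):

  (* Sketch of qSel: replay from the backup queue q1 while it still holds
     messages; fall back to the live queue q2 once the backup is empty. *)
  let qSel q1 q2 =
    match q1 with
    | [] -> q2
    | _  -> q1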
For this recovery procedure it is necessary to identify the session endpoint of the failed driver. For this reason, signal fail carries the endpoint that was originally being used; in a sense, this corresponds to identifying the exact communication that needs to be recovered. This endpoint is then reused by bDriver to restart the communication. An interesting feature of this procedure is that the conference attendee does not need to restart the communication: thanks to the backup queues, the client can resume from the exact point where the communication failed, as the new driver has all the required information.

Intuitively, system declares two signals x, xc, which correspond to endpoints x and x̄, respectively. Then, the attendee process rA is run in parallel with the failure-resistant driver frD. Process frD executes process rD until a failure is detected. Once this happens, the backup driver process bDriver connects with the attendee rA on the same session x. Depending on the state of the backup output queue, there are two possible execution paths. First, if the backup output queue is empty, then no messages were sent by the previously cancelled process; thus, bDriver first obtains all the messages in the backup input queue c_x_i, and once this queue is emptied, the protocol continues as usual. Second, if the backup output queue is not empty, then for every output in the protocol specification it is necessary to compare the first element of the backup output queue and decide whether it matches the message that is about to be sent. If it matches, we skip that output, since the message was already sent, and proceed with the continuation; otherwise, we simply continue as the protocol prescribes.
D Further Related Work
SRP was introduced in the 1980s [2] as a way to design and implement critical real-time systems. Since then, several works have provided solid foundations for SRP programming languages. In particular, the work on ESTEREL [App5] and the model presented in [6] offer foundations for languages such as RML [17,15] and ULM [App7]. Also worth mentioning are works that relate synchronous languages to the π-calculus; for instance, [App1] develops a non-deterministic variant of the SRP model of ESTEREL. The paper [App13] offers a survey of synchronous reactive programming languages, including ESTEREL, LUSTRE [App10], and several others.

Session types [12] have been thoroughly studied. Prior works have extended the foundations of session-based concurrency to include event-based behavior [14], adaptive behavior [App11], and timed behavior [4]. All these extensions use (variants of) the π-calculus as their base language. A key difference with our work is that we propose an SRP language (i.e., RML) to obtain a natural integration of some of the aforementioned features. Practical approaches to session types have resulted in a variety of implementations, including [App19,App22,App21]. The paper [App2] offers a recent survey of session types and behavioral types in practice. Another relevant implementation is [App20], a source of inspiration for our work: it integrates session-based concurrency into the OCaml programming language. As in our interpretation, the implementation in [App20] uses the notion of continuation-passing style developed in [8]. A distinguishing feature of our work with respect to [App20] is our interest in reactive, timed behaviors, which are not supported by OCaml and are therefore not available in [App20]. Our current implementation still lacks some features present in [App20], such as the integration of duality and linearity-related checks into the OCaml type system.

Our approach is related to our prior works on declarative interpretations of session π-calculi [App16,App9]. The first such interpretation was developed in [App16], where it is shown that declarative languages can support mobility in the sense of the π-calculus. The interpretation developed in [App9] improves over [App16] by supporting linearity and non-determinism. The works [App16,App9] are related to the present work due to the declarative flavor of SRP. In contrast, our reactive interpretation yields practical implementations in ReactiveML, which are not possible in the foundational interpretations of [App16,App9].

Outside process algebraic formalisms (and type-based validation techniques), other approaches to the formal specification and analysis of services use automata- and graph-based techniques. For instance, the work [9] uses Büchi automata to specify and analyze the conversation protocols that underlie electronic services.
Appendix References

1. R. M. Amadio. A synchronous pi-calculus. Inf. Comput., 205(9):1470–1490, 2007.
2. D. Ancona, V. Bono, M. Bravetti, J. Campos, G. Castagna, P. Deniélou, S. J. Gay, N. Gesbert, E. Giachino, R. Hu, E. B. Johnsen, F. Martins, V. Mascardi, F. Montesi, R. Neykova, N. Ng, L. Padovani, V. T. Vasconcelos, and N. Yoshida. Behavioral types in programming languages. Foundations and Trends in Programming Languages, 3(2-3):95–230, 2016.
3. A. Benveniste, P. Caspi, S. A. Edwards, N. Halbwachs, P. L. Guernic, and R. de Simone. The synchronous languages 12 years later. Proceedings of the IEEE, 91(1):64–83, 2003.
4. G. Bernardi, O. Dardha, S. J. Gay, and D. Kouzapas. On duality relations for session types. In Trustworthy Global Computing - 9th International Symposium, TGC 2014, Rome, Italy, pages 51–66, 2014.
5. G. Berry and G. Gonthier. The Esterel Synchronous Programming Language: Design, Semantics, Implementation. Sci. Comput. Program., 19(2):87–152, 1992.
6. L. Bocchi, W. Yang, and N. Yoshida. Timed multiparty session types. In Proc. of CONCUR'14, volume 8704, pages 419–434. Springer, 2014.
7. G. Boudol. ULM: A core programming model for global computing (extended abstract). In 13th European Symposium on Programming, ESOP, pages 234–248, 2004.
8. F. Boussinot and R. de Simone. The SL synchronous language. IEEE Trans. Software Eng., 22(4):256–266, 1996.
9. M. Cano, C. Rueda, H. A. López, and J. A. Pérez. Declarative interpretations of session-based concurrency. In Proc. of PPDP'15, pages 67–78. ACM, 2015.
10. P. Caspi, D. Pilaud, N. Halbwachs, and J. Plaice. Lustre: A declarative language for programming synchronous systems. In POPL 1987, Proceedings, pages 178–188, 1987.
11. M. Coppo, M. Dezani-Ciancaglini, and B. Venneri. Self-adaptive multiparty sessions. Service Oriented Computing and Applications, 9(3-4):249–268, 2015.
12. O. Dardha, E. Giachino, and D. Sangiorgi. Session types revisited. In Proc. of PPDP'12, pages 139–150, 2012.
13. N. Halbwachs. Synchronous programming of reactive systems. In Computer Aided Verification, 10th International Conference, CAV '98, Vancouver, BC, Canada, Proceedings, volume 1427 of Lecture Notes in Computer Science, pages 1–16. Springer, 1998.
14. K. Honda, V. T. Vasconcelos, and M. Kubo. Language Primitives and Type Discipline for Structured Communication-Based Programming. In Proc. of ESOP'98, volume 1381, pages 122–138. Springer, 1998.
15. D. Kouzapas, N. Yoshida, R. Hu, and K. Honda. On asynchronous eventful session semantics. Mathematical Structures in Computer Science, 26(2):303–364, 2016.
16. H. A. López, C. Olarte, and J. A. Pérez. Towards a unified framework for declarative structured communications. In PLACES 2009, York, UK, volume 17 of EPTCS, pages 1–15, 2009.
17. L. Mandel and C. Pasteur. Reactivity of Cooperative Systems - Application to ReactiveML. In 21st International Symposium, SAS 2014, Munich, Germany, pages 219–236, 2014.
18. L. Mandel and M. Pouzet. ReactiveML: a reactive extension to ML. In Proc. of PPDP'05, pages 82–93. ACM, 2005.
19. M. Neubauer and P. Thiemann. An implementation of session types. In PADL 2004, Proceedings, pages 56–70, 2004.
20. L. Padovani. FuSe - A simple library implementation of binary sessions. URL: http://www.di.unito.it/~padovani/Software/FuSe/FuSe.html.
21. A. Scalas and N. Yoshida. Lightweight session programming in Scala. In ECOOP 2016, LIPIcs. Dagstuhl, 2016.
22. N. Yoshida, R. Hu, R. Neykova, and N. Ng. The Scribble protocol language. In TGC 2013, Buenos Aires, Argentina, Revised Selected Papers, pages 22–41, 2013.