Durham E-Theses

LFTOP: An LF based approach to domain specific reasoning

Pang, Jianmin

How to cite: Pang, Jianmin (2006) LFTOP: An LF based approach to domain specific reasoning, Durham theses, Durham University. Available at Durham E-Theses Online: http://etheses.dur.ac.uk/2372/

LFTOP: AN LF BASED APPROACH TO DOMAIN SPECIFIC REASONING

by Jianmin Pang

The copyright of this thesis rests with the author or the university to which it was submitted. No quotation from it, or information derived from it, may be published without the prior written consent of the author or university, and any information derived from it should be acknowledged.

Submitted in conformity with the requirements for the degree of Ph.D

Department of Computer Science
University of Durham

Copyright © 2006 by Jianmin Pang

Abstract

Specialized vocabulary, notations and inference rules tailored to the description, analysis and reasoning of a domain are very important for that domain. For domain-specific issues, researchers have focused mainly on the design and implementation of domain-specific languages (DSLs) and have paid little attention to the reasoning aspects. We believe that domain-specific reasoning is very important in helping to prove properties of a domain, that it should be more concise, more reusable and more believable, and that it deserves to be investigated in an engineering way.

Type theory provides good support for generic reasoning and verification. Many type theorists want to extend the use of type theory to more domains, and believe that the methods, ideas and technology of type theory can have a beneficial effect on computer assisted reasoning in many domains. Proof assistants based on type theory are well known as effective tools to support reasoning. But these proof assistants have focused primarily on generic notations for the representation of problems and are oriented towards helping expert type theorists build proofs efficiently. They are successful in this goal, but they are less suitable for use by non-specialists. In other words, one of the big barriers limiting the use of type theory and proof assistants in domain-specific areas is that significant expertise is required to use them effectively.

We present LFTOP - a new approach to domain-specific reasoning that is based on a type-theoretic logical framework (LF) but does not require the user to be an expert in type theory. In this approach, users work at a domain-specific interface that is familiar to them. The interface presents a reasoning system of the domain through a user-oriented syntax.
A middle layer provides translation between the user syntax and LF, and allows additional support for reasoning (e.g. model checking). Thus, the complexity of the logical framework is hidden, but we also retain the benefits of using type theory and its related tools, such as precision and machine-checkable proofs.

The approach is being investigated through a number of case studies. In each case study, the relevant domain-specific specification languages and logic are formalized in Plastic, the relevant reasoning system is designed and customized for the users of the corresponding domain, and the corresponding lemmas are proved in Plastic. We analyze the advantages and shortcomings of this approach, define some new concepts related to it, and in particular discuss issues arising from the translation between the different levels. A prototype implementation is developed, and we illustrate the approach through many concrete examples in this implementation. The study in this thesis shows that the approach is feasible and promising, and that the relevant methods and technologies are useful and effective.

Acknowledgements

I would like to thank everyone who has helped me with my research. In particular, I am very grateful to my supervisors Paul Callaghan and Zhaohui Luo. They have provided a great many suggestions for research topics, and without the foundation of their work on computer assisted reasoning my thesis would have been very different. They also gave me much support when I met problems in research and life. I will miss the discussions we had over barbecues in their backyard.

I would like to thank my Ph.D proposal examiners, Professor Malcolm Munro and Dr. Alex Coddington, for their valuable advice at the beginning of my research. Thanks also go to James McKinna, Steven Bradley, Conor McBride, Yang Luo, Xingyuan Zhang, Edwin Brady, Robert Pollack and all other members of the CARG Group, Durham. You are all great!

I would like to thank my elder sister, Ying Pang, and the rest of my family, who have provided encouragement, love, and support throughout. Finally, my greatest thanks go to my wife, Bin Wang. She takes care of my daily life and my lovely daughter so that I can focus on my research. Without her love, encouragement, and confidence in me, the ups and downs of the research would have been a harder ride. This thesis is dedicated to her with all my love.

Thanks also to Durham University and the Department of Computer Science for the financial support through the Durham University Studentship.

Declaration

I declare that this thesis was composed by myself, and the work reported herein is my own unless explicitly declared otherwise. Some parts of the work have already been published in [Pang et al., 2005a; Callaghan et al., 2001; Pang et al., 2005b; Pang and Zhao, 2005; Pang et al., 2006a; Pang et al., 2006b].

Copyright Notice

The copyright of this thesis rests with the author. No quotation from it should be published without the author's prior written consent, and information derived from it should be acknowledged.

Contents

1 Introduction
  1.1 Domain-specific reasoning
  1.2 Type theory, LF and generic reasoning
  1.3 Motivation of this thesis
  1.4 The major contribution
  1.5 Related work
    1.5.1 Theorem prover based approach
    1.5.2 Non theorem prover based approach
  1.6 The structure of the thesis

2 Statement of problem
  2.1 Aim 1: The analysis of requirements of domain-specific reasoning
    2.1.1 Basic assumptions of our research
    2.1.2 Aims of the LF based domain-specific reasoning approach
  2.2 Aim 2: The analysis of LF and Plastic as a basis to support domain-specific reasoning
  2.3 Aim 3: The theoretical aspects of the approach

3 Preliminaries
  3.1 Typed Lambda-calculus
  3.2 Type theory
    3.2.1 Objects, types, and rules
    3.2.2 The principle of propositions-as-types
  3.3 The Logical Framework and its application
    3.3.1 Logical Framework (LF)
    3.3.2 Specifying type theories in LF
    3.3.3 The type theory UTT
      3.3.3.1 Internal logic SOL and definition of Π
      3.3.3.2 Inductive types
      3.3.3.3 Universes
    3.3.4 Distinction between dependent product kind and Π-types
    3.3.5 A new version of Logical Framework PAL+
  3.4 Implementations of type theory
  3.5 Model checking
  3.6 Theorem proving
  3.7 Some basic concepts formalized in LF
    3.7.1 A brief introduction to Plastic
      3.7.1.1 The syntax of Plastic
      3.7.1.2 Syntactic Sugar in Plastic
      3.7.1.3 Coercive subtyping implementation in Plastic
      3.7.1.4 Modules
      3.7.1.5 Inductive type and family in Plastic
      3.7.1.6 Inductive Relations
      3.7.1.7 Inductive Relations with Large Elimination
      3.7.1.8 Development of proofs in Plastic
    3.7.2 Sets and relevant constants and operators
    3.7.3 Fix points and their properties
  3.8 Summary

4 The outline of the approach
  4.1 Our approach
    4.1.1 An architecture of the approach
    4.1.2 A methodology
  4.2 The techniques we use in this approach
  4.3 The common things for different domains in this approach
  4.4 The role of type theory and its framework
  4.5 Discussion

5 Case study: concurrency
  5.1 Domain analysis
    5.1.1 Process algebra
    5.1.2 CCS: Calculus for Communicating System
    5.1.3 LTS: Labelled Transition System
    5.1.4 μ-calculus
      5.1.4.1 Previous logics
      5.1.4.2 A brief introduction to μ-calculus
      5.1.4.3 A positive version of μ-calculus with tagging fixed points
      5.1.4.4 Semantics of μ-calculus
  5.2 Congruences and Reasoning in CCS
    5.2.1 Congruences for CCS
      5.2.1.1 Trace Equivalence
      5.2.1.2 Bisimulation Equivalence
      5.2.1.3 Observational Equivalence for CCS
  5.3 Formalization of the domain
    5.3.1 Formalization of CCS
    5.3.2 Formalization of μ-calculus
  5.4 User level reasoning system
    5.4.1 Rules that do not involve the process operators
    5.4.2 Rules for the process operators
  5.5 User level syntax
  5.6 Translation between different levels
    5.6.1 The translation from user level to LF level
      5.6.1.1 The translation of CCS concepts
      5.6.1.2 The translation of LTS concepts
      5.6.1.3 The translation of μ-calculus concepts
    5.6.2 The translation from LF level to user level
  5.7 Some examples
    5.7.1 Ticking clock
    5.7.2 Simple communication protocol
    5.7.3 Example with infinite state space
    5.7.4 Some observations from the examples
  5.8 Discussion

6 Case study: Verification of semantic properties of LAZY-PCF+SHAR
  6.1 The need for explicit substitutions
  6.2 Capture of sharing
  6.3 Domain analysis
    6.3.1 Syntax of the language
    6.3.2 Operational semantics of the language
  6.4 Special features of this domain
  6.5 An implementation of LAZY-PCF+SHAR in LF
    6.5.1 Translation from LAZY-PCF+SHAR expressions and types to LF expressions
      6.5.1.1 Inductive definition of the syntax of LAZY-PCF+SHAR
      6.5.1.2 Translation of operational semantics rules
    6.5.2 An example
  6.6 Discussion

7 The interface
  7.1 Design principle
    7.1.1 General principle for designing domain user interface
    7.1.2 Principle for designing a reasoning interface based on LF
  7.2 ULPIP: a protocol for communications between user-level and Plastic-level
    7.2.1 Usage of eXtensible Markup Language (XML)
    7.2.2 DTD for XML documents
    7.2.3 DTD for the protocol
  7.3 Implementation issues in our design
    7.3.1 Some considerations on the implementation issues
    7.3.2 An interface in Proof General style
  7.4 Discussion

8 Translation issues
  8.1 Some problems in translations
  8.2 The translation from user level to LF level
    8.2.1 The case of concurrency
      8.2.1.1 The translation of the predefined actions
      8.2.1.2 The translation of the list of hidden actions
      8.2.1.3 The translation of the list of relabelling
      8.2.1.4 The translation of processes
      8.2.1.5 The translation of μ-calculus
      8.2.1.6 The translation of propositions
      8.2.1.7 The translation of CCS and μ-calculus rules
    8.2.2 The translation of definitions
    8.2.3 The translation of declaration
  8.3 The translation from LF level to user level
    8.3.1 The case of concurrency
      8.3.1.1 The translation of actions
      8.3.1.2 The translation of processes
    8.3.2 The translation of some forms of propositions
  8.4 The properties of the translations
  8.5 The proof of the adequacy property
  8.6 Discussion

9 Conclusion and Future Work
  9.1 Stocktaking
  9.2 Evaluation
  9.3 Future research

A The proofs of the Subject Reduction theorem

List of Figures

3.1 The inference rules of LF from [Luo, 1994]
3.2 The correspondence of LF syntax and Plastic syntax from [Callaghan and Luo, 2001]
4.1 The architecture implied by the approach
6.1 call-by-name and call-by-value
6.2 The Syntax of LAZY-PCF+SHAR
6.3 Type rules
6.4 The operational semantics of LAZY-PCF+SHAR
7.1 A screenshot of the interface in Proof General style
8.1 Mapping between domain-specific object language and a subset of LF

List of Tables

5.1 Proof procedure for ticking clock (part I)
5.2 Proof procedure for ticking clock (part II)
5.3 Proof procedure for simple protocol (part I)
5.4 Proof procedure for simple protocol (part II)
5.5 Proof procedure for Counter's property

Chapter 1

Introduction

The last thing one knows in constructing a work is what to put first.
- BLAISE PASCAL, FRENCH THINKER, MATHEMATICIAN, AND SCIENTIST

1.1 Domain-specific reasoning

Specialised vocabulary, notations and inference rules tailored to the description, analysis and reasoning of a domain are very important for that domain. First, the role of notations and rules is cognitive in nature, as they provide support for basic describing and reasoning tasks. Second, notations also have an important social role as communication interfaces between different, and possibly diverse, technical specialities involved in the domain.
In addition to using specialised vocabulary and notations for description and analysis, reasoning tasks are becoming prominent in some domains. Domain users hope that computers can be used not only for computing and editing, but also for reasoning. Even across different domains there are many common characteristics in the reasoning aspect, and based on these common characteristics a good engineering approach is required to support domain-specific reasoning.

For users who do reasoning and verification work in their domain, a good computer-supported reasoning system should be powerful and convenient, i.e. the system should be domain oriented, easy to learn and use, able to do everything that domain users can do with pen and paper, and a lot more that they cannot. It should also be 'believable' in the sense that developments are rigorously checked by machine. This is especially important in domains where safety or security are key properties.

Until now, researchers working on domain-specific issues have focused mainly on the design and implementation of domain-specific languages (DSLs) and have paid little attention to the reasoning aspects. We believe that domain-specific reasoning is very important in helping to prove properties of a domain, that it should be more concise, more reusable and more believable, and that it deserves to be investigated in an engineering way.

1.2 Type theory, LF and generic reasoning

Type theory provides good support for generic reasoning and verification. Many type theorists want to extend the use of type theory to more domains, and believe that the methods, ideas, and technologies of type theory can have a beneficial effect on computer assisted reasoning in many domains. Proof assistants based on type theory are well known as effective tools to support reasoning. But these proof assistants have focused primarily on generic notations for the representation of problems and are oriented towards helping expert type theorists build proofs efficiently. They are successful in this goal, but they are less suitable for use by non-specialists. Many model checkers that do not strongly support reasoning (such as the Edinburgh Concurrency Workbench) are applied in domain-specific areas, while the development of domain-specific reasoning techniques in the area of proof assistants has been remarkably slow. But we cannot require that all domain users (e.g. potential users in domains which could benefit from type-theory based reasoning) are expert type theorists or that they should learn type theory first. In other words, one of the big barriers limiting the use of type theory and proof assistants in domain-specific areas is that significant expertise is required to use them effectively.

LF is a simple type theory which allows particular type theories to be specified clearly in it. The version of LF we use is presented in [Luo, 1994], which introduces the type theory UTT by specifying it in LF. (Note: this kind of LF is different from Edinburgh LF [Harper et al., 1987].) LF provides a definition mechanism and the four basic concepts required in a type theory: types, objects in types, families of types, and objects of families of types. It makes a distinction between the types, which are required for a specific application, and kinds, which are part of the framework. LF is extended by declaring new constants and computation rules involving those constants.
For example, inductive types can be added by declaring a type name, constructors, an elimination operator, and rules for computation. Luo [Luo, 1994] gives a schema which allows the addition of a large class of inductive types. These features make LF a suitable framework to build on.

1.3 Motivation of this thesis

We believe that reasoning systems can be built using type theory technology as a framework, yet used via a domain-specific interface. Such an interface would provide functionality recognizable to an expert in the domain, and would support reasoning work in that domain, while the correctness of the functionality would be supported by the underlying type theory technology. In particular, we propose that this kind of system can be developed with a general purpose 'framework' type theory, in which the domain is first formalized (by an expert) and an interface built on top of it. So when reasoning about domain-specific properties, hidden underlying support from a proof assistant (which is an implementation of the 'framework' type theory) is available without placing an extra burden on domain users. We envisage the following scenario: an implementation of a logical framework (LF) provides the core reasoning functionality and works as a server. A type theory expert encodes the type theory needed for an application domain in LF, formalizes the domain in the type theory, and gives a mapping between LF syntax and a concrete syntax for the application domain. A domain user who is a non-expert in type theory can then use the resulting system by working in the new interface, which operates as a client, whilst being exposed to the minimum amount of detail about the underlying type theory system. We advocate the use of this approach to provide a composition mechanism that retains the benefits of both the domain-specific and the generic proof assistant approaches. This is one of the motivations of our research.

LF has been implemented in Plastic [Callaghan and Luo, 2000b], a form of proof assistant. But to date there has been no big application based on this LF based proof assistant. Is it suitable for big applications? This is another of the motivations of the research of this thesis.

1.4 The major contribution

The major contribution of this thesis is that we present and investigate a sound and practical approach to domain-specific reasoning based on LF, achieving this through theoretical work, actual implementation and evaluation. The following are some major steps which lead to the substantiation of this claim.

1. The presentation of the approach
   • The description of the approach
   • The methodology of the approach
   • The structure of the approach

2. The prototype implementation of the approach
   • The interface design
   • The design of the protocol for the communications between user-level and Plastic-level

3. The case studies
   • The case study on concurrency:
     - The formalisation of the relevant concepts of concurrency in LF.
     - The formal proving of related axioms, inference rules, lemmas and theorems.
     - The prototype implementation of the domain-specific interface.
   • The case study on LAZY-PCF+SHAR:¹
     - The formalisation of the relevant concepts of LAZY-PCF+SHAR in LF.
     - The formal proving of related axioms, inference rules, lemmas and theorems.
     - The description of the reason why we use Plastic directly rather than designing a new interface.
   • The whole work can be seen as an experiment with the use of a restricted type theory (here we mean LF).

¹ LAZY-PCF+SHAR is a lazy version of the functional language PCF (Programming language for Computable Functions), extended by adding explicit substitution in order to formalize the semantics of lazy evaluation.

1.5 Related work

For the computer assisted verification of domain-specific properties, there are two main approaches, distinguished by their use (or not) of a theorem prover.

1.5.1 Theorem prover based approach

As its name suggests, the theorem prover based approach is one that builds on a theorem prover. The following is a list of the major systems which follow this kind of approach.

• Isabelle [Paulson, 1994] [Paulson, 2005] provides a theorem prover based approach in which users specify a concrete theory by giving a few declarations of abstract and concrete syntax, primitive proof rules, and support for user-defined macros and translation functions in ML. Isabelle/Isar [Wenzel, 2002] aims to be a versatile environment for human-readable formal proof documents, but does not explicitly consider domain-specific issues. Our approach differs in that it uses a concise but powerful type-theoretic logical framework (LF) and constructs an interface for the domain users. It offers both support for user convenience and strict type-theoretic correctness.

• In fact, users can use type theoretic proof assistants such as Coq [Project, 2004] and Lego [Pollack, 2005] to construct their proofs directly by embedding the domain concepts in them. The syntax extension mechanism of Coq can help users to set up their favourite syntax, but in our opinion this mechanism is mainly a syntactic sugar or abbreviation mechanism: the user-level commands are limited to the tactics of Coq, so it cannot use the theorems and lemmas of the domain in a natural way. Meanwhile, a Coq or Lego theory may include extra features that the domain does not have, e.g. universes.

• Yu [Yu, 1999; Yu and Luo, 1997] takes a hybrid approach, investigating the integration of a model checker and a proof assistant (Lego) in a number of domains. However, the relation between the model checker and the proof assistant is loose, because domain users operate on Lego directly, in a way which does not reflect domain-specific proof procedures.

• PVS [Owre et al., 1997] allows one to specify and verify systems using higher-order logic, and presents an integration of tables, types and model checking. A related tool, TAME [Archer and Heitmeyer, 1997], is designed to support "human-style" reasoning in a particular mathematical model (Lynch-Vaandrager timed automata) through an appropriate mechanism from the top layer down to PVS. It concentrates on how to design the underlying theorem-proving support, rather than a high-level interface. In other words, TAME focuses on developing PVS strategies for proof steps that closely resemble the steps in hand proofs, and is not concerned with the forms of the expressions at the user layer.

• Z/EVES [Saaltink, 1997] is an interactive system which can be used to develop or analyze a Z specification. It is based on the EVES system and uses the EVES prover to carry out its proof steps, but without requiring knowledge of EVES or its language (Verdi). Z/EVES focuses on the Z interface implemented on top of EVES, not on a generic way of building domain-specific interfaces as our LFTOP approach does. Z/EVES cannot guarantee that every result in EVES can be translated back to Z.
However, some restrictions are imposed on operations to ensure that results can be translated back to Z ([Saaltink, 1997], page 29).

1.5.2 Non theorem prover based approach

Most systems following this approach are based on model checkers and are used in a single domain. For example, the system Truth (Truth/SLC) [Leucker and Noll, 2000] is a design and verification platform for concurrent systems. Its aim is to offer a modular verification system which can be easily adjusted to different settings. It is implemented directly in Haskell and does not depend on any existing theorem prover. It presents some level of user convenience, but loses the support of a theorem prover. In our opinion, a proof carried out in such a system is less convincing than one carried out in a theorem prover or proof assistant with the power of proof-term generation. For example, soundness depends on a large body of code, which is infeasible to check for errors.

1.6 The structure of the thesis

This thesis is divided into nine chapters. The present chapter introduces the material of this thesis, provides some background concepts and presents the motivations of the thesis. The rest of the thesis makes extensive use of the logical framework LF and its implementation Plastic.

Chapter 2 concentrates on the statement of the problem, the aims of the thesis and the pre-established criteria for those aims.

Chapter 3 introduces the preliminary concepts, relevant theories, their implementations and some basic concepts formalized in LF.

In chapter 4 we give the outline of the system supporting domain specific reasoning. Architectures, methodologies, techniques and the things common to reasoning in different domains are studied in this chapter.

In chapter 5 we present a case study in concurrency. We first analyze the domain, choose CCS as the specification language, LTS as the semantic model and μ-calculus as the specification logic, and formalize them in LF. Then we design the user level syntax for the reasoning system for the domain, i.e. we design and implement the rules and the user level interface.

In chapter 6 we present another case study, on LAZY-PCF+SHAR. A similar but more concise discussion is provided in this chapter. The main difference of this case study from the one in chapter 5 is that we use the interface of Plastic directly. This is due to our observation that Plastic is directly suitable for this domain.

In chapter 7 we study interface issues. We focus on the design principle, the protocol and implementations. In particular, we design a protocol called ULPIP for the communications between user-level and Plastic-level.

In chapter 8 we discuss translation issues. We study the translation between different levels and prove some properties of the translations.

Finally, in chapter 9 we conclude our work and review the work left open by this thesis.

Chapter 2

Statement of problem

The important thing in life is to have a great aim, and the determination to attain it.
- JOHANN WOLFGANG VON GOETHE, GERMAN POET AND DRAMATIST

This chapter presents the problems and aims of this thesis. Some criteria for their achievement are also outlined.

2.1 Aim 1: The analysis of requirements of domain-specific reasoning

2.1.1 Basic assumptions of our research

As indicated in chapter 1, reasoning tasks are becoming prominent in some domains. Domain users hope that computers can be used not only for computation, but also for reasoning.
There are many different ways in which computer systems can support reasoning work in many domains. For example, specific reasoning tools which are direct implementations of the formal systems of specific domains, and generic reasoning tools such as proof assistants, are already used for reasoning. We present a new approach and want to investigate it as a means of supporting domain-specific reasoning under some assumptions. The basic assumptions for our research are:

• We want to benefit from the research results of type theory, so we use a type theory based proof assistant for the underlying reasoning instead of designing a domain-specific reasoning system from scratch for each domain.

• We want domain users to work in their domain-oriented way with familiar syntax and semantics instead of having to become type theory experts first.

• We do not exclude some domain users using type theory directly, because type theory is exactly suitable for some domains. Of course, such use is through an interface, for which we recommend Proof General.

• The requirements of domain users who want to do domain-specific reasoning are different from the requirements of type theory experts with more general interests, so it does not follow that we need the same techniques for both of them.

From the above we can see that this is not a matter of providing forms of sugaring for expert users, but a serious attempt to study and understand the issues behind producing computer assisted reasoning tools in a variety of domains, which will in time lead to well-engineered systems and a methodology for producing them.

2.1.2 Aims of the LF based domain-specific reasoning approach

Under the above assumptions, we present an LF based approach. We are interested in the following problems:

• Is it a feasible approach?

• What should we do in this approach in order to realize our intention?

In fact, this is one of the motivations of the research of this thesis. We summarize the aims as follows:

1. To analyze the characteristics of domain-specific reasoning.

2. To get a better understanding of the issues behind producing computer assisted reasoning tools in a variety of domains.

3. To provide an architecture for the presented approach and investigate the feasibility of this approach.

4. To provide a relevant methodology (i.e. methods, process etc.) for the approach.

5. Based on the above architecture and methodology, to do case studies in some domains and implement a prototype system based on the case studies.

6. Through the case studies, to investigate the suitability of the approach and analyze the advantages and disadvantages of this approach.

The criteria for the above aims are as follows:

1. An analysis of the characteristics of domain-specific reasoning, which includes domain-specific notation, higher-level abstraction, design reuse etc.

2. To have learnt from the case studies some of the issues behind producing domain-specific computer assisted reasoning tools.

3. An architecture for the approach and an analysis of its feasibility.

4. The existence and suitability of a relevant methodology and process.

5. Some case studies for concrete domains, including the work of formalization, parsers, communication protocols and translations between different levels, and implementations of the prototype systems based on the case studies.

6. An analysis of the suitability of the approach and its validation through case studies.
2.2 Aim 2: The analysis of LF and Plastic as a basis to support domain-specific reasoning

LF has been implemented in Plastic [Callaghan and Luo, 2000b], a form of proof assistant. But to date there have been no big applications based on this LF based proof assistant. Is it suitable for big applications? This is another of the motivations of the research of this thesis. So we have these aims to complete:

1. To analyze the issues of LF as an underlying basis for domain-specific reasoning.

2. To try to implement some big applications in order to establish whether this kind of system is suitable for big applications.

3. To try to get feedback from direct application of this system, to answer questions such as: what are the theoretical and practical benefits and defects of using it instead of other proof assistants?

The criteria for the above aims may be as follows:

1. An analysis of LF's suitability as an underlying reasoning basis.

2. Some big applications implemented in the system as a proof of the capability of the system.

3. A summary of the benefits and defects of using LF.

These works can also be seen as the formalization work of Aim 1 for some domains. In particular, the case study in Chapter 6 can be seen as a proof of the capability of LF and Plastic. So there is no strict separation between Aim 1 and Aim 2.

2.3 Aim 3: The theoretical aspects of the approach

There will be some theoretical aspects related to the approach, and these will be studied during the work. The criterion for this aim is the presentation of the corresponding analysis and proofs.

Chapter 3

Preliminaries

Histories make men wise; poets witty; the mathematics subtle; natural philosophy deep; moral grave; logic and rhetoric able to contend.
- FRANCIS BACON, BRITISH PHILOSOPHER

The main purpose of this chapter is to introduce the notations used throughout the thesis, and to make the thesis more self-contained.

3.1 Typed Lambda-calculus

As presented in [Barendregt, 1990], the lambda calculus was originally conceived by Church [Church, 1932] as part of a general theory of functions and logic, intended as a foundation for mathematics. Although the entire system turned out to be inconsistent, as pointed out by Kleene and Rosser [Kleene and Rosser, 1935], the subsystem dealing with functions only became a successful model for computable functions. This system is now called the lambda calculus. Representing computable functions as λ-terms, i.e. as expressions in the lambda calculus, gives rise to so-called functional programming.

There are also typed versions of the lambda calculus. Curry [Curry, 1934] introduced a typed variant of the lambda calculus, called combinatory logic. Church [Church, 1940] gave his formulation of the simple theory of types. The two original papers of Curry and Church introducing typed versions of the lambda calculus give rise to two different families of systems. In the typed lambda calculi à la Curry, terms are those of the type-free theory, and each term has a set of possible types. In the systems à la Church, the terms are annotated versions of the type-free terms, and each term has a type that is derivable from the way the term is annotated. The Curry and Church approaches to typed lambda calculus correspond to two paradigms in programming. In many important systems, especially those à la Church, it is the case that terms that do have a type always possess a normal form.
By the unsolvability of the halting problem this implies that not all computable functions can be represented by a typed term [Barendregt, 1990].

The λ-calculus with its β-reduction rule is very useful for formalizing mathematics and computing expressions. It is also a useful tool for expressing the semantics of programming languages, and β-reduction satisfies the Church-Rosser (diamond) property. However, some untyped (or type-free) λ-terms cannot be reduced to normal forms, which means they have infinite reduction sequences [Sørensen and Urzyczyn, 1998]. This is one reason why we need the simply typed λ-calculus. In the simply typed λ-calculus (i.e. λ→), properties such as Church-Rosser, Subject Reduction and Strong Normalization hold [Sørensen and Urzyczyn, 1998]. This means that every well-typed λ-term can be reduced to a normal form, keeps its type, and reduces to the same normal form no matter in which order it is reduced. Therefore we can easily determine whether two typable terms are β-equal by simply reducing the terms to their respective normal forms and comparing them.

3.2 Type theory

Type theory was originally designed as a basis for formalising constructive mathematics, but many applications of it have since been found in computer science. Type theory offers a coherent treatment of two related but different fundamental notions in computer science: computation and logical inference. This makes it possible to program, to understand and to reason about programs in a single formalism. Meanwhile, type theory provides nice abstraction mechanisms which support the conceptually clear development of specifications, programs, and proofs. The gap between the programming language and the specifications, present in other specification languages, vanishes. So we can say that type theory is a very useful theory to support the technology for computer assisted reasoning, such as formalized mathematics or program verification.

Our description here is based on the work of Martin-Löf, in particular Martin-Löf's book [Martin-Lof, 1984] and Nordström, Petersson and Smith's book [Nordstrom et al., 1990]. But we call the entities "types" where Martin-Löf calls the entities in his theory "sets".

3.2.1 Objects, types, and rules

Type theory can be viewed as a formal language based on a conceptual organization of objects. A type is a collection of objects with some common property or structure. In type theory we are interested in the validity of some property or whether some object has a property. We are also interested in whether different expressions denote the same object in a type. Type theory contains rules for making judgements of the following four forms:

• A is a type.
• A and B are equal types.
• a is an object of the type A.
• a1 and a2 are equal objects of the type A.

Among the objects of a type, some are called canonical objects, which are the values of objects of the type under computation. A canonical object is in a form whose outermost constructor is an introduction constant; this form is called canonical form. The notion of computation is a basic concept in type theory, which generates an equivalence relation, the computational equality between the basic expressions in the language of type theory.
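To give a concrete flavour of these judgement forms and of computational equality, the small fragment below is a minimal sketch written in Lean, a proof assistant chosen here purely for illustration; the thesis itself works with LF and Plastic, and neither Lean nor this particular example is part of the thesis development.

    -- "Nat is a type"                       (the judgement form "A is a type")
    #check (Nat : Type)
    -- "2 + 2 is an object of the type Nat"  (the form "a is an object of the type A")
    #check (2 + 2 : Nat)
    -- "2 + 2 and 4 are equal objects of Nat": rfl is accepted because both
    -- expressions compute to the same canonical numeral.
    example : 2 + 2 = 4 := rfl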
In order to guarantee the harmony between the different uses of the entities in the type theory, computation should have the property that every object has a unique value under computation and that objects which are computationally equal have the same value.

There is a common pattern in the rules for introducing types in type theory. Each type is defined by giving rules in each of four general categories:

• The formation rules for A describe under which conditions we may infer that A is a type and when two types A and B are equal.

• The introduction rules define the type A in that they prescribe how the canonical objects are formed and when two canonical objects are equal. The constructors for the type are introduced in these rules.

• The elimination rules are a kind of structural induction rule. They allow us to define functions or programs on the type. The function, which is a primitive non-canonical constant associated with the type, is introduced in this kind of rule. It is this function which makes it possible to do pattern-matching and primitive recursion over the objects of the type.

• The equality rules describe the equalities which are introduced by the computation rules for the function associated with the type. They relate the introduction and elimination rules, and show how the function defined by the elimination rule behaves on the canonical objects of the type.

3.2.2 The principle of propositions-as-types

The principle of propositions as types is based on the observation by Curry and Howard [Curry and Feys, 1958] [Howard, 1980] of the close correspondence between systems of natural deduction for intuitionistic logical inference and type systems. It can be viewed as a fundamental idea in the justification of type theory as a foundation for constructive mathematics or as a basis for the specification and verification of programs.

The basic idea of this principle is that any proposition P corresponds to a type Prf(P), the type of its proofs, and a proof of P corresponds to an object of type Prf(P). Furthermore, one can assert a proposition to be true if and only if one has a proof of the proposition, that is, an object of the type of its proofs. So the truth of a proposition is understood as the inhabitation of the type of proofs of the proposition. The notion of canonical objects of type Prf(P) gives a notion of canonical or direct proofs of proposition P, while the non-canonical objects of type Prf(P) may be called indirect proofs of proposition P.

There is a fundamental distinction between propositions, which are formulas describing properties and facts, and judgements, which are assertions of whether formulas are true. On the basis of this distinction, a type theory with sufficient logical type structures has an internal logic and presents a logical language rather different from that of set theory or that of logic programming.

The systems studied by Curry and Howard were systems for which there was an equivalence between propositions and types. This equivalence holds for various logics and type theories: for example, an extension of the simply typed lambda calculus corresponds to full intuitionistic first-order propositional logic, as developed by Howard [Howard, 1980]; and System F corresponds to second-order propositional logic, the former type theory and the equivalence having been studied by Girard [Girard, 1972]. For this reason the propositions-as-types principle is also referred to as an isomorphism.
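To illustrate the principle, the following fragment is again a minimal Lean sketch, used only because it is short and machine-checkable; the thesis carries out its formalizations in LF and Plastic, and this example is not taken from them. The proposition "A implies (B implies A)" is read as a type, and the lambda term inhabiting that type is simultaneously a (direct) proof of the proposition.

    -- A proposition read as a type; the term below is an object of that type,
    -- i.e. a proof of the proposition.
    example (A B : Prop) : A → (B → A) :=
      fun a => fun _ => a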
According to the propositions-as-types principle, we have mapped proofs of a proposition to objects of the type of proofs of the proposition. The judgements of type theory must therefore be decidable, so that we can tell from the form of a judgement M : A that M is indeed an object of the type A.

An alternative view is that although type theory provides a framework in which to understand both logical inference and computation, we need not identify these two things. We can treat propositions as types, but not vice versa. Zhaohui Luo [Luo, 1994] lists several reasons for viewing the identification of propositions and types as unnatural: firstly, the logic of our system should be independent of the objects studied in it; secondly, certain types such as the natural numbers do not intuitively correspond to propositions; thirdly, type theory is often considered open to the addition of new types representing new computational or mathematical objects, but the addition of these objects should not change the way we reason in the logic. Furthermore, results about the conservativity of type theories which identify propositions and types over their related logic [Berardi, 1990; Luo, 1990b] show that this identification does not correspond to the traditional way of formulating logics.

3.3 The Logical Framework and its application

3.3.1 Logical Framework (LF)

A logical framework may be used in various ways. The Edinburgh Logical Framework [Harper et al., 1987] has been studied for the formalisation of logical systems based on the idea of judgements-as-types. Martin-Löf's logical framework (see Part III of [Nordstrom et al., 1990]) has been proposed by Martin-Löf to present his intensional type theory. The logical framework (LF) in which we are interested here is Martin-Löf's LF with type labels on all binders (i.e. [x:K]k rather than [x]k). The extra type labels ensure that type checking is decidable for this LF, whereas for Martin-Löf's LF it is only decidable for a subset of terms [Barthe and Sørensen, 2000] [Callaghan and Luo, 2000b]. We can use LF as a meta-language to specify type theories.

LF is a simple type system with terms of the following forms:

• Type
• El(A)
• (x:K)K'
• [x:K]k'
• f(k)

where the free occurrences of the variable x in K' and k' are bound by the binding operators (x:K) and [x:K], respectively.

There are five forms of judgement in LF:

• Γ valid, which asserts that Γ is a valid context;
• Γ ⊢ K kind, which asserts that K is a kind;
• Γ ⊢ k : K, which asserts that k is an object of kind K;
• Γ ⊢ k = k' : K, which asserts that k and k' are equal objects of kind K; and
• Γ ⊢ K = K', which asserts that K and K' are equal kinds.

The rules of LF are given in Figure 3.1. We can use LF to customize a specific type theory, or use it as a small type theory directly. There are several reasons to be interested in LF:

• Theoretically, it allows a clearer and more satisfactory presentation. Specifically, there is a clear distinction between an object language (to be used for reasoning and programming) and the meta-level mechanisms which are used to define the object language.

• As LF itself is a concise type theory, its implementation system (Plastic) does not have many inherent features which cannot easily be set aside.

Luo introduces LF as a meta-language for specifying a type theory (e.g. UTT, the Unified Type Theory).

3.3.2 Specifying type theories in LF

To specify a type theory we just need to do two kinds of things. Firstly, we should declare new constants.
Secondly, we should give computation rules (usually about the new constants). Formally, declaring a new constant k of kind K by writing k : K is to extend the type theory (specified by means of LF) to which the constant is introduced by the following inference rule:

    Γ valid
    -----------
    Γ ⊢ k : K

and, for a kind K which is either Type or of the form El(A), asserting a computation rule by writing k = k' : K for ki : Ki (i = 1, ..., n) is to extend the type theory by the following equality inference rule:

    Γ ⊢ ki : Ki (i = 1, ..., n)    Γ ⊢ k : K    Γ ⊢ k' : K
    --------------------------------------------------------
                        Γ ⊢ k = k' : K

Figure 3.1: The inference rules of LF from [Luo, 1994], covering contexts and assumptions, the general equality rules, the equality typing rules, the substitution rules, the rules for the kind Type, and the rules for dependent product kinds (including the β and η rules).

The special kind Type in LF corresponds to the conceptual universe of types in the type theory to be specified. Let T be any type theory specified in LF. Then a T-context is a context of the form x1:El(A1), ..., xn:El(An), and T has the following five forms of judgement (where Γ is any T-context):

• Γ valid, which asserts that Γ is a valid T-context;
• Γ ⊢ A : Type, which asserts that A is a type;
• Γ ⊢ a : El(A), which asserts that a is an object of type A;
• Γ ⊢ a = b : El(A), which asserts that a and b are computationally equal objects of type A in the sense that they compute to the same value; and
• Γ ⊢ A = B : Type, which asserts that A and B are equal types in the sense that they have the same objects.

A judgement in a type theory specified in LF is derivable if it is derivable in the system of LF extended by the constants and computation rules specifying the type theory. Once a type theory is specified, the user uses the type theory rather than the LF language, except that he may use LF as a definitional mechanism, which may be implemented in a proof development system, e.g. Plastic. In such a use of the logical framework as a meta-language, one does not use the meta-logic embedded in LF to reason about objects in the type theory, but should use the internal logic of the specified type theory for reasoning.

An inductive schema mechanism is introduced into LF.
The essential idea is that each finite sequence of inductive schemata specifies a collection of introduction rules (each schema in the sequence determines one of them) and hence generates an inductive data type whose meaning is given by the introduction rules and the associated elimination and computation rules. We write LFe for LF extended with inductive schemata.

3.3.3 The type theory UTT

The type theory UTT is specified in LFe. It consists of an internal logic, a large class of inductive data types, and universes.

3.3.3.1 Internal logic SOL and definition of Π

The internal logic (called SOL) consists of a universe Prop of logical propositions and their proof types. The logical universe Prop is impredicative, since universal quantification can be formed for any type A and (meta-level) predicate P over A. As in ECC [Luo, 1990a], many of the usual logical operators can be defined by means of the impredicative universal quantification. However, the internal logic SOL of UTT by itself is only a second-order logic (hence the name): there are no types of internal predicates or internal relations in SOL over which universal quantification may be possible.

The internal logic is introduced by declaring the following constants:

    Prop : Type
    Prf  : (Prop)Type
    ∀    : (A:Type)((A)Prop)Prop
    Λ    : (A:Type)(P:(A)Prop)((x:A)Prf(P(x)))Prf(∀(A, P))
    E∀   : (A:Type)(P:(A)Prop)(R:(Prf(∀(A, P)))Prop)
           ((g:(x:A)Prf(P(x)))Prf(R(Λ(A, P, g))))
           (z:Prf(∀(A, P)))Prf(R(z))

and asserting the following computation rule:

    E∀(A, P, R, f, Λ(A, P, g)) = f(g) : Prf(R(Λ(A, P, g))).

Then the usual application operator can be defined as

    App =df [A:Type][P:(A)Prop][F:Prf(∀(A, P))][a:A]
            E∀(A, P, [G:Prf(∀(A, P))]P(a), [g:(x:A)Prf(P(x))]g(a), F),

which satisfies the equality (the β-rule for Λ and App):

    App(A, P, Λ(A, P, g), a) = g(a) : Prf(P(a)).

In LF, we can introduce dependent product types by declaring the following constants:

    Π  : (A:Type)((A)Type)Type
    λ  : (A:Type)(B:(A)Type)((x:A)B(x))Π(A, B)
    EΠ : (A:Type)(B:(A)Type)(C:(Π(A, B))Type)
         ((g:(x:A)B(x))C(λ(A, B, g)))
         (z:Π(A, B))C(z)

and asserting the following computation rule:

    EΠ(A, B, C, f, λ(A, B, g)) = f(g) : C(λ(A, B, g)).

For Π-types, the application operator can be defined as follows:

    app =df [A:Type][B:(A)Type][F:Π(A, B)][a:A]
            EΠ(A, B, [G:Π(A, B)]B(a), [g:(x:A)B(x)]g(a), F)

SOL together with the Π-types is essentially a formulation of the Calculus of Constructions [Coquand and Huet, 1988] in LF.

3.3.3.2 Inductive types

Inductive types in UTT are based on the notion of inductive schemata. Any finite sequence of inductive schemata specifies a collection of introduction rules (each schema in the sequence determines one of them) and hence generates an inductive type whose meaning is given by the introduction rules (and the associated elimination and computation rules). Similar ideas have been considered by Gentzen [Gentzen, 1935], Prawitz [Prawitz, 1973; Prawitz, 1974], etc. for traditional logical systems, and by Martin-Löf [Martin-Lof, 1984], Backhouse [Backhouse, 1988], Dybjer [Dybjer, 1991], and Coquand and Paulin-Mohring [Coquand and Paulin-Mohring, 1990] for type theories.

For example, a type Nat of natural numbers can be defined as Nat =df M[Θ], where Θ represents the kinds of the constructors, in this case the kinds X and X → X, where X is a placeholder for the name of the inductive type. The associated introduction operators are zero =df ι1[Θ] : Nat and succ =df ι2[Θ] : Nat → Nat.
The elimination operator and computation rules are as follows:

    ENat =df E[Θ] : (C:Nat → Type)(c:C(zero))
                    (f:(x:Nat)C(x) → C(succ(x)))(n:Nat)C(n),

    ENat(C, c, f, zero) = c
    ENat(C, c, f, succ(x)) = f(x, ENat(C, c, f, x)).

3.3.3.3 Universes

The universes in UTT are the impredicative universe Prop and the predicative universes Type_i (i ∈ ω) in Tarski style. In this style, types in universes are represented by codes (i.e. names), together with a decoding function which maps such names to the appropriate types. This is contrasted with Russell style, where codes and the types they represent are identified. The theory ECC contains a hierarchy of universes in Russell style.

A typical example of the use of universes is to prove the distinctness of constructors of inductive types [Smith, 1988]. Callaghan gave a proof in Plastic for the boolean type, i.e. true ≠_Bool false; see section 4.5.2 of [Callaghan and Luo, 2000b] for details. Such distinctness cannot be proved without universes [Smith, 1988].

UTT also has nice meta-theoretic properties such as subject reduction and strong normalisation. Goguen proved these properties in his Ph.D. thesis [Goguen, 1994].

3.3.4 Distinction between dependent product kind and Π-types

Dependent product kinds and Π-types are two different notions which often cause confusion. We can list the following differences between a dependent product kind and a Π-type:

• The notion of dependent product kind is one in the meta-framework, while Π-type is a notion of some object type theories.

• The dependent product kind provides parameterisation mechanisms which can be used to define a type theory, while a Π-type is an inductively defined construct representing the type of dependent functions in an object language.

• An important difference is that there is a notion of elimination for Π-types but not for dependent product kinds.

For example, an object of dependent product kind¹ (A:Type)(B:(x:A)Type)Type is a family of types parameterised by a type A and a family of types B indexed by objects of type A. Representing such a family by means of an "internal" Π-type is inappropriate and leads to possible misunderstandings in the use of type theory.

¹ In fact, Π is declared as a constant of this kind in UTT.

3.3.5 A new version of Logical Framework PAL+

In fact, a clearer explanation of the distinction between meta and object concepts comes from a new version of the Logical Framework, PAL+ [Luo, 2003]. PAL+ is a lambda-free logical framework which takes parameterisation and definitions as the basic notions providing schematic mechanisms for the specification of type theories and their use in practice. It is also a logical framework for the specification and implementation of type theories, such as Martin-Löf's type theory or UTT. As in Martin-Löf's logical framework [Nordstrom et al., 1990] and the LF above, computational rules can be introduced and are used to give meanings to the declared constants. However, PAL+ only allows one to talk about the concepts that are intuitively in the object type theories: types and their objects, and families of types and families of objects of types. In particular, in PAL+ one cannot directly represent families of families of entities, which could be done in other logical frameworks by means of lambda abstraction. Just as implied by its name, PAL+ can be seen as a successor of de Bruijn's PAL for Automath [de Bruijn, 1980]. Compared with PAL, PAL+ allows one to represent
parametric concepts such as families of types and families of non-parametric objects, which can be used by themselves as totalities as well as when they are fully instantiated. Such parametric objects are represented by local definitions (let-expressions). PAL+ is a correct meta-language for specifying type theories (e.g. dependent type theories), as it has the advantage of exactly capturing the intuitive concepts of object type theories, and its implementation realises the actual use of type theories in practice. Luo [Luo, 2003] studies the meta-theory of PAL+ by developing its typed operational semantics and shows that it has nice meta-theoretic properties. As a complete implementation of PAL+ has not yet been produced, we still use LF and its implementation Plastic as the basis of our research.

3.4 Implementations of type theory

Several systems based on type theories have been implemented. An early implementation of type theory with many important contributions is de Bruijn's Automath project [de Bruijn, 1980]. In this project de Bruijn introduced the idea of using type theory as a system which can serve as a framework for implementing logics, by giving a system which formalizes the underlying principles which mathematicians agree upon.

Lego [Luo and Pollack, 1992] implements several different type theories: the Edinburgh Logical Framework [Harper et al., 1987], ECC [Luo, 1990a] and the Pure Calculus of Constructions [Coquand and Huet, 1988]. Coq [Project, 2004] is a proof assistant based on the Calculus of Inductive Constructions. ALF [Magnusson and Nordstrom, 1994] is a structure editor for Martin-Löf's type theory in the Logical Framework, including a window-based user interface. Nuprl [Constable et al., 1986] implements a variant of Martin-Löf's polymorphic and extensional type theory; unlike in some other type theories, type checking in Nuprl is not decidable, so the elements of propositional types should not be interpreted as proofs but merely represent the computational content of the associated proposition. Isabelle [Paulson, 1999] is an interactive theorem prover that supports a variety of logics, such as higher order logic (HOL), Zermelo-Fraenkel set theory (ZF), and constructive type theory (CTT).

Plastic [Callaghan and Luo, 2000b] is an implementation of typed LF with coercive subtyping and universes. It is different from Lego and Coq in that it is not intended to be used directly by expert users but as the underlying layer for other systems. Our further study will be based upon Plastic.

3.5 Model checking

A model checking problem is the problem of checking whether a given model satisfies a given property:

    M ⊨ ψ

where the model M represents a design and the property ψ represents its correctness criteria. In general, as a popular automatic verification technique, model checking has focused on automatic decision procedures for solving its verification problem. The basic idea is to determine whether a model satisfies a property expressed as a temporal logic formula by searching the state space of the model thoroughly. Therefore, to guarantee termination, the model is often restricted to a finite state system, and properties are expressed in a propositional temporal logic like CTL or LTL, for which finite-state model checking is known to be decidable.
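To make this state-space search concrete, the following Haskell fragment is a minimal sketch of an explicit-state checker for a tiny CTL fragment; it is not taken from the thesis or from any existing tool, and the data types and function names are our own illustrative choices. Real checkers such as SMV work symbolically rather than by enumerating explicit sets of states, for the reason discussed next.

    module MiniMC where

    import Data.List (nub)

    type State = Int

    -- A finite Kripke structure: states, a transition relation and a labelling
    -- of states with the atomic propositions that hold in them.
    data Kripke = Kripke
      { states :: [State]
      , trans  :: [(State, State)]
      , label  :: State -> [String]
      }

    -- A tiny fragment of CTL.
    data Formula
      = Atom String
      | Not Formula
      | And Formula Formula
      | EX Formula            -- some successor satisfies the formula
      | EF Formula            -- some path eventually satisfies the formula

    -- The set of states of the model that satisfy a formula.
    sat :: Kripke -> Formula -> [State]
    sat m (Atom p)  = [s | s <- states m, p `elem` label m s]
    sat m (Not f)   = [s | s <- states m, s `notElem` sat m f]
    sat m (And f g) = [s | s <- sat m f, s `elem` sat m g]
    sat m (EX f)    = pre m (sat m f)
    sat m (EF f)    = lfp m (sat m f)

    -- States with at least one successor in the given set.
    pre :: Kripke -> [State] -> [State]
    pre m xs = nub [s | (s, t) <- trans m, t `elem` xs]

    -- Least fixed point: keep adding predecessors until nothing new appears.
    lfp :: Kripke -> [State] -> [State]
    lfp m xs
      | all (`elem` xs) new = xs
      | otherwise           = lfp m (nub (xs ++ new))
      where new = pre m xs

    -- "M satisfies phi at state s" is then simply membership in sat.
    holds :: Kripke -> State -> Formula -> Bool
    holds m s f = s `elem` sat m f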
The main obstacle encountered by model checking is the so-called state explosion problem that the size of the state transition graph grows exponentially while the size of the system grows linearly. But it is important to understand that model checking problem is not limited to finite state systems or propositional logics, symbolic model checking [McMillan, 1992; McMillan, 2005] can be used to deal with the state explosion problem. ACM awarded the 1998 ACM Kanellakis Award for Theory and Practice to Randal E. Bryant, Edmund M. Clarke, Jr., E. Allen Emerson, and Kenneth L. McMillan for their invention of "symbolic model checking", a method of formally checking system designs widely used in the computer hardware industry. The technique has shown significant promise when used for software verification and in other areas. Symbolic model checking is one of the most important formal techniques used in the computer and semiconductor industries today, and the SMV program, originally developed by Kenneth McMillan as part of his Ph.D. program, is one of the most widely used verifi- cation tools. These industries face a complexity explosion of near-crisis proportions, with six-month design cycles in which products of unprecedented complexity have to be "right" the first time for companies to survive. Symbolic model checking offers design teams shorter time to market and increased product integrity, which explains the rapid adoption of this technology by all leading semiconductor companies. Model checking is a technique for verifying finite state concurrent systems such as se- quential circuit designs and communication protocols. It has a number of advantages over traditional approaches that are based on simulation, testing, and deductive reasoning. In particular, model checking is automatic and usually quite fast. Also, if the design contains an error, model checking will produce a counterexample that can be used to pinpoint the source of the error. The method has been used successfully in practice to verify real indus- trial designs, and many companies are beginning to market commercial model checkers. We want to include model checking technology in our approach to deal with some suitable domain-specific problems. CHAPTER 3. PRELIMINARIES 22 3.6 Theorem proving Unlike model checking, theorem proving utilizes the proof inference technique in some proof system for solving the general validity of a problem. DeBruijn's Automath project was an early and influential investigation into techniques for mechanically proof-checking mathematics [de Bruijn, 1980]. Van Jutting [Jutting, 1977] formalized all of a foundational text on elementary analysis- Landau's "Grundlagen" - in Automath. Recently, more mathematics has been formalized in the MIZAR system [Try- bulec, 2005].MIZAR is based on classical first-order predicate logic, extended with second order schema, and Tarski-Grothendieck set theory. Roughly speaking, this set theory is like Zermelo-Fraenkel set theory, extended with uncountably many inaccessible cardinals. Till July 2005, over two thousands of definitions of mathematical concepts and thirty thousands of theorems are included in the Mizar database. All work done in Mizar is grouped into articles. Articles are published in a Journal of Formalized Mathematics which is largely au- tomatically type-set from information in the MIZAR database. The subjects of the articles have been mostly in the fields of analysis, topology and algebra (including some universal algebra and category theory). 
I think that the keys to MIZAR's success are as follows: • It started with a set theoretic framework which is known to be theoretically adequate for all of mathematics, including category theory. • A rich type theory was layered on top of the set theory. The type theory allows for the definition of subtypes and parameterized types, and has a structure facility for the definition of algebraic classes. The system copes automatically with set subtyping relationships between elements of classes that have different underlying signatures. • Much effort has been put into the organization of articles in the MIZAR database to ease and speed cross-referencing between articles. In terms of applying theorem provers to hardware and software verification, the NQTHM system of Boyer and Moore [Boyer and J S. Moore, 1979; Boyer and J S. Moore, 1997] is fruitful. Accomplishments include the checking the RSA public key encryption algorithm [Boyer and Moore, 1984] and the verification of microprocessor designs [Hunt et al., 1992]. NQTHM has also been used to formalize Godel incompleteness theorem [Shankar, 1986]. The generation of proofs in NQTHM is highly automated. The user commonly only guides proofs by perhaps giving a few high level hints and suggesting useful lemmas. NQTHM automatically guesses how to do inductions and how to prove the subgoals of inductions. NQTHM also has a linear arithmetic decision procedure tightly integrated in with the the prover program. But NQTHM's logic is weak: it is quantifier free and includes a theory of recursive functions over Lisp like S-expressions. Its strength is roughly that of Primitive Recursive Arithmetic (PRA). This logic is too weak for abstract algebra: there is no way to CHAPTER 3. PRELIMINARIES 23 define algebraic classes of objects and reason with them in ways common in algebra, though 'functional instantiation' extensions do allow some basic algebraic reasoning. The HOL system [Gordon and Melham, 1993a]is a tactic based interactive theorem prover with a classical logic similar to Church's simple theory of types [Church, 1940] but with the addition of a type polymorphism scheme similar to that found in the ML functional programming language. This theory is slightly weaker than ZF set theory. HOL has mostly been used in domains related to hardware and software verification, though its foundational theories are quite general purpose and some success has been had with more abstract math- ematics. Recently, many works about formalization of network protocols in HOL have been done [Steve Bishop and eta!., 2005]. We pay more attention to the theorem provers which are based on intuitionistic type theory and using "proposition as type" principle. They include Alf [Magnusson and Nord- strom, 1994], LEGO [Luo and Pollack, 1992] [Pollack, 2005], Coq [Project, 2004] and Plastic [Callaghan and Luo, 200Gb] etc. Alf is a proof editor based on Martin-Li:if's type theory and explicit substitution. Coq uses the Calculus of Constructions [Coquand and Huet, 1988]. Lego uses ECC [Luo, 1990a] and Plastic uses LF. They are LCF-style theorem provers [Gordon et a!., 1979]. Usually the problem itself is represented as a sequent. The sequent used in natural deduction is in the form of r f- F. We say a sequent holds when it satisfies its intended semantics. In general, theorem provers cannot prove theorem without guidance of users, i.e. they are interactive system and only experienced experts can use them effectively. From now on we'll focus on the applications of Plastic. 3. 
7 Some basic concepts formalized in LF 3. 7.1 A brief introduction to Plastic Plastic is an implementation of LF with inductive types, universes and coercive subtyping. It is a proof assistant with a similar style as Lego and Coq. Plastic is implemented in functional language Haskell. It is best used with Aspinall's Proof General interface for xemacs. Currently Plastic uses a script-based model of interaction. 3.7.1.1 The syntax of Plastic The following Figure 3.2 shows the correspondence of LF syntax and Plastic syntax. Currently Plastic maintains a simple linear context. Its context may contain hypotheses (or assumptions), declarations of inductive types, and global definitions. Plastic provides a form of meta-variable to fill in information which is inferrable with simple unification techniques. New meta-variables may be added to the context at any time by claiming a name of a given type. Fresh meta-variables can appear in a term as either named (e.g. ?lemma1, where CHAPTER 3. PRELIMINARIES 24 LF syntax Plastic syntax Explanation (x:K)K' (x:K)K' dependent products K---+K' K -> K' non-dependent products [x:K]k [x :K] k .-\-abstractions f(a, b) f a b function application v FA universal quantification A LL a constructor, which builds proofs of quantifications over propositions. Ev E...FA the elimination operator of V Figure 3.2: The correspondence of LF syntax and Plastic syntax from [Callaghan and Luo, 2001] lemma1 is a name chosen by the user) or unnamed (e.g. symbol ?). If the meta-variables are not solved by constraints within the term, then they are added to the relevant context. 3.7.1.2 Syntactic Sugar in Plastic Binders in specified type theories (eg SOL) are not easy to use, so a few things are imple- mented in Plastic to make it more palatable. There are three forms: By Arrow regard an arrow as an infix operator between two terms, which produces a non- dependent binding. Converted to an application of a non-dependent binder to the two terms. Ega -> b By Symbol follow the general pattern left_bracket id : term right_bracket term. A selection of bracketing symbols is available. Eg { x :A} B By Identifier look like a functional operation binding except the opening square bracket is preceded by the name of the binder. Eg FA [x: A] B For example, tautology can be written as below. Braces denote "for all" binding, and => denotes propositional implication. tautology {p:Prop}p => p tautology FA [p:Prop] Imp p p tautology FA Prop([p:Prop] Imp p p) --expanded version. 3.7.1.3 Coercive subtyping implementation in Plastic The notion of Coercive Subtyping is first inducted into LF in [Luo, 1999]. Since then many studies on it are done in [Luo and Soloviev, 1999; Luo, 2004]. Coercive Subtyping is viewed as a mechanism of abbreviation of the meta-language (LF), not a part of a particular object CHAPTER 3. PRELIMINARIES 25 type theory. The mechanism of subtyping is expressed as a fundamental part of LF. So object type theories can make use of it by virtue of their definition in LF. A coercion is a function c:K -> K', which lifts an object of kind K to kind K'. The coercive definition rule is as follows: f:(x:K)K' ko:Ko Ko to the list [a1, ... , an]. • Coercion between parameterised inductive types: General schematic rules are provided to represent natural propagation of the basic coercions to other structured (or param- eterised) inductive types. For example, I::( A, B) is a subtype of E(A', B') if A is a subtype of A' and B is a subfamily of B'. 
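The effect of coercion insertion can be illustrated informally outside LF. In the Haskell fragment below (an analogy only, not Plastic's mechanism; all names are ours) the coercion that a coercive type checker would insert silently is written by hand.

-- A "subtype" Even of Int, with the coercion that forgets the evenness.
newtype Even = Even Int

coeEvenInt :: Even -> Int        -- the coercion c : K -> K'
coeEvenInt (Even n) = n

double :: Int -> Int             -- a function expecting the supertype
double n = n + n

-- With coercive subtyping one would simply write `double e` for e of the
-- subtype; the checker justifies it by inserting the coercion, as here:
exampleUse :: Even -> Int
exampleUse e = double (coeEvenInt e)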
Plastic can be used to test ideas on coercive subtyping. Plastic implements parameterised and dependant coercions, non-dependent subkinding, and the lifting of coercions over induc- tive types [Callaghan and Luo, 2001]. The coercions are implemented by coercion insertion during type checking. It is justified by the coercion completion results [Soloviev and Luo, 2000]. The relevant parameters are calculated by using the meta-variable mechanism. 3.7.1.4 Modules A Plastic program consists of a collection of modules. Its form is as follows: > module Modulename where; > import importedmodule1; > import importedmodulen; %%declarations, definitions and proof scripts CHAPTER 3. PRELIMINARIES 26 Technically speaking, a module is a sequence of declarations, definitions and proof scripts which begins with the keyword module. Concrete examples can be seen in the following subsections. 3.7.1.5 Inductive type and family in Plastic The syntax for definition of simple inductive type is as follows: > Inductive [D:Type] > Constructors > [C1 :M1] > [Cn:Mn] Where D is the name of the new defined type and Mi (i=1, ... n) is a term in the form of (xi: N1) ... (xm: Nm) D or D. N1, ... Nm are existed types or D itself. We can see that the syntax is similar to Lego. For example, natural number type Nat is introduced like this: > Inductive [Nat:Type] > Constructors > [zero: Nat] > [succ: (n:Nat)Nat] The constants Nat, zero, and succ are declared in the current context, and the elimina- tion rule E_Nat is defined as per Luo's scheme. Inductive families are introduced similarly, with dependent product kinds instead of just Type: > Inductive > > [Vec:(n:Nat)Type] > > Constructors > [vnil:Vec(zero)] > [vcons:(m:Nat)(x:A)(l:Vec(m))Vec(succ(m))] are options which affect what is generated for the inductive family. are declarations which are in force for the inductive family, and (by default) discharged after it is created. 3. 7 .1.6 Inductive Relations Plastic supplies Lego-style inductive relations. To define an inductive relation the Relation flag in the declaration is needed. The only difference in handling from conventional inductive CHAPTER 3. PRELIMINARIES 27 types is that Prf must be included everywhere it is required and a Prop is yielded for the relation. For example: for natural numbers, the "less than or equal" relation: >Inductive [le:(x,y:El Nat)El Prop] Relation > Constructors > > [leO [leS (m:El Nat)Prf(le zero m)] (m,n:El Nat)(ih:Prf(le m n))Prf(le (succ m) (succ n))]; Because there is no syntactic sugar for handling Props, so Prf must be added explicitly. The resulting elimination operator is this (El is omitted in this output): Hyp E_le : (C_le:Nat -> Nat -> Prop) ((m:Nat)(Prf (C_le zero m))) -> ((m:Nat) (n: Nat) (Prf (le m n)) -> (Prf (C_le m n)) -> (Prf (C_le (succ m) (succ n)))) -> (x: Nat) (y: Nat) (Prf (le x y)) -> (Prf (C_le x y)) 3.7.1.7 Inductive Relations with Large Elimination As an extension to the above, relations may be given large elimination, that is: instead of producing Props, it may produce (larger) Types. It is triggered with the flag Relation_LE (replacing the plain Relation flag). This follows what Lego does, and allows equality to be defined as an inductive relation with more useful elimination behavior, as shown below. 
> Inductive [A:Type] > [Eq : (x,y:El A)El Prop] > Relation_LE > Constructors > [eqr:(a:El A)Prf(Eq a a)]; Hence the elimination operator is this: E_Eq : (A :Type) (C_Eq:El A -> El A -> Type) ((a:El A)El (C_Eq a a)) -> (x:El A)(y:El A)El (Prf (Eq Ax y)) -> El (C_Eq x y) CHAPTER 3. PRELIMINARIES 28 3.7.1.8 Development of proofs in Plastic In the style of Lego and Coq a goal-directed proof state controls which subgoals the user must prove next. Expert users like this style. Plastic used a more flexible model, where user can work on any unsolved meta-variable in the current linear context. Further, user need not completely solve a meta-variable before attempting another meta-variable. This flexibility is identified as being useful to applications like Mathematical Vernacular [Callaghan and Luo, 1998]. Meta-variables are the central notion: proof is the process of developing instantiations for them. A Claim for the kind which represents the goal is the first line for a proof. The proof commands may act on that claimed meta-variable, which could introduce further meta-variables (i.e. sub-goals). The following are some of the main commands: Refine t: this is a command similar to Lego's corresponding command. It computes a term t to instantiate a meta-variable (e.g. M) of known type. It is implemented in terms of the meta-variable preprocessor. Cut is applied to M using the computed term t, and any new meta-variables arising (i.e. sub-goals) are inserted in the context immediately before M. In other words, the evaluation of a refinement command of the form Refine t proceeds as follows: • First, the system tries to check whether the term t is well-typed in the current context. If it is not well-typed, the system should report error message, Otherwise do the next step. • Second, the system tries to unify the current goal with the type of the refinement term t. There are several possibilities: 1. the unification succeeds: that means the current goal is proved. 2. the unification fails: then, the system tries to specialize the refinement term t by applying it to a new meta-variable of the right type. There are two possibilities: if the specialization succeeds, then several new goals are gener- ated in order to prove the current goal. If the specialization fails, then the refinement step fails. Intros: when used on a meta-variable M:(x:K)K', the context appearing after and includ- ing M is replaced with a hypothesis x:K and a new meta-variable !vf':K'. This creates a branch in the context. Return: it marks closure of an Intros, i.e. all meta-variables introduced by (and since) the Intros have been solved. The action is to abstract the solution for M' by x:K and cut the result into the context existing prior to the corresponding Intros command. ReturnAll: It closed all opened lntros, namely, all meta-variables introduced by all Intros have been solved. It releases all hypotheses. CHAPTER 3. PRELIMINARIES 29 We shall give some concrete proofs and examples of applications in following sections and chapters. 3. 7.2 Sets and relevant constants and operators Set theory and the theorem of fixed points are very useful to give a clear semantic interpreta- tion of formal systems. There are different representations of set. We use logical predicates to represent set in this thesis. 
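The idea of representing a set by its membership predicate has a direct functional reading. The following Haskell sketch is an executable approximation only (it uses Bool where the formal development uses Prop, and the names are ours); it mirrors the operators that are defined formally below.

-- A set over a type a is identified with its membership predicate.
type Pred a = a -> Bool

fullSet, emptySet :: Pred a
fullSet  _ = True
emptySet _ = False

meet, union, minus :: Pred a -> Pred a -> Pred a
meet  b c x = b x && c x
union b c x = b x || c x
minus b c x = b x && not (c x)

single :: Eq a => a -> Pred a
single y x = x == y

-- Subset and set equality quantify over all elements; over a finite
-- enumeration of the carrier they can at least be tested.
subsetOn :: [a] -> Pred a -> Pred a -> Bool
subsetOn dom b c = all (\x -> not (b x) || c x) dom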
Setoids [Barthe et al., 2003] are better suited to supporting extensional concepts such as quotients and subsets and have received much attention in the literature, but we do not use setoids here: a simple treatment of sets is enough for our purposes.

First, we define some special sets and operators in λ-notation as follows:

Pred     = λA:Type. A -> Prop
Fullset  = λA:Type. λx:A. tautology
Emptyset = λA:Type. λx:A. absurd
Meet     = λA:Type. λB,C:Pred(A). λx:A. (B(x) and C(x))
Union    = λA:Type. λB,C:Pred(A). λx:A. (B(x) or C(x))
Not      = λA:Type. λB:Pred(A). λx:A. (not B(x))
Minus    = λA:Type. λB,C:Pred(A). Meet A B (Not A C)
Subset   = λA:Type. λB,C:Pred(A). ∀x:A. (B(x) -> C(x))
Eqset    = λA:Type. λB,C:Pred(A). and (Subset A B C) (Subset A C B)
Single   = λA:Type. λx:A. Eq x

Then we give some relevant predefined elements (Pi_, La_, ap_) which are defined in a system module called Function in Plastic. The following is an excerpt of the definitions:

%----------------------------------------------------------------
> module Function where;

Function spaces (non-dependent)

> Inductive
>   [ A,B: Type ]
>   [Pi_ : Type ]
> Constructors
>   [ La_ : (f: (x:El A) El B) Pi_];

Now, the means to use a Pi type.

> Claim ap_ : (A,B:Type)(f:Pi_ A B) -> (_:A)B;
> Intros A B pi a;
> Refine E_Pi_ ? ? ([_:Pi_ A B]B) ? pi;
> Intros f;
> Refine f a;
> ReturnAll;
%----------------------------------------------------------------

In addition, tautology and absurd are defined in higher-order logic as follows:

P ⇒ Q      =df ∀x:Prf(P). Q
tautology  =df ∀P:Prop. P ⇒ P
absurd     =df ∀P:Prop. P

In Plastic they are defined in a system module for second-order logic (called Sol_Basics); the following is an excerpt of the definitions:

%----------------------------------------------------------------
> module Sol_Basics where;

Definitions of Common Logical Constants

> import Sol;
%----------------------------------------------------------------
Tautology.

> [tautology = {P:Prop}P=>P : Prop];

> Claim prf_tautology : Prf tautology;
> Refine LL;
> Intros P;
> Refine LL;
> Intros p;
> Refine p;
> ReturnAll;
%----------------------------------------------------------------
Absurd.

> [absurd = {A:Prop}A : Prop];

> [not = [A:Prop]A => absurd : Prop -> Prop];

> Claim E_absurd : Prf (absurd => {N:Prop}N);
> Refine LL;
> Intros x;
> Refine x;
> ReturnAll;
%----------------------------------------------------------------

Finally, we can define sets and the related constants and operators in Plastic as follows:

%----------------------------------------------------------------
> [Pred = [A:Type](Pi_ A Prop)];
> [Fullset = [A:Type](La_ A Prop [x:A]tautology)];
> [Emptyset = [A:Type](La_ A Prop [x:A]absurd)];
> [Meet = [A:Type][B:(Pred A)][C:(Pred A)](La_ A Prop
>     [x:A](and (ap_ A Prop B x) (ap_ A Prop C x)))];
> [Union = [A:Type][B:(Pred A)][C:(Pred A)](La_ A Prop
>     [x:A](or (ap_ A Prop B x) (ap_ A Prop C x)))];
> [Pnot = [A:Type][B:(Pred A)](La_ A Prop
>     [x:A](not (ap_ A Prop B x)))];
> [Minus = [A:Type][B:(Pred A)][C:(Pred A)]
>     (Meet A B (Pnot A C))];
> [Subset = [A:Type][B:(Pred A)][C:(Pred A)](FA A [x:A]
>     ((ap_ A Prop B x) => (ap_ A Prop C x)))];
> [Eqset = [A:Type][B:(Pred A)][C:(Pred A)]
>     (and (Subset A B C) (Subset A C B))];
> [Single = [A:Type][x:A](La_ A Prop ([y:A](Eq A x y)))];
%----------------------------------------------------------------

From the above we can see that the definitions are longer than the corresponding definitions in Lego.
The reason is that Plastic has no Lego-like implicit syntax and thus requires most things to be made explicit.

3.7.3 Fixed points and their properties

The theory of fixed points is very useful for giving denotational semantics of programming languages. It can also be used in program analysis, verification of program properties, etc. In this subsection, based on the above definitions, we give a formalization of the theory and of theorems which are formally proved in Plastic. In the following definitions we assume that φ is a monotonic function from the power set of E to the power set of E.

Definition 1 (Prefixed point) A subset S ⊆ E is a prefixed point of φ if φ(S) ⊆ S.

Definition 2 (Postfixed point) A subset S ⊆ E is a postfixed point of φ if S ⊆ φ(S).

Definition 3 (Fixed point) A subset S ⊆ E is a fixed point of φ if S is both a prefixed point and a postfixed point of φ.

Definition 4 (Greatest fixed point) A subset S ⊆ E is a greatest fixed point of φ if S is a fixed point and, for any fixed point T of φ, T ⊆ S.

Definition 5 (Least fixed point) A subset S ⊆ E is a least fixed point of φ if S is a fixed point and, for any fixed point T of φ, S ⊆ T.

The following are the main formal definitions related to the above:

> [Mono = [A:Type][F: Pi_ (Pred A) (Pred A)][C,D: (Pred A)]
>     ((Subset A C D) => (Subset A (ap_ (Pred A) (Pred A) F C)
>                                  (ap_ (Pred A) (Pred A) F D)))];
> [F_Mono : (A:Type)(F: Pi_ (Pred A) (Pred A))
>     (C,D: (Pred A)) Prf(Mono A F C D)];
> [prefixp = [A:Type][F:(Pi_ (Pred A) (Pred A))][P: Pred A]
>     (Subset A (ap_ (Pred A) (Pred A) F P) P)];
> [postfixp = [A:Type][F:(Pi_ (Pred A) (Pred A))][P: Pred A]
>     (Subset A P (ap_ (Pred A) (Pred A) F P))];
> [lfixp = [A:Type][F:(Pi_ (Pred A) (Pred A))](La_ A Prop [x:A]
>     ({P:(Pred A)}((prefixp A F P) => (ap_ A Prop P x))))];
> [gfixp = [A:Type][F:(Pi_ (Pred A) (Pred A))](La_ A Prop [x:A](Ex (Pred A)
>     ([P:(Pred A)](and (postfixp A F P) (ap_ A Prop P x)))))];

We have proved the relevant properties of sets and fixed points using Plastic. These form our basis for defining the μ-calculus. The following are the main theorems related to fixed points that we have proved; we show the proof of only one theorem.

• Theorem 3.7.1 (Tarski [Tarski, 1955]) Let E be a set, P(E) be the power set of E and Φ:P(E) → P(E) be a monotonic function (i.e. ∀S, S' ∈ P(E). S ⊆ S' → Φ(S) ⊆ Φ(S')). Then Φ has a least fixed point μS.Φ(S) and a greatest fixed point νS.Φ(S) given by

μS.Φ(S) = ⋂{S' ⊆ E | Φ(S') ⊆ S'}
νS.Φ(S) = ⋃{S' ⊆ E | S' ⊆ Φ(S')}

μS.Φ(S) is the least prefixed point, since it is the meet of all the prefixed points; νS.Φ(S) is the greatest postfixed point, since it is the union of all the postfixed points.

• Theorem 3.7.2 Every prefixed point P contains the least fixed point, i.e.: ∀P. prefp(F, P) → lfp(F) ⊆ P

• Theorem 3.7.3 The least fixed point is a prefixed point, i.e.: prefp(F, lfp(F))

Proof The following is our proof of this theorem in Plastic:

%----------------------------------------------------------------
Least fixpoint is a prefixed point

> Claim lfixp_isprefixp: (Fi:(Pi_ (Pred(A)) (Pred(A))))
>     (Prf(prefixp Fi (lfixp Fi)));
> Intros Fi;
> Refine LL;
> Intros x;
> Refine LL;
> Intros H;
> Refine LL;
> Intros xi;
> Refine LL;
> Intros Hi;
> Refine App ? ? (App ? ? Hi);
> Refine App ? ? (App ? ? (App ? ? (F_Mono A Fi (lfixp Fi) xi)) x);
> 2 Refine App ? ? lfixp_lessp;
> 2 Refine Hi;
> Refine H;
> ReturnAll;
> lfixp_isprefixp;
%----------------------------------------------------------------

Q.E.D.

The reason for giving this example here is to show the reasoning style of Plastic in this area; the other theorems can be proved similarly.

• Theorem 3.7.4 The least fixed point is a postfixed point, i.e.: postfp(F, lfp(F))

• Theorem 3.7.5 Every postfixed point P is a subset of the greatest fixed point, i.e.: ∀P. postfp(F, P) → P ⊆ gfp(F)

• Theorem 3.7.6 The greatest fixed point is a prefixed point, i.e.: prefp(F, gfp(F))

• Theorem 3.7.7 The greatest fixed point is a postfixed point, i.e.: postfp(F, gfp(F))

• Theorem 3.7.8 (Reduction lemma [Kozen, 1983; Winskel, 1989]) ∀P. P ⊆ gfp(F) ↔ P ⊆ F(gfp(λQ.(P ∪ F(Q))))

• Theorem 3.7.9 (Least fixed point fold and unfold) ∀P. P ⊆ lfp(F) ↔ P ⊆ F(lfp(F) ∪ P)

• Theorem 3.7.10 (Greatest fixed point base) ∀P. P ⊆ P' → P ⊆ gfp(λQ.(P' ∪ F(Q)))

• Theorem 3.7.11 (Greatest fixed point fold and unfold) ∀P. P ⊆ gfp(F) ↔ P ⊆ F(gfp(F) ∪ P)
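For a monotonic operator over a finite carrier, the least fixed point of Theorem 3.7.1 can also be reached by iterating the operator from the empty set (Kleene iteration). The following Haskell sketch is an informal, executable illustration of this fact only; it is not part of the Plastic formalisation, where sets are predicates rather than lists, and the names are ours.

import Data.List (nub, sort)

-- Iterate a monotonic operator on finite sets (sorted, duplicate-free
-- lists) starting from the empty set, until the set stops growing.
lfp :: Ord a => ([a] -> [a]) -> [a]
lfp f = go []
  where
    go s = let s' = sort (nub (f s))
           in if s' == s then s else go s'

-- Example operator: "the start state, or a successor of something already
-- in the set"; its least fixed point is the set of reachable states.
reachable :: Ord a => a -> (a -> [a]) -> [a]
reachable start step = lfp (\s -> start : concatMap step s)

-- reachable 0 (\n -> [(n + 2) `mod` 10])  ==  [0,2,4,6,8]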
3.8 Summary

We presented the preliminaries of the thesis in this chapter. Based on these basic concepts we can expand our study towards the goals of the thesis. Higher order logic and inductive data types are two important features of LF (compared with some non-theorem-prover approaches) which help us to formalise the required concepts very easily. From the above we can see that the power of an expressive higher order logic simplifies the encoding of several concepts such as sets, predicates and fixed points. We can also see that inductive data types are very useful for formalising data types. Their concrete application will be discussed in the following chapters of the thesis.

Chapter 4
The outline of the approach

Something attempted, something done.
-HENRY WADSWORTH LONGFELLOW, AMERICAN POET

This chapter presents the outline of our approach. We emphasize the differences between our approach and other approaches in terms of architecture, underlying theories and methodology.

4.1 Our approach

For computer assisted domain-specific reasoning there are two main approaches, distinguished by their use (or not) of a theorem prover. Theorem prover based approaches lack strong support for domain-specific syntax and proof style; systems taking the non theorem prover based approach lack the certainty of proof and genericity. Our approach aims to reduce the weaknesses of both and to help domain users who are not type theory experts to do proofs in a familiar syntax, with the support of an exactly customized type theory. It aims at a balance between user convenience and certainty of proof. There are new advantages too, since type theory provides clear methods, useful tools and good ideas about how to do computer assisted formal proof. In particular, induction is an important and powerful technique in type theory. We expect that providing reasoning tools which offer good support for induction, via the underlying type theory basis, will lead to a better appreciation of proofs and to wider use of such tools in many domains.

The key feature of our approach is its use of LF and its associated reasoning techniques to formalize a problem domain and to present a domain-oriented interface suitable for use by people who are not experts in type theory. We just need to do some work on formalization, translation and communication to implement a reasoning system for each domain.
That saves much work; i.e., the formalization and translation work are the main work in this approach, apart from the concrete reasoning system itself.

4.1.1 An architecture of the approach

An architecture implied by the approach is shown in figure 4.1.

[Figure 4.1: The architecture implied by the approach]

It is a client-server structure, where the implementation of LF is a 'server' which handles the important reasoning steps. Clients use this reasoning functionality and present a simplified version of it to users. There are three layers:

• The upper layer is a domain oriented interface which is operated by the domain user. For a specific domain, it gives access to the domain language, specification logic, semantic model and the reasoning system by using notations which are familiar to the domain users. These notations are customized in our new interface (called LFTOP), so domain users need not use the underlying LF notations directly.
• In the bottom layer, every component in the upper layer is represented by its corresponding formalization in LF. We use Plastic [Callaghan and Luo, 2000b] as the implementation of LF, so the format of the formalization follows Plastic (see Chapter 7 for more details).
• The middle layer provides a bridge between the upper layer and the bottom layer. It includes tools such as parsers and translators. These tools implement the automatic transformation from domain-specific syntax and proof terms to Plastic syntax and proof terms.

Following Aspinall [Aspinall, 2000], we have designed an XML-based protocol, ULPIP, for communications between the various layers. It encodes the dialogues that can occur between the layers and may be extended with domain-specific features.

4.1.2 A methodology

Using this approach, for each specific domain we should do all or some of the following steps:

• Formalize the domain-specific specification language, logic and semantic model in LF.
• Compare the original syntax and the formalized syntax. Implement parsers and translators to do the translation automatically.
• Design the reasoning system and commands which can be used by domain users in the interface.
• Find the corresponding command or group of commands, or prove new lemmas, in Plastic to simulate the effect of each user command in the interface.
• Implement the translations of user level commands to Plastic commands.
• Design and implement a concise protocol for communication between the user level interface and the underlying Plastic system.
• Design and implement GUI related issues.

We shall follow the above steps in our case studies of Chapters 5 and 6; they can be seen as applications of this methodology.

4.2 The techniques we use in this approach

Several technologies are used in this approach. We list them as follows:

• Functional programming technology: We use the functional programming language Haskell to implement our system. We get a lot of benefit from its higher-order functions, list comprehensions and lazy evaluation [Pang and Zhao, 2005; Pang et al., 2005b; Pang et al., 2006b]. As Plastic is implemented in Haskell, the combination of our implementation with the Plastic implementation is convenient.
• Parser technology: We use Happy [Gill and Marlow, 2005] as a generator of the Parser.
We just need to present the BNF format of the domain specific language, Happy can generate the corresponding Haskell modules as the Parser. • Translation technology: Using Grammar-directed translation technology the work of translation and parsing is done at the same time. The translations are clear for un- derstanding. • Model checking technology: We intend to use model checking technology to solve some subproblems which have finite states. But this was not attempted in this thesis. CHAPTER 4. THE OUTLINE OF THE APPROACH 38 • Communication technology: How to deal with the problem of communications between the layers? We can design some protocols for communication. We can also use the framework of Proof General [Aspinall, 2005a]. • LF based computer assisted reasoning technology: We use LF based computer assisted reasoning technology in the bottom layer. A lot of benefits are gotten in a way which domain users need not have the knowledge of LF in detail. We use the above technologies in our approach. The details will be given in the following chapters. 4.3 The common things for different domains in this approach There are many common things even for different domains in this approach. The following are the main common things. • The communication protocols: Different domain can use same communication proto- cols. We design communication protocols called ULPIP for this purpose. We describe these protocols in HaXML. Meanwhile the framework of Proof General can be used directly. • The underlying type system: In our approach we use Plastic (the implementation of LF) as the underlying type system for all domains. • The similar translation modules: We use Grammar-directed translation technology, the translation modules are similar both in skeleton and strategy. The reuse of the above items provides a good way for us to implement the relevant things. 4.4 The role of type theory and its framework Type theory and the relevant logical framework (LF) are the basis of this approach. The formal reasoning is carried out in a system (here we mean Plastic) which is an implementation of the logical framework. The benefits from type theory and the logical framework can be obtained naturally. Research results from type theory (such as proof assistants) can be used without any difficulty. Especially, the correctness guaranteed by type theory increases the credibility of the work done in the approach. In fact, we can view the approach as programming the formal system in a better programming language (ie, writing the key parts in type theory rather than inC/Java/Haskell). CHAPTER 4. THE OUTLINE OF THE APPROACH 39 4.5 Discussion In this chapter we give an outline of our approach. The approach tries to inherit all the advantages over non-type-theory based approach from type theory based approach in a way where the users need not have a lot of knowledge of type theory. In our design we use three layers of the interface to attain this effect. For example, it can inherit the proof terms which is one of the major differences between type theory based theorem provers with other non- type-theory based theorem provers and automatic verifiers. Proof terms are A.-terms of which the correctness can be checked by type checking algorithms implemented in a type theory based proof assistant. Therefore proof terms give us more confidence on the proof. The proof checking of Plastic helps to ensure the correctness of the reasoning in our approach in a way that is not noticed by domain users. 
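The role of proof terms can be seen in miniature under the propositions-as-types principle: a proof is a λ-term and proof checking is type checking. The toy Haskell checker below covers implicational logic only; it is an illustration with names of our own choosing, not Plastic's type checking algorithm. It accepts the proof term [p:P]p of the tautology P ⇒ P and rejects ill-formed terms.

-- Propositions of minimal implicational logic and their "proof terms".
data Prop = Atom String | Imp Prop Prop          deriving (Eq, Show)
data Term = Var String | Lam String Prop Term | App Term Term

-- Proof checking is type checking: infer the proposition proved by a term.
check :: [(String, Prop)] -> Term -> Maybe Prop
check ctx (Var x)     = lookup x ctx
check ctx (Lam x a t) = Imp a <$> check ((x, a) : ctx) t
check ctx (App t u)   = do Imp a b <- check ctx t
                           a'      <- check ctx u
                           if a == a' then Just b else Nothing

-- The proof term [p:P]p proves P => P.
tautologyProof :: Prop -> Maybe Prop
tautologyProof p = check [] (Lam "p" p (Var "p"))
-- tautologyProof (Atom "P")  ==  Just (Imp (Atom "P") (Atom "P"))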
All these features are benefit from the structure of the approach. However the structure of this approach is more complicated than most other approaches. But this should be balanced against the positive features, and we believe the balance is in our favor. For this kind of system, the overheads in a multi-layer approach are relatively small, so 'efficiency' is not a big problem here. Chapter 5 Case study: concurrency If you want to understand today, you have to search yesterday. -PEARL BUCK, AMERICAN FEMALE WRITER This chapter presents a specific domain -concurrency- as a domain for the case study. Building on Yu's work in Lego [Yu, 1999; Yu and Luo, 1997], we choose concurrency as the domain of interest for this case study. This domain is relatively complex, requiring the interaction of three formal systems, hence a demanding case study. Some issues of this work has been published in [Pang et a!., 2005a]. Firstly, this chapter introduces the basic relevant concepts of concurrency and then we give a deep study of it. 5.1 Domain analysis Concurrent systems are quite different from ordinary sequential systems. Instead of focus- ing on input-output behavior and termination of the sequential systems, they focus on the interactions and communications between components. Usually the interactions and com- munications are described by competing for access to shared resources which is corresponding to shared variable model or exchanging messages which is corresponding to message pass- ing model. Process algebra represents a mathematically rigorous framework for modelling concurrent systems of interacting processes. 5.1.1 Process algebra The term process algebra includes a collection of theories that support mathematically rig- orous (in)equational reasoning about systems consisting of concurrent, interacting processes. The field grew out of a seminal book due to Milner [Milner, 1980] and has been an active area of research since then. In particular, researchers have developed a number of different 40 CHAPTER 5. CASE STUDY: CONCURRENCY 41 process algebraic theories in order to capture different aspects of system behavior; however, each such formalism generally includes the following characteristics: 1. A language, or algebra, is defined for describing systems. 2. A behavioral equivalence is introduced that is intended to relate systems whose be- havior is indistinguishable to an external observer. 3. Equational rules, or axioms, are developed that permit proofs of equivalences between systems to be conducted in a syntax driven manner. Some formalisms include a refinement ordering, in this case, the theories allow one to determine if a system is "greater than or equal to" (i.e. refines) another. The relevant literature typically refers to each theory as a process algebra; so the field of process alge- bra contains many process algebras. Process algebras derive their motivation from the fact that a system design often consists of several different descriptions of the system involving different levels of detail. The behavioral equivalence or refinement relation provided by a process algebra may be used to determine whether these different descriptions conform to one another. More specifically, higher-level descriptions of system behavior may be related to lower-level ones using the equivalence or refinement ordering supplied by the algebra. 
Related systems may be used interchangeably inside larger system descriptions; this facili- tates compositional system verification, since low-level designs of system components may be checked in isolation against their high-level designs. This section surveys some of the main features of process algebra. The next subsection introduces CCS, the process algebra that we use throughout the chapter to illustrate the principles we cover in the case study. Calculus for Communicating Systems (CCS) is a good example of the message passing model. The application of the approach LFTOP in this domain involves three formal systems and their associated technology. These are: • Specification language: The state system under consideration is described in a spec- ification language which usually is a kind of process algebra, such as Calculus for Communicating Systems (CCS), or Communicating Sequential Processes (CSP). We choose CCS as our specification language and focus on Pure CCS [Milner, 1989] in this chapter. • Semantics: The specification is transformed into a representation which is a semantic model, e.g. LTS (Labelled Transition System),or Timed Automata. We use LTS as the semantic model to give the relevant operational semantics. • Logic: Properties to be checked are given as formulas of a specification logic, such as J.L-calculus, Propositional Linear Temporal Logic (PLTL), or Computation Tree Logic (CTL). We choose J.L-calculus as the logic here: it is sufficiently powerful for our purposes, and other temporal logics can be defined as abbreviations of J.L-calculus. CHAPTER 5. CASE STUDY: CONCURRENCY 42 5.1.2 CCS: Calculus for Communicating System The Calculus for Communicating System is an algebraic theory intended to describe commu- nications between, and computations of, abstract processes. It is an algebra for specifying and reasoning about concurrent systems. As an algebra, CCS provides a set of terms, op- erators and axioms that can be used to write and manipulate algebraic expressions. The expressions define the elements of a concurrent system and the manipulations of these ex- pressions reveal how the system behaves. The operators in the set may be used to construct system descriptions from definitions of subsystems. The basic building blocks of these descriptions and system definitions in all existing process algebras are actions. Intuitively, actions represent atomic, uninterrupted execution steps, with some actions denoting internal execution and others representing po- tential interactions with its environment that the system may engage in. In CCS, both communication and computation are abstractly represented by actions. In other words, ac- tions represent either inputs/outputs on ports or internal computation steps. The former are sometimes called external, as they require interaction from the environment. To formalize these intuitions, let A = L U { T} be a set of actions, T be a distinguished action called 'silent' action which models internal or invisible or idling actions. L is a set of labels having two disjoint subset: L + is a set of names, and L- is a set of co-names. We let a, b, c range over names, a, b, c range over co-names, and a, f3 range over A. If l E L, then its complement action l E L, and we have l = l. Then an action in CCS has one of the following three forms. • a, where a E L +, represents the act of receiving a signal on port a. • a, where a E L +, in other words, a E L- , represents the act of emitting a signal on port a. 
• T is a distinguished action called 'silent' action which models internal or invisible or idling actions. The syntax of the pure CCS version can be expressed as follows: E:: =Nil I X I a.E I E1 + E2 I E1IE2 I E\L I E[f]lrec X.E where • Nil (called empty process) represents stopped or deadlocked computation, so it cannot perform any actions. • X is a process variable. • a.E (called prefix) can perform action a and then behave as E. • E1 + E2 (called summation) represents choice- the process can evolve either as E 1 or as E2. CHAPTER 5. CASE STUDY: CONCURRENCY 43 • E1IE2 (called parallel composition) represents the parallel independent performing of Et and E2 or communicating through complement actions of them. • E\L (called restriction) represents a process which behaves like E but cannot perform actions in L or their complement actions. • E[f] (called relabelling) behaves like E, but the actions are renamed by a bijection f:L ---> L, where f has the property that f(Z) = f(l); we can extend the domain off to A and let f(r) = T. o rec X.E (called recursion) represents a recursive process which behaves like the process E applied to rec X.E. Value-passing CCS is a process calculus in which actions consist of sending and receiv- ing values through communication ports, and the transmitted data can be tested using a conditional construct. The syntax form of the value-passing version is as follows: E:: =Nil I X I p(x).E I p'(e).E I T.E I Et +E2 I E1IE2I E\L I E[f]l if b then E I Rec X.E We just need to describe the different expressions from pure CCS as follows: • p(x).E (called input prefix) behaves as a process which can receive a value, say v, over channel p, and bind the result to a variable x, binding results in substitution [v jx] of the formal parameter x by the actual parameter v; • p'(e) (called output prefix) behaves as a process which send value e over channel p; • if b then E behaves as E when b is true, otherwise no action is done. 5.1.3 LTS: Labelled Transition System Labelled Transition System (LTS) is very useful in representing operational semantics of formal systems such as CCS. Its definition is as follows. Definition 6 A labelled transition system T is a triple (S, L,--->), where Sis a set of states, L is a set of transition labels and ---><:;; S x L x S is a transition relation. Normally we writes~ s' for (s,a,s') E--->, and~ for the relation {(s,s')ls ~ s'}. When LTS is used to describe the operational semantics of concurrent systems the labels are interpreted as actions which can take place in the system. The system is considered as being in one particular state at any given time, changing states by performing actions in accordance with the transition relation. In an LTS, if we set a state as a start state, then the LTS is called rooted LTS. It is a quadruple (S, L, --->,p), where p is the start state (also be called root), the others are the same as in the above triple. CHAPTER 5. CASE STUDY: CONCURRENCY 44 5.1.4 J.L-calculus 5.1.4.1 Previous logics For previous logics J. Bradfield and C. Stirling gave a good introduction in [Bradfield and Stirling, 2003]. Many explanations in the following paragraphs are extracted from theirs for self-containment of this thesis. Hennessy-Milner Logic(HML) [Hennessy and Milner, 1980] is a primitive modal logic of actions. In addition to the boolean operators the syntax of HML has a modality < a > ¢, where a is a process action. A structure for the logic is a labelled transition system. 
The constants tt and f f are two atomic formulas of the logic. The meaning of < a > ¢ is "it is possible to do an a-action to a state where¢ holds". By inductively defining when a state (a process) of a transition system has a property, the formal semantics are given; for example, E I= < a >

"lj) and < f3 > "lj) as subformulas of < n U f3 > "lj), < f3 > "lj) as a subformula of < n; f3 > "lj); and < n > "lj) as subformula of< n* > "lj). The size of r is proportional to I1). It shows that the filtered model is indeed a model, in that [E] f= "lj) iff E f= "lj) for "lj) E r. Consequently if ¢is a satisfiable PDL formula, then it has a model with size 0(21<1>1), and in fact 21<1>1 suffices- see [Fischer and Ladner, 1979] for more details. Although CTL, CTL* and modal ~-t-calculus all have the finite model property, the filtration technique does not apply. If one filters T through a finite set r containing \fFQ unintended loops may be added. For example if Tis E; ~ Ei+ 1 for 1 :=::; i < n and Q is only true at state En then E; f= \fFQ for each i. But when n is large enough the filtered model will have at least one transition Ej ~ E; when i :=::; j < n , with the consequence that E; ¥ \f FQ . The initial approach to showing the finite model property utilises semantic tableaux where one explicitly builds a model for a satisfiable formula with small size. But this technique is very particular, and subsequent more CHAPTER 5. CASE STUDY: CONCURRENCY 46 sophisticated methods based on automata are used for optimal results. 2. CTL model-checking: Apart from the common things, CTL has some obvious differ- ences from PDL. At first, although it is a state-based logic, but it uses path operators internally- evaluating the formula V[¢>U7/I] at a state involves considering all paths from that state. Thus, at first glance, one might expect to lose the obvious exponential upper bound on model-checking. However, this turns out not to happen, and in fact CTL is not difficult to model-check. This was shown in [Clarke et al., 1986] by a direct construction; it also follows from the fact that CTL is a simple fragment of the modal p.-calculus. The model-checking procedure of [Clarke et al., 1986] is an example of a global technique. This procedure proceeds by model-checking subformulas from the bottom up, doing a full pass over the state space for subformulas before considering the superformula. Here is an English outline of the algorithm in the original paper: • to check tt, -.¢>, ¢> 1 /\¢>2, check the subformulas and perform the boolean operation; • to check < a > ¢>, [a]¢, check the subformula ¢>, and then apply the semantic definitions; • to check 3[¢>U 1/JJ, check the subformulas, then find the states at which 1/1 holds, and trace backwards along paths on which ¢> holds; • to check V[¢>U7/IJ, check the subformulas, then make a depth-first traversal of the system, doing the following: if a state satisfies 1/J, mark it as satisfying V[¢>U 1/JJ; otherwise, if it fails ¢>, mark it as failing V[¢>U1/J]; otherwise, after processing the successors, mark it as satisfying V[¢>U1/J] iff all its successors do. In fact, CTL can be translated into modal p.-calculus. The relevant algorithms include global, backward-looking model-checking algorithms, and local forward-looking algorithms. From today's perspective, it is interesting to see that this previous CTL algorithm has elements of both: the code for 3[¢>U1/J] is doing exactly the computation by approximation of the p.-calculus translation; but the code for V[¢>U7/I] is doing tableau model-checking. 5.1.4.2 A brief introduction to p.-calculus The use of fixedpoint operators is an important defining feature of p.-calculus. Using them in program logics goes back at least to D. Park [Park, 1969]. 
However, using them in modal logics of programs dates from the work of Pratt, Emerson, Clarke and Kozen. Pratt's version [Pratt, 1982] used a fixedpoint operator like the minimization operator of recursion theory, and this has not been studied further. Fixedpoint operators were added by Emerson and Clarke to a temporal logic to capture fairness and other correctness properties [Emerson and Clarke, 1980]. Kozen [Kozen, 1983] introduced the modal μ-calculus which we still use today, and established many of its basic results. The expressive power of the modal μ-calculus is intimately connected to finite-state automata on infinite trees [Vardi and Wolper, 1986]. Classically, a μ-formula denotes a predicate on states. Typical properties to be expressed and analyzed are safety and liveness assertions. The formula νx.⟨a⟩x, for example, denotes the set of all states allowing an infinite sequence of a-actions. In particular, the modal operator ⟨a⟩ (for action a) constructs a property of the actual state from a property of a next state; thus it relates the present to a (possibly infinite) future. For a detailed explanation of various modal (and temporal) logics consult, e.g., Colin Stirling [Stirling, 1992].

5.1.4.3 A positive version of the μ-calculus with tagged fixed points

Since we are based on an intuitionistic type theory, we choose a positive version of the μ-calculus with tagged fixed points [Winskel, 1989]. It is enough to express all the temporal properties we need, because formulas with negation operators can be transformed into normal forms in which negation occurs only before atomic formulas [Walukiewicz, 1995]. The data-irrelevant version of the tagged μ-calculus is as follows:

Φ ::= Z | Φ ∨ Φ | Φ ∧ Φ | ⟨K⟩Φ | [K]Φ | μZ.U Φ | νZ.U Φ

where U is a tag, which is a subset of states, K ranges over subsets of labels and Z ranges over a set of assertion variables. The tag-free fixed points μZ.Φ and νZ.Φ are special cases with an empty tag. The formula true is defined as tt =def νZ.Z, and the formula false is defined as ff =def μZ.Z.

5.1.4.4 Semantics of the μ-calculus

We use a labelled transition system (S, L, {→l : l ∈ L}) to give the semantics of the μ-calculus. Here S is a set of states, L is a set of transition labels, and for each l ∈ L the transition relation →l is a subset of S × S, i.e. →l ⊆ S × S. The semantics of a formula Φ is represented by ⟦Φ⟧ρ (where ⟦Φ⟧ρ ⊆ S) and is given by induction on the structure of Φ as follows:

⟦Z⟧ρ       = ρ(Z)
⟦Φ ∨ Ψ⟧ρ   = ⟦Φ⟧ρ ∪ ⟦Ψ⟧ρ
⟦Φ ∧ Ψ⟧ρ   = ⟦Φ⟧ρ ∩ ⟦Ψ⟧ρ
⟦⟨K⟩Φ⟧ρ    = {s ∈ S | ∃α ∈ K. ∃s' ∈ S. s →α s' and s' ∈ ⟦Φ⟧ρ}
⟦[K]Φ⟧ρ    = {s ∈ S | ∀α ∈ K. ∀s' ∈ S. s →α s' implies s' ∈ ⟦Φ⟧ρ}
⟦μZ.U Φ⟧ρ  = {s ∈ S | ∀P ⊆ S. (⟦Φ[P/Z]⟧ρ \ U ⊆ P) implies s ∈ P}
⟦νZ.U Φ⟧ρ  = {s ∈ S | ∃P ⊆ S. P ⊆ ⟦Φ[P/Z]⟧ρ ∪ U and s ∈ P}

where the environment ρ assigns a subset of S to each assertion variable Z. Properties of a concurrent system are usually represented by assertions, which are formulas of the μ-calculus. The judgement that a state s satisfies a property Φ is defined by: s ⊢ Φ iff s ∈ ⟦Φ⟧ρ for all ρ.

5.2 Congruences and Reasoning in CCS

5.2.1 Congruences for CCS

A congruence for an algebra is an equivalence relation with the following extra substitution property: equivalent systems can be used interchangeably inside any larger system. We can explain this property formally.
Define a context C[] to be a system description with a "hole", [ ]; given a system description PI, then, C[p1] represents the system obtained by "filling" the hole with Pl· Then an equivalence ~ is a congruence for a language, if whenever p 1 ~ P2, then C[p1] ~ C[p2] for any context C[] built using operators in the language. From the above description we can see that relations that are congruences for some languages maybe not for others. Congruence is an important concept in analysis of the relevant systems. In process algebras a notion of behavioral congruence [Cleaveland and Smolka, 1999] often be used as a basis for system analysis. In this subsection we define a relation that relates systems with respect to their "observable" behavior and study congruences for CCS. We first define an equivalence relation on states in an arbitrary LTS in each case; since CCS may be viewed as an LTS, these relations may then be used to link CCS system descriptions. The suitability of the equivalence from the standpoint of the observable behavior is considered. Furthermore whether or not the relation is a congruence for CCS is studied. 5.2.1.1 '!race Equivalence Language equivalence is a well-studied equivalence in state machine theory, where two ma- chines are equivalent if they accept the same sequences of symbols. Individual CCS system descriptions may be converted into rooted LTS's. But rooted LTS does not contain accepting states, and consequently we cannot use the notion of language equivalence from finite-state machine theory directly. However, if every state in a rooted LTS is accepting state, then the language of the machine contains the execution sequences, or traces, that a machine may engage in. So, to relate two descriptions of a system exactly when the machines for them have the same traces might be a reasonable attempt at defining a behavioral equivalence for CCS. Definition 7 Let (P, A,->) be a LTS. 1. Let s = ao ... an-1 E A* be a sequence of actions. Then p .!... p' if there are states Po, ... ,pn such that p = po,p; ~ Pi+l for 0 ~ i < n, and p' = Pn . 2. s is a strong trace of p if there exists p' such that p .!... p1 • We use S(p) to represent the set of all strong traces of p. 3. p ~s q exactly when S(p) = S(q). CHAPTER 5. CASE STUDY: CONCURRENCY 49 The reason to use the term strong traces is that the definition given above does not distinguish between internal and external actions (i.e. all may appear in a strong trace). In contrast, T action is treated in a special manner in the traditional definition of traces. Since CCS is an LTS whose states are system descriptions, so the definition of ;:.:-;s might be used to CCS systems. Unfortunately, since CCS permits the definition of nondeterministic systems, ;::.:-; 8 suffers from severe deficiencies which is illustrates as in the following examples. 1. Let PI =df a.pi and P2 =df a.p2 + a.Nil. Then PI ;:.:-;s P2, however P2 can reach a "deadlocked" state (i.e. Nil) after an a-transition while PI cannot. 2. Let p be a.b.Nil+a.c.Nil and q be a.(b.Nil+c.Nil). Then S(p) = S(q) = {e, a, ab, ac}, Sop ;::.:-; 8 q. However, after an a-transition q can perform both a b and a c, whereas p must reject one or the other of these possibilities. From the above examples we can see that even though two nondeterministic systems have the same traces, they may go through inequivalent states in performing them I. In particular, trace equivalent systems can have different deadlocking behavior. 
So this kind of trace equivalence is not adequacy for nondeterministic systems such as CCS. 5.2.1.2 Bisimulation Equivalence The observation in the previous paragraph suggests that a nondeterministic system such as CCS needs an equivalence which has a recursive flavor: execution sequences for equiva- lent systems ought to pass through equivalent states. This intuition leads the definition of bisimulation, or strong equivalence. Definition 8 Let (S, A,-->) be an LTS. A relation R ~ S x S is a bisimulation if, whenever < p, q > E R, then the following conditions hold for any a E A and p', q' E S. 1. if p ~ p', then q ~ q' for some q' such that< p', q' >E R, 2. if q ~ q', then p ~ p1 for some p' such that < p', q' > E R. From the above definition we can see that if two systems are related by a bisimulation, then it is possible for each to simulate the other's behavior. For a relation to be a bisimu- lation, related states must be able to match transitions of each other by moving to related states. Next we shall give bisimulation equivalent for two states. Two states are bisimulation equivalent exactly when a bisimulation relating them is found. Definition 9 Systems p and q are bisimulation equivalent, or bisimilar, if there exists a bisimulation R containing< p, q >. We write p,..., q whenever p and q are bisimilar. As CCS may be viewed as an LTS, so we can use ,..., to relate CCS processes. The following examples show some differences of bisimulation equivalent with trace equivalent. 1This situation cannot occur in deterministic systems. CHAPTER 5. CASE STUDY: CONCURRENCY 50 2. a.b.Nil + a.c.Nil ""a.(b.Nil + c.Nil) 3. a.b.Nil + a.b.Nil,..... a.b.Nil Bisimulation equivalence has a lot of pleasing properties. 1. For any LTS it is indeed an equivalence; i.e. the relation ,..... is reflexive, symmetric and transitive. 2. "' implies ';:::',s and if the LTS is deterministic in the sense that every state has at most one outgoing transition per action, then ,..... coincides with ';:::',s. 3. It can be shown in a precise sense that two equivalent systems must have the same deadlock potential. 4. "' is a congruence for CCS, in other words, if p "' q, then p and q may be used interchangeably inside any larger system. However, for the process algebras which allow asynchronous execution such as CCS, ,..... does suffer from a major flaw: it is too sensitive to internal computation. In particular, the definition does not take account of the speciality of the action T. For example, the systems a.r.b.Nil and a.b.Nil are not bisimulation equivalent, even though an external observer cannot detect the difference between them. Nevertheless, ,..... has been studied extensively in many literatures, and for process algebras in which internal computation in one component can affect the behavior of other components indeed, it is a reasonable basis for verification. But this flaw is an inducement to consider other equivalence to suit CCS. 5.2.1.3 Observational Equivalence for CCS From above subsubsection we can see that bisimulation is too sensitive to internal computa- tion. This subsubsection presents a coarsening of bisimulation equivalence that is intended to relax the sensitivity of the former to internal computation. The introduction of weak transitions starts a further progress in new equivalent relations. Definition 10 Let (P, A,--->) be an LTS with T E A, and let p E P. 1. if s E A •, then s E (A - { T}) * represents the action sequence obtained by deleting all occurrences ofT from s. 2. 
2. Let s ∈ (A − {τ})*. Then p ==s=> p' if there exists s' such that p --s'-> p' and s = ŝ'.

From the above definition we can see that ŝ returns the visible content (i.e. the non-τ elements) of a sequence; in particular, if a = τ then â = ε, and if a ∈ A − {τ} then â = a. In addition, if a sequence of τ-transitions leads from p to p', then p ==ε=> p'. We now define weak bisimulations as follows.

Definition 11 Let (P, A, ->) be an LTS with τ ∈ A. A relation R ⊆ P × P is a weak bisimulation if, whenever ⟨p, q⟩ ∈ R, the following hold for all a ∈ A and p', q' ∈ P:

1. if p --a-> p', then q ==â=> q' for some q' such that ⟨p', q'⟩ ∈ R,

2. if q --a-> q', then p ==â=> p' for some p' such that ⟨p', q'⟩ ∈ R.

States p and q are observationally equivalent, or weakly bisimilar, or weakly equivalent, if there exists a weak bisimulation R containing ⟨p, q⟩. In this case we write p ≈ q.

As CCS is an LTS whose action set contains τ, the definition of ≈ may be used to relate CCS system descriptions. We have the following observations.

1. For any process p, τ.p ≈ p.

2. a.τ.b.τ.Nil ≈ a.b.Nil.

Consequently weak bisimilarity would appear to be a viable candidate for relating CCS system descriptions. Unfortunately, it is not a congruence for CCS. To see why, consider the context C[ ] given by [ ] + b.Nil. Let p be τ.a.Nil and q be a.Nil; then p ≈ q. However, C[p] ≉ C[q]. To see this, note that C[p] --τ-> a.Nil. This transition must be matched by a weak ε-labelled transition from C[q], but the only such transition is C[q] ==ε=> C[q]. However, a.Nil ≉ C[q], since the latter can engage in a b-labelled transition that cannot be matched by the former.

This shortcoming of ≈ arises from the interplay between + and the initial internal computation that a system might engage in; in particular, the only CCS operator that breaks the congruence-hood of ≈ is +. Milner [Milner, 1980; Milner, 1989] solved this problem by focusing on the largest CCS congruence ≈_L that implies ≈. Such a largest congruence is guaranteed to exist [Hennessy and Milner, 1985].

Definition 12 Let (P, A, ->) be an LTS with τ ∈ A, and let p, q ∈ P. Then p ≈_L q if the following hold for all a ∈ A and p', q' ∈ P.

1. if p --a-> p', then q ==a=> q' for some q' such that p' ≈ q'.

2. if q --a-> q', then p ==a=> p' for some p' such that p' ≈ q'.

We have the following remarks about the above definition.

1. It should be noted that for p ≈_L q to hold, any τ-transition of p must be matched by a weak transition of q; in particular, this weak transition must consist of a non-empty sequence of τ-transitions.

2. The definition is not recursive; the targets of the initial matching transitions need only be related by ≈.

3. ≈_L is indeed a congruence for CCS and is the largest CCS congruence entailing ≈, i.e. p ≈_L q implies p ≈ q, and for any other congruence R such that pRq implies p ≈ q, pRq also implies p ≈_L q.

Consider the following examples.

1. τ.a.Nil ≉_L a.Nil, since the τ-transition of the former cannot be matched by a non-empty sequence of τ-transitions from the latter.

2. a.τ.b.Nil ≈_L a.b.Nil.

3. For any p, q, if p ≈ q, then τ.p ≈_L τ.q.

5.3 Formalization of the domain

5.3.1 Formalization of CCS

The formalization is based on the system Plastic, which is an implementation of LF. Plastic is a concise system with very few predefined types. We use the predefined type Nat as the basic type to help us in the formalization of actions.
The formalization of actions is as follows:

> [Base = Nat];
> Inductive
> [Actb:Type]
> Constructors
> [base:(b:El Base)Actb]
> [comp:(b:El Base)Actb];
> Inductive
> [Act: Type]
> Constructors
> [tau:Act]
> [act:(a:El Actb)Act];
> [Comp = [x:El Actb] E_Actb ([b:El Actb] Actb) ([a:El Base](comp a))
>         ([a1:El Base](base a1)) x : El Actb -> El Actb];

We formalise processes in an inductive type as follows:

> [Var = Nat];
> Inductive
> [Process:Type]
> Constructors
> [Nil:Process]
> [var:(v:El Var)Process]
> [dot:(a:El Act)(E: Process)Process]
> [choice:(E1: Process)(E2: Process)Process]
> [par:(E1: Process)(E2: Process)Process]
> [hide:(E: Process)(L:El (List Actb))Process]
> [ren:(E: Process)(f: El (List (Pair Base Base)))Process]
> [rec:(E: Process)Process];

The operational semantics of CCS processes can be given by a labelled transition system with S the set of processes, L the set of actions, and the transition relation defined by the following transition rules:

  Dot:      a.E --a-> E
  ChoiceL:  E1 --a-> E'   implies   E1 + E2 --a-> E'
  ChoiceR:  E2 --a-> E'   implies   E1 + E2 --a-> E'
  ParL:     E1 --a-> E'   implies   E1|E2 --a-> E'|E2
  ParR:     E2 --a-> E'   implies   E1|E2 --a-> E1|E'
  Tau1:     E1 --a-> E1'  and  E2 --ā-> E2'   implies   E1|E2 --τ-> E1'|E2'
  Tau2:     E1 --ā-> E1'  and  E2 --a-> E2'   implies   E1|E2 --τ-> E1'|E2'
  Hide:     E --a-> E'    implies   E\L --a-> E'\L    (a, ā ∉ L)
  HideT:    E --τ-> E'    implies   E\L --τ-> E'\L
  Rename:   E --a-> E'    implies   E[f] --f(a)-> E'[f]
  Rec:      E[(rec x.E)/x] --a-> E'   implies   rec x.E --a-> E'

We define the transition relation as an inductive relation with large elimination. The above rules correspond to the constructors of the inductive relation.

> Inductive
> [TRANS:(a:El Act)(E1:El Process)(E2:El Process)El Prop] Relation_LE
> Constructors
> [Dot:(a:El Act)(p:El Process) Prf( TRANS a (dot a p) p)]
> [Chol:(a:El Act)(p:El Process)(p1:El Process)(p2:El Process)
>   (t:Prf(TRANS a p1 p))Prf(TRANS a (choice p1 p2) p)]
> [Chor:(a:El Act)(p:El Process)(p1:El Process)(p2:El Process)
>   (t:Prf(TRANS a p2 p))Prf(TRANS a (choice p1 p2) p)]
> [Parl:(a:El Act)(p,p1,p2:El Process)(t:Prf(TRANS a p1 p))
>   Prf(TRANS a (par p1 p2) (par p p2))]
> [Parr:(a:El Act)(p,p1,p2:El Process)(t:Prf(TRANS a p2 p))
>   Prf(TRANS a (par p1 p2) (par p1 p))]
> [Tau1:(n:El Base)(p1,p2,q1,q2:El Process)
>   (t1:Prf(TRANS (act (base n)) p1 q1))
>   (t2:Prf(TRANS (act (comp n)) p2 q2))
>   Prf(TRANS tau (par p1 p2) (par q1 q2))]
> [Tau2:(n:El Base)(p1,p2,q1,q2:El Process)
>   (t1:Prf(TRANS (act (comp n)) p1 q1))
>   (t2:Prf(TRANS (act (base n)) p2 q2))
>   Prf(TRANS tau (par p1 p2) (par q1 q2))]
> [Hide:(a:El Actb)(p,q:El Process)(L:El (List Actb))
>   (t:Prf(TRANS (act a) p q))
>   (p1:El (Prf(is_false (Or (member (Actb) Eq_Actb a L)
>     (member (Actb) Eq_Actb (Comp a) L)))))
>   Prf(TRANS (act a) (hide p L) (hide q L))]
> [Hidet:(p,q:El Process)(L:El (List Actb))
>   (t:Prf(TRANS tau p q))
>   Prf(TRANS tau (hide p L)(hide q L))]
> [Ren:(a:El Act)(p,q:El Process)(f: El (List (Pair Base Base)))
>   (t:Prf(TRANS a p q))
>   Prf(TRANS (rename f a)(ren p f)(ren q f))]
> [Rec:(a: El Act)(p1,p2: El Process)
>   (t: Prf(TRANS a (subst p1 (succ zero) (rec p1)) p2))
>   Prf(TRANS a (rec p1) p2)];

Here (subst t n s) means to replace the variables in term t which are equal to n with s; i.e. we use de Bruijn indices to express substitutions. So (subst p1 (succ zero) (rec p1)) is p1[(rec p1)/x]. This way of expressing substitutions brings some difficulties for the translation; we give more explanation in section 5.7.4.
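To illustrate the idea informally, the following Haskell sketch (our own simplification, not the Plastic definition; it assumes the substituted term is closed and adopts one plausible treatment of the rec binder) shows a subst operation over a fragment of the Process type.

data Process
  = PNil
  | PVar Int
  | Dot String Process
  | Choice Process Process
  | Par Process Process
  | Rec Process
  deriving Show

-- (subst t n s): replace every variable equal to n in t by s.
-- The index is shifted under the rec binder (one plausible de Bruijn
-- convention; the thesis returns to the details in chapter 8).
subst :: Process -> Int -> Process -> Process
subst PNil         _ _ = PNil
subst (PVar v)     n s = if v == n then s else PVar v
subst (Dot a p)    n s = Dot a (subst p n s)
subst (Choice p q) n s = Choice (subst p n s) (subst q n s)
subst (Par p q)    n s = Par (subst p n s) (subst q n s)
subst (Rec p)      n s = Rec (subst p (n + 1) s)

main :: IO ()
main = print (subst p1 1 (Rec p1))          -- p1[(rec p1)/x]
  where p1 = Dot "tick" (PVar 1)
-- prints: Dot "tick" (Rec (Dot "tick" (PVar 1)))

In the example at the end, unfolding the recursion once is exactly the kind of term the Rec constructor above expects in its premise.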
It is dangerous if we use inductive relation with large elimination without careful con- sideration about avoiding paradox. Adams [Adams, 2004] pointed out that using large elimination without limitation may lead to paradox 2 . But our formalization is safe, be- cause we did not use the dangerous features. Meanwhile, in order to make sure its safety we have used the corresponding formalization of this transition (i.e. the above TRANS) in Lego with ordinary inductive relation. There is no such kind of paradox in Lego, because there is no universe Type which is its own type. 5.3.2 Formalization of M-calculus We use Plastic's second order logic to formalize It-calculus with the help of our previous formalization and proof of properties of set and fixed point. The details are as follows: > [Label = Act :Type]; > Inductive > [Modality :Type] > Constructors > [Modal: (L:El (List Label))Modality] > [Nmodal: (L:El (List Label))Modality]; > [State = Process] ; > [MTRANS > [K: Modality] [sl: State] [s2 : State](Ex Label ([a: Label] (and (Eq Bool (Modal_check a K) true) (TRANS a sl s2))))]; > [ Form = Pred State] ; > [ Tag = Pred State ] ; > [MuvarF Pi_ Var Form] ; > [MuOr = [A,B: Form] (Union State A B)]; 2 Dr. Callaghan will issue his modified version of Plastic to avoid this kind of paradoxes. CHAPTER 5. CASE STUDY: CONCURRENCY > [MuAnd [A,B: Form] (Meet State A B)]; > [MuDia = [K: Modality] [F: Form]( La_ State Prop ([s: State](Ex State > ( [s 1 : State] (and (MTRANS K s s 1 ) (ap_ State Prop F s 1 ))))))] ; > [MuBox > [K: Modality] [F: Form] (La_ State Prop ([s: State](FA State ([s 1 : State] ((MTRANS K s s 1 ) => (ap_ State Prop F s 1 ))))))]; > [MuTagnu = [T: Tag] [F: (Pi_ Form Form)]( La_ State Prop ([s: State] > (Ex Form ([P: Form] (and (Subset State P (Union State > (ap_ Form Form F P) T ))(ap_ State PropPs))))))]; > [MuTagmu = [T: Tag] [F: (Pi_ Form Form)]( La_ State Prop ([s: State] > (FA Form ([P: Form] ((Subset State (Minus State > (ap_ Form Form F P) T ) P) => (ap_ State Prop P s))))))]; 56 The above formalization of syntax and semantics give us a basis for representing the domain concepts and properties. On LF level we can prove some useful lemmas which are corresponding to the rules on the user level. Meanwhile these give an another way to validate that the rules are correct. 5.4 User level reasoning system The basic reasoning steps which a user can make are given by a user-level reasoning system. This can be described via several groups of rules, derived from the standard rules of J-L- calculus and CCS, and augmented with several useful lemmas. We divide the rules into the following groups. 5.4.1 Rules that do not involve the process operators. These rules do not depend on CCS components. • Rules related to basic logic: This is a simple logic including the basic operators for and(/\), or(V) etc. They are mainly used to express and prove the side conditions which appear in some of the CHAPTER 5. CASE STUDY: CONCURRENCY 57 p,-calculus rules and combined assumptions. Here we just list the and-relevant rules as follows: cf> 1\ w Fst: cf> cf> 1\ w Snd: W • Rules related to set: These are basic rules which deal with set membership. Singlein: --{-} a E a • Rules related to p,-calculus: s E V Inr: U V s E U s E U Inl: sE UuV The following show a subset of the rules related to tagged p,-calculus. True:-,- s r tt a 1 s-+s Exintro with a: -K-(a E K) s ---+ s' Dia with s': 81 f- cf> (s.!!.. 
s') sf-cf> ., ld sf-cl>[vZ.(Uu{s})cf>/Z](sdU) v_un.o : sf- vZ.U cf> l" where notation s ~ s' means (s, s') in the transition relation ~ and s !!.., s' means ::Ja E Ks~s'. 5.4.2 Rules for the process operators. The following are a subset of the rules for pure CCS. The rules with prefix lnv _ are not standard rules in CCS, but are lemmas added to the reasoning system for user convenience. Dot: a.E~E R E[(rec X.E)/X] ~ E' ec· · recX.E~ E' I rec X.EJ ~ E2 nv_rec: a E![rec X.EI/ X] -+ E2 Inv_dot: a.E1 ~ E2 Users can apply commands which are corresponding to the above rules in the system to prove relevant properties. 5.5 User level syntax We design the user level syntax of the concepts of this domain by trying to keep their original form. The following is the description of the syntax of the user level in BNF: CHAPTER 5. CASE STUDY: CONCURRENCY ids .. letter .. - digit .. quasiletter .. - nontauact .. act .. - topcmd .. proc .. muform .. - K .. - Tag .. cmd .. - letter quasiletter* a I ... I z I A I ... I Z o I ... 19 letter I digit I - I ' ids I ids- nontauact I tau ids : process '1-' muform I ids : process - act - > process I ids : Allnat act proc Nil I ids I act . proc I proc + proc I proc 'I' proc I proc \ {nontauact, ... ,nontauact} I proc [nontauactjnontauact, ... ,nontauactjnontauact] I Rec ids proc ids I muform 'II' muform I muform & muform I < K > muform I [K] muform I Mu ids.Tag muform I Nu ids.Tag muform - I {act, ... ,act} I -{act, ... ,act} {-} I {ids, ... ,ids} Fst ids I Snd ids I RDia ids I RRec I RDot I RPair I RSinglein I RTrue I RExintro ids I Rend I RNuunfold I Rvpair I Rbox ids ids I RNubase I Rinr I Rinl I Rhypchange ids ids ids I Rinverdotl ids ids I Rinverrec ids ids ids ids I Rinverchoi ids ids ids ids I Rinverpar ids ids ids idsl Req I RModule ids I Rlmport ids I RUndo I RChol I RChor I RParl I RParr I RTaul ids I RTau2 ids I RHide I RHidet I RRen I Rlndn act proc ids I RHyp ids 58 In the above BNF description, "act" represents act of CCS, "topcmd" represents target or goal we want to prove, "proc" represents process of CCS, "muform" represents the formula of J.L-calculus, "K" represents a set of acts, "Tag" represents Tags of the J.L-calculus and "cmd" represents commands which user can use in the interface. In addition, the quoted parts by ' ' (such as '1-' and 'II') are used for distinguishing them from the meta-symbols. Users will use the above syntax to define their concurrent system and prove the properties of this system. Obviously the syntax is very similar to the domain users. 5.6 Translation between different levels The translation between different levels is very important in this approach. It realizes an important step of the implicit support of the proof assistant Plastic. Using grammar-directed technology, the translation can be implemented automatically. 5.6.1 The translation from user level to LF level To implement the translation from user level to LF level is to implement the transformation from the user level grammar to LF level grammar. We just present the outline of the CHAPTER 5. CASE STUDY: CONCURRENCY 59 translation in this chapter, concrete discussions will be given in chapter 8. 5.6.1.1 The translation of CCS concepts The translation of CCS concepts includes the translation of the CCS grammar of the concepts to the corresponding formalised parts of them in LF. This work is fulfilled by the parser and translator of the CCS automatically. We just need to customise the parser and the translator. 
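As a rough indication of what such a grammar-directed translator does, the following Haskell sketch (hypothetical: the AST, names and pretty-printer are our own illustration and not the prototype's code, and de Bruijn index resolution is omitted) maps a user-level CCS process to the corresponding LF-level term built from the constructors of section 5.3.1.

data Proc
  = PNil
  | Act String Proc
  | Choice Proc Proc
  | Par Proc Proc
  | Hide Proc [String]
  | Rec String Proc
  | PVar String

toLF :: Proc -> String
toLF PNil         = "Nil"
toLF (Act a p)    = "(dot " ++ a ++ " " ++ toLF p ++ ")"
toLF (Choice p q) = "(choice " ++ toLF p ++ " " ++ toLF q ++ ")"
toLF (Par p q)    = "(par " ++ toLF p ++ " " ++ toLF q ++ ")"
toLF (Hide p as)  = "(hide " ++ toLF p ++ " " ++ lfList as ++ ")"
toLF (Rec _ p)    = "(rec " ++ toLF p ++ ")"   -- the bound name becomes a de Bruijn index
toLF (PVar x)     = "(var " ++ x ++ ")"        -- index resolution omitted in this sketch

lfList :: [String] -> String
lfList = foldr (\a r -> "(cons Actb " ++ a ++ " " ++ r ++ ")") "(nil Actb)"

main :: IO ()
main = putStrLn (toLF (Rec "x" (Act "tick" (PVar "x"))))
-- prints: (rec (dot tick (var x)))

The real translator additionally resolves bound variables to indices and handles the renaming and modality constructs; those details are left to chapter 8.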
The concrete details, especially the translation of the relevant parts, will be given in chapter 8.

5.6.1.2 The translation of LTS concepts

The translation of LTS concepts consists of translating the LTS grammar of the concepts to the corresponding formalised parts in LF. This work is also carried out automatically by the parser and translator of the LTS; we just need to customise the parser and the translator. The concrete details, especially the translation of the transition relation, will be given in chapter 8.

5.6.1.3 The translation of μ-calculus concepts

The translation of μ-calculus concepts consists of translating the μ-calculus grammar of the concepts to the corresponding formalised parts in LF. This work is also carried out automatically by the parser and translator of the μ-calculus; we just need to customize the parser and the translator. The concrete details, especially the translation of the relevant parts, will be given in chapter 8.

5.6.2 The translation from LF level to user level

The translation from LF level to user level is the reverse of the translation from user level to LF level; more details are given in chapter 8. In fact, we may not need to perform this translation thoroughly: when we use commands at the user level, from the point of view of domain users it is only necessary to perform the transformation at the user level. We can keep this transformation, translate its result to the LF level, and then check whether it is consistent with the result at the LF level.

5.7 Some examples

The following examples have been chosen to illustrate the approach and the issues that arise from it, in particular for translation, rather than to show new functionality enabled by the approach.

5.7.1 Ticking clock

First, we use the example of a ticking clock to show the basic aspects of this approach. This example also shows a translation problem which we will discuss in detail in chapter 8. The example was taken from [Stirling, 1992; Yu, 1999] and discussed in [Pang et al., 2002].

Cl =def tick.Cl

This process can perform only one action, tick, and it will tick forever. We can use the CCS syntax rec x. tick.x to express the process³. The ticking clock process has a simple property: the clock is able to tick. We can express the goal of proving this property in the following form:

Abletick: Cl |- <{tick}> tt

where a judgement of the form p : s |- Φ means that we want to prove that process s satisfies the property Φ, and the name p is used to refer to the property. Tables 5.1 and 5.2 show our proof at the two different levels. The rows with Goal in the first column show the relevant parts of the proof state, and rows with CMD show the command issued by the user. The User-level column shows what the user should expect to see, whilst the Plastic column shows the corresponding LF form or the equivalent command sequence for Plastic. In our prototype we can complete the proof by issuing just the user-level commands, and the prototype is able to translate the more complex LF terms back to their simple user-level forms. From tables 5.1 and 5.2 we can see that the goals and commands at the user level are more concise and user-oriented; one step in the user-level proof usually corresponds to several steps in Plastic. At some user-level steps, information about the actual parameters must be given in order to translate the interface command into Plastic commands.
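This correspondence can be pictured as a mapping from each user-level command to the Plastic command sequence behind it. The sketch below is purely illustrative: the data type and function are our own, and the command strings are abridged and adapted from tables 5.1 and 5.2, not taken from the prototype's actual code.

data UserCmd
  = RuleDia String | RuleTrue | RuleExintro String
  | RuleRec | RuleDot | RuleSinglein | RuleEnd

expand :: UserCmd -> [String]
expand (RuleDia p)     = ["Refine App ? ? (lemma_dia_ccs ? ? ? " ++ p ++ " ?);"]
expand RuleTrue        = ["Refine lemma_true;"]
expand (RuleExintro a) = [ "Refine LL;"                 -- one user step, several Plastic steps
                         , "Intros H;"
                         , "Refine App ? ? (App ? ? H " ++ a ++ ");" ]
expand RuleRec         = ["Refine Rec;"]
expand RuleDot         = ["Refine Dot;"]
expand RuleSinglein    = ["Refine App ? ? Eq_refl;"]
expand RuleEnd         = ["ReturnAll;"]

-- a user-level proof script is the concatenation of the expansions
script :: [UserCmd] -> [String]
script = concatMap expand

main :: IO ()
main = mapM_ putStrLn
         (script [RuleDia "Cl", RuleTrue, RuleExintro "tick",
                  RuleRec, RuleDot, RuleSinglein, RuleEnd])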
In fact, the Plastic level proof can be hidden from the user and the group of commands in Plastic which corresponding to one command on user-level is linked by tactical. 5.7.2 Simple communication protocol Now we use an example of a simple communication protocol, taken from [Cleaveland et al., 1993] and discussed in [Pang et a!., 2005a]. The protocol specification can be formalized as the parallel combination of three basic processes: a sender, a receiver and a medium that connects sender and receiver. The sender initially waits for a message to send, after which it passes the message to the medium using the channel from and then awaits an acknowledgement on the channel ack_to. When the medium receives a message along its channel from it makes it available on its channel to, and when it receives an acknowledgement on its channel ack_from it makes it available on its channel ack_to. When the receiver gets a message on channel to, it announces that 3 in [Milner, 1989], this process should be expressed by fix(X = tick.X), so the CCS syntax here is a little bit different from it. CHAPTER 5. CASE STUDY: CONCURRENCY 61 User Level Plastic Level Goal Abletick: Claim Abletick: Cl 1- < {tick}> tt Prf ( ap_ Process Prop (MuDia (Modal (cons Label tick (nil Label))) Mutt) Cl) CMD Rule Dia Cl Refine App ? ? (lemma_dia_ccs ? ? ? Cl ? ) ; Goal ?2 Cl {tick} Cl ? m2 Prf( -> : MTRANS (Modal (cons Label tick (nil Label))) Cl Cl) ?1 Cl 1- tt ? m1 : Prf( ap_ State Prop Mutt Cl) CMD Rule True Refine lemma_true; Goal ?2 Cl {tick} Cl ? m2 Prf( -> : MTRANS (Modal (cons Label tick (nil Label))) Cl Cl) CMD Rule Exintro tick Refine LL; Intros -· Refine LL; Intros H; Refine App ? ? (App ? ? H tick); Goal ?3 (tick in {tick}) ? m3 : Prf( and 1\ (Cl t~ Cl) (Eq Bool (ModaLcheck tick (Modal (cons Label tick (nil Label)))) true) (TRANS tick Cl Cl)) CMD Rule Pair Refine App ? ? (App ? ? p_pair ?); Goal ?s tick in {tick} ? m5 : Prf( Eq Bool (ModaLcheck tick (Modal (cons Label tick (nil Label)))) true) ? Cl tick ·4 -> Cl ? m4 : Prf(TRANS tick Cl Cl) CMD Rule Rec Refine Rec; Table 5.1: Proof procedure for ticking clock (part I) CHAPTER 5. CASE STUDY: CONCURRENCY 62 User Level Plastic Level Goal ?5 tick in {tick} ? m5 : Prf( Eq Bool (ModaLcheck tick (Modal (cons Label tick (nil Label)))) true) ?6 tick .Cl tick Cl ? m6 Prf ( --> : TRANS tick (subst (dot tick (var one)) (succ zero) (rec (dot tick (var one)))) Cl) CMD Rule Dot Refine Dot; Goal ?5 tick in {tick} ? m5 : Prf ( Eq Bool (Modal_check tick (Modal (cons Label tick (nil Label)))) true) CMD Rule Singlein Refine App ? ? Eq...refl; Goal (no new goal) (no new goal) CMD Rule end ReturnAll; Table 5.2: Proof procedure for ticking clock (part II) the message is available for receipt and then sends an acknowledgement along the channel ack_from. The corresponding and assistant processes in the interface are defined as follows: SENDER= rec X (send.from-.ack_to.X); MEDIUM= rec X (from.to-.X + ack_from.ack_to-.X); RECEIVER rec X (to.receive-.ack_from-.X); PROTOCOL = (SENDER I MEDIUM I RECEIVER) \{from,to,ack_from,ack_to}; SENDER!= from-.ack_to.(rec X send.from-.ack_to.X); PROTOCOL! = (SENDER! I MEDIUM I RECEIVER)\{from,to,ack_from,ack_to}; The following are the translations of the above definitions in LF: %--------------------------------------------------------------------------- Def SERVICE = rec (dot send (dot receive (var one))) : El Process Def SENDER = rec (dot send (dot from' (dot ack_to (var one)))) : El Process Def SENDER! 
= dot from' (dot ack_to SENDER) : El Process Def MEDIUM = rec (choice (dot from (dot to' (var one))) (dot ack_from (dot ack_to' (var one)))) : El Process Def RECEIVER CHAPTER 5. CASE STUDY: CONCURRENCY = rec (dot to (dot receive' (dot ack_from' (var one)))) Def PROTOCOL hide (par (par SENDER MEDIUM) RECEIVER) (cons Actb fromb El Process (cons Actb ack_tob (cons Actb tob (cons Actb ack_fromb (nil Actb))))) : El Process Def PROTOCOL! hide (par (par SENDER! MEDIUM) RECEIVER) (cons Actb fromb (cons Actb ack_tob (cons Actb tob (cons Actb ack_fromb (nil Actb))))) : El Process 63 %--------------------------------------------------------------------------- The PROTOCOL process has a simple property: it is able to send. We can express the goal of proving this property in the following form: Abletosend: PROTOCOL f- <{send}> tt Table 5.3 (continued in table 5.4) shows a proof of this property on two different levels. It is easy to see that the forms on user level are much simpler than those on LF level. 5. 7.3 Example with infinite state space Now we consider some examples with infinite state space. We choose an example from [Dam, 1995]. This example presents a counter and its property "a counter can count forever". In this thesis we want to prove that the counter is always able to perform up. Use CCS notation, the counter can be expressed in the following form: Cnt = rec x.up.(xldown.Nil) The corresponding processes in our interface are defined as the following: Cnt = rec X up.(X I (down. Nil)); CHAPTER 5. CASE STUDY: CONCURRENCY 64 User Level Plastic Level Goal Abletosend: Claim Abletosend: PROTOCOL 1- < {send}> tt Prf ( ap_ Process Prop (MuDia (Modal (cons Label send (nil Label))) Mutt) PROTOCOL) CMD Rule Dia PROTOCOL! Refine App ? ? (lemma_dia_ccs ? ? ? PROTOCOL! ? ) ; Goal ?2 PROTOCOL {send} PROTOCOL! ? m2 : Prf( MTRANS (Modal (cons -> Label send (nil Label))) PROTOCOL PROTOCOL!) ?1 PROTOCOL! 1- tt ? m1 : Prf( ap_ State Prop Mutt PROTOCOL!) CMD Rule True Refine lemma_true; Goal ?2 PROTOCOL {send} PROTOCOL! ? m2 Prf( MTRANS (Modal (cons -> : Label send (nil Label))) PROTOCOL PROTOCOL!) CMD Rule Exintro send Refine lemmaJExintro ? ? ? send Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ? 3 PROTOCOL s~d PROTOCOL! ? m3 : Prf(TRANS send PROTOCOL PROTOCOL!) CMD Rule Hide Refine Hide; Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ?5 (SENDER I MEDIUM I RECEIVER) ? m5 : Prf ( TRANS send send (par (par SENDER MEDIUM) RECEIVER) -> (SENDER! I MEDIUM I RECEIVER) (par (par SENDER! MEDIUM) RECEIVER))) ?6 not (send in {from,ack_to, ? m6 : Prf ( to,ack..from} V send-in is..false (Or (member Actb Eq-Actb {from,ack_to,to,ack..from}) sendb (cons Actb fromb (cons Actb ack_tob (cons Actb tob (cons Actb ack..fromb (nil Actb)))))) (member Actb Eq-Actb (Comp sendb) (cons Actb fromb (cons Actb ack_tob (cons Actb tob (cons Actb ack..fromb (nil Actb)))))) CMD Rule Hyp Distinct Refine Distinct; Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ?5 (SENDER I MEDIUM I RECEIVER) ? m5 : Prf ( TRANS send send (par (par SENDER MEDIUM) RECEIVER) -> (SENDER! I MEDIUM I RECEIVER) (par (par SENDER! MEDIUM) RECEIVER))) Table 5.3: Proof procedure for simple protocol (part I) CHAPTER 5. CASE STUDY: CONCURRENCY 65 User Level Plastic Level CMD Rule Parl Refine Parl; Goal ?4 send in {send} ? 
m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ?7 (SENDER I MEDIUM ) s~d ? m7 : Prf ( TRANS send (SENDER! I MEDIUM ) (par SENDER MEDIUM) (par SENDER! MEDIUM))) CMD Rule Parl Refine Parl; Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ?s SENDER send SENDER! ? mB Prf ( -> : TRANS send SENDER SENDER!)) CMD Rule Rec Refine Rec; Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) ?g send .SENDER! send ? m9 Prf ( TRANS send -> : SENDER! (subst (dot send (dot from-(dot ack_to (var one)))) (succ zero) (rec (dot send (dot from-(dot ack_to (var one)))))) SENDER!)) CMD Rule Dot Refine Dot; Goal ?4 send in {send} ? m4 : Prf( Eq Bool (Modal_check send (Modal (cons Label send (nil Label)))) true) CMD Rule Singlein Refine App ? ? Eq_refl; Goal (no new goal) (no new goal) CMD Rule end ReturnAll; Table 5.4: Proof procedure for simple protocol (part II) The following is the translation of the above definition in LF: %-------------------------------------------------------------- Def Cnt = Cree (dot up (par (var (succ (zero))) (dot down Nil)))) ; %-------------------------------------------------------------- This is an example with infinite state space. Systems which based on model checker technology are hard to prove properties of the counter. But in our approach to prove this property is not difficult. The property "Always able to perform up" can be expressed in CCS as the following form: CHAPTER 5. CASE STUDY: CONCURRENCY 66 v X. < up > ttl\ [-]X. So "counter has this property" can be expressed as follows: Cnt 1- v X. < up > ttl\ [-]X. According to the semantics of v operator, this goal can be split to the following sub-goals: 3S.S ~( tt 1\ [-]S) (5.1) and Cnt E S (5.2) We take the infinite set { cnt(i) li E Nat } as this S, where cnt(O)=Cnt, cnt(l)=Cnt I (down.Nil), ... ,cnt(i+1)=cnt(i) I down. Nil). In other word, Scan be defined as: to: S ""df).. s: Process 3 n:Nat. Eq s cnt(n) . We can split (5.1) to two separate sub-goals: S ~< up > tt (5.3) and S ~ [-]S (5.4) Sub-goal (5.2) can be proved by the membership of S. Sub-goal (5.3) can be split to and V s E S 3s1 • s ~ s' s' E tt by the semantics of<> operator. Takes' ass ldown.Nil, the two sub-goals are changed \:Is E S.s ~ sidown.Nil (5.5) and sidown.Nil E tt (5.6) Sub-goal (5.6) is proved easily. Sub-goal (5.5) can be proved by induction on natural number. Table 5.5 shows a proof of (5.5) on two different levels. It is easy to see that the forms on user level are similar to standard CCS notation and much simpler than those on LF level. By the semantics of [ ] operator, sub-goal (5.4) is: \:Is E S Vs' 3a.s ~ s' implies s' E S It can be proved by induction on natural number also. So we get the proofs by means of the semantics of 11-calculus formulas and induction. The above example shows that infinite state space is no problem for this approach. We can list the reason as follows: • First, Plastic is not limited to finite state system. It relies on the techniques such as structural induction to prove properties in infinite domains. • Second, CCS is not limited to finite state system. CHAPTER 5. CASE STUDY: CONCURRENCY 67 User Level Plastic Level Goal Alwaysup: Claim Alway sup: Allnat zero Cntn (n: Nat) Prf (TRANS up (Cntn n) (Cntn (succ n) )) CMD Rule Rindn zero Cntn H Refine E...Nat ([nl:Nat]Prf(TRANS up (Cntn n1) (Cntn (succ n1)))); Intros n H ; Goal ?t (Cntn zero) ~ ? 
m1 : Prf (TRANS up (Cntn (succ zero)) (Cntn zero) (Cntn (succ zero))) ?2 (Cntn (succ n)) ~ ? m2 : El (Prf (TRANS up (Cntn (succ (succ n))) (Cntn (succ n))(Cntn (succ (succ n))))) CMD Rule Parl Refine Parl; Goal ?1 ( Cntn zero) ~ ? m1 : Prf (TRANS up (Cntn (succ zero)) (Cntn zero) (Cntn (succ zero))) ?3 (Cntn n) ~ ? m3 : El (Prf (TRANS up (Cntn (succ n)) (E...Nat ( [n1: El Nat] State) Cnt ([nl:El Nat] [S:El State] par S (dot down Nil)) n) (E...Nat ( [n1: El Nat] State) Cnt ([nl:El Nat] [S:El State] parS (dot down Nil)) (succ n)))) CMD Rule Hyp H Refine H; Goal ?I (Cntn zero) up ? m1 Prf (TRANS up -+ : (Cntn (succ zero)) (Cntn zero) (Cntn (succ zero))) CMD Rule Rec Refine Rec ; Goal ?4 up.((Cntn (succ zero)) ? m4 : El (Prf (TRANS up I down.Nil) (subst (dot up up ((Cntn (succ zero)) (par (var (succ zero)) (dot down Nil))) -+ I down.Nil) (succ zero) (rec (dot up (par (var (succ zero)) (dot down Nil))))) (Cntn (succ zero)))) CMD Rule Dot Refine Dot; Goal (no new goal) (no new goal) CMD Rule end ReturnAll; Table 5.5: Proof procedure for Counter's property CHAPTER 5. CASE STUDY: CONCURRENCY 68 • Third, J-t-calculus is not limited to finite state system. In the process of finding a proof, however, users often gain invaluable insight into the system or the property being proved. 5.7.4 Some observations from the examples From the above examples, we make some observations: • Automatic translation between user level and LF level is feasible. This translation is a superset of the implicit syntax mechanisms provided by most proof assistants. Although this example is simple, it already demonstrates several non-trivial features. • Consider the translation of subgoal ?m6 in the example of ticking clock: the user level form is significantly simpler than the LF version. In particular, the LF form uses the term operator subst, which has been programmed as part of the formalization. There is currently no user-level indication or representation of subst: we treat it as an inherent mechanism of the formalization, and hence one that does not need to be shown. The user is interested in concrete processes, not hypothetical ones which are subject to substitution. Hence, uses of subst must be normalised away, to show the term after substitution. Plastic now implements a normalization operation which removes the obvious use of a set of operations by computation. We shall return to this issue in chapter 8. • Almost all rules on the user level correspond to lemmas on LF level. This keeps the correspondence between the two levels explicit and implies that we need not translate every concepts on LF level to the user level, but just those which have some clear user level correspondence. This also makes the translation from LF level to user level feasible. 5.8 Discussion In this chapter, to the domain of concurrency, we have analyzed the characteristics of domain-specific reasoning, formalized the notations of CCS, LTS and J-t-calculus in LF. While we do the formalization we get a better understanding not just on the issues behind producing domain-specific computer assisted reasoning tools, but also on the knowledge of the domain. The architecture, methodology and process of the approach presented in Chapter 4 led us to carry out the case study. Meanwhile this case study enriched them in many aspects. We get the following feedback: • The formalization of a domain may include many aspects such as: CHAPTER 5. 
CASE STUDY: CONCURRENCY 69

- The notations of the domain, which for this case study correspond to the notations of CCS for the domain of concurrency.

- The set of rules at the user level, which is a very important part of the case study and affects the usefulness and convenience of the resulting system.

- The notations for the description of the semantics, which correspond to the notations of LTS in this case study.

- The notations for the description of logical properties, which correspond to the notations of the μ-calculus in this case study.

• The formalization, or its skeleton, is not limited to this case study. For example, the skeleton of the formalization of LTS and the μ-calculus can be used in other domains. This gives a kind of reusability of the formalizations.

We have demonstrated how our approach is used to prove properties of concurrent systems through some simple examples. Although some of them have an infinite state space, their structures are very simple and can therefore be handled in CCS. The case study shows that our approach combines induction, semantic reasoning, a domain-specific interface, and abstraction and composition methods with LFTOP to verify the properties of domain-specific systems. All of the lemmas and inference rules on which the domain-specific reasoning system is based are formally proved in Plastic, and therefore a coherent system which firmly ensures the correctness of proofs at that level is constructed. This case study also shows that LF is suitable as an underlying reasoning basis: although the structure is complicated, domain users can benefit from it without noticing its complexity.

The formalization work is intended to be done by experts in type theory. Nevertheless, we find that allowing users to develop their own lemmas (i.e. to extend the formalization rather than just to work inside it) is useful. Other issues, such as how to improve the understandability of proofs (e.g. representing traces of computation and using natural language to explain proof steps), are also important. As the size of examples increases, we may also need to study techniques to help users organize their proofs and developments, such as allowing multiple contexts for reasoning. These issues are also listed in our future work in chapter 9.

Chapter 6

Case study: Verification of semantic properties of LAZY-PCF+SHAR

Human felicity is produced not so much by great pieces of good fortune that seldom happen, as by little advantages that occur every day.
- BENJAMIN FRANKLIN, AMERICAN STATESMAN

As another case study, in this chapter we use the approach to verify some semantic properties of a functional programming language, LAZY-PCF+SHAR [Seaman and Iyer, 1996]. In [Seaman and Iyer, 1996] the authors present a very good explanation of the need for sharing. We extract some of their explanations here to introduce the basic notions and problems of this case study.

From a theoretical point of view, functional languages are easy to reason about, especially within the framework of call-by-name or call-by-value evaluation. But implementing a functional language strictly according to call-by-name causes problems, especially of efficiency, because arguments that are referred to more than once are copied and possibly re-evaluated each time they are needed. Since functional languages are referentially transparent, this value will always be the same, so the re-evaluation is wasted work.
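The cost of this duplication is easy to see with a small experiment. The following Haskell program (our own illustration, not part of the thesis material) models a call-by-name argument as a computation that is re-run at every reference and counts how often it is evaluated.

import Data.IORef

-- A call-by-name argument modelled as a computation that is re-run at every
-- reference; the counter records how often the "argument" is evaluated.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let arg = do { modifyIORef counter (+1); return (2 + 3) }
  x <- arg                          -- first reference: evaluate the argument
  y <- arg                          -- second reference: evaluate it again
  print (x + y)                     -- 10
  readIORef counter >>= print       -- 2: the argument was evaluated twice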
In practice this unnecessary re-evaluation is usually avoided by sharing the argument among all of its references, so that there is only one copy of the argument at any time. When the value of the argument is first needed, the argument is evaluated and the original copy of the argument is replaced by its value; this value is then used for later references to the argument. Sharing can therefore be characterised by a lack of duplication of the argument and by the updating of the original copy of the argument when it is evaluated. This method of evaluation is usually referred to as call-by-need. It provides the same resulting values as call-by-name, but behaves differently because unnecessary re-evaluation is avoided.

For a call-by-need implementation, additional improvements may be made in the usual way, by analyzing the behavior of given programs in the implementation and performing program transformations which improve the behavior of a program without affecting its results. Since many optimisations depend on the behavior of the program, the sharing involved in implementing lazy evaluation must be taken into consideration. Thus an operational model of the call-by-need implementation which is easy to reason with is essential for carrying out analysis of programs. In [Seaman and Iyer, 1996] such a model is presented as an operational semantics of lazy evaluation with sharing; this semantics is also proven to yield the same results as the call-by-name semantics.

Substitution is an operation defined externally to the semantics rules, and the fact that function application is defined in terms of substitution is one of the main factors behind the simplicity of the call-by-name and call-by-value semantics. But substitution is also the operation that allows an argument to be duplicated, which makes it unsuitable for formalizing sharing. Thus, in order to avoid duplication, an operational model of lazy evaluation with sharing must be able to determine explicitly when and how an argument is substituted. This can be done by incorporating the actions which carry out substitution explicitly into the semantics rules. Fortunately, much work has already been done on explicit substitutions, for example in [Abadi et al., 1991] and [Field, 1990]. These papers consider the reduction system λσ, which includes syntax and rules that carry out substitution explicitly. However, the sharing occurring in lazy evaluation implementations is not captured by this system, because it duplicates arguments and does not update them upon evaluation. In other words, these papers study the reduction system with emphasis on the optimality of reduction strategies, while the goal of [Seaman and Iyer, 1996] is to model more closely the sharing found in implementations, in order to have a more accurate model for analysis.

The paper [Seaman and Iyer, 1996] fixes the reduction strategy via the operational semantics and emphasizes the suitability of the system for reasoning about sharing. This is why we followed Seaman's work in Coq [Seaman and Felty, 1993] and chose verification of properties of the operational semantics of a lazy functional language (called LAZY-PCF+SHAR) as another domain of interest for a case study. Since this domain uses the concept of explicit substitution to deal with the problem of substitution, it is very different from our previous case study on concurrency.
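Before turning to the formal development, the two characteristics of sharing described above, no duplication of the argument and updating of its original copy once evaluated, can be mimicked with a single mutable cell. The sketch below (again our own illustration, not LAZY-PCF+SHAR itself) evaluates the argument once, stores the value, and reuses it for later references, in contrast to the call-by-name sketch shown earlier.

import Data.IORef

-- The same computation, but shared through a single updatable cell: it is
-- evaluated at most once and the original copy is replaced by its value.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let compute = do { modifyIORef counter (+1); return (2 + 3) }
  cell <- newIORef Nothing
  let force = do
        c <- readIORef cell
        case c of
          Just v  -> return v                  -- reuse the stored value
          Nothing -> do
            v <- compute                       -- first demand: evaluate
            writeIORef cell (Just v)           -- update the shared copy
            return v
  x <- force
  y <- force
  print (x + y)                     -- 10, the same result as call-by-name
  readIORef counter >>= print       -- 1: the argument was evaluated only once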
Because the proof technique of this domain is very similar to the way LF is used in Plastic, we can use Plastic directly and need not design a new interface for this domain. This tells us that for some domains we need not define new interfaces: the interface of Plastic itself is adequate for them. Another reason for studying this kind of domain is to check the power of Plastic as an implementation of LF, and to see whether or not the style of explicit substitution in LAZY-PCF+SHAR greatly affects reasoning in Plastic.

6.1 The need for explicit substitutions

Substitution can be used to explain the call-by-name and call-by-value methods, as follows. To evaluate the term ((λx:t.e) e') in call-by-name order, just substitute the term e' for x in e and evaluate. To evaluate the term ((λx:t.e) e') in call-by-value order, first evaluate e' to v', then substitute v' for x in e and evaluate. An inference rule may be used to describe the formal semantics of application: in order to conclude what is below the line, the premises above the line must be true. If evaluating a term e results in a term v, this is denoted e ⇓ v. Formally, the rules for the evaluation of application can then be described as in figure 6.1.

                     e[e'/x] ⇓ v
  call-by-name:   -------------------
                   ((λx:t.e) e') ⇓ v

                   e' ⇓ v'    e[v'/x] ⇓ v
  call-by-value:  ------------------------
                     ((λx:t.e) e') ⇓ v

              Figure 6.1: call-by-name and call-by-value

From Figure 6.1 we can see that this definition of substitution simplifies the formalization of these evaluation orders. But it is still not clear how this definition of substitution could be used to describe call-by-need evaluation. The original argument, e', should be substituted for the occurrence of x which will be accessed first, and the result of evaluating e' should be substituted for any remaining occurrences. However, before the program is run it is not known which occurrence of x will be evaluated first, and the argument e' should be evaluated only if it is needed. The reason is that the details of the actual process of substituting a term for a variable in another term are abstracted away. So, in order to implement lazy evaluation, a suitable semantics needs to be able to control the substitution process so that the substitution and evaluation of arguments can take place while the function body is being evaluated. Fortunately, explicit substitutions were introduced in [Abadi et al., 1991] and [Field, 1990] to implement the idea of incorporating rules into the semantics which directly carry out substitution. They are used to define systems of rewrite rules for the lambda calculus with no prescribed evaluation strategy. Unfortunately, these rules do not capture sharing, though they incorporate explicit rules to carry out substitution. In spite of this, explicit substitution provides a mechanism for a relatively simple formalization of lazy evaluation. The paper [Abadi et al., 1991] gives an example of such a system, called λσ, using explicit substitutions; this calculus evaluates λ-terms which may include unevaluated substitutions.

6.2 Capture of sharing

Though the operational semantics for call-by-need have some similarity to the λσ-calculus, they differ from explicit substitutions in the following ways in order to capture sharing.
CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 73 o Substitutions are not allowed to occur within an expression by the syntax of the terms of the language. Instead, a term is evaluated with respect to a single substitution at the outermost level, called the operational semantics environment. This environment is a list of variable bound to expressions which corresponds to an explicit substitution. • Not as in app rule of the ..\a-calculus, which destroys sharing, in function application environments are not duplicated and distributed to subexpressions. This is the second difference which captures the first characterisation of sharing. • The expression that a variable points to in the environment may be replaced by the value that it evaluates to. This is the third difference which captures the second characterisation of sharing. So the original copy of an argument is allowed to be replaced by its evaluated value. Not as in the ..\a-calculus, environments are not eliminated upon reaching a value, but are maintained throughout the evaluation. This ensures that the value can be used later. Thus a relation between expression-environment pairs can be used to express an evalu- ation. This pair of an expression with an environment is called a configuration. A configu- ration for an expression e and an environment A is denoted as < e, A >. A list of binding of typed variable to expressions gives the structure of an environment. It can be formally described as the following form: A::= [ ]l[x:t >---> e]A For convenience, if an environment containing more than one binding, the bindings will be separated by commas instead of square brackets. The evaluation relation between a program and its final value in terms of inferences and axioms is defined as a natural semantics [Plotkin, 1981] [Kahn, 1987]. This natural semantics can be used to define the operational semantics of LAZY-PCF +SHAR. In this framework, since an expression is evaluated directly to its final value, so this style of semantics is often referred to as "big-step" or "one step" semantics. Properties or theorems about the evaluation relation defined with these semantics can be proved by induction on the height of the proof justifying the evaluation relation. 6.3 Domain analysis LAZY-PCF+SHAR is a lazy version of the functional language PCF(Programming language for Computable Functions) extended by adding explicit substitution in order to formalize the semantics of lazy evaluation. Its semantics are defined as inference rules in the style of natural operational semantics [Kahn, 1987] or "big step" semantics which is called deduc- tively defined systems. Usually the verification of this kind of properties is done by hand or on paper, but recent years the work can be done by interactive theorem provers. We try this for the case study in this chapter to show that Plastic is powerful enough to be this CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 74 kind of theorem provers. Meanwhile we shall get more information about the merits and shortcomings in doing reasoning by using Plastic. We also want to compare the style and characterization of Plastic with those of Coq and Lego. 6.3.1 Syntax of the language Types: t Expressions: e 0 I true I false I x I succ(e) I pred(e) I iszero(e) I if(et, e2, e3) I )..x:t.e I e1 e2 I J.lX:t.e I < e, [x:t ,__, et] > Figure 6.2: The Syntax of LAZY-PCF+SHAR The syntax of LAZY-PCF +SHAR is shown in Figure 6.2. 
It includes constants, variables, the conditional, lambda abstraction, primitive functions, function application, J.l operator and closure which acts as the syntactic vehicle for implementation of lazy evaluation. 6.3.2 Operational semantics of the language The type judgement rules of this language are listed in figure 6.3 where r is a type envi- ronment which is a mapping of variables to types. r[s/x] denotes a perturbed environment which respects r on all variables other than x, and binds x to type s. We say that an expression e has type t in type environment r if r f- e:t can be justified by inferences based on the type judgement rules. CO: -- CT: :---:---:--""7 f- O:nat f- true:bool CF: VAR: f- false:bool r[t/x] f- x:t CS: r f- e:nat CP: r f- e:nat r f- succ(e):nat r f- pred(e):nat CZ: r f- e:nat COND: r f- e 1 :bool r f- e2 :t r f- e3 :t r f- iszem(e):bool r f- if(et, e2, e3):t ABS: r[s/x] f- e:t APP: r f- e1:s-+ t r f- e2:s r f- >.x:s.e:s-+ t r f- ete2:t REC: r[t/x] f- e:t CLO: r f- e1:s r[s/x] f- e:t f f- J.lX:t.e:t f f-< e, [x:s ,__, et] > :t Figure 6.3: Type rules CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF +SHAR 75 OS_CO: OS_CT: ------,,.--------,-----~ < 0, A > 1 < 0, A > < true, A > 1 < true, A > OS_CF: OS__L: -:------:-------,---.,.---- < false, A> 1 < false, A > < >..x:t.e, A> 1 < >..x:t.e, A > os_po: . < e, A> 1 < 0, A' > OS_P: < e, A > 1 < succ(e'), A' > < pred(e), A> 1 < 0, A'> < pred(e), A> 1 < e', A'> < e,A > 1 < O,A' > OS_ZF·. < e,A > 1 < succ(e'),A' > OS_ZT: ( ) ( ) < iszero e ,A> 1 < true,A' > < iszero e ,A> l OS_S: < e,A > 1 < e',A' > < succ(e), A> 1 < succ(e'), A'> < e, A> l < e', A'> OS_Varl: [ I [ I < x, x:t >---> e A > l < e', x:t >---> e' A' > < y, A> l < e', A'> 1\y ¢. x OS_Var2: [ I [ I --->eA> l --->eA'> < e1, A> 1 < >..x:s.e, A'> < e[nxjx], [nx:s >---> e2IA' > l < e', A"> OS_Appl: ) <(e1e2,A> l Ifl'r < e1,A > l < e2,A' > l < e',A" > ue: --~--~~-~~~~~-~~-~-­ < if(et,e2,ea),A > l < e',A" > I L'r.\ 1 < e1,A > l < ea,A' > l < e',A" > trase· ------~--~~-~-~~----- . < if(et,e2,ea),A > l < e',A" > R < e[nxjx], [nx:s >---> JlX:t.eiA > 1 < e',A' > oc: A 1 'A' < JlX:t.e, > < e , > CL: < e, [x:t >---> et]B > 1 < e', [x:t >---> e~IB' >, e' is neither nat nor bool << e, [x:t >---> e1l >, B > 1 << e', [x:t >---> e~l >, B' > CL': < e, [x:t >---> etiB > 1 < e', [x:t >---> e~IB' >, e' is nat or bool << e, [x:t >---> e1l >, B > 1 < e', B' > Figure 6.4: The operational semantics of LAZY-PCF+SHAR CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 76 The rules in Figure 6.4 are rules to reflect the operational semantics of the language. There is a set N F of expressions which represents the normal form of expressions in the language. It is as follows: N F = 0 I true I false I succn(o) I F F = >..x:t.e I < F, [x:t ,..... e1] > 6.4 Special features of this domain One special feature of this domain is the logic for proving properties. In this domain the rules for operational semantics are not enough to reflect the requirement of the proving of properties. How to design the tool for this kind of domain specific reasoning? We know that many domains have not clear logic for proving properties. Domain users usually use an informal logic for their reasoning. In fact, LF and UTT themselves are good candidate for being a logic. For this kind of domains, we discuss the representation mainly, and let the logic to be the logic of that of LF or UTT. So we use Plastic directly in this domain by using Plastic's interface (i.e. 
the interface customized in Proof General). Therefore the main work for this kind of domains is the formalization work. 6.5 An implementation of LAZY-PCF+SHAR in LF We use LF as the meta-language to represent expressions of the object language LAZY- PCF +SHAR. In our encoding here, we focus on the reflection of the explicit substitution. 6.5.1 Translation from LAZY-PCF+SHAR expressions and types to LF expressions 6.5.1.1 Inductive definition of the syntax of LAZY-PCF+SHAR The following module Syntax is our inductive definition of the syntax of LAZY-PCF +SHAR in LF. > module Syntax where; (****************************************************************************) (* syntax.lf: Inductive definition of the syntax (* of LAZY-PCF+SHAR (* Includes types, variables and terms (****************************************************************************) > import Pi; > import Nat; CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 77 > Inductive > [Ty : Type] > Constructors nat_Ty : Ty] [ bool_Ty : Ty] > > > [ arr: (e1: Ty)(e2: Ty)Ty]; > Inductive > [Vari : Type] > Constructors > [X : (i: El Nat)Vari] ; > Inductive > [Tm : Type] > Constructors > > > > > > > > > > > > : : (tm1 (tm1 (v : (tm1 (tm1 Tm)(tm2 : Tm)(tm2 : El Vari)Tm] Tm)Tm] Tm)Tm] Tm)Tm] Tm)(tm3 natural numbers boolean values function types zero true false lambda abstractions function applications if e1 then e2 else e3 variables successor predecessor zero test fixed point operator Tm)Tm] ; > --closure, e]> 6.5.1.2 Translation of operational semantics rules The following module OSrules expresses the operational semantics rules in LF. > module OSrules where; (****************************************************************************) ( * OSrules .lf (* CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 78 (* This file contains the definition of the operational (* semantics rules for LAZV-PCF+SHAR, as well as a definition (* of the Ap function and some related properties. (****************************************************************************) > import Typecheck; > import Rename; (************************************************) (* OScons (abbrev.) (OScons v t e A) (cons VTT ((v,t),e) A) (************************************************) > [OScons = [v: El Vari] [t : El Ty] [e : El Tm] [A: El OS_env] > cons VTT (pair VT Tm (pair Vari Ty v t) e) A] ; (********************************************************) (* Ap: (* (* (* (* (* (* (* *) Inductively defines the relation characterised by *) the Ap function. *) *) (Ap a FA F' n t) <--> Ap(F,a)=a]> *) New variables may not come from *) the Domain of OS env A. *) *) (********************************************************) > Inductive > [Ap : (ptml,ptm2: El Tm)(pose: El OS_env)(ptm3: El Tm) > (pv: El Vari)(pt: El Ty) El Prop] > Relation_LE > Constructors > [Ap_abs: (nv,v:El Vari)(t: El Ty)(a,e,ne: El Tm)(A: El OS_env) > (pl: El (Prf(not (member Vari nv (OS_Dom A))))) > (p2: El (Prf(Rename nv v e ne))) > > > > [Ap_clos: Prf( Ap a Cabs v t e) A ne nv t)] (n,v:El Vari)(s,t: El Ty)(a,e,ne,el: (pl: Prf (Ap a e (OScons v s el A) Prf (Ap a (clos e v s el) A (clos El Tm)(A: El OS_env) ne n t)) ne v s el) n t)]; CHAPTER 6. 
CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+Sl-IAR 79 (****************************************) (* OSrules *) (* *) (* Definition of Operational Semantics *) (* *) (* -> *) (* *) (****************************************) > Inductive > [OSred : (confl:El Config)(conf2: El Config)El Prop] > Relation_LE > Constructors > [OS_CO: (A: El OS_env)Prf(OSred (cfg o A) (cfg o A))] > [OS_CT: (A: El OS_env)Prf(OSred (cfg ttt A) (cfg ttt A))] > [OS_CF: (A: El OS_env)Prf(OSred (cfg fff A) (cfg fff A))] > [OS_L: (A: El OS_env)(e : El Tm)(t: El Ty)(x: El Vari) > > > > > > > > > > > > > > > [OS_PO: [OS_P: [OS_ZT: [OS_ZF: [OS_S: Prf(OSred (cfg (abs x t e) A) (cfg (abs x t e) (A,A1: El OS_env)(e: El Tm) (p1: Prf(OSred (cfg e A) (cfg o A1))) Prf(OSred (cfg (prd e) A) (cfg o Al))] (A,Al: El OS_env)(e,el: El Tm) (pl: Prf(OSred (cfg e A) (cfg (sue e1) A1))) Prf(OSred (cfg (prd e) A) (cfg e1 Al))] (A,A1: El OS_env)(e: El Tm) (p1: Prf(OSred (cfg e A) (cfg o Al))) Prf(OSred (cfg (is_o e) A) (cfg ttt Al))] (A,A1: El OS_env)(e,el: El Tm) (p1: Prf(OSred (cfg e A) (cfg (sue el) A1))) Prf(OSred (cfg (is_o e) A) (cfg fff Al))] (A,Al: El OS_env)(e,el: El Tm) (p1: Prf(OSred (cfg e A) (cfg e1 Al))) > Prf(OSred (cfg (sue e) A) (cfg (sue el) A1))] A))] > [OS_Var1: (A,Al: El OS_env)(e,el: El Tm)(t:El Ty)(x: El Vari) > (pl: El (Prf(not (member Vari x (OS_Dom A))))) > (p2: Prf(OSred (cfg e A) (cfg el Al))) > Prf(OSred (cfg (var x) (OScons x t e A)) > (cfg el (OScons x t e1 A1)))] > [OS_Var2: (A,Al: El OS_env)(e,el: El Tm)(t:El Ty)(x,y: El Vari) > (pl : El (Prf (not (Eq Vari x y)))) CHAPTER 6. CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF +SHAR 80 > (p2: El (Prf(not (member Vari x (OS_Dom A))))) > (p3: Prf(OSred (cfg (var y) A) (cfg ei Ai))) > Prf(OSred (cfg (var y) (OScons x t e A)) > (cfg ei (OScons x t e Ai)))] > [OS_Appl: (A,Ai,A2: El OS_env)(ei,e2,eni,en2,enf: El Tm) > (t:El Ty)(n: El Vari) > (pi : Prf (OSred (cfg ei A) (cfg eni Ai))) > (p2: El (Prf(Ap e2 eni A en2 n t))) > (p3: Prf(OSred (cfg (clos en2 n t e2) Ai) (cfg enf A2))) > Prf(DSred (cfg (appl ei e2) A) (cfg enf A2))] > [DS_IfTrue: (A,Ai,A2: El OS_env)(ei,e2,e3,en: El Tm) > (pi : Prf (OSred (cfg ei A) (cfg ttt Ai))) > (p2 : Prf (OSred (cfg e2 Ai) (cfg en A2))) > Prf(DSred (cfg (cond ei e2 e3) A) (cfg en A2))] > [OS_IfFalse: (A,Ai,A2: El OS_env)(ei,e2,e3,en: El Tm) > (pi : Prf (OSred (cfg ei A) (cfg fff Ai))) > (p2 : Prf (OSred (cfg e3 Ai) (cfg en A2))) > Prf(OSred (cfg (cond ei e2 e3) A) (cfg en A2))] > [OS_Fix: (A,Ai: El OS_env)(e,ei,en: El Tm)(t:El Ty)(x,nx: El Vari) > (pi: El (Prf(not (member Vari nx (OS_Dom A))))) > (p2: El (Prf (Rename nx x e ei))) > (p3: Prf(OSred (cfg (clos ei nx t (fix x t e)) A) > (cfg en Ai))) > Prf(OSred (cfg (fix x t e ) A) (cfg en Ai))] > [OS_CL: (A,A1: El OS_env)(e,e1,en,e2: El Tm)(s,t:El Ty)(x: El Vari) > (p1: Prf(OSred (cfg e (OScons x t e1 A)) > (cfg en (OScons x t e2 A1)))) > (p2: El (Prf (TC (OS_Dom_ty (OScons x t e1 A)) en s))) > (p3: El (Prf (not (or (Eq Ty s nat_Ty) (Eq Ty s bool_Ty))))) > Prf(DSred (cfg (clos e x t ei ) A) (cfg (clos en x t e2) A1))] > [OS_CL': (A,Ai: El OS_env)(e,e1,en,e2: El Tm)(s,t:El Ty)(x: El Vari) > (p1: Prf(OSred (cfg e (OScons x t ei A)) > (cfg en (OScons x t e2 A1)))) > (p2: El (Prf (TC (OS_Dom_ty (OScons x t e1 A)) ens))) > (p3: El (Prf (or (Eq Ty s nat_Ty) (Eq Ty s bool_Ty)))) > Prf(DSred (cfg (clos e x t e1 ) A) (cfg en A1))]; CHAPTER 6. 
CASE STUDY: VERIFICATION OF SEMANTIC PROPERTIES OF LAZY-PCF+SHAR 81

6.5.2 An example

Using the above definitions we can prove semantic properties of LAZY-PCF+SHAR. We have proved many properties related to LAZY-PCF+SHAR, and the successful proofs of these properties reflect the power and suitability of Plastic as the reasoning tool for this domain. To show this, we first introduce some definitions.

Definition 13 (Type Context of an Environment)
Context([ ]) = ⊥
Context([x:t ↦ e]A) = Context(A)[t/x]
where ⊥ is the mapping that is undefined for every variable.

Definition 14 (Dom) Dom(H) is used to denote the domain of a context H, and Dom(A) is used to denote the set of variables which have bindings in the operational-semantics environment A.

Definition 15 (Context extension)
1. H is an extension of H;
2. if H' is an extension of H and x ∉ Dom(H'), then H'[t/x] is an extension of H.
We can also define this more formally as:
1. H ⊢ H;
2. if H' ⊢ H and x ∉ Dom(H'), then H'[t/x] ⊢ H.

The following is a semantic property described in LF:

Ap(a, fun, A) = ⟨b, [n:t ↦ a]⟩ → n ∉ Dom(A)

This property states the following fact: when applying a function fun to an expression a, if the environment variable n is used to represent the expression a, then n must not be in the environment before the application. We give this property the name "ApNewVar", and present a proof in Plastic as follows. The motivation for showing this code here is just to give the reader the flavour of the proof; for further understanding please refer to the manual of Plastic [Callaghan, 2000a].

> Claim ApNewVar : (a,fun,b: El Tm)(A: El OS_env)(n : El Vari)(t: El Ty)
>   (p1: El (Prf (Ap a fun A b n t)))
>   Prf (not (member Vari n (OS_Dom A)));
> Intros a fun b A n t p1;
> Refine E_Ap ([a,fun:El Tm] [A: El OS_env] [b:El Tm] [n: El Vari] [t:El Ty]
>   Prf(not (member Vari n (OS_Dom A)))) ? ? a fun A b n t p1;
> 2 Intros nv;
> Intros v t1 tm1 e ne A1 pr1 pr2;
> Refine pr1;
> ReturnAll;
> Intros n1 v s t1 a1 e ne e1 A1 pr1 pr2;
> Refine LL;
> Intros H;
> Refine App ? ? pr2 ?;
> Refine App ? ? (p_inr ? ?) ?;
> Refine H;
> ReturnAll;
> ApNewVar;

Now we discuss the proof of a main theorem, the Subject Reduction theorem. First, we need to give some necessary definitions and lemmas.

Definition 16 (Valid environments)
1. [ ] is a valid environment.
2. If A is a valid environment and Context(A) ⊢ e:t, then [x:t ↦ e]A is a valid environment.

This definition implies that the free variables of an expression bound in a valid environment must be bound in the remainder of the environment. This is due to the fact that if an expression has a type in some type context, then the free variables of that expression occur in the domain of the type context. The definition of valid environments can be extended to configurations by requiring that the environment of a configuration is valid and that the expression of the configuration has some type in the type context of the environment.

Definition 17 (Valid configurations) If A is a valid environment and, for some t, Context(A) ⊢ e:t, then ⟨e, A⟩ is a valid configuration.

This concept is very important, because the operational semantics are designed to yield meaningful results only when they are applied to valid configurations.

Lemma 6.5.1 If ⟨e, A⟩ ⇓ ⟨e', A'⟩, then Context(A') ⊢ Context(A).

Proof The proof is by induction on the height of the inference justifying ⟨e, A⟩ ⇓ ⟨e', A'⟩. Q.E.D.
This lemma shows that if one configuration evaluates to another configuration, then the type context of the second environment extends the type context of the first environment.

Theorem 6.5.1 (Subject Reduction Theorem) If Context(A) ⊢ e:t, A is valid, and ⟨e, A⟩ ⇓ ⟨e', A'⟩, then Context(A') ⊢ e':t and A' is valid.
Proof The proof is carried out by case analysis on the inference justifying ⟨e, A⟩ ⇓ ⟨e', A'⟩, together with the relevant inductions. The proof is very large; please refer to Appendix A for further details. Q.E.D.

The theorem shows that evaluation preserves the type. In other words, the type of an expression is the same as the type of its normal form.

6.6 Discussion

Generally speaking, we have done the following tasks in the case study for this domain.
• An analysis of the concepts of the domain, especially of the features of functional programming and explicit substitution;
• A formalization of the concepts of the domain; this includes the definitions of the relevant concepts in LF, the proofs of the relevant lemmas, etc.
• An explanation of how to do reasoning in this domain; this includes the proofs of many relevant domain properties.

This case study also gives us a deep understanding of the application of the proof assistant Plastic. We learned a lot about the features of Plastic while doing domain-specific reasoning.
• The case study shows that proof assistants are directly suitable for some domains. Our approach does not exclude this direct use. For this kind of domain, the suitability of our approach depends on the power and suitability of the underlying proof assistant (here, Plastic). Our conclusion is that the Plastic system is qualified to serve as an underlying system.
• The case study can also be seen as a large application of Plastic, because it includes a lot of formalization work and proof work.
• The metavariable mechanism provided by Plastic is very useful in doing proofs. It gives us a very flexible means of conquering difficult problems. For example, using this mechanism we can prove temporary lemmas within the proof of the main property. This agrees with the way people naturally reason.
• Compared with proofs by hand, proofs in Plastic are stricter and more convincing, but they are not as understandable as proofs by hand. For this domain the shortcoming can be relieved by using a well-defined formalization of the domain; for a domain such as the one discussed in chapter 5 it can be relieved by using a well-defined domain-specific interface.
• Compared with Coq and Lego, doing proofs in Plastic is overloaded with details. The reason is that Plastic is a concise implementation of LF and does not come with many accompanying modules. But this is also a merit of Plastic, because it forces the user to customize a fitting set of tools for each application, and it prevents users from relying on extra properties inherited from the proof assistant.

Chapter 7
The interface

Only those who have the patience to do simple things perfectly ever acquire the skill to do difficult things easily.
- FRIEDRICH SCHILLER, GERMAN DRAMATIST AND POET

In this chapter we investigate the aspects of the interface related to this approach. We focus on the principles of its design and implementation.
7.1 Design principle

7.1.1 General principles for designing a domain user interface

Jakob Nielsen [Nielsen, 2005] presented ten general principles for the design of user interfaces. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines. The following are the ten general principles.
• Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
• Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
• User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
• Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
• Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place.
• Recognition rather than recall: Make objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
• Flexibility and efficiency of use: Accelerators - unseen by the novice user - may often speed up the interaction for the expert user, so that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
• Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
• Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
• Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

The above ten principles give us a good guide for designing a general interface. But for different domains and different kinds of users the relevant interfaces should have their own principles.

7.1.2 Principles for designing a reasoning interface based on LF

We follow the above principles and consider the speciality of our system. The design of an interface to help LF-based reasoning depends on the intended users. Novices need some way to define the goal, to view the result of the reasoning and to provide reasoning guidance. They want a simple interface with limited functionality, so that they do not become confused and/or issue instructions at variance with their intentions. More experienced users may also require ways to define new theories, to browse through libraries of theorems, definitions, etc., and to switch between one part of a proof attempt and another. System developers want access to the underlying system.
They want multiple views onto the underlying system and reasoning process, and want rich functionality. So different focuses lead to different styles of interface. But in general a good interface must: assist users to understand the current reasoning attempt; provide mechanisms for them to interact with the reasoning process; avoid bewildering them with too much information, while providing what is required; and help them explore their options without imposing too high a cognitive load.

For our design of the domain-specific interface, we focus on the following issues:
• Use LF as the underlying meta-theory.
• Provide domain-specific user-friendly notations and reasoning that require no acquaintance with the underlying meta-theory.
• Allow customization to many domains.

So, the major design principles of LFTOP are as follows:
• Suitable for several domain-specific reasoning systems without many changes to the code.
• Support communication between the different levels.
• Reuse of customizing steps.
• A high-level user language with automated translation into the underlying meta-language.
• Support for a variety of user interfaces (GUI, command line).
• Multiple views: different views make different information and manipulations of information explicit and easy to understand. A declarative representation of a proof, such as a proof term, can have advantages over a procedural representation of a proof, such as a list of tactics. Representing a proof at a high level of abstraction may make its structure clearer. Domain-specific proof support may extend as far as providing views which visualize objects at the level of domain users.

More concretely, we are interested in the following principles:
Principle 1: There should be a number of complementary views of the proof construction and the user should be able to choose to see any number of the views simultaneously.
Principle 2: Within any view the user should be able to invoke operations that are meaningful in that view.
Principle 3: For multiple-part commands the interface should provide defaults for any variables that the user does not specify.
Principle 4: If there is a choice for the default values then the option which results in the simplest proof step and the easiest to undo should be chosen.
Principle 5: There should be a high level of flexibility in how the user can articulate commands to the prover.
Principle 6: The user interface should support the user by displaying only information that is relevant in the current state.
Principle 7: The user interface should support several concurrent proof constructions.

We can divide these into the following three levels:
• At the first, most immediate level, an interface should be designed to make the customary interactions with the theorem prover as convenient as possible.
• At a second, somewhat deeper level, an interface should provide supplementary services in the theorem prover itself.
• At a third, even deeper level, an interface should provide derived proof rules that allow the user to reason in familiar ways, e.g. using their favourite logic and syntax.

In order to implement a useful interface, we have done the following:
• The translation from the user-level domain-specific language to LF.
• The translation from user-level commands to Plastic commands, tacticals or proved lemmas.
• The maintenance of the correspondences between the two levels. These include mappings between the rules, definitions, and induction schemas used by both levels, as well as the correspondences between explicit terms used in the proofs performed by both levels.
• The communication protocols between the two levels.

Our design of the interface is implemented in Haskell in two different ways. One uses Proof General as the base tool; the other is implemented directly in Java.

7.2 ULPIP: a protocol for communications between user-level and Plastic-level

Communications between the user level and the Plastic level need a protocol. We have designed a protocol called ULPIP for conducting the User-Level to Plastic-level Interactive Proof. A proof using this protocol has the following parts:
• The proof begins by issuing a target claim (on the different levels the claim takes different forms);
• The proof proceeds by successive proof steps;
• The proofs on the different levels are kept in correspondence.

This protocol should have good extensibility to different domains. Our case studies try to confirm this point.
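As a concrete illustration of the first point, the same target claim takes a different form on each level (the translation itself is the subject of Chapter 8). At the user level a claim is written in the domain notation, for example

    pname: P ⊢ F

for a process P and a μ-calculus formula F, while the corresponding Plastic-level claim has the form

    Claim pname : Prf (ap_ State Prop ⟦F⟧ ⟦([ ], P)⟧);

ULPIP carries both directions of this correspondence between the levels.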
7.2.1 Usage of the eXtensible Markup Language (XML)

XML is a descriptive markup language. It is widely used in the areas of data exchange and storage, and it is now one of the main tools for publishing and assembling information on the internet. XML provides both programmers and document authors with a friendly environment. XML's rigid set of rules helps make documents more readable to both humans and machines. XML is extensible: it allows developers to create their own DTDs [Laurent, 1999], which create extensible tag sets and can be used for many applications. This gives us a clear, flexible means to represent and understand a protocol. Meanwhile, Aspinall's work [Aspinall, 2000] [Aspinall and Lüth, 2003] inspired us to use XML for this purpose. In fact, the main reason why we use XML to describe the protocol is that it offers the following properties that we want:
• Platform independence: XML is platform independent. If there is an XML parser available on the system, the XML description can be used. If there is no suitable XML parser available for the target platform, XML is so simple that writing one's own parser is fairly easy;
• Consistency: through the use of a DTD, XML can be kept consistent. A DTD specifies a set of rules for the XML file to follow;
• Rapid prototyping: using a stylesheet the results can be seen immediately in a browser;
• Constraint definitions: XML can contain constraint definitions, both for the form of the XML itself and for external resources;
• Easy extensibility: because XML is a metalanguage, it is by nature an extensible language;
• Reusability: it is relatively easy to fit an existing piece of XML into another.

7.2.2 DTDs for XML documents

The basic unit of an XML document is the element. An element consists of a start tag, contents and an end tag. The start tag is at the beginning of the element and has the form <tag-name>; the end tag is at the end of the element and has the form </tag-name>. Elements can be nested.

A document type definition (DTD) is usually used to present a set of constraint rules for an XML document. A constraint rule is called a declaration of an element. The DTD of an XML document indicates the legal elements, the correct order of the elements and some constraint rules which the elements should satisfy. A declaration of an element consists of parts quoted by "<>"; "!ELEMENT" is followed by the name of the element.
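For example, a declaration of the form <!ELEMENT goalname (#PCDATA)> states that a goalname element may contain only parsed character data. The element name goalname is taken from our protocol, but the declaration is shown here only to illustrate the syntax just described; it is not a verbatim extract from our DTD.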
The structure of an XML document can be represented by a tree. The root node is the root element of the DTD, the other nodes are the other elements of the DTD, and the leaves of the tree are of the basic element types (#PCDATA or #CDATA).

We created our own DTDs for the definition of the protocol. DTDs present a clear picture of the protocol syntax and, especially with a validating parser, can enforce very precise syntactic requirements. Flexibility in adding or changing a protocol is particularly important when designing new functions. This is an area where the XML approach really stands out. It is a very simple matter to modify the DTDs, change a dispatch table in the code, and test a new feature or command; it is certainly much easier than modifying ad hoc parsing code. In fact, this flexibility invites software re-use as well.

7.2.3 The DTD for the protocol

We benefited from XML and HaXml (a Haskell package for XML) in our protocol design and implementation. Using a DTD file the messages can be divided into several classes, but they fall into two parts: those sent to Plastic and those originating from Plastic. Messages sent to Plastic include configuration commands, actual proof commands and commands for inspecting various aspects of Plastic's state (e.g. the entity %reqcmd in the DTD file). Messages sent from Plastic consist of error dialogues or status displays and messages to configure user-level components (e.g. the entity %respcmd in the DTD file).

[The DTD file for our simple protocol is listed over several pages in the original and is not reproduced here.]

The reason why we include DTDs in XML documents is that the DTD is a vocabulary of the XML elements that we use. It lets us validate our XML documents and it also describes what kind of element, for example, the 'goalname' element is (e.g. is it PCDATA?). From the contents of the DTD file we can see that this design has some advantages in readability of the code, both to humans and to machines. This provides a good foundation for the concrete implementation.

To model our protocol faithfully in Haskell, the tool HaXml is used. From the above DTD file, HaXml generates a series of Haskell datatypes, one for each element, along with functions to read and write XML. So the type security given by the DTD extends into our program, detecting the reception of invalid XML immediately and making it nearly impossible to send messages containing invalid XML.
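The generated code is not reproduced here, but to make the preceding remarks concrete, the following hand-written Haskell fragment sketches the kind of datatypes that correspond to the two classes of messages and the dispatch-table idea mentioned above. The constructor and tag names are illustrative inventions for this sketch, not the datatypes that HaXml actually generates from our DTD.

-- Illustrative sketch only: one datatype per message class.
data Request                        -- messages sent to Plastic
  = ReqConfig  String               -- configuration commands
  | ReqProof   String               -- actual proof commands
  | ReqInspect String               -- inspect some aspect of Plastic's state
  deriving Show

data Response                       -- messages originating from Plastic
  = RespError     String            -- error dialogues
  | RespStatus    String            -- status display
  | RespConfigure String            -- configure user-level components
  deriving Show

-- A dispatch table in the style mentioned in the text: a command tag read
-- from the XML is mapped to a constructor for the corresponding request.
dispatch :: [(String, String -> Request)]
dispatch = [("config", ReqConfig), ("proof", ReqProof), ("inspect", ReqInspect)]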
7.3 Implementation issues in our design

7.3.1 Some considerations on the implementation issues

We have the following considerations on the implementation issues:
• The interface should remember the user's proof rather than forget it.
• The construction of a proof is often a trial-and-error process, and keeping track of the current partial proof helps the user step back and redo certain parts of it as the proof is being constructed.
• Proof editing and maintenance is an important feature of an interface.

Could we use Proof General in our interface? The answer is yes. We have implemented the interface in two different ways: one uses Java and Haskell, the other uses Haskell and Proof General. The first way is more complex than the second, but it is more specific. The implementation of the second way is simple, and it provides a unified Proof General style interface.

7.3.2 An interface in Proof General style

Figure 7.1 is a screenshot of our prototype of the interface in Proof General style.

[Figure 7.1: A screenshot of the interface in Proof General style.]

From figure 7.1 we can see that LFTOP can be used directly as a domain-specific reasoning tool by domain users. In this figure we show two different windows: one is a window of definitions and commands for domain users, the other is an information window showing the corresponding information on the Plastic level. So, if domain users really care about the progress of the underlying reasoning, it is easy to follow from this window. This provides a convenient means of learning how to do reasoning directly in Plastic.

7.4 Discussion

We have so far presented the principles which we follow in the design of our interface, and some issues of its implementation. The main idea is to reflect our view of the approach, and our prototype implementation is based on the above principles. XML gives us a platform-independent, extensible language for representing protocols. It makes the protocol easier for both humans and machines to understand, which leads to a more accurate implementation. We are inspired by Aspinall's work on the Proof General Kit [Aspinall, 2005b], but our focus is different from his: our purpose is to connect different specific domains to one proof assistant, whereas the Proof General Kit tries to provide facilities for many different proof assistants rather than for different domains.

Through the study of this chapter, we obtained the following results:
1. An analysis of the issues related to the design of the interface;
2. A better understanding of the issues of the interface;
3. A protocol for communications between the user level and the Plastic level.

Our implementation is just a simple prototype and is at a preliminary stage. Many improvements are needed. For example, from the domain user's point of view:
1. There is no easy way to interoperate with other tools.
2. Not enough domain-specific options are provided.
3. There is no mechanism for domain users to customize their favourite style and commands of reasoning.
4. The output from Plastic has not been well organised and decorated yet.

We hope that the protocols and components will be improved in stages by successive generalization to more specific domains.

Chapter 8
Translation Issues

Eternal truths will be neither true nor eternal unless they have fresh meaning for every new social situation.
- FRANKLIN ROOSEVELT, AMERICAN PRESIDENT

Translation between the different levels in this approach is very important. How to guarantee successful forward and backward translation is one of the main problems which must be solved in the approach. A good correspondence between these levels is a critical issue.

8.1 Some problems in translations

One interesting question is the effect of computation on the translation. Originally, we intended that the translation be a bi-directional map between the object language and a subset LF_T of the framework language LF. It was suggested that manipulation of terms in LF_T results in further terms in LF_T, i.e. that LF_T is closed under computation. In fact this is incorrect. We found a counter-example, as mentioned in [Pang et al., 2002], namely the subst problem.
Our revised model is shown in figure 8.1, where a superset LF_M of LF_T contains terms which can result from manipulations of terms in LF_T, either by computation or by appearing as a sub-goal after a rule application, but LF_M can be mapped back to LF_T by computation involving a distinguished set of operators. Such operators are elements of the formalization whose purpose is to explain manipulations on terms; they do not correspond to observable phenomena in the domain, hence should not necessarily be shown to users. The aforementioned subst fits this pattern. Dr. Callaghan studied this problem and found a way to solve it, and the idea was embodied in the new version of Plastic. Plastic now implements a normalization operation which removes the obvious uses of such operators by computation, so the above subst is removed from the relevant terms after this kind of normalization operation.

[Figure 8.1: Mapping between the domain-specific object language and a subset of LF. The diagram shows the object language, the subset LF_T used in translation, and the superset LF_M as a middle area.]

This feature of Plastic presents a good method of keeping the correspondence between the user level and the LF level. Using this feature, and forcing the use of lemmas or tactics relevant to the domains, the results are guaranteed to be terms of LF_T.

8.2 The translation from user level to LF level

We use ⟦(Γ, −)⟧ to express the translation map in the context Γ. For translations which do not depend on the context, we omit the context item. The translation is implemented by parsing and translating in our Haskell and Happy program. Next we discuss mainly the case of concurrency. Of course, for the case of Chapter 6, as there is no distinction between the LF layer and the user layer, the translations are not needed.

8.2.1 The case of concurrency

In this subsection we discuss the translations of the relevant elements of concurrency.

8.2.1.1 The translation of the predefined actions

The actions of CCS on the user level have three different forms:
• tau, which expresses the idling or internal action;
• a string not ending in -, which expresses a base action;
• a string ending in -, which expresses a complement action.

We have predefined actions named one, two, ..., ten and one-, two-, ..., ten- on the user level for convenience. These actions can be used without declaration. In fact we can extend the predefined actions by giving a definition of the same string on the LF level. We use the following types to express actions on the LF level.

> [Base = Nat] ;
> Inductive
> [Actb:Type]
> Constructors
> [base : (b: El Base)Actb]
> [comp : (b: El Base)Actb];
> Inductive
> [Act:Type]
> Constructors
> [tau:Act]
> [act:(a:El Actb)Act];

The translation is:
⟦a⟧ = tau,            if a is tau;
⟦a⟧ = act (base a),   if a does not end in -;
⟦a⟧ = act (comp b),   if a is b-.

8.2.1.2 The translation of the list of hidden actions

For the list of hidden actions (they should be base actions), we have a quite direct translation as follows:
⟦ ⟧ = (nil Actb)
⟦a, A⟧ = (cons Actb (base a) ⟦A⟧)
where (nil Actb) is the empty list of type Actb on the LF level.

8.2.1.3 The translation of the list of relabellings

For the list of relabellings (only base actions need to be considered), we have a quite direct translation as follows:
⟦ ⟧ = (nil (Pair Base Base))
⟦a/b, A⟧ = (cons (Pair Base Base) (pair Base Base a b) ⟦A⟧)
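These clauses are implemented by straightforward structural recursion in the Haskell/Happy front end. The following minimal sketch shows the shape of the code for actions and for lists of hidden actions. The datatypes UAct and LFTerm and the helper app are inventions of this sketch (base actions are carried as strings here rather than the Nat encoding used on the LF level), so it only illustrates the translation equations above and does not reproduce the actual implementation.

-- Illustrative sketch only: user-level actions and their LF-level images.
data UAct   = Tau | Base String | Comp String     -- tau, a, a-
data LFTerm = Con String | App LFTerm LFTerm      -- curried LF applications

-- Build an application of a constant to a list of arguments.
app :: String -> [LFTerm] -> LFTerm
app c = foldl App (Con c)

-- The translation of the predefined actions (8.2.1.1).
transAct :: UAct -> LFTerm
transAct Tau      = Con "tau"
transAct (Base a) = app "act" [app "base" [Con a]]
transAct (Comp b) = app "act" [app "comp" [Con b]]

-- The translation of a list of hidden (base) actions (8.2.1.2).
transHidden :: [String] -> LFTerm
transHidden []     = app "nil" [Con "Actb"]
transHidden (a:as) = app "cons" [Con "Actb", app "base" [Con a], transHidden as]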
8.2.1.4 The translation of processes

We use inductive types on the LF level to formalize the basic concepts of CCS. The CCS syntax is the following:

E ::= Nil | X | a.E | E1 + E2 | E1|E2 | E\L | E[f] | rec X.E

Its formalization is as follows:

> Inductive
> [Process:Type]
> Constructors
> [Nil:Process]
> [var:(v:El Var)Process]
> [dot:(a:El Act)(E: Process)Process]
> [choice:(E1: Process)(E2: Process)Process]
> [par:(E1: Process)(E2: Process)Process]
> [hide:(E: Process)(L:El (List Actb))Process]
> [ren:(E: Process)(f: El (List (Pair Base Base)))Process]
> [rec:(E: Process)Process];

The translation is:
• ⟦(Γ, Nil)⟧ = Nil
• ⟦(Γ, X)⟧ = var n, if X is the nth item in Γ, where n is a natural number on the LF level reflecting a de Bruijn index; ⟦(Γ, X)⟧ = var X, if X is not in Γ.
• ⟦(Γ, a.E)⟧ = dot ⟦a⟧ ⟦(Γ, E)⟧
• ⟦(Γ, E1 + E2)⟧ = choice ⟦(Γ, E1)⟧ ⟦(Γ, E2)⟧
• ⟦(Γ, E1|E2)⟧ = par ⟦(Γ, E1)⟧ ⟦(Γ, E2)⟧
• ⟦(Γ, E\L)⟧ = hide ⟦(Γ, E)⟧ ⟦L⟧
• ⟦(Γ, E[f])⟧ = ren ⟦(Γ, E)⟧ ⟦f⟧
• ⟦(Γ, rec X.E)⟧ = rec ⟦(X:Γ, E)⟧, where X:Γ expresses putting X as the first element of the context.

8.2.1.5 The translation of μ-calculus formulas

The form of a μ-calculus formula on the user level is as follows:

F ::= A | F||F | F&F | <K>F | [K]F | Mu Z.U F | Nu Z.U F

where K ranges over subsets of labels, U is a tag (a subset of states), and A is an assertion variable. For these forms of μ-calculus formula on the user level, we have the corresponding definitions on the LF level as follows:

> [Pred = [A:Type](Pi_ A Prop)];
> [Form = Pred State];
> [Tag = Pred State];
> [VarF : Form];
> [MuOr = [A,B: Form] (Union State A B)];
> [MuAnd = [A,B: Form] (Meet State A B)];
> [MuDia = [K: Modality] [F: Form](La_ State Prop ([s: State](Ex State
>    ([s': State] (and (MTRANS K s s') (ap_ State Prop F s'))))))];
> [MuBox = [K: Modality] [F: Form](La_ State Prop ([s: State](FA State
>    ([s': State] ((MTRANS K s s') => (ap_ State Prop F s'))))))];
> [MuTagnu = [T: Tag] [F: (Pi_ Form Form)](La_ State Prop ([s: State](Ex Form
>    ([P: Form] (and (Subset State P (Union State (ap_ Form Form F P) T))
>    (ap_ State Prop P s))))))];
> [MuTagmu = [T: Tag] [F: (Pi_ Form Form)](La_ State Prop ([s: State](FA Form
>    ([P: Form] ((Subset State (Minus State (ap_ Form Form F P) T) P)
>    => (ap_ State Prop P s))))))];

The translation is:
• ⟦VarF⟧ = VarF
• ⟦f1 & f2⟧ = MuAnd ⟦f1⟧ ⟦f2⟧
• ⟦f1 || f2⟧ = MuOr ⟦f1⟧ ⟦f2⟧
• ⟦[k]f⟧ = MuBox ⟦k⟧ ⟦f⟧
• ⟦<k>f⟧ = MuDia ⟦k⟧ ⟦f⟧
• ⟦Mu s st f⟧ = MuTagmu ⟦st⟧ (La_ Form Form ([s:Form] ⟦f⟧))
• ⟦Nu s st f⟧ = MuTagnu ⟦st⟧ (La_ Form Form ([s:Form] ⟦f⟧))

8.2.1.6 The translation of propositions

The claim to prove that a process has some property on the user level can be translated to a claim to prove a proposition on the LF level. For a claim of the following form:

pname: P ⊢ F

where P is a process, F is a μ-calculus formula and pname is a string naming the proof, we have:

⟦pname: P ⊢ F⟧ = Claim pname: Prf (ap_ State Prop ⟦F⟧ ⟦([ ], P)⟧)

where [ ] denotes the empty context.
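The context-dependent clauses, in particular the treatment of variables bound by rec, follow the same recursive pattern as the earlier clauses. Continuing the illustrative sketch started above (and reusing its UAct, LFTerm, app, transAct and transHidden), a hand-written version of the process translation might look as follows; every name here is an invention of the sketch and the actual implementation differs.

-- Illustrative sketch only: the user-level process syntax and its translation
-- ⟦(Γ, E)⟧, where the context Γ is the list of rec-bound variable names.
data Proc = PNil | PVar String | PDot UAct Proc | PChoice Proc Proc
          | PPar Proc Proc | PHide Proc [String]
          | PRen Proc [(String, String)] | PRec String Proc

-- Relabelling lists, following 8.2.1.3.
transRen :: [(String, String)] -> LFTerm
transRen []          = app "nil"  [app "Pair" [Con "Base", Con "Base"]]
transRen ((a, b):fs) = app "cons" [ app "Pair" [Con "Base", Con "Base"]
                                  , app "pair" [Con "Base", Con "Base", Con a, Con b]
                                  , transRen fs ]

transProc :: [String] -> Proc -> LFTerm
transProc _ PNil          = Con "Nil"
transProc g (PVar x)      = case lookup x (zip g (map show [1 :: Int ..])) of
                              Just n  -> app "var" [Con n]  -- bound by rec: positional index
                              Nothing -> app "var" [Con x]  -- not in the context
transProc g (PDot a e)    = app "dot"    [transAct a, transProc g e]
transProc g (PChoice a b) = app "choice" [transProc g a, transProc g b]
transProc g (PPar a b)    = app "par"    [transProc g a, transProc g b]
transProc g (PHide e l)   = app "hide"   [transProc g e, transHidden l]
transProc g (PRen e f)    = app "ren"    [transProc g e, transRen f]
transProc g (PRec x e)    = app "rec"    [transProc (x : g) e]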
8.2.1.7 The translation of CCS and μ-calculus rules

The CCS and μ-calculus rules which correspond to the commands on the user level are translated to lemmas on the LF level. There is a one-to-one correspondence between these commands on the user level and lemmas on the LF level. For example, if we want to use the rule Dia on the user level (which, for a chosen witness state s', reduces a goal s ⊢ <K>F to the subgoals s' ⊢ F and s -K-> s'), we use the command "Rule Dia s'". This corresponds to the LF-level lemma called lemma_dia_ccs, so the translation is:

⟦Rule Dia s'⟧ = Refine App ? ? (lemma_dia_ccs ? ? ? s' ?) ?

where s' is the name of a definition for a process, which we keep the same on the two levels, and ? is a place holder (i.e. an unnamed metavariable in Plastic).

8.2.2 The translation of definitions

The definitions on the user level are translated to definitions on the LF level. For example, for the process definition SENDER = P, we do the following translation:

⟦SENDER = P⟧ = [ SENDER = ⟦([ ], P)⟧ ]

8.2.3 The translation of declarations

The declarations on the user level are translated to hypotheses on the LF level. For example, for the declaration Act a,b ; we do the following translation:

⟦Act a,b⟧ = [ a,b: Act ]

8.3 The translation from LF level to user level

We use ⌊−⌋ to express this converse translation. Next we discuss mainly the case of concurrency.

8.3.1 The case of concurrency

8.3.1.1 The translation of actions

For actions the translation is as follows:
• ⌊tau⌋ = tau
• ⌊act (base i)⌋ = i
• ⌊act (comp i)⌋ = i-

8.3.1.2 The translation of processes

For processes, the translation is as follows:
• ⌊(Γ, Nil)⌋ = Nil
• ⌊(Γ, var v)⌋ = X, if X is the vth element of Γ; otherwise the variable itself.
• ⌊(Γ, dot a p)⌋ = ⌊a⌋ . ⌊(Γ, p)⌋
• ⌊(Γ, choice p1 p2)⌋ = ⌊(Γ, p1)⌋ + ⌊(Γ, p2)⌋
• ⌊(Γ, par p1 p2)⌋ = ⌊(Γ, p1)⌋ | ⌊(Γ, p2)⌋
• ⌊(Γ, hide p L)⌋ = ⌊(Γ, p)⌋ \ ⌊L⌋
• ⌊(Γ, ren p f)⌋ = ⌊(Γ, p)⌋ [ ⌊f⌋ ]
• ⌊(Γ, rec p)⌋ = rec X. ⌊(X:Γ, p)⌋, where X is a fresh symbol different from the elements of Γ.

8.3.2 The translation of some forms of propositions

For some forms of propositions, we recognize the forms and translate them back to the user level. Some of the forms and the corresponding translations are as follows:
• ⌊Prf (ap_ State Prop F P)⌋ = ⌊([ ], P)⌋ ⊢ ⌊F⌋
• ⌊Prf (and p1 p2)⌋ = ⌊p1⌋ & ⌊p2⌋
• ⌊Prf (MTRANS m p1 p2)⌋ = ⌊p1⌋ -⌊m⌋-> ⌊p2⌋
• ⌊Prf (TRANS a p1 p2)⌋ = ⌊p1⌋ -⌊a⌋-> ⌊p2⌋

The rules, definitions and declarations need not be translated back, because users already know them and type them in themselves.

8.4 The properties of the translations

Lemma 8.4.1 The translations ⟦−⟧ and ⌊−⌋ satisfy the following property: for a user-level action a, ⌊⟦a⟧⌋ = a.
Proof Using the definitions of the translations, the proof proceeds by case analysis on the structure of the action. Q.E.D.

Lemma 8.4.2 The translations ⟦−⟧ and ⌊−⌋ satisfy the following property: for a user-level process p and context Γ, ⌊⟦(Γ, p)⟧⌋ = (Γ, p).
Proof Using the definitions of the translations, the proof proceeds by induction on the structure of the process. Q.E.D.

Lemma 8.4.3 The translations ⟦−⟧ and ⌊−⌋ satisfy the following property: for a user-level μ-calculus formula F, ⌊⟦F⟧⌋ = F.
Proof Using the definitions of the translations, the proof proceeds by induction on the structure of the μ-calculus formula. Q.E.D.

Lemma 8.4.4 The translations ⟦−⟧ and ⌊−⌋ satisfy the following property: for a user-level proposition P, ⌊⟦P⟧⌋ = P.
Proof Using the definitions of the translations, the proof proceeds by induction on the structure of the proposition. Q.E.D.

8.5 The proof of the adequacy property

Theorem 8.5.1 Our translations ⟦−⟧ and ⌊−⌋ are adequate, i.e. they satisfy the following adequacy condition for inference: if G is a goal on the user level under assumptions A1, A2, ... Am, and we use a command θ on this goal and get G1, G2, ..., Gn as the subgoals on the user level, then ⟦G⟧ is a goal on the LF level under assumptions ⟦A1⟧, ⟦A2⟧, ... ⟦Am⟧, and after applying ⟦θ⟧ we get G'1, G'2, ...
, G~ as the subgoals, and l c: J = G;, [G;] = c: fori= 1,2, ... n. Where= express the syntactical equal under a conversion. Proof We prove this theorem by case analysis on the commands: • For the command "Fst h", we have [Fst h] =Refine App? ? p_fst h . The h must be a name for a hypothesis of the form "p & q" and the goal on the user level is p. After applying this command, the goal is solved (i.e. there is no subgoal). On the LF level, the goal is [p] and h is a name for a hypothesis of the form [p & q], After applying "Refine App ? ? p_fst h", the goal is solved; so the theorem holds for this case. A similar proof can be used to the command "Snd h". CHAPTER 8. TRANSLATION ISSUES • For the command "RDia s' " (i.e. Rule Dia s'), we have [RDia 81] = Refine App ? ? (lemma_dia_ccs ? ? ? s' ? ) ?. 105 The goal on the user level must be in the form of 8 I-< K > F. After applying this K command, the subgoals are: 81 I- F and 8 ---+ 81 • The goal on the LF level is: Prf ( ap_ Process Prop ( [ F]) [8]); after applying the corresponding command: "Refine App ? ? (lemma_dia_ccs? ? ? s' ? ) ?", the subgoals are : (Prf (ap_ State Prop [F] [8'] )) and (Prf (MTRANS [K] [8] [8'])), according to the definition of the translation L - J, we can see that: L (Prf (ap_ State Prop [F] [8'] )) J = 8 1 I-F, and L (Prf (MTRANS [K] [8] [8'])) J = 8 ~ 8 1 • By the definition of the translation [-],we know that: [81 I-F] = Prf (ap_ State Prop [F] [8'] ) and [8 ~ 8 1] = Prf (MTRANS [K] [8] [8']) So the theorem holds for this case. • Consider the command "RRec" (i.e. Rule Rec), we have [RRec] = Refine Rec, the goal on the user level must be in the form of (rec X.pl) ~ p2. After applying this command, the subgoal is: pl[(rec X.pl)/X] ~ p2. The goal on the LF level is: (Prf(TRANS [a] [rec X.pl] [p2])); after applying the corresponding command "Refine Rec", the subgoal is: (Prf(TRANS [a](8ub8t [pl] one (rec [pl])) [p2])). By our definition of L - J, we can see that: L (Prf (TRANS [a] (subst [pl] one (rec [pl])) [p2]) J = L ( L subst J [pl] one (rec [pl])) J ~ p2 = pl[(rec X.pl)/X] ~ p2. By our definition of[-] , we can see that: [pl[(rec X.pl)/X] ~ p2] = (Prf(TRANS [a](8ub8t [pl] one (rec [pl])) [p2]). So the theorem holds for this case. Remark: If the above (rec X.pl) are predefined using a name, then after using this command, the predefined name for this will be changed to (rec X.pl). • Consider the command "RDot" (i.e. Rule Dot), we have [RDot] = Refine Dot, the goal on the user level must be in the form of a.p ~ p. After applying this command, the goal is solved. The goal on the LF level is: (Prf(TRANS [a] [a.p] [p])); after applying the corresponding command "Refine Dot", the goal is solved. So the theorem holds for this case. • For the command "RTrue" (i.e. Rule True), we have [RTrue] = Refine lemma_true, the goal on the user level must be in the form of 8 I- tt. After applying this command, CHAPTER 8. TRANSLATION ISSUES 106 the goal is solved. The goal on the LF level is: (Prf(ap_ State Prop Mutts)); after applying the corresponding command "Refine lemma_true", the goal is solved. So the theorem holds for this case. • Consider the command "RPair" (i.e. Rule Pair), we have [RPair] =Refine App? ? (App? ? p_pair ?) ?, the goal on the user level must be in the form of pl & p2. After applying this command, the subgoals are pl and p2. The goal on the LF level is: (Prf (and [pl]' [p2]')). where [pl] = (Prf ([pl]')) and [p2] = (Prf ([p2]')) ; after applying the corresponding command "Refine App ? ? (App ? ? p_pair ?) 
?", the subgoals are (Prf ([pl]')) and (Prf ([p2]')). By the definition of[-], l- J and lemma 8.4.4, the theorem holds for this case. • Consider the command "RChol" (i.e. Rule Chol), we have [RChol] = Refine Chol, the goal on the user level must be in the form of: pl + p2 ~ q. After applying this command, the subgoal is: pl ~ q. The goal on the LF level is: (Prf (TRANS [a] [pl + p2] [q])); after applying the corresponding command "Refine Chol'', the subgoal is: (Prf (TRANS [a] [pl] [q])). By our definition of l- J, we can see that: l (Prf (TRANS [a] [pl] [q])) J = pl ~ q. By our definition of[-], we can see that: [pl ~ q] = (Prf (TRANS [a] [pl] [q])). So the theorem holds for this case. We can use the similar proof to the commands RChor, RParl, RParr, RHide, RHidet and RRen. • Consider the command RTaul n (i.e. Rule Taul n), we have [RTaul n] =Refine Taul n, the goal on the user level must be in the form of (pllp2) ~~ (qllq2). After applying this command, the subgoals are: pl ~ ql and p2 ~ q2. The goal on the LF level is : (Prf (TRANS [tau] [pllp2] [qllq2])); after applying the corresponding command "Refine Taul n", the subgoals are: (Prf (TRANS [n] [pl] [ql])) and (Prf (TRANS [n-] [p2] [q2])). By our definition of l - J, we can see that : l (Prf (TRANS [n] [pl] [ql])) J = pl ~ ql and l (Prf (TRANS [n-] [p2] [q2])) J = p2 ~ q2. By our definition of[-], we can see that: [pl ~ ql] = (Prf (TRANS [n] [pl] [ql])) and [p2 ~ q2] = (Prf (TRANS [n-] [p2] [q2])). So the theorem holds for this case. We can use the similar proof to the commands RTau2 n. CHAPTER 8. TRANSLATION ISSUES • Consider the command "RSinglein" (i.e. Rule Singlein), we have: [RSinglein~ =Refine App? ? Eq_refi ?, 107 the goal on the user level must be in the form of "s E { s }". After applying this command, the goal is solved. The goal on the LF level is : (Prf (Eq Bool (ModaLcheck s (Modal (cons Labels (nil Label)))) true)), after applying the corresponding command "Refine App ? ? Eq_refi ?", the goal is solved. So the theorem holds for this case. • For the command "RExintro a", we have: [RExintro a~ =Refine lemma_Exintro? ? ? a, the goal on the user level must be in the form of pl ~ p2. After applying this command, the subgoals are: "a E m" and "pl ~ p2". The goal on the LF level is: (Prf (MTRANS [m~ [pl~ [p2~)), after applying the corresponding command : "Refine lemma_Exintro ? ? ? a", the subgoals are: (Prf (Eq Bool (ModaLcheck [a~ [m~ ) true)) and (Prf (TRANS [a~ [pl~ [p2~)). By the definitions of l- J and [-~, we can see the theorem holds for this case. • Consider the command "Rend" (i.e. Rule end), because this rule does not be used to any goals, so the theorem holds for this case. • Consider the command "RHyp s" (i.e. Rule Hyp s), we have [RHyp s~ = Refines. The form of the goal on the user level depends on the hypothesis named by s. After using this command, the goal is solved. The form of the goal on LF level depends on the corresponding hypothesis of the same name s. After applying the corresponding command "Refine s", the goal is solved. So the theorem holds for this case. • Consider the command "RNuunfold" (i.e. Rule Nuunfold), we have: [RNuunfold] =Refine App? ? Nu_unfold ?. The goal on the user level must be in a form of "s 1- Nu x t f". After using this command, the subgoal is in a form of "s 1- f[(Nu x (tU{s}) f)/x]". The form of the goal on LF level is like (Prf (ap_ State Prop [NuX t n [sD)). After using the corresponding command "Refine App ? ? 
Nu_unfold ?", the subgoal is in a form of : "(Prf (ap_ State Prop (ap_ Form Form (La_ Form Form ([Z:EI Form] [f~ ) [(Nu x (tu{s}) fD))) [s~)) ". By the definitions of l - J and [-~ , we can see the theorem holds for this case. • Consider the command "RNubase" (i.e. Rule Nubase), we have: [RNubase~ = Refine App? ? Nu_base ?. The goal on the user level must be in a form of "s 1- Nu x t f". After using this command, the subgoal is in a form of "s E t". The form of the goal on LF level is like: CHAPTER 8. TRANSLATION ISSUES 108 (Prf (ap_ State Prop [NuX t n [sm. After using the corresponding command "Refine App ? ? Nu_base ?", the subgoal is in a form of (Prf (ap_ State Prop [t] [s] )). By the definitions of l- J and[-], we can see the theorem holds for this case. e For the command "Rinr", we have [Rinr] =Refine App? ? p_inr ?. The goal on the user level must be in a form of "s E U + V". After using this command, the subgoal is in a form of "s E V". The form of the goal on LF level is like: (Prf (ap_ State Prop [U + V] [s])). After using the corresponding command "Refine App? ? p_inr ?", the subgoal is in a form of: (Prf (ap_ State Prop [V] [s] )). By the definition of l- J and [-],we can see the theorem holds for this case. We can use the similar proof for the command "Rinl". • For the command "Rbox s' h" (i.e. Rule box s' h), there is no corresponding lemma on the LF level, but a tactical which includes a relevant lemma. [Rbox s' h] = Refine lemma_box_ccs' Then_T (Intros s') Then_T (Intros h). The goal on the user level must be in a form of "s f- [K] F" . After applying this command, the subgoal is in a form of "s' f- F" with a hypothesis "s !5.. s' " named by h. The form of the goal on LF level is like (Prf (ap_ State Prop [[K] F] [s])). After applying the corresponding tactical, the subgoal is in a form of: (Prf (ap_ State Prop [F] [s'])) with a hypothesis (Prf (MTRANS [K] [s] [s'])) named by h. By the definitions of l- J and [-],we can see the theorem holds for this case. • For the command "Rinverdotl al a2" (i.e. Rule inverdotl al a2), we have: [Rinverdotl al a2] = Refine lemma_dot_eq_p' a2 al. The goal on the user level must be in a form of "pl = p2". After applying this command, the subgoal is in a form of "al.pl ~ p2". The form of the goal on LF level is like (Prf (Eq Process pl p2)). After applying the corresponding command, the subgoal is in a form of (Prf (TRANS [a2] [al.pl] [p2])). By the definitions of l- J and [-],we can see the theorem holds for this case. The similar proof can be used to the commands "Rinverrec a p pl p2", "Rinverchoi a pl p2 p " and "Rinverpar a pl p2 p". • For the command "Rvpair", we have [Rvpair]= Refine lemma_vpair. The goal on the user level must be in a form of "s f- Fl & F2". After applying this command, the subgoals are "s f- Fl" and "s f- F2". The form of the goal on LF level is like (Prf (ap_ State Prop [Fl&F2] [sm. After applying the corresponding command, the subgoals are (Prf (ap_ State Prop [Fl] [s])) and (Prf (ap_ State Prop [F2] [s])). By the definition of l - J and [-], we can see the theorem holds for this case. CHAPTER 8. TRANSLATION ISSUES 109 • For the command "Rhypchange h1 h2 h3", there is no corresponding lemma on the LF level, but a tactical: [Rhypchange hl h2 h3] = Refine App ? ? (App ? ? h1 ? ) Then_T (Refine LL Then_ T (lntros h2 Then_ T ( Refine LL Then_ T ( Intros h3 ) ) ) ) . This command just changes the hypothesis and does not affect the goal. So the theorem holds for this case. 
• For commands such as "RModule n" and "Rim port n", because there is no goal when these commands are used, so the theorem holds for these cases. • For the command "Req", we have [Req] = Refine App ? ? Eq_refl ?. The goal on the user level must be in a form of a1 = a2. After applying this command, the goal is solved. The form of the goal on LF level is like (Prf (Eq A a1 a2)). After applying the corresponding command "Refine App ? ? Eq_refl ?", the goal is solved. So the theorem holds for this case. • For the command "Rindn a p h", we have: [Rlndn a ph] = Refine E..Nat ([nl:Nat]Prf(TRANS [a] ( [p] nl) ( [p] (succ nl)))) Then_T (Intros n h). The goal on the user level must be in a form of "Allnat a p". After applying this command, the subgoal is like: "p (n + 1) ~ p ((n + 1) + 1)" with a hypothesis "p n ~ p (n + 1)" named by h. The goal on the LF level is like (n:El Nat)El (Prf (TRANS [a] ( [p] n) ( [p] (succ n)))). After applying the corresponding command, the subgoal is (Prf (TRANS [a] ( [p] (succ n)) ( [p] (succ (succ n))))) with a hypothesis "(Prf (TRANS [a] ( [p] n) ([p] (succ n))))". By the definitions of l - J and [-], we can see the theorem holds for this case. 8.6 Discussion Q.E.D. From the above description we can see that how to guarantee the forward and backward translation successfully is one of the main problem which should be solved in the approach. In this chapter we give a revised model for this problem and a concrete proof of an important theorem. We have proven the relevant lemmas and properties. The study of this chapter shows that the translations and the relevant methods and technologies are suitable to our purpose. Chapter 9 Conclusion and Future Work If you wish to succeed, you should use persistence as your good friend, experience as your reference, prudence as your brother and hope as your sentry. - THOMAS EDISON, AMERICAN INVENTOR This thesis has presented an approach to domain specific reasoning. The case studies in above chapters demonstrate the success of this approach. More specifically, the case studies demonstrate Plastic's capabilities for reasoning with many different domains. Further, they show how the capabilities can be used by presenting user friendly interfaces. The work in this thesis also lays the ground for further research into domain specific reasoning based on LF and its implementation. 9.1 Stocktaking This thesis has concentrated on showing how the approach can provide a good way for domain specific reasoning. We have seen how the approach works. Our approach has the following features: • User convenience: The approach is different from the pure proof assistant approach, the user can use an interface in their familiar way to do the verification, i.e. we provide a bridge between the domain and the underlying formalization. • Generality: Compared to automatic model checkers, this approach can handle verifica- tions of more complex properties, since it does not have the limitations of model check- ers to finite state problems. Furthermore, by providing a simple reasoning framework where problems can be correctly decomposed and model checking used as a decision procedure on feasible sub-problems, the approach could support wider use of model checking technology. 110 CHAPTER 9. CONCLUSION AND FUTURE WORK 111 • More convincing: Because the approach is based on a type theoretic proof assistant, a proof in our approach is a constructive proof, so it is more convincing than one arising from a model checker or from a directly programmed system. 
The trusted code-base is smaller (i.e. the type-checker), and the formalization will have been developed more rigorously. Plus, it is simple to obtain independent and automatic verification of results. e Structural complexity: However, the structure of this approach is more complicated than most other approaches. But this should be balanced against the positive features, and we believe the balance is in our favor. For this kind of system, the overheads in a multi-layer approach are relatively small, so 'efficiency' there is not a concern. One may also view LF as a better (i.e. more precise or articulate) programming language for implementing such formal systems, so this use of LF is a strong advantage. e Scalability: The examples of chapter 5 were deliberately kept simple, in order to make certain points about feasibility and translation, but the question remains of how this approach will work with larger and more realistic examples, such as security protocols. Given the generality of the underlying formalization and the result of chapter 5 and chapter 6, it appears that the only limit on trying larger examples is the performance of the underlying tools. Type theory proof assistants have been used for significant proofs, including that of the Fundamental Theorem of Algebra [Barendregt, 2005], and research is ongoing to improve the performance of the technology, so we do not envisage problems. Note indeed that this approach is more complicated than most of other approaches and we are in a preliminary step for this approach. Another contribution of this thesis is that we do all proofs in our case studies mechani- cally. We successfully generated Plastic proofs for the following theories and components: • Set theory and fix-point theory. • J.L-calculus related theory. • CCS related theory. • LAZY-PCF+SHAR related theory 9.2 Evaluation • Evaluation related to aim 1: In Chapter 2 we declare that the Aim 1 of the thesis is to give an analysis of requirements of domain-specific reasoning and give some cri- teria. Through case studies, we give a more detailed analysis of the characteristics of CHAPTER 9. CONCLUSION AND FUTURE WORK 112 domain-specific reasoning. We find that domain-specific notations and higher-level ab- stractions can be used directly by domain users and their corresponding translations to LF level can be done automatically. The design reuse is highly appreciated for different domains. From the case studies we also learn a lot of knowledge about how to produce domain-specific computer assisted reasoning tools. For the new approach we design the relevant architecture and construct the corresponding components, study the fea- sibility of it through several case studies. The relevant methodology and process are also presented and investigated. The case studies concretize the work of formalization, parser, communication protocols, translation between different levels and interfaces. They provide a good support to the suitability of the approach. • Evaluation related to aim 2: The aim 2 we indicated in Chapter 2 is the analysis of LF and Plastic as a basis to support domain-specific reasoning. Our case studies suggest that LF is a suitable framework as an underlying basis for domain-specific reasoning and Plastic is a suitable system to support underlying reasoning. As indi- cated in the above section the theoretical and practical benefits and defects of using LF and Plastic instead of other proof assistants are also investigated in this thesis. 
The case studies in Chapter 5 and Chapter 6 show that LF and Plastic are powerful enough to support big applications. • Evaluation related to aim 3: Many of the theoretical aspects of the approach such as the theorem of the adequacy property in Chapter 8 are proved. • Evaluation related to the principles in Chapter 7: For Jakob Nielsen's ten general principles, to judge which points are reached by our interface, we use the following criteria: 1. Do we follow the principles when we design the interface? 2. Does the interface satisfy the main points of the principles? 3. Does the interface show the main points naturally? According to the above criteria, our implementation of the interface accomplished most of the points of the ten principles. But we need to do more in the following aspects: User control and freedom Flexibility and efficiency of usage Help and documentation 9.3 Future research The work in this thesis creates a number of opportunities for future work. The most inter- esting are the following: CHAPTER 9. CONCLUSION AND FUTURE WORK 113 • Studying more complex examples and looking at more issues of supporting GUis in which user commands can be more diverse, or even more powerful, such as proof-by- pointing [Bertot et a!., 1997b]. • Studying the application of coercive subtyping in domain specific reasoning. o Trying to combine model checker with our approach smoothly. o Other issues to study include how to allow users to develop their own lemmas (i.e. to extend the formalization rather than just to work inside it), and how to improve the understandability of proofs, e.g. representing traces of computation and using Natural Language to explain proof steps. o As the size of examples increases, we may also need to study techniques to help users organize their proofs and developments, such as allowing multiple contexts for reason- ing. • Do more case studies in dissimilar domains. Appendix A The proofs of the Subject Reduction theorem Subject Reduction theorem is a very important theorem in the case study of Chapter 6. The successful proof of it in Plastic shows the power of Plastic. The proof of Subject Reduction theorem is divided to two individual proofs of the subject reduction theorem (Theorem 1.) and normal form characterization theorem (Theorem 2.). Lemma 1 and Lemma 2 are specific lemmas for supporting the proofs of Theorem 1 and Theorem 2. The following are proofs which are in our module Subjrnf. Note that the symbol ? in the following proof is a place holder(i.e. an unnamed metavariable in Plastic). > module Subjrnf where; (**************************************************************) (* (* Subjrfn.lf *) This file contains the main theorem,*) (* subjr_NF, which combines the subject reduction (Theorem 1.)*) (* and normal form characterization theorems (Theorem 2.) *) (* The combination is necessary in order for the induction to *) (* go through. This proof is followed by individual proofs of *) (* the subject reduction theorem and normal form (* characterization theorem. (**************************************************************) > import ApTypes; > import Envprops; > import NFprops; > import Valid; 114 CHAPTER A. 
THE PROOFS OF THE SUBJECT REDUCTION THEOREM (**************************************************************) (* Subject Reduction + NF *) (* <> -> <>---> Valid(A)--->Domt(A)I- e:t ---> *) (* Valid(A') /\ Domt(A')I- e':t /\ (NF e') *) (* *) (**************************************************************) (**************************************************************) (* lemma i. (* (**************************************************************) >Claim subjr_NFPi: (c,c': El Config)(pi: El (Prf (OSred c c'))) > El (Pi_ (Prf (Valid_env (cfgenv c))) > (Pi[t:Ty] (Pi_ (Prf (TC (OS_Dom_ty (cfgenv c)) (cfgexp c) t)) 115 > (Prf (and (and (Valid_env (cfgenv c')) (TC (DS_Dom_ty (cfgenv c')) > (cfgexp c') t)) (NF (cfgexp c'))))))); > Intros c c' pi; > show E_OSred; >Refine E_OSred ([c,c': El Config](Pi_ (Prf (Valid_env (cfgenv c))) > (Pi[t:Ty] (Pi_ (Prf (TC (OS_Dom_ty (cfgenv c)) (cfgexp c) t)) > (Prf (and (and (Valid_env (cfgenv c')) (TC (OS_Dom_ty (cfgenv c')) > (cfgexp c') t))(NF (cfgexp c')))))))); > Refine pi; > Intros A Ai e ei en e2 s t x pri pr2 pr3 pr4; > Refine La_; > Intros pr5; > Refine La; > Intros xi; > Refine La_; > Intros pr6; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?• . ' > Claim NFen: El (Prf (NF (cfgexp (cfg en Ai)))); >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xi) ? ); >Refine App?? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > Refine Valid_cons x t ei A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > ReturnAll; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine NFen; > ReturnAll; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; 116 > Refine p_Eq_subst Ty_env (append VT (nil VT) (OS_Dom_ty A1)) (OS_Dom_ty A1) > ([H: Ty_env](TC Hen xl)); > 2 Refine App ? ? (Eq_refl ?) ?; > Refine TEp_inv_nfvExt (OS_Dom_ty (OS cons x t e2 Al)) en xl t x > (App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ?) (ap_ ? ? (ap > (ap_ ? ? pr4 ?) xl) ? ))) ? (nil VT) (OS_Dom_ty Al) > Refine App ? ? (Eq_refl ?) ?· . , > Refine App ? ? (App ? ? (App ? ? pr3 ?) ?) ?· . , > 2 Refine LL; > 2 Intros PH!; > Refine Snoe_notFVe en ? x; ? ? ? ; > Refine NFenat_Snoe en ? (cons VT (pair Vari Ty x t) (OS_Dom_ty A)) ?· . , >Refine p_Eq_subst Ty s nat_Ty ([nat_Ty: Ty](TC (cons VT (pair Vari Ty x t) > (OS_Dom_ty A)) en nat_Ty)); > Refine pr2; > Refine PH!; >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xl) ? ); >Refine App?? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xl e e1 x pr6); > Refine Valid_cons x t el A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t x1 eel x pr6); > ReturnAll; > Refine LL; > Intros PH!; >Claim eneqtttorfff: Prf (or (Eq Tm en ttt) (Eq Tm en fff)); > show NFebool_TF; > Refine ap_ ? ? (ap ? ? ( NFebool_TF en ? ) (cons VT (pair Vari Ty x t) > (OS_Dom_ty A))) ?; >Refine p_Eq_subst Ty s bool_Ty ([bool_Ty: Ty](TC (cons VT (pair Vari Ty x t) > (OS_Dom_ty A)) en bool_Ty)); > Refine pr2; > Refine PH1; >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xl) ? ); >Refine App?? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xl e e1 x pr6); > Refine Valid_cons x t el A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t xl eel x pr6); CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > ReturnAll; > Refine App ? ? (App ? ? (App ? ? eneqtttorfff ?) ?) ?; > Refine LL; > Intros PH2; >Refine p_Eq_subst Tm fff en ([en: Tm](not (FV x en))); > Refine inv_FV_fff x; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) 
PH2; > ReturnAll; > Refine LL; > Intros PH2; >Refine p_Eq_subst Tm ttt en ([en: Tm](not (FV x en))); > Refine inv_FV_ttt x; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) PH2; > ReturnAll; 117 >Refine App?? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > Refine Valid_cons x t el A ? pr5; >Refine App?? (p_fst??) (inv_TC_clos (DS_Dom_ty A) t xi eel x pr6); >Claim validconsai: Prf (Valid_env (cfgenv (cfg en (OScons x t e2 Ai)))); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ?) > (ap_?? (ap?? (ap_?? pr4 ?) xl) ? )); >Refine App?? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xl e ei x pr6); > Refine Valid_cons x t ei A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t xl e ei x pr6); > ReturnAll; >Refine App?? (p_fst??) (inv_valid_cons x t e2 Ai validconsal); > ReturnAll; (* CL *) > Intros A Al e ei en e2 s t x prl pr2 pr3 pr4; > Refine La_; > Intros pr5; > Refine La; > Intros xl; > Refine La_; > Intros pr6; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine NF_F (clos en x t e2) ?; > Refine F_clos en e2 x t ?; > Refine NFe_Fe en ? (cons VT (pair Vari Ty x t) (OS_Dom_ty A)) s ? ?; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM 118 > Refine pr3; > Refine pr2; > Refine App ? ? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xi)?); > Refine App ? ? (p_snd??) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > Refine Valid_cons x t ei A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > ReturnAll; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?• . ' > Refine TC_clos (OS_Dom_ty Ai) X en e2 t xi ? ?; > Refine App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ?) > (ap_ ? ? (ap ? ? (ap_ ? ? pr4 ?) xi) ? ) ) ; > Refine App ? ? (p_snd ? ? ) (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > Refine Valid_cons x t ei A ? ?· . ' > Refine pr5; > Refine App ? ? (p_fst ? ? (inv_TC_clos (OS_Dom_ty A) t xi e ei x pr6); > Refine App ? ? (p_snd ? ? (inv_valid_cons X t e2 Ai ?) ; > Refine App ? ? (p_fst ? ? (App ? ? (p_fst ? ?) > (ap_ ? ? (ap ? ? (ap_ ? ? pr4 ?) xi) ? ) ) ; > Refine App ? ? (p_snd ? ? ) (inv_TC_clos (OS_Dom_ty A) t xl eel x pr6); > Refine Valid_cons x t ei A ? ?· . ' > Refine pr5; > Refine App ? ? (p_fst ? ? (inv_TC_clos (OS_Dom_ty A) t xl e ei x pr6); > Refine App ? ? (p_fst ? ? ) (inv_valid_cons X t e2 Ai ?) ; > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ?) > (ap_ ? ? (ap ? ? (ap_ ? ? pr4 ?) xl) ? ) ) ; > Refine App ? ? (p_snd ? ? ) (inv_TC_clos (OS_Dom_ty A) t xi eel x pr6); > Refine Valid_cons x t el A ? ?; > Refine pr5; >Refine App?? (p_fst??) (inv_TC_clos (OS_Dom_ty A) t xl e ei x pr6); > ReturnAll; (* fix *) > Intros A Al e ei en t x nx prl pr2 pr3 pr4; > Refine La_; > Intros pr5; > Refine La; > Intros xl; > Refine La_; > Intros pr6; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?· . ' > Refine App ? ? (p_snd ? ? ) (ap_ ? ? (ap ? ? (ap_ ? ? pr4 ?) xi) ? ) ; > Refine TC_clos (OS_Dom_ty A) nx e1 (fix x t e) t xi ? ?· . ' > Refine TEp_RenExp nx x e e1 ? ? ? ? ? ?• . ' > 3 Refine pr2; >Refine App?? (p_snd??) (inv_TC_fix (OS_Dom_ty A) t xi ex?); > Refine pr6; 119 > 2 Refine p_Eq_subst Ty xi t ([tt:Ty] (TC (OS_Dom_ty A) (fix x t e) tt)) ? ?; > 2 Refine pr6; > 2 Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_fix (OS_Dom_ty A) t xi ex?)); > 2 Refine pr6; > 2 Refine pr5; > Refine App ? ? (App ? ? (App ? ? (Xmidvar nx x) ?) ?) ?; > 2 Refine LL; > 2 Intros PH!; > Refine App ? ? (p_inl ? ?) ?; > Refine PH!; > ReturnAll; > Refine LL; > Intros PH!; > Refine App ? ? (p_inr ? 
?) ?; > Refine LL; > Intros PH2; > Refine App ? ? (App ? ? pr1 ?) ?; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari nx xx)) ? ?; > 2 Refine TEDomDomty_OSDom A; > Refine TCHet_FVeinDomH (OS_Dom_ty A) (fix x t e) xi pr6 nx ?; > Refine FV_fix nx e PH2 x t PH!; > ReturnAll ; > Refine App ? ? (App ? ? (p_pair ? ? ) ?) ?; > Refine App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xi)?)); > Refine TC_clos (OS_Dom_ty A) nx e1 (fix x t e) t xi ? ?; > Refine TEp_RenExp nx x e e1 ? ? ? ? ? ?; > 3 Refine pr2; >Refine App?? (p_snd??) (inv_TC_fix (OS_Dom_ty A) t xi ex?); > Refine pr6; > 2 Refine p_Eq_subst Ty xi t ([tt:Ty] (TC (OS_Dom_ty A) (fix x t e) tt)) ? ?; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > 2 Refine pr6; > 2 Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_fix (OS_Dom_ty A) t xi ex?)); > 2 Refine pr6; > 2 Refine pr5; > Refine App ? ? (App ? ? (App ? ? (Xmidvar nx x) ?) ?) ?; > 2 Refine LL; > 2 Intros PH!; > Refine App ? ? (p_inl ? ?) ?; > Refine PH!; > ReturnAll; > Refine LL; > Intros PH!; > Refine App ? ? (p_inr ? ?) ?; > Refine LL; > Intros PH2; > Refine App ? ? (App ? ? pr1 ?) ?; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari nx xx)) ? ?; > 2 Refine TEDomDomty_OSDom A; > Refine TCHet_FVeinDomH (OS_Dom_ty A) (fix x t e) xi pr6 nx ?; > Refine FV_fix nx e PH2 x t PH!; > ReturnAll; > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xi) ? )); > Refine TC_clos (OS_Dom_ty A) nx e1 (fix x t e) t xi ? ?; > Refine TEp_RenExp nx x e e1 ? ? ? ? ? ?; > 3 Refine pr2; >Refine App?? (p_snd??) (inv_TC_fix (OS_Dom_ty A) t xi ex?); > Refine pr6; 120 > 2 Refine p_Eq_subst Ty xi t ([tt:Ty] (TC (OS_Dom_ty A) (fix x t e) tt)) ? ?; > 2 Refine pr6; > 2 Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_fix (OS_Dom_ty A) t xi ex?)); > 2 Refine pr6; > 2 Refine pr5; > Refine App ? ? (App ? ? (App ? ? (Xmidvar nx x) ?) ?) ?; > 2 Refine LL; > 2 Intros PH!; > Refine App ? ? (p_inl ? ?) ?; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine PHi; > ReturnAll; > Refine LL; > Intros PHi ; > Refine App ? ? (p_inr ? ?) ?; > Refine LL; > Intros PH2; > Refine App ? ? (App ? ? pri ?) ?; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari nx xx)) ? ?; > 2 Refine TEDomDomty_OSDom A; > Refine TCHet_FVeinDomH (OS_Dom_ty A) (fix x t e) xi pr6 nx ?; > Refine FV_fix nx e PH2 x t PHi; > ReturnAll; ( * IfFalse *) > Intros A Ai A2 ei e2 e3 en pri pr2 pr3 pr4; > Refine La_; > Intros prS; > Refine La; > Intros xi; > Refine La_; > Intros pr6; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xi) ? ); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Ai) > ([tt: (List VT)] (TC tt (cfgexp (cfg e3 Ai)) xi)) ? ?; 121 >Refine App?? (p_snd??) (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6); > Refine Dom_pres (cfg ei A) (cfg fff Ai) pri; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) bool_Ty) ? )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6)); > Refine pr5; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xi) ? )); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Ai) > ([tt: (List VT)] (TC tt (cfgexp (cfg e3 Ai)) xi)) ? ?· .. ' >Refine App?? 
(p_snd??) (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6); CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM 122 > Refine Dom_pres (cfg ei A) (cfg fff Ai) pri; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) bool_Ty) ? )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6)); > Refine pr5; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xi) ? )); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Ai) > ( [tt: (List VT)] (TC tt (cfgexp (cfg e3 Ai)) xi)) ? ?· . ' >Refine App?? (p_snd??) (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6); > Refine Dom_pres (cfg ei A) (cfg fff Ai) pri; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) bool_Ty) ? )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6)); > Refine pr5; > ReturnAll; (* IfTrue *) > Intros A Ai A2 ei e2 e3 en pri pr2 pr3 pr4; > Refine La_; > Intros pr5; > Refine La; > Intros xi; > Refine La_; > Intros pr6; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr4 ?) xi) ? ); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Ai) > ([tt: (List VT)] (TC tt (cfgexp (cfg e2 Ai)) xi)) ? ?; > Refine App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6)); > Refine Dom_pres (cfg ei A) (cfg ttt Ai) pri; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) bool_Ty) ? )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) xi ei e2 e3 pr6)); > Refine pr5; >Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) x1)? )); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty A1) > ([tt: (List VT)] (TC tt (cfgexp (cfg e2 A1)) x1)) ? ?; > Refine App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) x1 e1 e2 e3 pr6)); > Refine Dom_pres (cfg e1 A) (cfg ttt A1) pr1; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_ ? ? (ap ? ? (ap_ ? ? pr3 ?) bool_Ty) ? ) ) ; > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) x1 e1 e2 e3 pr6)); > Refine pr5; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) x1) ? )); > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty A1) > ([tt: (List VT)] (TC tt (cfgexp (cfg e2 A1)) x1)) ? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) x1 e1 e2 e3 pr6)); > Refine Dom_pres (cfg e1 A) (cfg ttt A1) pr1; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) bool_Ty) ? )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (inv_TC_cond (OS_Dom_ty A) x1 e1 e2 e3 pr6)); > Refine pr5; > ReturnAll; (* Appl *) > Intros A A1 A2 e1 e2 en1 en2 enf t n pr1 pr2 pr3 pr4 pr5; > Refine La_; > Intros pr6; > Refine La; > Intros x1; > Refine La_; > Intros pr7; > Refine App ? ? > Refine App ? ? > Refine TC_clos > Refine App ? ? > Refine LL; > Intros r; (App ? ? (p_pair ? ?) ?) ?; (p_snd??) (ap_?? (ap?? (ap_?? pr5 ?) x1) ? ); (OS_Dom_ty A1) n en2 e2 t x1 ? ?; (App ? ? (inv_TC_appl (OS_Dom_ty A) x1 e1 e2 ?) ?) ?; 123 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine LL; > Intros PH1; > Refine App ? ? (p_fst ? > (DS_Dom_ty A1) r > Refine App ? ? > (ap_ ? ? > Refine App ? > Refine pr6; > Refine La; > Intros x; > Refine La_; > Intros PH2; ? 
(p_snd ? (ap ? ? (p_fst ? ? ) (TEp_Ap e2 en1 en2 A n t pr2 ? x1 ?); ? ) ( App ? ? (p_fst ? ? ) (ap_?? pr4 ?) (arr r x1)) ? )); ?) PH1; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Varix xx)) ? ?; > Refine TCHet_FVeinDomH (OS_Dom_ty A) en1 (arr r x1) ? x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty A1) (OS_Dom_ty A) > ([tt: (List VT)] (TC tt en1 (arr r x1)))? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r x1)) ? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) > (Dom_pres (cfg e1 A) (cfg en1 A1) pr1); > ReturnAll ; > Refine pr7; > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) x1 e1 e2 ?) ?) ?; > Refine LL; > Intros r; > Refine LL; > Intros PH1; >Refine p_Eq_subst Ty r t ([t: Ty](TC (DS_Dom_ty A1) e2 t)) ? ?; > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty A1) > ([tt: (List VT)] (TC tt e2 r))? ?; > Refine App ? ? (p_snd ? ?) PH1; > Refine Dom_pres (cfg e1 A) (cfg en1 A1) pr1; > Refine App ? ? (p_snd ? ? ) (TEp_Ap e2 en1 en2 A n t pr2 ? > (OS_Dom_ty A1) r x1 ?); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r x1))? )); 124 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine La; > Intros x; > Refine La_; > Intros PH2; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Varix xx)) ? ?; > Refine TCHet_FVeinDomH (OS_Dom_ty A) enl (arr r xl) ? x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty Al) (OS_Dom_ty A) > ([tt: (List VT)] (TC tt enl (arr r xl))) ? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xl))? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) > (Dom_pres (cfg el A) (cfg enl Al) prl); > ReturnAll; > Refine pr7; > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) xl el e2 ?) ?) ?; > Refine LL; > Intros r· . > Refine LL; > Intros PH1; > Refine App ? ? > (ap_ ? ? > Refine App ? > Refine pr6; > ReturnAll; > Refine pr7; ? (p_fst (ap ? (p_fst ? ? ? ? ) ( App ? ? (p_fst ? ? ) (ap_?? pr4 ?) (arr r xl))? )); ?) PH1; > Refine App ? ? (App ? ? (p_pair ? ? ) ?) ?; > Refine App ? ? (p_snd ? ? ) (App ? ? (p_fst > (ap_ ? ? (ap ? ? (ap_ ? ? pr5 ?) xl) > Refine TC_clos (DS_Dom_ty Al) n en2 e2 t xl ? ? ) ? ) ) ; ? ?· ... > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) xl el e2 ?) ?) ?; > Refine LL; > Intros r· . > Refine LL; > Intros PH1; 125 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine App ? ? (p_fst ? ? ) (TEp_Ap e2 eni en2 A n t pr2 ? > (OS_Dom_ty Ai) r xi?); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xi))?)); > Refine App ? ? (p_fst ? ?) PHi; > Refine pr6; > Refine La; > Intros x; > Refine La_; > Intros PH2; > Refine P~Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Varix xx)) ? ?; > Refine TCHet_FVeinDomH (DS_Dom_ty A) eni (arr r xi) ? x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty Ai) (OS_Dom_ty A) > ( [tt: (List VT)] (TC tt eni (arr r xi))) ? ? ; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) Carr r xi))?)); > Refine App ? ? (p_fst ? ?) PHi; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) 
> (Dom_pres (cfg ei A) (cfg eni Ai) pri); > ReturnAll; > Refine pr7; > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) xi ei e2 ?) ?) ?; > Refine LL; > Intros r; > Refine LL; > Intros PHi; >Refine p_Eq_subst Ty r t ([t: Ty](TC (OS_Dom_ty Ai) e2 t)) ? ?; > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Ai) > ([tt: (List VT)] (TC tt e2 r))? ?; > Refine App ? ? (p_snd ? ?) PHi; > Refine Dom_pres (cfg ei A) (cfg eni Ai) pri; > Refine App ? ? (p_snd ? ? ) (TEp_Ap e2 eni en2 A n t pr2 ? > (OS_Dom_ty Ai) r xi?); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xi))?)); >Refine App ? ? (p_fst ? ?) PHi; > Refine pr6; 126 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine La; > Intros x; > Refine La_; > Intros PH2; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari x xx)) ? ?; > Refine TCHet_FVeinDomH (OS_Dom_ty A) enl Carr r xl) ? x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty Al) (OS_Dom_ty A) > ([tt: (List VT)] (TC tt enl Carr r xl))) ? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) Carr r xl))? )); > Refine App ? ? (p_fst ? ?) PH!; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) > (Dom_pres (cfg el A) (cfg enl Al) prl); > ReturnAll; > Refine pr7; > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) xl el e2 ?) ?) ?; > Refine LL; > Intros r; > Refine LL; > Intros PH!; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xl))? )); > Refine App ? ? (p_fst ? ?) PH!; > Refine pr6; > ReturnAll; > Refine pr7; > Refine App ? ? (App ? ? (inv_TC_appl (OS_Dom_ty A) xl el e2 ?) ?) ?; > Refine LL; > Intros r; > Refine LL; > Intros PH!; > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? > (ap_ ? ? (ap ? ? (ap_ ? ? pr5 ?) xl) ? ) ) ; > Refine TC_clos (OS_Dom_ty Al) n en2 e2 t xl ? ?• . ' > Refine App ? ? (p_fst ? ? ) (TEp_Ap e2 enl en2 A n t pr2 ? > (OS_Dom_ty Al) r xl ?); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? 127 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xi)) ? )); > Refine App ? ? (p_fst ? ?) PHi; > Refine pr6; > Refine La; > Intros x; > Refine La_; > Intros PH2; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari x xx)) ? ?; > Refine TCHet_FVeinDomH (OS_Dom_ty A) eni (arr r x1) ? x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty Ai) (OS_Dom_ty A) > ([tt: (List VT)] (TC tt en1 (arr r x1)))? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r x1)) ? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) > (Dom_pres (cfg e1 A) (cfg en1 A1) pr1); > ReturnAll; >Refine p_Eq_subst Ty r t ([t: Ty](TC (OS_Dom_ty A1) e2 t)) ? ?; > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty A1) > ([tt: (List VT)] (TC tt e2 r))? ?; > Refine App ? ? (p_snd ? ?) PH1; > Refine Dom_pres (cfg el A) (cfg en1 A1) prl; > Refine App ? ? (p_snd ? ? ) (TEp_Ap e2 eni en2 A n t pr2 ? > (OS_Dom_ty A1) r xi?); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r x1)) ? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine La; > Intros x; > Refine La_; > Intros PH2; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Varix xx)) ? ?; > Refine TCHet_FVeinDomH (OS_Dom_ty A) en1 Carr r x1) ? 
x PH2; > 2 Refine TEDomDomty_OSDom A; > Refine p_Eq_subst (List VT) (OS_Dom_ty Ai) (OS_Dom_ty A) 128 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > ([tt: (List VT)] (TC tt enl (arr r xl))) ? ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xl)) ? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ? ) ? ) ?) > (Dom_pres (cfg el A) (cfg enl Al) prl); > ReturnAll; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) (arr r xl)) ? )); > Refine App ? ? (p_fst ? ?) PH1; > Refine pr6; > ReturnAll; > Refine pr7; > ReturnAll; (* Var2 *) > Intros A Al e el t x y prl pr2 pr3 pr4; > Refine La -· > Intros pr5; > Refine La; > Intros xl; > Refine La_; > Intros pr6; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?· .. > Refine App ? ? (p_snd ? ? ) (ap_ ? ? (ap ? ? (ap_ ? ? pr4 > Refine TC_var (OS_Dom_ty A) y xl ?; > Refine Mp_inv_nfvExt y x xl t (nil VT) (OS_Dom_ty A) ? > (inv_TC_var (OS_Dom_ty (OScons x teA)) xl y pr6); > Refine LL; > Intros PH1; > Refine App ? ? (App ? ? prl ?) ?; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) PH1; > ReturnAll; > Refine App ? ? (p_fst? ?) (inv_valid_cons x teA pr5); > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; ?) xl) ? ) ; > Refine TEp_nfvExt (append VT (nil VT) (OS_Dom_ty Al)) el xl t x ? ? > (nil VT) (OS_Dom_ty Al) ?; > Refine App ? ? (Eq_refl ?) ?; > Refine LL; 129 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM 130 > Intros PHi; > Refine App ? ? (App ? ? pr2 ?) ?; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Vari x xx)) ? ?; > 2 Refine TEDomDomty_OSDom A; > Refine TCHet_FVeinDomH (OS_Dom_ty A) ei xl ? x PH!; > Refine p_Eq_subst (List VT) (OS_Dom_ty Al) (OS_Dom_ty A) > ([xx: (List VT)] (TC xx el xi)) ? ?; > 2 Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (Dom_pres (cfg (vary) A) (cfg ei Ai) pr3); > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xl) ? )); > Refine TC_var (OS_Dom_ty A) y xi ?; > Refine Mp_inv_nfvExt y x xl t (nil VT) (OS_Dom_ty A) ? > (inv_TC_var (OS_Dom_ty (OScons x teA)) xi y pr6); > Refine LL; > Intros PHi; > Refine App ? ? (App ? ? prl ?) ?; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) PH2; > ReturnAll; > Refine App ? ? (p_fst ? ? (inv_valid_cons x teA pr5 ); > ReturnAll; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xl) ? )); > Refine TC_var (OS_Dom_ty A) y xl ?; > Refine Mp_inv_nfvExt y x xl t (nil VT) (OS_Dom_ty A) ? > (inv_TC_var (OS_Dom_ty (OScons x teA)) xl y pr6); > Refine LL; > Intros PHi; > Refine App ? ? (App ? ? prl ?) ?· . ' > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) PHi; > ReturnAll; > Refine App ? ? (p_fst ? ? ) (inv_valid_cons x t e A pr5 ) ; > ReturnAll; > Refine Valid_cons x t e Ai ? ?; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr4 ?) xl) ? )); > Refine TC_var (OS_Dom_ty A) y xl ?; > Refine Mp_inv_nfvExt y x xi t (nil VT) (OS_Dom_ty A) ? > (inv_TC_var (OS_Dom_ty (OScons x teA)) xl y pr6); CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine LL; > Intros PH1; > Refine App ? ? (App ? ? prl ?) ?; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) PH1; > ReturnAll; > Refine App ? ? (p_fst??) (inv_valid_cons x teA pr5 ); > ReturnAll; > Refine p_Eq_subst (List VT) (OS_Dom_ty A) (OS_Dom_ty Al) > ([xx: (List VT)] (TC xx e t)) ? ?; > 2 Refine Dom_pres (cfg (var y) A) (cfg el Al) pr3; >Refine App?? (p_snd? ?) 
(inv_valid_cons x teA pr5 ); > ReturnAll; (* Varl *) > Intros A Al e el t x prl pr2 pr3 > Refine La_; > Intros pr4; > Refine La; > Intros xl; > Refine La_; > Intros pr5; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr3 ?) t) ? ); >Refine App?? (p_snd? ?) (inv_valid_cons x teA pr4 ); >Refine App?? (p_fst? ?) (inv_valid_cons x teA pr4 ); > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine TEp_nfvExt (append VT (nil VT) (OS_Dom_ty Al)) el xl t x ? ? > (nil VT) (OS_Dom_ty Al) ?; > Refine App ? ? (Eq_refl ?) ?; > Refine LL; > Intros PH1; > Refine App ? ? (App ? ? prl ?) ?; > Refine p_Eq_subst (List Vari) (TE_Dom (OS_Dom_ty A)) (OS_Dom A) > ([xx: (List Vari)] (member Varix xx)) ? ?; > 2 Refine TEDomDomty_OSDom A; > Refine TCHet_FVeinDomH (OS_Dom_ty A) el xl ? x PH1; > Refine p_Eq_subst (List VT) (OS_Dom_ty Al) (OS_Dom_ty A) > ([xx: (List VT)] (TC xx el xl)) ? ?; > 2 Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (Dom_pres (cfg e A) (cfg el Al) pr2); 131 CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) xl)? )); >Refine p_Eq_subst Ty t xl ([tl: Ty](TC (OS_Dom~ty A) e tl))? ?; >Refine App?? (p_snd? ?) (inv_valid_cons x teA pr4 ); 132 > Refine If_T ? ? ? (inv_TC_var (OS_Dom_ty (OScons x t e A)) xl x pr5) ?; > Refine App ? ? (Eq_refl ? ) ?; >Refine App?? (p_fst? ?) (inv_valid_cons x teA pr4 ); > ReturnAll; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr3 ?) xl) ? )); >Refine p_Eq_subst Ty t xl ([tl: Ty](TC (OS_Dom_ty A) e tl))? ?; >Refine App?? (p_snd? ?) (inv_valid_cons x teA pr4 ); > Refine If_T ? ? ? (inv_TC_var (OS_Dom_ty (OScons x t e A)) xl x pr5) ?; > Refine App ? ? (Eq_refl ? ) ?; >Refine App?? (p_fst? ?) (inv_valid_cons x teA pr4 ); > ReturnAll; > Refine Valid_cons x t el Al ? ?; > Refine App ? ? (p_fst ? ? ) ( App ? ? (p_fst ? ? > (ap_ ? ? (ap ? ? (ap_ ? ? pr3 ?) t) ? ) ) ; > Refine App ? ? (p_snd ? ?) (inv_valid_cons x t e A pr4 ) ; > Refine App ? ? (p_fst ? ?) (inv_valid_cons x t e A pr4 ) ; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_ ? ? (ap ? ? (ap_?? pr3 ?) t) ? )); > Refine App ? > Refine App ? ? (p_snd ? ? (p_fst ? ?) (inv_valid_cons x teA pr4 ); ?) (inv_valid_cons x teA pr4 ); > ReturnAll; > Intros A Al e el prl pr2 > Refine La_; > Intros pr3; > Refine La; > Intros xl; > Refine La_; > Intros pr4; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?· . , > Refine NF_Sno (sue el); > Refine Sno_s el; > Refine NFenat_Snoe el ? (OS_Dom_ty Al) ?; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM 133 > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) (inv_TC_suc (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > Refine App ? ? (p_snd ? ? (ap_ ? ? (ap ? ? (ap_ ? ? pr2 ?) nat_Ty) ? ) ; > Refine App ? ? (p_snd ? ? (inv_TC_suc (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine p_Eq_subst Ty nat_Ty xl ([xl:Ty](TC (OS_Dom_ty Al) (sue el) xl)) ? ?; > Refine TC_suc ? ? ( App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? ))); >Refine App?? (p_snd??) (inv_TC_suc (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) (App ? ? (p_fst ? ? > (inv_TC_suc (OS_Dom_ty A) xl e pr4 )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) 
(inv_TC_suc (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > ReturnAll; (* ZF *) > Intros A Al e e1 prl pr2 > Refine La_; > Intros pr3; > Refine La; > Intros xl; > Refine La_; > Intros pr4; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine NF_fff; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine p_Eq_subst Ty bool_Ty xl ([xl:Ty](TC (OS_Dom_ty Al) fff xi)) ? ?; > Refine TC_fff (OS_Dom_ty A1) > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_is_o (OS_Dom_ty A) xl e pr4 )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty)? )); >Refine App?? (p_snd??) (inv_TC_is_o (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > ReturnAll; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM (* ZT *) > Intros A Ai e pri pr2 > Refine La_; > Intros pr3; > Refine La; > Intros xi; > Refine La_; > Intros pr4; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine NF_ttt; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; 134 >Refine p_Eq_subst Ty bool_Ty xi ([xi:Ty](TC (DS_Dom_ty Ai) ttt xi))??; > Refine TC_ttt (DS_Dom_ty Ai) ; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_is_o (OS_Dom_ty A) xi e pr4 )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) (inv_TC_is_o (OS_Dom_ty A) xi e pr4 ); > Refine pr3; > ReturnAll; > Intros A Ai e ei pri pr2 > Refine La_; > Intros pr3; > Refine La; > Intros xi; > Refine La_; > Intros pr4; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; > Refine NF_Sno ei; > Refine inv_Sno_s ei; > Refine inv_NF_Sno ei; >Refine App?? (p_snd?? (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty)? ); >Refine App?? (p_snd??) (inv_TC_prd (OS_Dom_ty A) xi e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine p_Eq_subst Ty nat_Ty xi ([xi:Ty](TC (OS_Dom_ty Ai) e1 xi)) ? ?; > Refine App ? ? (p_snd ? ?) (inv_TC_suc ? ? ? CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM > (App ? ? (p_snd ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )))); >Refine App?? (p_snd??) (inv_TC_prd (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_prd (OS_Dom_ty A) xi e pr4 )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) (inv_TC_prd (OS_Dom_ty A) xl e pr4 ); > Refine pr3; > ReturnAll; (* PO *) > Intros A Ai > Refine La -· > Intros pr3; > Refine La; > Intros xi; > Refine La_; > Intros pr4; e pri pr2 > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; 135 >Refine App?? (p_snd??) (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? ); >Refine App?? (p_snd?? (inv_TC_prd (OS_Dom_ty A) xi e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?; >Refine p_Eq_subst Ty nat_Ty xi ([xi:Ty](TC (OS_Dom_ty Ai) o xi))??; > Refine App ? ? (p_snd ? ? ) ( App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) (inv_TC_prd (OS_Dom_ty A) xi e pr4 ); > Refine pr3; > Refine App ? ? (App ? ? (App ? ? (Eq_sym ?) ?) ?) > (App?? (p_fst??) (inv_TC_prd (OS_Dom_ty A) xi e pr4 )); > Refine App ? ? (p_fst ? ? ) (App ? ? (p_fst ? ? ) > (ap_?? (ap?? (ap_?? pr2 ?) nat_Ty) ? )); >Refine App?? (p_snd??) (inv_TC_prd (OS_Dom_ty A) xi e pr4 ); > Refine pr3; > ReturnAll; > Intros A e t x; > Refine La_; CHAPTER A. THE PROOFS OF THE SUBJECT REDUCTION THEOREM 136 > Intros pri; > Refine La; > Intros xi; > Refine La_; > Intros pr2; > Refine App ? ? (App ? ? (p_pair ? ?) ?) ?· .. > Refine NF_F; > Refine F_abs; > Refine App ? ? 
(App ? ? (p_pair ? ?) ?) ?;
> Refine pr2;
> Refine pr1;
> ReturnAll;

> Intros A ;
> Refine La_;
> Intros pr1;
> Refine La;
> Intros x1;
> Refine La_;
> Intros pr2;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine NF_fff;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine pr2;
> Refine pr1;
> ReturnAll;

> Intros A ;
> Refine La_;
> Intros pr1;
> Refine La;
> Intros x1;
> Refine La_;
> Intros pr2;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine NF_ttt;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine pr2;
> Refine pr1;
> ReturnAll;

> Intros A ;
> Refine La_;
> Intros pr1;
> Refine La;
> Intros x1;
> Refine La_;
> Intros pr2;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine NF_Sno;
> Refine Sno_o;
> Refine App ? ? (App ? ? (p_pair ? ?) ?) ?;
> Refine pr2;
> Refine pr1;
> ReturnAll;

(**************************************************************)
(* lemma 2.                                                    *)
(**************************************************************)
>Claim subjr_NF: (c,c': El Config)(p1: El (Prf (OSred c c')))
>    (p2: El (Prf (Valid_env (cfgenv c))))
>    (t:Ty) (p3: El (Prf (TC (OS_Dom_ty (cfgenv c)) (cfgexp c) t)))
>    El (Prf (and (and (Valid_env (cfgenv c'))
>    (TC (OS_Dom_ty (cfgenv c')) (cfgexp c') t))
>    (NF (cfgexp c'))));
> Intros c c' p1 p2 t p3;
> Refine ap_ ? ? (ap ? ? (ap_ ? ? (subjr_NFPi c c' p1) p2) t) p3;
> ReturnAll;

(**************************************************************)
(* Theorem 1.                                                  *)
(* Subject Reduction                                           *)
(* <> -> <> ---> Valid(A) ---> Domt(A) |- e:t --->             *)
(*   Domt(A') |- e':t                                          *)
(**************************************************************)
> Claim subjr_red : (e,e': El Tm)(A,A': El OS_env)
>    (p1: El (Prf (OSred (cfg e A) (cfg e' A'))))
>    (p2: El (Prf (Valid_env A)))(t: El Ty)
>    (p3: El (Prf (TC (OS_Dom_ty A) e t)))
>    El (Prf (TC (OS_Dom_ty A') e' t));
> Intros e e' A A' p1 p2 t p3;
> Refine App ? ? (p_snd ? ?) ( App ? ? (p_fst ? ?)
>    (subjr_NF (cfg e A) (cfg e' A') p1 p2 t p3));
> ReturnAll;

(**************************************************************)
(* Theorem 2.                                                  *)
(* Normal Forms                                                *)
(* <> -> <> ---> Valid<> ---> e' in NF                         *)
(**************************************************************)
>Claim NormalForms : (e,e': El Tm)(A,A': El OS_env)
>    (p1: El (Prf (OSred (cfg e A) (cfg e' A'))))
>    (p2: El (Prf (Valid_config (cfg e A))))
>    El (Prf (NF e'));
> Intros e e' A A' p1 p2;
>Claim PH1: (Prf (Ex Ty ([t:Ty] TC (OS_Dom_ty (cfgenv (cfg e A)))
>    (cfgexp (cfg e A)) t)));
> Refine App ? ? (p_snd ? ?) (inv_valid_cfg (cfg e A) p2);
> ReturnAll;
> Refine (App ? ? (App ? ? PH1 ?) ?);
> Refine LL;
> Intros t;
> Refine LL;
> Intros p3;
>Refine App ? ? (p_snd ? ?) (subjr_NF (cfg e A) (cfg e' A') p1 ? t p3);
>Refine App ? ? (p_fst ? ?) (inv_valid_cfg (cfg e A) p2);
> ReturnAll;

From the above proof of the Subject Reduction theorem we can see that the proof in Plastic is more detailed than the corresponding proofs in Lego and Coq. This is due to several reasons, such as differences in the underlying type theory and logical framework. We do not, however, think that this prevents us from studying the issues of domain-specific reasoning.

Bibliography

[Abadi et al., 1991] Abadi, M., Cardelli, L., Curien, P.-L., and Lévy, J.-J. (1991). Explicit substitutions. Journal of Functional Programming, 1(4):375-416. Preliminary version in POPL 1990.

[Aczel, 1978] Aczel, P. (1978). The type theoretic interpretation of constructive set theory. In Logic Colloquium '77 (Proc. Conf., Wroclaw, 1977), volume 96 of Stud.
Logic Foundations Math., pages 55-66. North-Holland, Amsterdam. [Aczel, 1982] Aczel, P. (1982). The type theoretic interpretation of constructive set theory: choice principles. In The L. E. J. Brouwer Centenary Symposium (Noordwijkerhout, 1981), volume 110 of Stud. Logic Found. Math., pages 1-40. North-Holland, Amsterdam. [Aczel, 1986] Aczel, P. (1986). The type theoretic interpretation of constructive set theory: inductive definitions. In Logic, methodology and philosophy of science, VII (Salzburg, 1983}, volume 114 of Stud. Logic Found. Math., pages 17-49. North-Holland, Amsterdam. [Aczel, 1999] Aczel, P. (1999). On relating type theories and set theories. In Types for proofs and programs (Irsee, 1998}, volume 1657 of Lecture Notes in Comput. Sci., pages 1-18. Springer, Berlin. [Aczel and Rathjen, 2001] Aczel, P. and Rathjen, M. (2000/2001). Notes on constructive set theory. Technical Report 40, Mittag-Leffler. [Adams, 2004] Adams, R. (2004). The consistency of large elimination. http://WWW.cs.rhul.ac.uk/-robin. [Archer and Heitmeyer, 1997] Archer, M. and Heitmeyer, C. (1997). Human-style theorem proving using PVS. In Proc. lOth International Theorem Proving in Higher Order Logics Conference, pages 33-48. [Archer et a!., 1998] Archer, M., Heitmeyer, C., and Sims, S. (1998). TAME: A PVS in- terface to simplify proofs for automata models. In User Interfaces for Theorem Provers, Eindhoven, The Netherlands. 139 BIBLIOGRAPHY 140 [Aspinall, 2000] Aspinall, D. (2000). Protocols for interactive e-proof. available at http://zermelo.dcs.ed.ac.uk/-da/drafts/#eproof. [Aspinall, 2005a] Aspinall, D. (2005a). Proof general. available at http://proofgeneral.inf.ed.ac.uk/. [Aspinall, 2005b] Aspinall, D. (2005b). Proof general kit. http://proofgeneral.inf.ed.ac.uk/kit. [Aspinall and Liith, 2003] Aspinall, D. and Liith, C. (2003). Proof general meets isawin. In User Interfaces for Theorem Provers (UITP 2003), Rome, Italy. [Backhouse, 1988] Backhouse, R. (1988). On the meaning and construction of the rules in Martin-Lof's theory of types. In et al., A. A., editor, Workshop on General Logic. LFCS Report Series, ECS-LFCS-88-52, Dept. of Computer Science, University of Edinburgh. [Backhouse, 1998] Backhouse, R. C., editor (1998). Informal proceedings of the Workshop on User Interfaces for Theorem Prover. Eindhoven University of Technology. [Bailey, 1996] Bailey, A. (1996). Lego with implicit coercions. Draft. [Bailey, 1998] Bailey, A. (1998). The Machine-checked Literate Formalisation of Algebra in Type Theory. PhD thesis, University of Manchester. [Barendregt, 2005] Barendregt, H. (2005). The fundamental theorem of algebra project. http://WWW.es.kun.nl/-freek/fta/xindex.html. [Barendregt, 1990] Barendregt, H. P. (1990). Functional programming and lambda calculus. In van Leeuwen, J., editor, Handbook of Theoretical Computer Science, Vol. B: Formal Models and Semantics, chapter 7, pages 321-363. Elsevier, Amsterdam, and The MIT Press, Cambridge, MA. [Barthe et a!., 2003] Barthe, G., Capretta, V., and Pons, 0. (2003). Setiods in type theory. Functional Programming, 13(2):261-293. [Barthe and S0rensen, 2000] Barthe, G. and S!iirensen, M. H. (2000). Domain-free pure type systems. Journal of Functional Programming, 10(5):412-452. [Basin and Matthews, 2002] Basin, D. and Matthews, S. (2002). Logical frameworks. In Gabbay, D. and Guenthner, F., editors, Handbook of Philosophical Logic, second edition, volume 9, pages 89-164. Kluwer Academic Publishers, Dordrecht. [Berardi, 1990] Berardi, S. (1990). 
Type Dependence and Constructive Mathematics. PhD thesis, Universita di Torino, Italy. BIBLIOGRAPHY 141 [Bertot eta!., 1997a] Bertot, J., Bertot, Y., Coscoy, Y., Goguen, H., and Montagnac, F. (1997a). User guide to the CTCOQ proof environment. Technical Report RT-0210, Inria, lnstitut National de Recherche en Informatique et en Automatique. [Bertot, 1998] Bertot, Y. (1998). The ctcoq system: Design and architecture. Research report 3540, INRIA SORPHIA ANTIPOLIS. [Bertot eta!., 1997b] Bertot, Y., Kleymann-Schreiber, T., and Sequeira, D. (1997b). Imple- menting proof by pointing without a structure editor. Technical report ECS-LFCS-97-368, University of Edinburgh. [Bertot and Thery, 1998] Bertot, Y. and Thery, L. (1998). A generic approach to building user interfaces for theorem provers. Journal of Symbolic Computation, 25(2):161-194. [Betarte eta!., 2000] Betarte, G., Comes, C., Szasz, N., and Tasistro, A. (2000). Smart card operating system. Lecture Notes in Computer Science, 1956:77-93. [Borras eta!., 1989] Borras, P., Clement, D., Despeyrouz, T., Incerpi, J., Kahn, G., Lang, B., and Pascual, V. (1989). CENTAUR: The system. In Proceedings of the ACM SIG- SOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments (PSDE), volume 24(2), pages 14-24, New York, NY. ACM Press. [Boulton eta!., 1998] Boulton, R., Slind, K., Bundy, A., and Gordon, M. (1998). An inter- face between CLAM and HOL. In Grundy, J. and Newey, M., editors, Proceedings of the 11th International Conference on Theorem Proving in Higher Order Logics {T PH 0 Ls '98), volume 1479 of Lecture Notes in Computer Science, pages 87-104, Canberra, Australia. Springer. [Boyer and J S. Moore, 1979] Boyer, R. S. and J S. Moore (1979). A Computational Logic. Academic Press. [Boyer and J S. Moore, 1997] Boyer, R. S. and J S. Moore (1997). A Computational Logic Handbook. Academic Press, second edition. [Boyer and Moore, 1984] Boyer, R. S. and Moore, J. S. (1984). Proof checking the RSA public key encryption algorithm. American Mathematical Monthly, 91(3):181-189. [Bradfield and Stirling, 2003] Bradfield, J. and Stirling, C. (2003). Modal logics and mu-calculi: an introduction. http://WWW.mimuw.edu.pl/-sl/teaching/02_03/WZTPW/HANDBOOK/mu_intro.ps. [Bundy, 1987] Bundy, A. (1987). The use of explicit plans to guide inductive proofs. In Conf. on Automated Deduction (CADE 9). [Bundy, 2001] Bundy, A. (2001). The automation of proof by mathematical induction. In Robinson, A. and Voronkov, A., editors, Handbook of Automated Reasoning, volume I, chapter 13, pages 845-911. Elsevier Science. BIBLIOGRAPHY 142 [Bundy et al., 2000] Bundy, A., Moore, J., and Zinn, C. (2000). An intelligent tutoring system for induction proofs. [Callaghan, 1999] Callaghan, P. (1999}. Plastic: an implementation of typed LF with coer- cions. Talk given in the Annual Conf of TYPES'99. [Callaghan, 2000a] Callaghan, P. (2000a). The plastic proof assistant. http://WWW.dur.ac.uk/CARG/plastic.html. [Callaghan and Luo, 1998] Callaghan, P. and Luo, Z. (1998}. Mathematical vernacular in type theory based proof assistants. In User Interfaces for Theorem Provers (UITP'98), Eindhoven. [Callaghan and Luo, 2000a] Callaghan, P. and Luo, Z. (2000a}. Implementation techniques for inductive types in plastic. Proceedings of TYPES'99. LNCS series. [Callaghan, 2000b] Callaghan, P. C. (2000b}. Coherence checking of coercion in plastic. APPSEM Workshop on Subtyping and Dependent Types in Programming. [Callaghan, 2005] Callaghan, P. C. (2005}. Plastic www page. 
http://WWW.dur.ac.uk/CARG/plastic.html. [Callaghan and Luo, 2000b] Callaghan, P. C. and Luo, Z. (2000b}. Plastic: an implementa- tion of typed LF with coercive subtyping and universes. Journal of Automated Reasoning, special issue on Logical Frameworks. [Callaghan and Luo, 2001] Callaghan, P. C. and Luo, Z. (2001}. An implementation of LF with coercive subtyping and universes. Automated Reasoning, 27(1}:3-27. [Callaghan et al., 2001] Callaghan, P. C., Luo, Z., and Pang, J. (2001}. Object languages in a type theoretic meta-framework. In Proc. Workshop on Proof Transformation, Pre- sentation and Proof Complexities, pages 23-36, Siena, Italy. [Church, 1932] Church, A. (1932}. A set of postulates for the foundation of logic. Annals of Mathematics, (2} 33 and 34. [Church, 1940] Church, A. (1940}. A formulation of the simple theory of types. J. Symbolic Logic, 5( 1). [Clarke and Emerson, 1981] Clarke, E. M. and Emerson, E. A. (1981}. Design and syn- chronization skeletons using branching time temporal logic. Lecture Notes in Computer Science, 131:52-71. [Clarke et al., 1986] Clarke, E. M., Emerson, E. A., and Sistla, A. P. (1986). Automatic verification of finite-state concurrent system using temporal logic specifications. ACM Transactions on Programming Languages and Systems, 8(2):244-263. BIBLIOGRAPHY 143 [Clarke et a!., 1994] Clarke, E. M., Grumberg, 0., and Hamaguchi, K. (1994). Another look at It! model checking. In D.L.Dill, editor, Proceedings of the 6th Conference on Computer Aided Verification, volume 818 of Lecture Notes in Computer Science, pages 415-427, Stanford, CA. Springer Verlag. [Cleaveland eta!., 1993] Cleaveland, R., Parrow, J., and Steffen, B. (1993). The Concur- rency Workbench: A semantics-based tool for the verification of concurrent systems. ACM Transactions on Programming Languages and Systems, 15(1):36-72. [Cleaveland and Smolka, 1999] Cleaveland, R. and Smolka, S. (1999). Process algebra. [Constable et a!., 1986] Constable, R. et a!. (1986). Implementing Mathematics with the NuPRL Proof Development System. Pretice-Hall. [Consulting, 2001] Consulting, M. (2001). Domain-specific modelling: 10 times faster than urn!. Technical report, MetaCase Consulting. [Coq, 2005] Coq (2005). Coq WWW page. http://pauillac.inria.fr/coq. [Coquand, 1992] Coquand, T. (1992). Pattern matching with dependent types. Talk given at the BRA workshop on Proofs and Types, Bastad. [Coquand and Huet, 1985] Coquand, T. and Huet, G. (1985). Constructions: a higher order proof system for mechanizing mathematics. Lecture Notes in Computer Science, 203. [Coquand and Huet, 1988] Coquand, T. and Huet, G. (1988). The calculus of constructions. Information and Computation, 76(2/3). [Coquand and Paulin-Mohring, 1990] Coquand, T. and Paulin-Mohring, C. (1990). Induc- tively defined types. LNCS, 417. [Crary, 1998] Crary, K. (1998). Type-theoretic methodology for practical programming lan- guages. Technical Report TR98-1699, Cornell University, Computer Science. [Curry, 1934] Curry, H. (1934). Functionality in combinatory logic. Proc. Nat. Acad. Science USA, 20:584-590. [Curry and Feys, 1958] Curry, H. B. and Feys, R. (1958). Combinatory Logic. North- Holland, 1 edition. [Dahn and Wolf, 1994] Dahn, B. I. and Wolf, A. (1994). A calculus supporting structured proofs. Journal for Information Processing and Cybernetics (EIK), 5-6:261-276. [Dahn, 1998] Dahn, I. (1998). Using ILF as an interface to many theorem provers. In Informal proceedings of the Workshop on User Interfaces for Theorem Prover, pages 75- 86. 
BIBLIOGRAPHY 144 [Dam, 1995] Dam, M. (1995). Model checking compositional proof systems for model check- ing infinite state processes. In Lee, I. and Smolka, S. A., editors, Proceedings 6th Int. Conf. on Concurrency Theory, CONCUR '95, Philadelphia, PA, USA, 21-24 Aug 1995, volume 962 of Lecture Notes in Computer Science, pages 12-26. Springer-Verlag, Berlin. [de Bruijn, 1980] de Bruijn, N. (1980). A survey of the project AUTOMATH. In Hindley, J. and Seldin, J., editors, To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism. Academic Press. [Denney, 2001] Denney, E. (2001). A Prototype Proof Translator from HOL to Coq. Draft. [Dery and Rideau, 1994] Dery, A.-M. and Rideau, L. (1994). Distributed programming en- vironments: an example of a message protocol. Technical Report RT-0165, Inria, Institut National de Recherche en Informatique et en Automatique. [Dybjer, 1991] Dybjer, P. (1991). Inductive sets and families in Martin-Lof's type theory and their set-theoretic semantics. In Huet, G. and Plotkin, G., editors, Logical Frameworks. Cambridge University Press. [Eastaughffe, 1998] Eastaughffe, K. A. (1998). Support for interactive theorem proving: Some design principles and their application. In Informal proceedings of the Workshop on User Interfaces for Theorem Prover, pages 96-103. [Emerson, 1990] Emerson, E. A. (1990). Temporal and Modal Logic. In J. van Leeuwen, ed- itor, Handbook of Theoretical Computer Science, volume B, pages 996-1072, Amsterdam. Elsevier Science Publishers. [Emerson and Clarke, 1980] Emerson, E. A. and Clarke, E. M. (1980). Characterizing cor- rectness properties of parallel programs using fixpoints. Procs ICALP'80 LNCS, 85:169- 181. [Emerson and Lei, 1986] Emerson, E. A. and Lei, C. L. (1986). Efficient model checking in fragments of the propositional mu-calculus. In Proc. Jth IEEE LICS. [Field, 1990] Field, J. (1990). On laziness and optimality in lambda interpreters: Tools for specification and analysis. In Proceedings of the seventeenth symposium on Principles of Programming Languages, pages 1-15, San Francisco. [Fischer and Ladner, 1979] Fischer, M. J. and Ladner, R. E. (1979). Propositional dynamic logic of regular programs. J. Computer and System Science, 18:194-211. [Gentzen, 1935] Gentzen, G. (1935). Untersuchungen i.iber das logische schliessen. Mathe- matische Zeitschrift, 39. [Gill and Marlow, 2005] Gill, A. and Marlow, S. (2005). http://WWW.haskell.org/happy/. Happy WWW page. BIBLIOGRAPHY 145 [Girard, 1972] Girard, J.-Y. (1972). Interpretation fonctionelle et elimination des coupures de l'arithmetique d'ordre superieur. PhD thesis, Universite Paris VII. [Goguen, 1994] Goguen, H. (1994). A Typed Operational Semantics for Type Theory. PhD thesis, University of Edinburgh. [Goguen, 1999] Goguen, H. (1999). Soundness of the logical framework for its typed opera- tional semantics. In Girard, J.-Y., editor, Proceedings of the 4th International Conference on Typed Lambda Calculi and Applications (TLCA '99}, pages 177-197, L'Aquila, Italy. Springer-Verlag LNCS 1581. [Gordon, 1985] Gordon, M. (1985). HOL a machine oriented formulation of higher order logic. Technical Report TR-68, Computer Laboratory, Cambridge University. [Gordon and Melham, 1993a] Gordon, M. and Melham, T. (1993a). Introduction to HOL: a theorem proving environment for higher-order logic. Cambridge University Press. [Gordon et al., 1979] Gordon, M., Milner, A., and Wadsworth, C. (1979). Edinburgh LCF: A Mechanized Logic of Computation, volume 78 of LNCS. 
Springer-Verlag, New York. [Gordon and Melham, 1993b] Gordon, M. J. C. and Melham, T. F. (1993b). Introduction to HOL: A Theorem Proving Environment for Higher Order Logic. Cambridge University Press. [Griffin, 1988] Griffin, T. G. (1988). Notational Definition and Top-Down Refinement for Interactive Proof Development Systems. PhD thesis, Cornell University, Departement of Computer Science. [Group, 2000] Group, T. C. S. (2000). Dove user manual. Technical report, Information Technology Division, The Australian Defence Science and Technology Organisation. [Gurov, 1998] Gurov, D. B. (1998). Specification and Verification of Communicating Sys- tems with Value Passing. PhD thesis, University of Victoria. [Harper et al., 1987] Harper, R., Hansell, F., and Plotkin, G. (1987). A framework for defining logics. Proc. 2nd Ann. Symp. on Logic in Computer Science. IEEE. [Hennessy, 1991] Hennessy, M. (1991). A proof system for communicating processes with value-passing. Formal Aspects of Computing, 3( 4) :346-366. [Hennessy and Liu, 1993] Hennessy, M. and Liu, X. (1993). A modal logic for message passing processes. In Proc. 5th International Computer Aided Verification Conference, pages 359-370. [Hennessy and Milner, 1980] Hennessy, M. and Milner, R. (1980). On observing nondeter- minism and concurrency. In Proc. ICALP'BO LNCS 85. BIBLIOGRAPHY 146 [Hennessy and Milner, 1985] Hennessy, M. and Milner, R. (1985). Algebraic laws for non- determinism and concurrency. Journal of the Association for Computing Machinery, 32(1 ): 137-161. [Heyd and Cregut, 1996] Heyd, B. and Cregut, P. (1996). A modular coding of unity in coq. In J. Von Wright, J. Grundy, and J. Harrison, editors, Ninth international Conference on Theorem Proving in Higher Order Logics TPHOL, volume 1125 of Lecture Notes in Computer Science, pages 251-266, Thrku, Finland. Springer Verlag. [Hickey, 1997] Hickey, J. J. (1997). Nuprl-Light: An implementation framework for higher- order logics. Lecture Notes in Computer Science, 1249. [Hill and Thompson, 1995] Hill, S. and Thompson, S. (1995). Miranda in Isabelle. In Paul- son, L. C., editor, Preceedings of the first Isabelle Users Workshop, number 397 in Uni- versity Of Cambridge Computer Laboratory Technical Reports Series, pages 122-135. [Howard, 1980] Howard, W. A. (1980). The formulae-as-types notion of construction. In Hindley, J. and Seldin, J., editors, To H. B. Curry: Essays on Combinatory Logic. Aca- demic Press. [Howe, 1989] Howe, D. J. (1989). Equality in lazy computation systems. In Proceedings, Fourth Annual Symposium on Logic in Computer Science, pages 198-203, Asilomar Con- ference Center, Pacific Grove, California. IEEE Computer Society Press. [Hunt et al., 1992] Hunt, W. A., Jr., and Brock, B. C. (1992). A Formal HDL and its use in the FM9001 verification. In Philosophical Transactions of the Royal Society, pages 339( 1652) :35-4 7. [Jackson, 1995] Jackson, P. B. (1995). Enhancing the Nuprl Proof Development System and Applying it to Computational Abstract Algebra. PhD thesis, Cornell University. [Jones et al., 1998] Jones, A., Luo, Z., and Soloviev, S. (1998). Some proof-theoretic and algorithmic aspects of coercive sub typing. Types for proofs and programs ( eds, E. Gimenez and C. Paulin-Mohring), Proc. of the Inter. Conf. TYPES'96, LNCS 1512. [Jones and Hansen, 2002] Jones, M. A. and Hansen, T. L. (2002). Using xml in the masp client-server protocol. [Jones, 1993] Jones, M. P. (1993). Coherence for qualified types. 
Technical Report YALEU/DCS/RR-989, Yale University, New Haven, Connecticut, USA. [Jutting, 1977] Jutting, L. S. (1977). Checking Landau's "Grndalagen" in the A UTOMATH system. PhD thesis, Eindhoven University. [Kahn, 1987] Kahn, G. (1987). Natural semantics. In Proceedings of the Symposium on The- oretical Aspects of Computer Science (STAGS}, volume 247 of Lecture Notes in Computer Science, pages 22-39. Springer-Verlag. BIBLIOGRAPHY 147 [Kleene and Rosser, 1935] Kleene, S. and Rosser, J. (1935). The inconsistency of certain formal logics. Annals Math., (2) 36. [Klop, 1980] Klop, J. W. (1980). Combinatory reduction systems. Mathematical Center Tracts 127. [Kozen, 1983] Kozen, D. (1983). Results on the propositional mu -calculus. Theoretical Computer Science, 27(3):333-354. [Laurent, 1999] Laurent, S. S. (1999). Inside XML DTDs: scientific and technical. Enter- prise computing. McGraw-Hill, New York, NY, USA. [Leucker and Noll, 2000] Leucker, M. and Noll, T. (2000). Truth- A real-world application in Haskell. In Mohnen, M., editor, Proceedings of the 12th International Workshop on Im- plementation of Functional Languages (IFL '00 ), number AIB-00-7 in Aachener Informatik Berichte, pages 363-380. RWTH Aachen. [Leucker and Noll, 2001] Leucker, M. and Noll, T. (2001). Truth/SLC- A parallel verifica- tion platform for concurrent systems. In Berry, G., Comon, H., and Finkel, A., editors, Proceedings of the 13th Conference on Computer Aided Verification (CA V '01 ), volume 2102 of Lecture Notes in Computer Science, pages 255-259. Springer-Verlag Inc. [Lowe and Duncan, 1997] Lowe, H. and Duncan, D. (1997). XBarnacle: Making theo- rem provers more accessible. In McCune, W., editor, Proceedings of the 14th Interna- tional Conference on Automated deduction, volume 1249 of LNAI, pages 404-407, Berlin. Springer. [Luo, 2004] Luo, Y. (2004). Coherence and Transitivity in Coercive Subtyping. PhD thesis, University of Durham. [Luo, 1990a] Luo, Z. (1990a). An Extended Calculus of Constructions. PhD thesis, Univer- sity of Edinburgh. Also as Report CST-65-90/ECS-LFCS-90-118, Department of Com- puter Science, University of Edinburgh. [Luo, 1990b] Luo, Z. (1990b). A problem of adequacy: conservativity of calculus of con- structions over higher-order logic. Technical report, LFCS report series ECS-LFCS-90-121, Department of Computer Science, University of Edinburgh. [Luo, 1994] Luo, Z. (1994). Computation and Reasoning: A Type Theory for Computer Science. Oxford University Press. [Luo, 1997] Luo, Z. (1997). Coercive subtyping in type theory. Proc. of CSL'96, the 1996 Annual Conference of the European Association for Computer Science Logic, Utrecht. LNCS 1258. BIBLIOGRAPHY 148 [Luo, 1999] Luo, Z. (1999). Coercive subtyping. Journal of Logic and Computation, 9(1):105-130. [Luo, 2003] Luo, Z. (2003). PAL+: A lambda-free logical framework. Functional Program- ming, 13(2):317-338. [Luo and Pollack, 1992] Luo, Z. and Pollack, R. (1992). LEGO Proof Development Sys- tem: User's Manual. LFCS Report ECS-LFCS-92-211, Department of Computer Science, University of Edinburgh. [Luo and Soloviev, 1999] Luo, Z. and Soloviev, S. (1999). Dependent coercions. In The 8th international conference on category theory and computer science (CTCS'99), Edinburgh, Scotland. Electronic Notes in Theoretical Computer Science 29. [Magnusson and Nordstrom, 1994] Magnusson, L. and Nordstrom, B. (1994). The ALF proof editor and its proof engine. 
In TYPES '93: Proceedings of the international work- shop on Types for proofs and programs, pages 213-237, Secaucus, NJ, USA. Springer- Verlag New York, Inc. [Manna and Pnueli, 1981] Manna, Z. and Pnueli, A. (1981). Verification of temporal pro- grams: the temporal framework. In Boyer, R. S. and Moore, J. S., editors, The Correctness Problem in Computer Science. Academic Press, New York. [Marques, 1998] Marques, F. (1998). Program composition in coq-unity. In Grundy, J. and Newey, M., editors, Supplementary proceedings of the 11th International Conference on Theorem Proving in Higher Order Logics (TPHOLs'98), pages 95-104, Canberra, Aus- tralia. Springer. [Martin-Li:if, 1975] Martin-Li:if, P. (1975). An intuitionistic theory of types: predicative part. In H.Rose and J.C.Shepherdson, editors, Logic Colloquium '73. [Martin-Li:if, 1984] Martin-Lof, P. (1984). Intuitionistic Type Theory. Bibioplois, Napoli. Notes of Giowanni Sambin on a series of lectures given in Padova. [McMillan, 1992] McMillan, K. L. (1992). Symbolic Model Checking: An approach to the state explosion problem. PhD thesis, Carnegie Mellon University. [McMillan, 2005] McMillan, K. L. (2005). Smv www page. http://WWW-2.cs.cmu.edu/-modelcheck/smv.html. [Meisels and Saaltink, 1997] Meisels, I. and Saaltink, M. (1997). The Z/EVES reference manual (for version 1.5). Technical Report TR-97-5493-03d, ORA Canada. [Merriam and Harrison, 1997] Merriam, N. and Harrison, M. (1997). What is wrong with guis for theorem provers. BIBLIOGRAPHY 149 [Milner, 1980] Milner, R. (1980). A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer-Verlag, Berlin. [Milner, 1989] Milner, R. (1989). Communication and Concurrency. International Series in Computer Science. Prentice Hall. SU Fisher Research 511/24. [Moller and Stevens, 1999] Moller, F. and Stevens, P. (1999). The Edinburgh Concurrency Workbench {Version 7.1). Laboratory for Fundations of Computer Science, University of Edinburgh. [Monroy-Borja, 1997] Monroy-Borja, R. (1997). Planning Proofs of Correctness of CCS system. PhD thesis, University of Edinburgh. [Naumov eta!., 2001] Naumov, P., Stehr, M.-0., and Meseguer, J. (2001). The HOL/NuPRL proof translator: A practical approach to formal interoperability. The 14th International Conference on Theorem Proving in Higher Order Logics, Edinburgh, Scotland. [Nesi, 1997] Nesi, M. (1997). Mechanising a modal logic for value-passing agents in HOL. Electronic Notes in Theoretical Computer Science 5, 1997, 5. [Nesi, 1999] Nesi, M. (1999). Formalising a value-passing calculus in HOL. Formal Aspects of Computing, 11(2):160-199. [Nielsen, 2005] Nielsen, J. (2005). Ten usability heuristics. http://WWW.useit.com/papers/heuristic/heuristic_list.html. [Nordstrom eta!., 1990] Nordstrom, B., Petersson, K., and Smith, J. M. (1990). Progmm- ming in Martin-Lof's Type Theory, volume 7 of International Series of Monogmphs on Computer Science. Oxford University Press, New York, NY. [Nordstrom eta!., 2000] Nordstrom, B., Petersson, K., and Smith, J. M. (2000). Martin- lof's type theory. In Abramsky, S., Gabbay, D. M., and Maibaum, T., editors, Handbook of Logic in Computer Science. Oxford University Press. [Owre eta!., 1997] Owre, S., Rushby, J., and Shankar, N. (1997). Integration in PVS: tables, types, and model checking. In Brinksma, E., editor, Tools and Algorithms for the Construction and Analysis of Systems TACAS '97, number 1217 in Lecture Notes in Computer Science, pages 366-383, Enschede, The Netherlands. Springer-Verlag. 
[Owre eta!., 1992] Owre, S., Rushby, J. M., and Shankar, N. (1992). PVS: A prototype verification system. Lecture Notes in Computer Science, 607:748-752. [Pang et a!., 2002] Pang, J., Callaghan, P., and Luo, Z. (2002). An approach to verification of domain properties based on LF. In Types 2002 WORKSHOP, Netherlands. BIBLIOGRAPHY 150 [Pang et al., 2005a] Pang, J., Callaghan, P., and Luo, Z. (2005a). LFTOP: An LF-based approach to domain-specific reasoning. Journal of Computer Science and Technology, 20(4):526-535. [Pang and Zhao, 2005] Pang, J. and Zhao, R. (2005). The features of list comprehensions of Haskell and their applications. Journal of Computer engineering and applications (in Chinese with English abstract), 41 No.4:99-101. [Pang et al., 2005b] Pang, J., Zhao, R., and Wang, H. (2005b). The higher-order features of Haskell and their applications. Journal of Computer Science (in Chinese with English abstract}, 32(6):167-168,198. [Pang et al., 2006a] Pang, J., Zhao, R., and Wang, H. (2006a). Verification of semantic properties based on logical framework LF. Journal of Computer Science (in Chinese with English abstract), 33(5):12-16,69. [Pang et al., 2006b] Pang, J., Zhao, R., and Wang, Q. (2006b). The features of lazy evalua- tion of Haskell and their applications. Journal of Computer engineering and applications (in Chinese with English abstract), 42 No.10:97-99,122. [Park, 1969] Park, D. (1969). Fixpoint induction and proofs of program properties. Machine Intelligence, 5:59-78. [Paulson, 1999] Paulson, L. (1999). Isabelle's logics. Technical report, Computer Labora- tory, Cambridge University. [Paulson, 2005] Paulson, L. (2005). Isabelle www page. http://WWW.cl.cam.ac.uk/Research/HVG/Isabelle. [Paulson, 1987] Paulson, L. C. (1987). Logic and computation: interactive proof with Cam- bridge LCF Cambridge Tracts in Computer Science 2. Cambridge University Press. [Paulson, 1994] Paulson, L. C. (1994). Isabelle: A generic theorem prover. Lecture Notes in Computer Science, 828:xvii + 321. [Paulson, 2000] Paulson, L. C. (2000). Mechanizing UNITY in Isabelle. ACM Transactions on Computational Logic, 1(1):3-32. [Paulson, 2001] Paulson, L. C. (2001). Mechanizing a theory of program composition for UNITY. ACM Transactions on Programming Languages and Systems, 23(5):626-656. [Pfenning, 1999] Pfenning, F. (1999). Logical frameworks. In Robinson, A. and Voronkov, A., editors, Handbook of Automated Reasoning. Elsevier Science Publishers. In prepara- tion. BIBLIOGRAPHY 151 [Plotkin, 1981] Plotkin, G. D. (1981). A structural approach to operational semantics. Tech- nical Report DAIMI FN-19, Computer Science Department, Aarhus University, Aarhus, Denmark. [Pollack, 2005] Pollack, R. (2005). Lego www page. http://WWW.dcs.ed.ac.uk/home/lego. [Pratt, 1976] Pratt, V. (1976). Semantical considerations of floyd-hoare logic. In Proc. 16th IEEE FOGS. [Pratt, 1982] Pratt, V. (1982). A decidable mu-calculus. In Proc. 22nd IEEE FOGS. [Prawitz, 1965] Prawitz, D. (1965). Natural Deduction, a Proof-Theoretic Study. Lmqvist and Wiksell. [Prawitz, 1973] Prawitz, D. (1973). Towards a foundation of a general proof theory. In P. Suppes et al., editor, Logic, Methodology, and Phylosophy of Science IV. [Prawitz, 1974] Prawitz, D. (1974). On the idea of a general proof theory. Synthese, 27. [Project, 2002] Project, T. C. D. T. L. (2002). The Coq Proof Assistant Reference Mamtal (version 7.3}. INRIA-Rocquencourt and CNRS-ENS Lyon. [Project, 2004] Project, T. C. D. T. L. (2004). 
The Coq Proof Assistant Reference Manual (version 8.0}. INRIA-Rocquencourt and CNRS-ENS Lyon. [Ranta, 1994] Ranta, A. (1994). Type- Theoretical Grammar, volume 1 of Indices. Clarendon Press, Oxford. [Ranta, 2005] Ranta, A. (2005). Gf www page. http://WWW.cs.ehalmers.se/-aarne/GF. [Rudnicki, 1992] Rudnicki, P. (1992). An overview of the Mizar project. In 1992 Workshop on Types for Proof and Programs. Chalmers University of Technology. [Saaltink, 1997] Saaltink, M. (1997). The Z/EVES user's guide. Technical Report TR-97- 5493-06, ORA Canada. [Safranek, 2001] Safranek, D. (2001). Verification tools for concurrent processes. Technical report, http://WWW.fi.muni.cz/paradise/acp_tools. [Saibi, 1997] Saibi, A. (1997). Typing algorithm in type theory with inheritance. Proc of POPL'97. [Saibi, 1999] Saibi, A. (1999). Outils Generiques de Modelisation et de Demonstration pour la Formalisation des Mathematiques en Theorie des Types. Application ?la Theorie des Categories. PhD thesis, INRIA. BIBLIOGRAPHY 152 [Seaman and Felty, 1993] Seaman, J. and Felty, A. (1993). Proving properties about a lazy functional language with the coq proof development system. http://WWW.site.uottava.ca/-afelty/bib.html. [Seaman and lyer, 1996] Seaman, J. and lyer, S. (1996). An operational semantics of sharing in lazy evalutation. Journal of Computer Programming, 27(3):289-322. [Sellink, 1994] Sellink, M. P. A. (1994). Verifying process algebra proofs in type theory. In Andrews, D. J., Groote, J. F., and Middelburg, C. A., editors, Proceedings of the International Workshop on Semantics of Specification Languages (SOSL '93 }, Workshops in Computing, pages 315-339, London, UK. Springer. [Shankar, 1986] Shankar, N. (1986). Proof Checking Metamathematics. PhD thesis, Univer- sity of Texas at Austin. [Siind and Boulton, 2000] Slind, K. and Boulton, R. (2000). Iterative dialogues and au- tomated proof. In Gabbay, D. M. and de Rijke, M., editors, Frontiers of Combining Systems 2 {Proceedings of the Second International Workshop, FroCoS'98, Amsterdam, The Netherlands, October 1998}, volume 7 of Studies in Logic and Computation, pages 317-335. Research Studies Press. [Smith, 1988] Smith, J. (1988). The independence of Peano's fourth axiom from Martin- Lof's type theory without universes. Journal of Symbol·ic Logic, 53(3). [Soloviev and Luo, 2000] Soloviev, S. and Luo, Z. (2000). Coercion completion and conser- vativity in coercive subtyping. In Annals of Pure and Applied Logic. [S!ilrensen and Urzyczyn, 1998] S!ilrensen, M. H. and Urzyczyn, P. (1998). Lectures on the Curry-Howard isomorphism. Technical report 98/14 (= TOPPS note D-368), Univ. of Copenhagen. [Steve Bishop and et al., 2005] Steve Bishop, M. F. and et al., M. N. (2005). Tcp,udp,and sockets:rigorous and experimentally-validated behavioural specification volume 1: Overview. Technical Report UCAM-CL-TR-624, University of Cambridge. [Stirling, 1992] Stirling, C. (1992). Modal and temporal logics. In Abramsky, S., Gabbay, D. M., and Maibaum, T. S. E., editors, Handbook of Logic in Computer Science. Volume 2. Background: Computational Structures, pages 477-563. Oxford University Press. [Tarski, 1955] Tarski, A. (1955). A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 25(2):285-309. [Thery et al., 1992] Thery, L., Bertot, Y., and Kahn, G. (1992). Real theorem provers de- serve real user-interfaces. Technical Report RR-1684, Inria, Institut National de Recherche en Informatique et en Automatique. 
[Trybulec, 2005] Trybulec, A. (2005). Mizar WWW page. http://mizar.org/.
[van Daalen, 1980] van Daalen, D. T. (1980). The Language Theory of Automath. PhD thesis, Technological University, Eindhoven.
[Vardi and Wolper, 1986] Vardi, M. and Wolper, P. (1986). An automata-theoretic approach to automatic program verification. In Proc. 1986 IEEE Symp. Logic Comput. Sci., pages 322-331, Cambridge.
[Vardi and Wolper, 1983] Vardi, M. Y. and Wolper, P. (1983). Yet another process logic. LNCS, 164:501-512.
[Victor, 1994] Victor, B. (1994). A Verification Tool for the Polyadic π-Calculus. Licentiate thesis, Department of Computer Systems, Uppsala University, Sweden. Available as report DoCS 94/50.
[Walukiewicz, 1993] Walukiewicz, I. (1993). A Complete Deductive System for the μ-Calculus. PhD thesis, Warsaw University.
[Walukiewicz, 1995] Walukiewicz, I. (1995). Notes on the propositional μ-calculus: Completeness and related results. Technical Report NS-95-1, BRICS, Dept. of Computer Science, Univ. of Aarhus, Denmark.
[Wenzel, 2002] Wenzel, M. M. (2002). Isabelle/Isar - a versatile environment for human-readable formal proof documents. PhD thesis, TU München.
[Wiedijk, 2001] Wiedijk, F. (2001). Mizar light for HOL light. Lecture Notes in Computer Science, 2152:378-393.
[Winskel, 1989] Winskel, G. (1989). A note on model checking the modal nu-calculus. In Ausiello, G., Dezani-Ciancaglini, M., and Rocca, S. R. D., editors, Automata, Languages and Programming, 16th International Colloquium, volume 372 of Lecture Notes in Computer Science, pages 761-772, Stresa, Italy. Springer-Verlag.
[Yu, 1999] Yu, S. (1999). Verification of Concurrent Programs Based on Type Theory. PhD thesis, University of Durham.
[Yu and Luo, 1997] Yu, S. and Luo, Z. (1997). Implementing a model checker for LEGO. In Fitzgerald, J., Jones, C. B., and Lucas, P., editors, FME'97: Industrial Applications and Strengthened Foundations of Formal Methods (Proc. 4th Intl. Symposium of Formal Methods Europe, Graz, Austria, September 1997), volume 1313 of Lecture Notes in Computer Science, pages 442-458. Springer-Verlag. ISBN 3-540-63533-5.

Index

μ-calculus, 46
LAZY-PCF+SHAR, 70
ALF, 20
Automath, 20
bisimulation, 49
CCS, 41
congruence, 48
Coq, 4, 20
DSL, 1
ECC, 23
HML, 44
HOL, 23
Isabelle, 4, 20
Isabelle/Isar, 4
Lego, 4, 20
LF, 2
LTS, 41
MIZAR, 22
Nuprl, 20
PDL, 44
Plastic, 20
Proof General, 38
TAME, 5
Truth, 5
XML, 89
Z/EVES, 5