Report No. UIUCDCS-R-83-1142
UILU-ENG 83 1724

Mutual Consistency Maintenance in a Prototype Data Traffic Management System

by

G. G. Belford, J. W. S. Liu, D. Cho, P. Cotten, S. England, J. D. Goldstein, S. C. Hwung, K. A. Kaufman, C. K. Kim, J. Leo, A. P. Manolas, A. Moon, A. D. Smet, Y. L. Yan
Department of Computer Science, University of Illinois, 1304 W. Springfield Avenue, Urbana, Illinois 61801

and

G. L. Robinson and R. L. Lapp
Department of the Army Construction Engineering Research Lab., Interstate Research Park, Newmark Drive, P. O. Box 4005, Champaign, Illinois 61820

August 31, 1983

This work was partially supported by the U. S. Army Construction Engineering Research Lab., Champaign, Illinois, Contract No. DACW88-81-C-0011.

TABLE OF CONTENTS

Abstract
List of Figures
Chapter 1. Introduction
Chapter 2. Background and Mission
Chapter 3. Prototype Data Traffic Management System
Chapter 4. The Linkage Directory
Chapter 5. Mutual Consistency Maintenance Subsystem
Chapter 6. Summary and Conclusions
References
Appendix A. List of Abbreviations
Appendix B. UCP and TUTHP in the AMPRS II-to-CAPCES Link

ABSTRACT

This report describes the mutual consistency maintenance mechanism used in a prototype Data Traffic Management System (DTMS) which was designed and implemented to integrate several independent and stand-alone data systems owned by the U. S. Army Corps of Engineers (CE).
The major design constraints were that no modifications to the data systems involved were allowed and that an update to each duplicate copy was to be handled as automatically as possible and in a manner approved by the administrator of the data system containing the copy. The purpose of this work was to develop and test operating procedures which can be expanded and modified so as to be applicable in the implementation of an operational version of DTMS. The capabilities of the prototype DTMS and the Mutual Consistency Maintenance Subsystem (MCMS) are discussed. The linkage directory used to support mutual consistency maintenance and the mutual consistency maintenance mechanism are described.

LIST OF FIGURES

Figure 1. Configuration of the Prototype DTMS
Figure 2. Hierarchical Structure of the Linkage Directory
Figure 3. General Structure of the MCMS
Figure 4. Project Data Input from AMPRS II
Figure 5. Description File
Figure 6. Examples of terminal displays and user responses
Figure 7. Acknowledgement message to AMPRS II user

CHAPTER 1. INTRODUCTION

In many distributed systems, duplicate copies of files or records are kept at separate sites for reasons ranging from improvement in reliability and query response time to preservation of independent ownership of information gathered at different sites. Many update schemes have been developed [1-6] to ensure mutual consistency. It is typical to assume that the individual sites are connected via dedicated lines and that, in the absence of network and host failures, updates to all sites can be carried out automatically under the control of a distributed database management system or a distributed operating system. This report describes the mutual consistency maintenance mechanism used in a prototype Data Traffic Management System (DTMS), which was designed and implemented to integrate several independent and stand-alone data systems owned by the U. S. Army Corps of Engineers (CE).
Some of these data systems reside on a single host computer, but others are on remote hosts. Communication facilities provided to support host-to-host communication primarily consist of dial-up lines. While the problem of maintaining consistency among several data systems on different host computers presents no serious difficulties in theory, the ideal solutions to the problem may not be implementable in a real situation because of political constraints. In the case of this pilot project, different data systems are operated and maintained by different organizations. Because authorization and access rights reside with the database administrators and users of the individual data systems, it is not always possible to update duplicate copies automatically. In fact, the constraints imposed on the design of the prototype DTMS were that no modifications to the data systems served by DTMS are allowed and that updates to duplicate copies must be handled in ways approved by the administrators of the individual data systems. Because of these constraints, the solutions described in [1-6] were not applicable. The prototype DTMS was designed and implemented to demonstrate the feasibility of integrating independent data systems to form a distributed data system in which mutual consistency is maintained between the component data systems and coherent access to more than one data system is supported. Specifically, the purpose of this project was to develop a general framework within which the mutual consistency maintenance mechanism and distributed query processing facility can be tailored to meet the constraints of the individual data systems. Our primary objective was to develop and test operating procedures which can be expanded and modified so as to be applicable in the implementation of an operational version of a DTMS serving a dozen or more major data systems. To provide the necessary background, Chapter 2 describes the motivation and mission of the project.
Chapter 3 discusses the capabilities of the prototype DTMS, in general, and of the Mutual Consistency Maintenance Subsystem (MCMS), specifically. Chapter 4 describes the structure of the linkage directory that was implemented to support mutual consistency maintenance. Chapter 5 describes the mutual consistency maintenance mechanism used in the prototype MCMS. Our conclusions are summarized in Chapter 6. CHAPTER 2. BACKGROUND AND MISSION This chapter is an expansion of the objectives and scope stated in the previous chapter. Within this chapter, the work on DTMS is placed within the context of the mission of the CE and its responsible offices in project management, i.e., the management of design, construction, operation, maintenance, and disposal of military facilities throughout the facility life cycle. The terminology used in this report is also defined here. 2-1. Project Data In order to carry out its tasks in project management, the CE requires information on all associated facilities from facility inception, through construction completion (or real estate acquisition), and through disposal for Army-owned facilities. The information on the facilities is also used by other organizations and agencies to support a wide range of activities during the facility life cycle. Currently, the information on different projects is stored in a large number of stand-alone data systems. Data items relating to a single project stored in one data system are referred to collectively as the project data for that project. Each data system has an administrator. In this case, the administrator is an organization element responsible for the operation of the data system and the maintenance of the data base stored in the system. The administrator controls data base access, issues user identification, and manages password procedures. He has full accountability for the adequacy of the data system and the accuracy of the information stored in the system. 
The users of the data systems are project managers. Users are granted write access to the project data for their projects by the administrator.

2-2. Redundancy in Stored Project Data

Since different organizations at different levels of project management collect and maintain the data stored in different data systems, redundancy in stored data is unavoidable. Moreover, due to different design objectives, the data systems often support different data models, user views, and user interfaces. For example, different file formats and identifiers may be used for data on the same project. The same data item may also be stored in different data systems in different formats. During any phase of the life cycle of a facility, all updates of a data item that is part of the project data related to this facility are first carried out in the data system used to support project management during that phase. Therefore, this copy of the data item may be considered as the primary copy. The user or the administrator who has the write access right to the primary copy of the data item is referred to as the owner of the data item. It is assumed that, at any one time, there is only one primary copy of any data item. Other copies of the data item stored in other data systems are duplicate copies. The user with the write access right to a duplicate copy of a data item is referred to as the responsible user of the data system containing that copy. An update of a data item is carried out when the owner of the data item issues an update (transaction) in the data system containing the primary copy. This data system is referred to as the sending data system for that transaction. The data systems containing copies of one or more data items affected by an update transaction are referred to as the receiving data systems. An update transaction sent to a receiving data system to request update of a duplicate copy is referred to as a triggered update (transaction). 2-3.
Data Traffic Management System Currently, updates to all duplicate copies of an item for the purpose of mutual consistency maintenance are applied manually upon receipt of information detailing the update that has been applied to the primary copy. This process is time-consuming and liable to error. Foreseeing the need to provide a more effective mutual consistency maintenance mechanism as well as a coherent interface to support report generation based on information residing in more than one data system, a Data Traffic Management System (DTMS) was planned. The functions to be supported by the DTMS can be categorized into the following two classes: (1) Maintenance of mutual consistency between data systems DTMS monitors updates to data systems served by it and determines whether data items updated have duplicate copies. When a data item with duplicate copies is updated, DTMS will submit an appropriately formulated triggered update transaction to each of the receiving data systems. The administrator of a receiving data system will be notified whenever triggered updates have been carried out if his system allows automatically triggered updates. Alternatively, DTMS will inform him that the triggered update is needed and prompt him for appropriate actions. When a triggered update is carried out, an acknowledgement message will be sent to the owner of the data item. When a triggered update is deferred or rejected, the acknowledgement message will contain reasons for deferral or rejection. (2) Report preparation and management decision support requiring access to information stored in more than one data system The DTMS will provide support to facilitate queries to multiple data systems and generation of standard reports. A dictionary/directory system will be an integral part of the DTMS. CHAPTER 3. PROTOTYPE DATA TRAFFIC MANAGEMENT SYSTEM This chapter provides an overview of the prototype DTMS. Its capabilities and configuration are described. 
The prototype DTMS was designed to demonstrate the feasibility of (1) maintenance of mutual consistency between data bases stored in the data systems served by DTMS and (2) report generation and query processing requiring access to information stored in more than one data system. Two major design constraints were that no modifications to the data systems involved were allowed and that updates to duplicate copies triggered by an update of the primary copy were to be handled as automatically as possible, with minimal user intervention. The primary objective of this pilot project was that the procedures and mechanisms developed and tested in the prototype should be readily usable in an operational version of the DTMS. The implementation of the prototype DTMS is partially completed. Its configuration is as shown in Figure 1. The portion shown in solid lines is implemented. The portions shown in dotted lines are still in the design stage. The DTMS currently resides on an IBM 370/3033 host computer operated by Tymshare, Inc. This computer is referred to hereafter as the DTMS host. The detailed characteristics of the individual data systems are not relevant. Three of these systems, the DD Form 1391 Processor, CAPCES, and AMPRS II, reside on the DTMS host and are supported by the database management system FOCUS. DEACONS runs on a Wang Labs VS90 computer which operates in IBM 370 emulation mode. HQ-IFS runs on an IBM-compatible computer under the NOMAD data management system. AR415-17 runs on a Honeywell 66/80 computer. The COEMIS F&A systems reside on Honeywell 66/20 computers. The Mutual Consistency Maintenance Subsystem (MCMS) is implemented to enforce mutual consistency between AMPRS II, CAPCES, and DEACONS, as shown by the solid boxes in Figure 1. The MCMS is being expanded to add the DD Form 1391 Processor to the list of data systems served.
These data systems were chosen to be served by the prototype MCMS in order to demonstrate that the mechanisms used in the prototype MCMS can enforce mutual consistency between data systems both on and not on the DTMS host. A small and efficient directory is kept by the DTMS to support maintenance of mutual consistency. This directory, referred to as the linkage directory hereafter, contains information on locations of and access paths to duplicate copies of data items.

[Figure 1. Configuration of the Prototype DTMS]

CHAPTER 4. THE LINKAGE DIRECTORY

The procedure UPBULK carries out a number of integrity checks, and any errors discovered are logged into an error listing for subsequent correction.

CHAPTER 5. MUTUAL CONSISTENCY MAINTENANCE SUBSYSTEM

This chapter describes the MCMS in the prototype DTMS, which is designed and implemented to maintain mutual consistency between AMPRS II and CAPCES and between AMPRS II and DEACONS. These two components of the subsystem are referred to as the AMPRS II-to-CAPCES link and the AMPRS II-to-DEACONS link, respectively. In this subsystem, the sending data system is AMPRS II. The receiving data system is either CAPCES or DEACONS.

5-1. General Structure of MCMS

The general configuration of the MCMS is as shown in Figure 3. For each of the data systems served by DTMS, there is an update capture process (UCP) and a triggered update transaction handling procedure (TUTHP). Mutual consistency is maintained in the following manner: The UCP of each data system determines whether any updates originated in the system require that triggered updates be carried out in other data systems. When a triggered update is required, the TUTHP of the receiving data system is called to carry out the update in the manner approved by the administrator of that system. Triggered updates may be carried out automatically or semiautomatically by DTMS, or manually with the help of DTMS. The two methods implemented in the prototype MCMS are described later in this section.
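The kind of record the linkage directory must hold, as described above, can be sketched in modern terms as a mapping from each primary copy to the duplicate copies that depend on it. This is a minimal illustrative sketch, not the FOCUS-based implementation; the field and function names, and the sample entries, are hypothetical.

```python
# Minimal sketch of a linkage directory: for each (sending system, item name)
# holding a primary copy, record where the duplicate copies live, how they
# are named and formatted there, and who is responsible for them.
# All names and sample entries are hypothetical.

from dataclasses import dataclass

@dataclass
class DuplicateCopy:
    receiving_system: str   # e.g. "CAPCES" or "DEACONS"
    item_name: str          # name of the item in the receiving system
    item_format: str        # format of the item in the receiving system
    responsible_user: str   # user id to be notified or prompted

# Keyed by (sending data system, item name of the primary copy).
linkage_directory = {
    ("AMPRS II", "PAD"): [
        DuplicateCopy("CAPCES", "PAD482", "numeric", "capces_user_1"),
        DuplicateCopy("DEACONS", "PAD", "numeric", "deacons_user_1"),
    ],
}

def duplicates_of(sending_system, item_name):
    """Return the duplicate-copy records for a primary copy, if any."""
    return linkage_directory.get((sending_system, item_name), [])
```

An item with no entry simply triggers nothing, which is why the lookup returns an empty list rather than failing.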
Update Capture

Depending on the manner in which the sending data system is interfaced to DTMS, the UCP may extract information on updates from update transaction files or from history files maintained by the sending data system. The UCP may also extract update information by monitoring the interactive work sessions during which update transactions are entered. The UCP may be awakened by the DTMS monitor at regular time intervals. Alternatively, it may be awakened by a notification signal from the sending data system at the time when an update is carried out in that system. Clearly, depending on the characteristics of the data systems involved and their interfaces with DTMS, many different types of UCP need to be provided. When the UCP of a sending data system runs, it determines whether any new updates have been made since the last time it ran. For each new update, the UCP captures all relevant information on the update.

[Figure 3. Mutual Consistency Maintenance Subsystem]

By consulting the linkage directory, the UCP determines whether there are duplicate copies of the data item stored in other data systems. If there are, the UCP finds from the linkage directory the item name and format and the responsible user id in each of the receiving data systems. Necessary conversions in formats and values are also carried out. The result generated by the UCP for each update transaction is an input file for each receiving data system. Each input file contains the complete information needed by the receiving data system to carry out the triggered updates of all data items whose primary copies were updated by the update transaction.
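The grouping step just described, from one update transaction to one input record per receiving data system, can be sketched as follows. This is an illustrative sketch only; the function name and the flattened representation of the linkage directory are hypothetical, and format conversion is omitted.

```python
# Sketch of the update-capture step: given the (item, new value) pairs
# captured from one update transaction, consult a linkage directory and
# build the per-receiving-system input files.  Names are hypothetical.

def capture_updates(transactions, linkage):
    """transactions: list of (item_name, new_value) pairs from one update
    transaction.  linkage: dict mapping item_name to a list of
    (receiving_system, item_name_there) pairs.  Returns a dict mapping
    each receiving system to the list of updates it must apply."""
    input_files = {}
    for item, new_value in transactions:
        for receiving_system, remote_name in linkage.get(item, []):
            # One input file per receiving system, collecting every
            # affected item from this transaction.
            input_files.setdefault(receiving_system, []).append(
                (remote_name, new_value))
    return input_files
```

Items without duplicate copies fall through silently, so a transaction touching only unlinked items produces no input files at all.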
Because triggered updates are handled in different ways, the contents and formats of the input files are different for different receiving data systems. In the case where DTMS is allowed to carry out the triggered updates automatically or semiautomatically after permission is granted by the user, the input file is used directly as the input to the TUTHP of the receiving data system. In the prototype MCMS, automatic triggered updates were not allowed. In this case, a corresponding description file is also generated by the UCP. The description file is used as a part of the message sent to the responsible user of the receiving data system giving him the details on the triggered update transaction and requesting his permission to carry it out. The description file is also used as a part of the acknowledgement message sent by DTMS to the originating user. Triggered Update Transaction Handling When the TUTHP of a receiving system is called, each input file (and, if applicable, the corresponding description file) is processed in turn. For each of the data items contained in the input file, the TUTHP retrieves the needed access path information from the linkage directory. Based on this information, an appropriately formulated update transaction is generated and submitted to the receiving data system. The transaction may be submitted automatically by DTMS. Alternatively, as in the case of the AMPRS II-to-CAPCES link (described later), information such as the new and old values of each data item is presented to the responsible user. The user may choose to have the triggered update carried out, deferred, or rejected. If he chooses to carry out the triggered update, the triggered update is formulated and submitted for him. On the other hand, if he chooses not to carry out the triggered update, he is prompted to type in reasons for his action. In either case, the acknowledgement sent to the originating user contains information on his action and reasons. 
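The per-item handling just described, in which each triggered update is carried out, deferred, or rejected and the originating user receives an acknowledgement recording the action and any reasons, can be sketched as follows. The decision callback stands in for the interactive session; all names here are hypothetical.

```python
# Sketch of triggered-update handling: each item in an input file is
# accepted, deferred, or rejected, and an acknowledgement line records
# the action taken and the user's stated reasons.  Names are hypothetical.

def handle_triggered_updates(input_file, decide):
    """input_file: list of (item_name, new_value).  decide: callback
    returning ('accept', None) or ('defer' | 'reject', reason).
    Returns the applied updates and the acknowledgement lines."""
    applied, acknowledgement = [], []
    for item, new_value in input_file:
        action, reason = decide(item, new_value)
        if action == "accept":
            applied.append((item, new_value))   # submit to receiving system
            acknowledgement.append(f"{item}: ACCEPTED")
        else:
            # Deferral or rejection must carry the user's stated reason.
            acknowledgement.append(f"{item}: {action.upper()} ({reason})")
    return applied, acknowledgement
```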
In the case where the triggered update transactions to a receiving data system are handled completely automatically by MCMS, the TUTHP of that system is called by the UCP of the sending data system. In the case where triggered updates are handled in batch mode, the TUTHP is periodically awakened. When a triggered update transaction is submitted to and carried out by the receiving data system, the duplicate copy in that system is updated. Clearly, triggered updates should not in turn trigger more updates. Hence, a fully automatic system would require some method, such as a flag in the linkage directory to indicate which is the primary copy, to prevent cyclic update triggering. In the examples described below, triggered updates are handled semiautomatically under user supervision, since the administrators of the receiving data systems will not allow triggered updates to be carried out automatically.

5-2. The Mechanism Implemented in the Prototype MCMS

The prototype MCMS is designed and implemented to maintain mutual consistency between AMPRS II and CAPCES and between AMPRS II and DEACONS. The AMPRS II UCP and the CAPCES TUTHP are described in detail in Appendix B.

Interface with AMPRS II

When an AMPRS II user initiates an update, the information required to carry out the update is placed in an update transaction file. This file is processed when the update to the AMPRS II data system is actually carried out. This may be either immediately after the transaction is submitted or after it has been batched together with other update transactions. The AMPRS II UCP interfaces with AMPRS II via the AMPRS II update transaction files. At regular intervals, the AMPRS II UCP copies the update transaction files from the AMPRS II user space into the DTMS space. By processing the update transaction file, the UCP determines the data items updated and their new values.
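The cyclic-trigger guard suggested in Section 5-1, a primary-copy flag in the linkage directory, can be sketched as follows. This is an illustrative sketch under that assumption; the flag did not exist in the prototype, and the names are hypothetical.

```python
# Sketch of a guard against cyclic update triggering: a flag in the
# linkage directory records which system holds the primary copy, and
# only updates applied to the primary copy may trigger further updates.
# Names and sample data are hypothetical.

primary_copy = {"PAD": "AMPRS II"}   # item -> system holding the primary copy

def should_trigger(item, updated_in):
    """Trigger further updates only when the primary copy was updated;
    a triggered update applied to a duplicate copy must not cascade."""
    return primary_copy.get(item) == updated_in
```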
Triggered Update Handling in the AMPRS II-to-CAPCES Link

When the receiving data system is CAPCES, the information extracted from an update transaction file is as listed in Figure 4.

Sending Data System Identification
User (Project Manager, Sender) Identification
Project Identification (including items below, as applicable):
    Station (Location) Code (Standard)
    Project Number
    Symbol Type Funds Symbol
    Authorization (Program) Year (Last Two Digits)
Project Data Item Name-Value Pairs:
    Name (or Number) 1, Value 1
    Name (or Number) 2, Value 2
    ...
    Name (or Number) n, Value n

Figure 4. Data extracted from AMPRS II

Since the CAPCES database administrator does not allow completely automatic triggered updates, triggered updates are handled in the following semiautomatic manner: The corresponding description file, shown in Figure 5, is used as a part of a message which is displayed for the CAPCES user who is responsible for updating the data item in CAPCES. It is also inserted in the acknowledgement message returned to the AMPRS II user who initiated the update. Due to the limitation of 80 characters per line on most CRT terminals, each of the data values in a description file is at most 15 characters in length. Similarly, because CRT terminals on Tymshare systems operate in scroll mode and display only 24 lines per screen, it is not desirable to display any file containing more than 24 lines. (It will be difficult for a user to read the lines at the beginning of the file as the lines scroll upwards if there are more than 24 lines to be displayed.) For this reason, the number of data items in each description file is limited to five in the prototype AMPRS II-to-CAPCES link. It is noted that this limitation need not be imposed in the operational version of the DTMS.
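The two display limits just described, values trimmed for an 80-column line and at most five items per description file so a 24-line scroll-mode screen is not exceeded, can be sketched as follows. The function name is hypothetical; only the limits themselves come from the text above.

```python
# Sketch of the description-file limits: values are truncated to 15
# characters (80-column terminal lines) and items are split into files
# of at most five entries (24-line scroll-mode screens).  The function
# name is hypothetical.

def build_description_files(items, max_items=5, max_value_len=15):
    """items: list of (name, new_value, original_value) triples.
    Returns a list of description files, each holding at most max_items
    entries with values truncated to max_value_len characters."""
    trimmed = [(name, str(new)[:max_value_len], str(old)[:max_value_len])
               for name, new, old in items]
    # Split into chunks of at most max_items entries each.
    return [trimmed[i:i + max_items]
            for i in range(0, len(trimmed), max_items)]
```

A transaction touching seven linked items would thus yield two description files, of five and two items.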
A more sophisticated window manager, which presents consecutive segments of the displayed file to users as individual pages or allows scrolling one line at a time under user control, may be developed for the operational version of the DTMS. Such a window manager will provide a more suitable user interface for displaying files of arbitrary lengths. For every data item in the description file displayed to him, the responsible CAPCES user is given the choice of having the copy of the data item in CAPCES updated as in AMPRS II, of refusing to carry out the update, or of deferring the update until some later time. If his choice is to update the data item, the update to CAPCES is automatically carried out for him. Otherwise, he is prompted to type in reasons for refusal or deferral. In the following, the interaction between the CAPCES user and the DTMS is described.

(1). Informing the CAPCES User. The CAPCES user is notified of an incoming message from DTMS immediately upon sign-on. A message with the following format is displayed on his terminal when he signs on.

DTMS MESSAGES ARE WAITING. DO YOU WANT MESSAGES DISPLAYED?

(2). Display of Description File and Prompt for Action. The user responds by typing in yes (or y) or no (or n).

(i). If "no" (or "n") is the response, the short message (described in (1)) and the description file(s) are stored for the user. The short message is displayed each time he signs on.

Receiving Data System: CAPCES
    CAPCES User ID
    Project ID in CAPCES: Station (Location) Name, Project Number, Symbol Type Funds Name, Authorization (Program) Year, Project Description (Title)
Sending Data System: AMPRS II
    AMPRS II User ID
    Project ID in AMPRS II: Station (Location) Code, Project Number, Symbol Type Funds Code, Authorization (Program) Year, Category Code (F4C)
Data items to be updated in CAPCES:
    No.  Name in CAPCES  New Value  Original Value  Name in AMPRS II  Value in AMPRS II
    1    Name (No.) 1    Value 1    Value 1         Name (No.) 1      Value 1
    2    Name (No.) 2    Value 2    Value 2         Name (No.) 2      Value 2
    ...
    5    Name (No.) 5    Value 5    Value 5         Name (No.) 5      Value 5

Figure 5. Format of Description File

(ii). If the user types "yes", the terminal displays the following:

DATE YY MM DD  TIME hh:mm:ss  MSG NO. DTMS Message Identification Number
(the description file corresponding to this msg. no.; see Figure 5)
SELECT ONE OF THE FOLLOWING ACTIONS:
1. ACCEPT ALL ITEMS
2. DEFER ALL ITEMS
3. REJECT ALL ITEMS
4. ACCEPT IN PART

(3). Further Interactions. (i) If the selection of the user is either 1, 2, or 3, the terminal displays 1, 2, or 3, respectively, as shown below:

1. PROJECT DATA UPDATING COMPLETED
2. TYPE DEFERRAL REASON(S)
3. TYPE REJECTION REASON(S)

In the case of 2 or 3, the user types in reasons for deferral or rejection of the update of the data items in CAPCES. The following are examples of user responses to messages 2 and 3, respectively.

2. UPDATES TEMPORARILY DEFERRED TO STABILIZE OUR REPORTS
3. PROJECT WAS DELETED PER DIRECTIVE (NUMBER)

(ii). In the case where the user response is 4, descriptions of the data items are displayed one at a time. For each data item displayed, the user is prompted to type in his choice of action. Examples of the terminal displays and user responses are shown in Figure 6. Lines in upper case letters are displayed by the system and lines in lower case letters are user responses.

LINE NO.  NAME IN CAPCES  NEW VALUE  ORIGINAL VALUE  NAME IN AMPRS II  VALUE IN AMPRS II
1         XY2XY2          42         41              XY2               72
>ENTER 1 TO ACCEPT, 2 TO DEFER, OR 3 TO REJECT: 3
>TYPE REJECTION REASON(S): The project was deleted.
(blank line)

LINE NO.  NAME IN CAPCES  NEW VALUE  ORIGINAL VALUE  NAME IN AMPRS II  VALUE IN AMPRS II
2         PAD482          4003       2032            PAD               6000
>ENTER 1 TO ACCEPT, 2 TO DEFER, OR 3 TO REJECT: 2
>TYPE DEFERRAL REASON(S): update temporarily deferred to stabilize our reports on this project. update will be carried out before September 10, 1983.
(blank line)

Figure 6. Examples of terminal displays and user responses

(4).
Completion of All Triggered Updates. For each triggered update transaction to be carried out in CAPCES, there is a message formatted as described in (2.ii) waiting to be read by the CAPCES user. For each message, steps (2) and (3) are repeated until the following message appears.

NO MORE MESSAGES

Or, in the case where the user wishes to stop before all triggered updates are taken care of, unprocessed messages are stored for him to handle in the next session.

(5). Acknowledgement to AMPRS II User. For each triggered update transaction acted upon by the CAPCES user, an acknowledgement message is sent to the AMPRS II user identified in the user name field of the description file. This acknowledgement message informs the AMPRS II user which action has been taken by the responsible CAPCES user. Figure 7 summarizes the acknowledgement messages that might be received by the AMPRS II user. The last line in Figure 7 is displayed only when the message (case 4 in the figure) indicates that updates to some data items are rejected. Whenever the response to the question DO YOU NEED ASSISTANCE is "yes", the user is prompted to type in the name and telephone number of the person to be contacted. Appropriate messages are then sent to the DTMS management office so that help may be provided to the CAPCES and AMPRS II users. After a triggered update is acted upon by the responsible CAPCES user, the content of the description file for that triggered update is kept in one or more history files. More specifically, one history file contains information on updates in which some or all items were accepted. There are also two "wait" files, one of which contains descriptions of updates which were entirely or in part deferred or rejected, and the other of which contains information in machine-readable form on deferred transactions, so that they can be processed later.

Triggered Update Handling in the AMPRS II-to-DEACONS Link

DEACONS is the data system containing information on Air Force design and construction projects.
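The action menu of step (2.ii) and the filing of acted-upon description files into the history and "wait" files can be sketched together as follows. The per-item decisions are passed in rather than read from a terminal, and all function and file names here are hypothetical.

```python
# Sketch of (a) the four-way action menu: choices 1-3 apply one action to
# every item, choice 4 takes a per-item decision; and (b) the routing of
# an acted-upon description file into history and wait files.
# Function and file names are hypothetical.

def select_actions(choice, items, per_item=None):
    """choice: 1 accept all, 2 defer all, 3 reject all, 4 accept in part.
    items: list of item names.  per_item: dict name -> 'accept' | 'defer'
    | 'reject', required for choice 4.  Returns dict name -> action."""
    blanket = {1: "accept", 2: "defer", 3: "reject"}
    if choice in blanket:
        return {item: blanket[choice] for item in items}
    if choice == 4:
        # Each displayed line gets its own accept/defer/reject response.
        return {item: per_item[item] for item in items}
    raise ValueError("choice must be 1, 2, 3, or 4")

def route_description_file(actions):
    """actions: dict item name -> action.  Returns the set of files the
    description should be recorded in."""
    files = set()
    if any(a == "accept" for a in actions.values()):
        files.add("history")            # some or all items accepted
    if any(a in ("defer", "reject") for a in actions.values()):
        files.add("wait_descriptions")  # human-readable record
    if any(a == "defer" for a in actions.values()):
        files.add("wait_transactions")  # machine-readable, replayed later
    return files
```

A partially accepted file thus lands in both the history file and the wait files, matching the bookkeeping described above.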
ACTION / DISPLAY

DATE YY MMM DD  TIME hh:mm:ss  MSG NO. DTMS Message Identification Number
(corresponding description file)

1. YOUR MESSAGE (NUMBER) IS ACCEPTED

2. YOUR MESSAGE (NUMBER) IS DEFERRED
   REASON(S): UPDATES TEMPORARILY DEFERRED TO STABILIZE OUR REPORTS

3. YOUR MESSAGE (NUMBER) IS REJECTED
   REASON(S): PROJECT WAS DELETED PER DIRECTIVE (NUMBER)
   DO YOU NEED DTMS ASSISTANCE?

4. YOUR MESSAGE (NUMBER) IS TREATED AS FOLLOWS
   LINE I DEFERRED (REASON: DATA ITEM VALUE TEMPORARILY ....)
   LINE J DEFERRED (REASON: UPDATE TEMPORARILY DEFERRED ....)
   LINE K REJECTED (REASON: SHOULD NOT DATE 980331 BE 890331?)
   ALL OTHERS ARE ACCEPTED
   DO YOU NEED ASSISTANCE?

Figure 7. Acknowledgement message to AMPRS II user

Since the administrator of DEACONS does not permit DTMS software to be resident on the DEACONS host computer, the AMPRS II-to-DEACONS triggered updates must be applied to DEACONS in batch mode. Specifically, the DEACONS administrator simply wants a complete project description, created from AMPRS II data, placed in a file accessible to him whenever any of the data items pertaining to an Air Force design and construction project are changed in AMPRS II. Hence, there are no DTMS interactions with the DEACONS user or direct applications of the updates as in the AMPRS II-to-CAPCES link. A "triggered update transaction" in the case of the AMPRS II-to-DEACONS link is therefore handled in the following manner: When the AMPRS II UCP finds that the project id of a transaction contained in an AMPRS II update transaction file is that of an Air Force project, the transaction is written to the input file generated for the DEACONS TUTHP. In this case, the input file is in the same format as the AMPRS II update transaction file. The DEACONS TUTHP is called to process the Air Force project transactions file as follows. The linkage directory is checked to see if a transaction updates an item that requires a triggered update to DEACONS.
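The identifier collection and consolidation performed by this batch link, filtering for items linked to DEACONS and then sorting the affected project identifiers with duplicates removed, can be sketched as follows. This is an illustrative sketch only; the linkage check is reduced to set membership, and all names are hypothetical.

```python
# Sketch of the batch consolidation in the AMPRS II-to-DEACONS link:
# transactions touching items linked to DEACONS contribute their project
# identifiers, which are sorted and deduplicated so that each affected
# Air Force project yields one output record.  Names are hypothetical.

def deacons_project_ids(transactions, linked_items):
    """transactions: list of (project_id, item_name) pairs from the Air
    Force transactions file.  linked_items: set of item names that have
    duplicate copies in DEACONS.  Returns a sorted, duplicate-free list
    of project identifiers needing a refreshed DEACONS record."""
    ids = {project_id
           for project_id, item in transactions
           if item in linked_items}   # keep only items linked to DEACONS
    return sorted(ids)                # sorted, duplicates removed
```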
For all such transactions, the project identifier is written to a file for further processing. This file is sorted by project identifier and duplicates are deleted. For each project identifier, a set of data element values is retrieved from AMPRS II. These values are converted to the formats of the corresponding data elements in DEACONS. They are then combined with the project identifier and written to an output file in a format suitable for automated processing by DEACONS. This output file is then spooled to the DEACONS user ID (on the DTMS host), from which it may be retrieved by the DEACONS user and applied to the DEACONS database. The details of the design and implementation of the AMPRS II-to-DEACONS link can be found in [10].

CHAPTER 6. SUMMARY AND CONCLUSIONS

This report describes the MCMS of the prototype DTMS designed and implemented by the Department of Computer Science, University of Illinois at Urbana-Champaign, for the Army Construction Engineering Research Laboratory. The work on the prototype DTMS was carried out to demonstrate the feasibility of creating a DTMS which would serve as a control point to coordinate and integrate the major data systems owned and accessed by the Corps of Engineers. Once the feasibility of these procedures has been demonstrated, the prototype implementation can be expanded and modified so as to be applicable to the dozen or so major data systems to be served by DTMS. Theoretically, the problem of maintaining consistency among copies of data items distributed among several data systems presents no major problems. Many solutions have been proposed. The lesson learned in this project was that ideal solutions to a problem may not be implementable in a real situation, not because of technical difficulties but because of political constraints.
The design of the prototype DTMS was constrained by the requirements that no modifications to the data systems served by DTMS be made and that different triggered update handling mechanisms, as demanded by the administrators of the data systems, be provided. The design and implementation of the facilities needed to carry out several examples of DTMS processing were completed under this project. These facilities provide a general framework within which the mutual consistency maintenance mechanism can be tailored to meet the demands of the individual data systems. Specifically, the mutual consistency maintenance mechanism was investigated both in the case where both data systems reside on the DTMS host and in the case where the sending data system is on the DTMS host but the receiving data system is remote. In particular, major tasks accomplished include the implementation of the linkage directory to support mutual consistency maintenance and a semiautomatic triggered update handling mechanism. The linkage directory, along with the procedures for its search and maintenance, can be easily expanded to support mutual consistency maintenance in the operational DTMS. The mechanism used to handle triggered updates in the prototype MCMS can be easily generalized for use in the operational DTMS as well.

REFERENCES

[1] Alsberg, P. A., G. G. Belford, J. D. Day, and E. Grapa, "Multi-copy Resiliency Techniques," in Distributed Database Management, edited by P. Bernstein, J. B. Rothnie, and D. W. Shipman; IEEE, 1978.

[2] Ellis, C. A., "A robust algorithm for updating duplicate databases," Proc. of 2nd Berkeley Workshop on Distributed Data Management and Computer Networks, 1977.

[3] Kung, H. T. and J. T. Robinson, "On optimistic methods for concurrency control," ACM TODS, vol. 6, June 1981.

[4] Lampson, B. and H. Sturgis, "Crash recovery in a distributed data storage system," Tech. Report, Xerox PARC, 1976.

[5] Parker, D.
S., et al., "Detection of Mutual Inconsistency in Distributed Systems," IEEE Trans. on Software Engineering, Vol. SE-9, May 1983.

[6] Thomas, R. H., "A solution to the concurrency control problem for multiple copy databases," Proc. Spring COMPCON, Feb. 1978.

[7] "Prototype Data Traffic Management System," Final Report by Department of Computer Science, University of Illinois, August 31, 1983.

[8] Cotton, P., "The Implementation of a Pilot Data Traffic Manager System," MCS Project Report, Department of Computer Science, University of Illinois, Urbana, Ill., 1981.

[9] Moon, A., "A directory to support automatic update triggering," MCS Project Report, Department of Computer Science, University of Illinois, Urbana, Ill., 1983.

[10] Manolas, A. N., "AMPRS II-DEACONS data transfer," MCS Project Report, Department of Computer Science, University of Illinois, Urbana, Ill., 1983.

APPENDIX A
LIST OF ABBREVIATIONS

AMPRSII      Automated Military (Design and Construction Project Management) Progress Reporting System
AR415-15     Military Construction, Army (MCA) Program Development
AR415-17     (Empirical) Cost Estimating for Military Programming
CAPCES       Construction Appropriations Programming, Control, and Execution System
CE           U. S. Army Corps of Engineers
CERL         U. S. Army Construction Engineering Research Laboratory
COEMIS       Corps of Engineers Management Information System
COEMIS F&A   COEMIS, Finance and Accounting Subsystem
DD           Department of Defense
DEACONS      USAF Design and Construction System
DTMS         Data Traffic Management System
F&A          Finance and Accounting
FOCUS        A DBMS marketed by Information Builders, Inc. (IBI), resident on Tymshare, Inc.'s TYMCOM-370
HQ           Headquarters
HQ-IFS       HQ Integrated Facilities System
IBM          International Business Machines, Inc.
MCMS         Mutual Consistency Maintenance Subsystem
PAX          Programming, Administration, and Execution System
TUTHP        Triggered Update Transaction Handling Process
UCP          Update Capture Process

APPENDIX B
FLOW CHARTS AND DESCRIPTIONS OF UCP AND TUTHP IN THE AMPRS II-TO-CAPCES LINK

This appendix contains flow charts which describe the AMPRS II UCP and the CAPCES TUTHP. The programs described here are also documented on-line on the DTMS host computer.

B-1. AMPRS II UCP, DEM03.

The input to the AMPRS II update capture process, DEM03, consists of the AMPRS II update transaction files. Each of these files contains the following information: (1) identification of the AMPRS II user who originated the update transaction, (2) project identification consisting of station code, project number, type funds code, and authorization year, and (3) a project data item name-value pair for each of the data items updated. The output generated by DEM03 is placed in the DESCFILE file, the content of which is later processed by RCV EXEC, the CAPCES TUTHP. The format of the DESCFILE file is shown in Figure B-1. For each AMPRS II update transaction, there is a segment of the DESCFILE file containing the information in both the input file and the description file described in Chapter 5. The operations of DEM03 are as shown in Figure B-2. Subroutines are marked by "*". The control structure of DEM03 is shown in Figure B-2a. At a regular interval, DEM03 picks up the notifications from AMPRS II users (each of which contains the initiator's user id and file name) and copies the update transaction files in the AMPRS II user space into the DTMS space. The DTMS file containing the AMPRS II update transaction files is named TI. For each item contained in this file, the linkage directory is consulted to determine whether the item is listed in the directory. The data item is listed in the linkage directory when it is redundantly stored in other data systems.
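The linkage-directory search that supports this check can be illustrated with a small sketch. The dictionary shape, the sample entry, and all names below are assumptions made for illustration; the report specifies only that the search returns, for each receiving data system, the item name, its format, and the responsible user id.

```python
# Minimal sketch, assuming a mapping-shaped linkage directory keyed by
# (sending system, item name).  The fields mirror what the text says the
# directory search returns.  The sample entry and the COBOL-style format
# string are invented for illustration.

LINKAGE_DIRECTORY = {
    ("AMPRS II", "PROJ-COST"): [
        {"system": "CAPCES", "item": "PA", "format": "9(7)", "user_id": "CAPUSR"},
    ],
}

def dsearch(sending_system, item_name):
    """Return the linkage entries for an item, or an empty list if the
    item is not redundantly stored in any other data system."""
    return LINKAGE_DIRECTORY.get((sending_system, item_name), [])
```

An empty result means the item need not trigger any update, which is how DEM03 decides whether to build a record for the item.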
In this case, for each receiving data system, DEM03 finds from the directory the item name and format and the responsible user id. This step, Step 1, is illustrated by the flow chart in Figure B-2b. The output generated by this step is the TEMPFILE. The flow chart for the subroutine PROCESS-TRANS, called in Step 1, is in Figure B-2c. The subroutine PROCESS-ITEM, which is called by PROCESS-TRANS, is described in Figure B-2d. DSEARCH, the subroutine which is equivalent to GETLINK, is called by PROCESS-ITEM to search the linkage directory. The flow chart for DSEARCH is in Figure B-2e.

[Figure B-1. DESCFILE File Format: for each transaction, a segment recording the receiving data system (e.g., CAPCES), the AMPRS II user id, a description part, and an input part of the form Proj.ID=...., item-name=value, ....]

Other necessary conversions, such as of the project id, the format of the value, etc., are also carried out. The results obtained by processing the AMPRS II update transaction files are placed in TEMPFILE. TEMPFILE is then sorted by USER ID and PROJ ID. This sorted output is used as input to build the DESCFILE file. This step, Step 2, of DEM03 is illustrated by the flow charts in Figure B-2f. In Step 3, DEM03 checks for the existence of old description files. If an old DESCFILE file exists, DEM03 checks the ACK file, which contains the acknowledgement messages from the receiving users stating that their portions of the DESCFILE file have already been processed. Thus, segments of the DESCFILE that are already processed are determined. DEM03 clears the processed segments of the old DESCFILE file, merges the remaining portion of the DESCFILE with the newly created one, and adds a timestamp to the DESCFILE. Step 3 is illustrated in Figure B-2g.

B-2. Operations of the RCV EXEC.

Triggered updates to CAPCES are handled semiautomatically.
The CAPCES TUTHP, RCV, is awakened to run when the responsible CAPCES user signs on and wishes to process the pending triggered updates. Figure B-3 shows the operations of RCV EXEC, which processes the DESCFILE file created by the update capture process, generates and sends the messages, and carries out updates to CAPCES as described in Chapter 5. SHOWCAP, CAPDATE, COMPLETE, and JFTNMS are called by RCV. The operations of RCV are described below:

(1) When the responsible CAPCES user enters RCV EXEC, a link is created to the DTMS space. The DESCFILE line pointer is set to the first line of DESCFILE STORE.

(2) If the DESCFILE line pointer has reached the end of file, the procedure -DONE is called (beginning at step 15).

(3) The first item in the line pointed to in DESCFILE STORE is checked. If it is not the CAPCES user's user id, it means that the user has already processed this ...

[Figure B-2a. Control Structure of DEM03: copy the AMPRS II user id and update file; call Steps 1 and 2; if old description files exist, call Step 3 to merge the old and new description files; put a timestamp on the result.]

[Figure B-2b. Flow Chart for Step 1 of DEM03. Input: TI (the AMPRS II update transaction files) and the linkage directory; output: TEMPFILE. Step 1 opens the files, reads and saves the AMPRS II user id, reads the contents of TI, performs PROCESS-TRANS until end of file, and closes the files.]

[Figure B-2c. Flow Chart for PROCESS-TRANS (called in Step 1): locate the next 'proj', move the project key, read the contents of TI, and perform PROCESS-ITEM until the next 'proj' or end of file.]

[Figure B-2d. Flow Chart for PROCESS-ITEM (called by PROCESS-TRANS): move and save ITEM-NR, perform DSEARCH; if the item is found (return code 0), build one temporary record for each receiving item; read the contents of TI.]

[Figure B-2e. Flow Chart for DSEARCH (called by PROCESS-ITEM): check the DB name and the item name; if both match, pick up the LINK DB and LINK item, then the LINK item's format and user id.]

[Figure B-2f. Flow Chart for Step 2 of DEM03: open the files; sort TEMPFILE by user and project ids; read the sorted file; for each record, save the user and project ids and perform HEADBUILD, LINEBUILD (move the item name and value to LINETABLE; build the input line in INTABLE), and WRITING (write the header lines with a count of the items that follow, the item lines from LINETABLE, and the input lines from INTABLE).]

[Figure B-2g. Flow Chart for Step 3 of DEM03. Input: the ACK message and the old description files; output: a temporary description file. Step 3 reads the ACK file, puts the user ids from ACK into an internal user id table (USER TABLE), reads the old description file, and for each transaction either skips to the next transaction (if its user id is found in USER TABLE) or writes out the transaction's records.]

[Figure B-3a. Flow Chart for RCV: when DESCFILE STORE is flagged as already processed, scan DESCFILE for a user-number match; if there are no new updates for this user, display "NO MORE MESSAGES" and end; otherwise copy this user's block of transactions into DESCFILE STORE, copy the new timestamp into TIME DATA, and send the timestamp and user number to the AMPRS II capture process to ensure purging of the records just picked up.]

[Flow chart for the Process Update procedure: move the description to DESCFILE DATA and the machine data to INFILE TEMP; call SHOWCAP to fill in DESCFILE; display DESCFILE DATA and get the user's decision and reason(s), with COMPLETE called to assist; send acknowledgements to AMPRS II and the DTMO; if accepted, call CAPDATE to update CAPCES; update the history files; call JFTNMS to get the name, phone number, and/or messages for the DTMO; remind the user if messages remain.]
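Under the assumption that DESCFILE STORE behaves like a sequence of (user id, transaction) pairs, the RCV scan of Section B-2 can be sketched as follows. All names are illustrative, and the sketch is in Python for clarity; the actual RCV is an EXEC on the DTMS host.

```python
# Rough sketch of the RCV scan loop: process only the pending DESCFILE
# segments that belong to the responsible CAPCES user, and report when
# no messages remain (the -DONE case in Figure B-3a).  The list-of-pairs
# representation of DESCFILE STORE is an assumption.

def rcv_scan(descfile_store, capces_user_id, process_update):
    """Process the pending segments belonging to one CAPCES user."""
    processed = 0
    for owner_id, transaction in descfile_store:
        # Segments addressed to other users are skipped, as in step (3).
        if owner_id != capces_user_id:
            continue
        process_update(transaction)
        processed += 1
    if processed == 0:
        print("NO MORE MESSAGES")
    return processed
```

In the prototype, process_update would correspond to the Process Update procedure, which displays the description, records the user's decision, and applies accepted updates to CAPCES.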