DEPARTMENT OF THE ARMY PAMPHLET NO. 70-5

MATHEMATICS OF MILITARY ACTION, OPERATIONS AND SYSTEMS

HEADQUARTERS, DEPARTMENT OF THE ARMY
JANUARY 1968

FOREWORD

A text of this nature is ambitious, for it seeks to provide basic mathematical concepts, techniques, and terminology useful in formulating and solving a broad variety of military problems. No single military publication has been available with as broad a scope as has been undertaken herein, seeking a unified treatment of military operations and their operational considerations. The mathematical concepts and techniques are those having acquired acceptance as standards in operations research. To apply them to specific real problems is the task for the reader.

The operations researcher will find the review of basic mathematical information helpful, and the suggestive aspects of the text stimulating. For the serious and informed reader, who will study and restudy the text, there is sufficient development to enable excursion into the details of the literature as appropriate to his needs. Those ignoring the mathematical approach will find increased understanding of the subject in the philosophical discussions and narrative descriptions. All readers, no matter at what operational level or degree of technical proficiency, must exercise imaginative associations if this text is to be of utility. It provides methods, not answers, and even the pertinent method may be obscure in terms of the problem at hand.

Although considerable economic considerations are included in topic developments, no specific effort has been given to presenting the analytic tools and reasoning of economic theory. In a later edition it may be useful to incorporate in military context such subjects as marginal analysis, micro-economic analysis (particularly general equilibrium and welfare economics), input-output analysis, and activity analysis. To be sure, the mathematical tools are essentially those already provided, but the economic perspective will give further importance to their usefulness. A level of particular usefulness should occur in those cost-effectiveness studies of military systems conducted in the framework of national policy and objectives.

Users of this pamphlet are encouraged to submit recommended changes or comments to improve the text. When applicable, comments should be keyed to the specific page, paragraph and line of the text in which the change is recommended. Reasons will be provided for each comment to insure understanding and complete evaluation. Comments should be forwarded direct to the Commanding Officer, U.S. Army Research Office-Durham, Box CM, Duke Station, Durham, North Carolina 27706. Source data cut-off date for material contained in this publication is September 1967.
DA PAM 70-5
PAMPHLET No. 70-5

HEADQUARTERS, DEPARTMENT OF THE ARMY
WASHINGTON, D.C., 15 January 1968

MATHEMATICS OF MILITARY ACTION, OPERATIONS AND SYSTEMS

CONTENTS

CHAPTER 1  INTRODUCTION
    1-1  Situation
    1-2  Objectives
    1-3  Level of Treatment of Content
    1-4  Scope
    1-5  Organization of the Volume
    1-6  Organizational Experience
    1-7  Frequent Misconceptions
    1-8  Source Material

CHAPTER 2  OPERATIONAL USES OF VECTORS
    2-1  The Multi-Dimensionality of Operational Quantities
        2-1.1  Sample Catalog
    2-2  Characteristics of Vectors
        2-2.1  Usages of the Term "Vector"
        2-2.2  Historical Background of State-Vector
    2-3  Definition and Notation
    2-4  Concepts Associated with Geometrical Vectors
        2-4.1  Coordinate Systems
        2-4.2  Scalars
        2-4.3  Trajectories
        2-4.4  Velocity
        2-4.5  Magnitude
        2-4.7  Scalar Multiples
        2-4.8  Acceleration
        2-4.9  Force
        2-4.10  Analysis of Vectors Into Components
        2-4.11  Unit Vectors
        2-4.12  Dot or Vector Product
        2-4.13  Time-Integrals of Vectors
    2-5  n-Dimensional Non-Geometrical Vectors
    2-6  Population-Vectors
        2-6.1  Definitions
        2-6.2  Considerations in Treatment of Population Vectors
        2-6.3  Illustrations of Population Vectors
    2-7  Hypervectors
        2-7.1  Definition
        2-7.2  Matrices
    2-8  Activity Vectors
        2-8.1  Matrix Nature
        2-8.2  Combat
        2-8.3  Developing Capabilities
        2-8.4  Transportation
        2-8.5  Production
        2-8.6  Basic Input-Output Activity
        2-8.7  Effects of Timing and Scheduling on the Linearity of Activities
    2-9  Allocations
    2-10  Assignments
    2-11  Vectors in Strategies of Action
    2-12  Distributions
        2-12.1  Quantitative Usages of the Term
        2-12.2  Distributions as Vectors
        2-12.3  Statistical Distributions in Ballistics
        2-12.4  The Design of Lethality of Burst as a Distribution
        2-12.5  Deliberate Discursive Unity in Game-Theoretic Designs
        2-12.6  Variety
        2-12.7  Statistical Laws
        2-12.8  Dimensionality

CHAPTER 3  MATRICES, GRAPHS, AND NETWORKS
    3-1  Summary
    3-2  Definitions and Notation
    3-3  Matrices and Graphs of Relations
        3-3.1  Examples of Relations
    3-4  Types of Relations
        3-4.1  Graph of a Relation
        3-4.2  Relation Matrices
    3-5  Networks
        3-5.1  General
        3-5.2  Initial Movements
        3-5.3  Unique Routing
        3-5.4  Successive Trips
        3-5.5  Product of a Matrix by a Matrix
        3-5.6  Further Successive Trips
    3-6  Relations Associatable with Matrix Products
    3-7  Examples and Applications
    3-8  Routing Continuously Divisible Flow Through a Network
    3-9  Systems' States and State-Transition in Weapon Systems
    3-10  Production Networks
        3-10.1  General
        3-10.2  Arsenal Manufacturing
        3-10.3  Output by Fractionation
    3-11  Input-Output Matrices-Leontief Models
    3-12  Matrices as Hypervectors
    3-13  Graph Representations of Projects "CPS"
    3-14  Basic Operations of Matrix Algebra

CHAPTER 4  MILITARY USES OF PROBABILITY
    4-1  Scope
    4-2  Randomness
        4-2.1  Deterministic Behavior
        4-2.2  Random Behavior
        4-2.3  Examples of Randomness
        4-2.4  "Chance"
    4-3  Probabilities
        4-3.1  Trials
        4-3.2  Outcomes
        4-3.3  "Average Frequency" per Trial
        4-3.4  The Example of the Die
    4-4  Types of Outcomes
        4-4.1  Conditional Outcomes
        4-4.2  Combined Outcomes
        4-4.3  Complement-Contradictory
        4-4.4  Independence
        4-4.5  Alternatives
        4-4.6  Bayes's Theorem
    4-5  Random Variables and Their Distributions
        4-5.1  Random Variables
        4-5.2  Distributions
    4-6  Monte Carlo Methods
        4-6.1  General
        4-6.2  Procedure
        4-6.3  Pseudorandom Numbers
    4-7  General Categories of the Employment of Probability
    4-8  Random Imitations of Deterministic Outcomes
    4-9  Notations and Forms of Random Variables
        4-9.1  Notations and Terminology
        4-9.2  Form of Some Distributions

CHAPTER 5  POISSON AND RELATED RANDOM PROCESSES
    5-1  General
        5-1.1  Types and Uses
        5-1.2  Approach
    5-2  Concept of Random Processes
    5-3  Independence
        5-3.1  Definition
        5-3.2  Relating Non-Independent Processes to Independent Processes
        5-3.3  Other Considerations
    5-4  Stochastic Categories of Random Processes
        5-4.1  Definitions
        5-4.2  Stationarity
        5-4.3  Examples of Steady-State
        5-4.4  Ergodicity
    5-5  The Basic Independent Processes
    5-6  Bernoulli Processes and Trials
        5-6.1  Definition
        5-6.2  Discrete Bernoulli Trials
        5-6.3  Statistics of Discrete Bernoulli Processes
        5-6.4  Continuous Bernoulli Trials
    5-7  The Poisson Process-Poisson Trials
        5-7.1  Definition
        5-7.2  The Basic Counting Processes
        5-7.3  Examples of Poisson Trials
    5-8  Statistics of the Counting Processes
    5-9  Special Properties of Poisson and Bernoulli Processes
        5-9.1  Important "Traffic" Properties
        5-9.2  Contagious and Non-Contagious Shift
    5-10  Independent Gaussian Process-Wiener Process

CHAPTER 6  MARKOV, STATE AND RENEWAL PROCESSES
    6-1  Scope
        6-1.1  Dependent Processes
        6-1.2  Examples
        6-1.3  Markov and Renewal Processes
    6-2  Markov Processes
        6-2.1  Markov Chains
        6-2.2  Discrete vs. Markov-Poisson Processes
        6-2.3  Transitions
        6-2.4  Occupancy Analysis
    6-3  Examples
    6-4  Numerical Behavior with Time
        6-4.1  Differential Behavior
        6-4.2  General Formal Integrated Solution
        6-4.3  Formal Solution, Ergodic Case
    6-5  Sample Histories When Transitions Are Stationary
    6-6  Non-Markovian Processes
    6-7  Monte-Carlo Simulation of a Markov Process
    6-8  Statistics of Stationary Renewal Processes
        6-8.1  Current Age
        6-8.2  "Mortality" Depending on Age
        6-8.3  Hazard Rate
        6-8.4  Initial Renewal
    6-9  The Renewal Counting Processes
        6-9.1  Renewal Rate-Renewal Equations
        6-9.2  Average Number of Renewals
        6-9.3  Dependence of Renewal Rate on Time

CHAPTER 7  ACTIVITY: FORMS, PATTERNS, STRUCTURES
    7-1  Scope
    7-2  Activity vs. Action
        7-2.1  Definition
        7-2.2  Restrictions on Frequency of Action
        7-2.3  Concepts of Activity
    7-3  Processes
        7-3.1  Definition
        7-3.2  Time Measurement
    7-4  Intermittency
    7-5  Event-State Process
        7-5.1  Events
        7-5.2  Binary Events
        7-5.3  Compound Event Processes
        7-5.4  State
        7-5.5  Important Event-State Processes
        7-5.6  Recurrence
    7-6  Formal Differential Analysis of (Vector) Activity
        7-6.1  Input
        7-6.2  Output or Outcome
        7-6.3  Feedback
        7-6.4  Representation of Change of State
    7-7  Analogies in Representation of Activity
    7-8  Measures of Performance
        7-8.1  General
        7-8.2  Measures of Effectiveness
        7-8.3  Normalized Measures
    7-9  Dominating Structures

CHAPTER 8  CONSTRAINED OPTIMIZATION
    8-1  Decision and Constrained Optimization
        8-1.1  Nature of Constrained Optimization
        8-1.2  Mathematical Basis of Constrained Optimization
        8-1.3  Examples
    8-2  The Decision Process
        8-2.1  General Procedure
        8-2.2  Mathematics and Alternatives of Action
        8-2.3  Requirements and Objectives
        8-2.4  Constraints
        8-2.5  Selecting Measures of Effectiveness
        8-2.6  Modelling
    8-3  Types of Optimizations
    8-4  Inequalities
        8-4.1  Forms
        8-4.2  Examples
        8-4.3  Effect of Elasticity
    8-5  Extrema
        8-5.1  Definition
        8-5.2  Notation
        8-5.3  Some Characteristic Properties of Extrema
    8-6  Optimization with Equality Constraints
        8-6.1  General
        8-6.2  Lagrangian Function
        8-6.3  Multiple Equality Constraints
    8-7  Optimization with Inequalities
    8-8  Special Considerations in Optimization
    8-9  Illustrations
        8-9.1  A Tactical Illustration-Perimeter Defense Design
        8-9.2  Value of Weapon Systems
        8-9.3  Cost Effectiveness of Weapon Systems
    8-10  The Place of Decision in Action
    8-11  Dynamic Programming
        8-11.1  Definition
        8-11.2  Sequential Decisions
        8-11.3  Sequential Programming of Nonsequential Decisions
        8-11.4  A Network Example

CHAPTER 9  PROGRAMMING LINEAR ACTIVITY
    9-1  Scope
    9-2  Linear Activities and Combinations
        9-2.1  Characteristics of Linear Activities
        9-2.2  General Concept of Linear Programming
        9-2.3  Examples of Linear Combinations of Activities
    9-3  Linearly Programmable Activities
        9-3.1  Definition
        9-3.2  Example
        9-3.3  Possible Nonlinearities
        9-3.4  Linear Inequalities
    9-4  Algebra and Geometry of Vectors and Inequalities
        9-4.1  Purpose of Paragraph
        9-4.2  Linear Independence
        9-4.3  Other Characteristics of N-Dimensional Space
        9-4.4  Convex Mixtures
        9-4.5  Simultaneous Linear Inequalities
        9-4.6  Convex Polyhedron
        9-4.7  Gradient
        9-4.8  Location of the Linear Optimum
    9-5  Characteristics of the Solution to Linear Programs
        9-5.1  Duality
        9-5.2  Saddle Point Theorem
        9-5.3  Interpretation of Lagrange Multipliers
    9-6  Simplex Method
        9-6.1  Introduction
        9-6.2  Basic Solutions
        9-6.3  Improving the Solution
        9-6.4  New Stage
        9-6.5  Artificial Variables
        9-6.6  Simplex Tableau
    9-7  Examples
    9-8  Transportation and Network Flows

CHAPTER 10  THE SIMPLEST MILITARY GAMES
    10-1  Background of Games as an Analytical Technique
    10-2  Types of Two-Person Games
        10-2.1  Matrix Single Game
        10-2.2  Allocation Game
        10-2.3  Sequential Game
    10-3  Solution to Game Problems
        10-3.1  Graphical Solution of a 2 x 2 Matrix Single Game
        10-3.2  Numerical Solution of a 2 x 2 Game
        10-3.3  Graphical Solution of a 2 x m Game
        10-3.4  Basic Relationships in Determining the Value of a Game
    10-4  Development of a Game in an Actual Military Situation

CHAPTER 11  SYSTEMS
    11-1  Scope
    11-2  Some Considerations in System Evaluation
        11-2.1  The Need for System Evaluation
        11-2.2  The Place of Concepts
        11-2.3  Comprehensiveness in System Concepts
    11-3  General Examples of Operating Systems
        11-3.1  Combat Systems
        11-3.2  Service Systems
        11-3.3  Supply Systems
    11-4  General Representation
    11-5  External Input
        11-5.1  Categories
        11-5.2  Compound Demand Process
        11-5.3  Examples of Compound Demand Processes
        11-5.4  Special Forms of Demand
        11-5.5  Statistical Representation of Demand
    11-6  Disposition of Demand
    11-7  Maintaining Capability
        11-7.1  Maintenance of a System
        11-7.2  New or Replacement Systems
    11-8  Cycles
    11-9  Measure of Effectiveness and of Cost

CHAPTER 12  STOCHASTIC SERVICE SYSTEMS
    12-1  Scope
    12-2  Summary of Operational Aspects
    12-3  Characteristics of Service Systems
    12-4  Examples
        12-4.1  Combat
        12-4.2  Traffic
        12-4.3  Public Facilities
        12-4.4  Manufacturing
    12-5  Measures of Performance
    12-6  Measures of Effort
    12-7  Classifications
    12-8  Additional Operating Characteristics
    12-9  Tactical Aspects
        12-9.1  General
        12-9.2  Capacity of Service and of Queue
        12-9.3  Costs-Wait vs. Idle Capacity-Maintenance
        12-9.4  Specialization of Servers
        12-9.5  Access to the Queue
        12-9.6  Priorities
        12-9.7  Scheduling
        12-9.8  Networks
        12-9.9  Special Considerations of Size
        12-9.10  Statistical Independence
    12-10  Computation Methods
        12-10.1  General Methods
        12-10.2  Methods of Treating Nonstationarity of Demand
        12-10.3  Methods When There Are Several Servers
        12-10.4  Erlang's Phases
        12-10.5  Representation of P by Differential and Integral Equations
        12-10.6  Lagrangian Simulation
    12-11  Formulas

CHAPTER 13  SYSTEM RELIABILITY
    13-1  Introduction
        13-1.1  Definition of Reliability
        13-1.2  Scope of Chapter
    13-2  General Representation of Reliability
    13-3  Definitions and Measures of Performance (Effectiveness)
        13-3.1  Binary Performance
        13-3.2  Reliability Interval
        13-3.3  Reliability
        13-3.4  Readiness
        13-3.5  Maintenance Rates
        13-3.6  Amount of Time System is Unoperable in a Given Interval
        13-3.7  Residual Lifetime
    13-4  Failure Rate and Hazard Function
        13-4.1  Types of Hazard Functions
        13-4.2  Standard Representation of a Lifetime Distribution in Terms of its Hazard Function
        13-4.3  Hypothesizing a Hazard Rate
    13-5  Some Lifetime-to-Failure Distributions
        13-5.1  Exponential
        13-5.2  Weibull
        13-5.3  Gamma
        13-5.4  Comparisons
        13-5.5  Predicting Lifetime
    13-6  Monotonic Hazard Rate
    13-7  Effect of Structure on Reliability
    13-8  Support Policies and Effort
        13-8.1  Purpose of Support Policies
        13-8.2  Effort per Support
        13-8.3  Frequency of Support
    13-9  Optimal Timing of Maintenance if Hazard Rate Increases
        13-9.1  Least-Support Rate Age-Renewal
        13-9.2  Maximum-Availability Age-Renewal
        13-9.3  Least-Support Rate Periodic Renewal
    13-10  Variations of Supporting Action
    13-11  Optimal Support Capacity, Crews and Spares
        13-11.1  Support Servers
        13-11.2  Replenishment of Spare Parts
    13-12  Programming of Redundancy in System Design

CHAPTER 14  SUPPLY SYSTEMS
    14-1  Scope
    14-2  Development of Analytical Methods of Evaluating Supply Systems
        14-2.1  Some Uses of Analytical Methods
        14-2.2  Special Considerations in Wartime Supply
    14-3  Summary of Operational Considerations
        14-3.1  Types of Inventories
        14-3.2  Demand
        14-3.3  Effort
        14-3.4  Storage-Reliability and Losses
        14-3.5  Performance or Effectiveness
        14-3.6  Elements and Alternatives of Inventory Design
    14-4  General Characteristics of Demand
        14-4.1  Uncertainty
        14-4.2  Demand Source Structure
        14-4.3  Expected Time-Pattern
        14-4.4  Compound Demand Process
    14-5  Compound-Poisson Demand
        14-5.1  Properties
        14-5.2  Compound-Poisson for Ordnance Items
        14-5.3  Approximations to Compound Poisson
    14-6  Statistical Time-Series Methods of Forecasting and Control
        14-6.1  Nature of the Problem
        14-6.2  Methods
        14-6.3  Comparison of Running Average and Regression Methods
        14-6.4  Statistical Control
        14-6.5  Supply System Effects
        14-6.6  Responses of Predictors
    14-7  Supply Effort
        14-7.1  Vertical Nature of Effort
        14-7.2  Effect of Frequency of Supply on Effort Required
        14-7.3  Acceleration of Effort
        14-7.4  Effort when Storage is Bypassed. Unavailability Costs
    14-8  Inventory Losses and Holding Effort
        14-8.1  General
        14-8.2  Physical Losses
        14-8.3  Loss of Value-Surprise Obsolescence
        14-8.4  Holding Effort
    14-9  Supply Effectiveness
        14-9.1  Inventory as a Process
        14-9.2  Measures of Supply Effectiveness
    14-10  Intermittent Replenishment of Inventory
        14-10.1  Conditions Affecting Replenishment Quantity
        14-10.2  Replenishment Quantity and Frequency
        14-10.3  Replenishment Lead Time
    14-11  Inventory Review
    14-12  Periodic Review and Replenishment under Constant Leadtime
        14-12.1  Definitions
        14-12.2  Time Behavior of Inventory
        14-12.3  Approximate Control Specifically of Availability, Backorders
    14-13  Continuous Review-Constant Leadtime and Replenishment Quantity
    14-14  Optimal Balance of Holding Effort and Customer Delay
    14-15  Optimal Control of Fixed Availability
        14-15.1  Background of Development of Methodology
        14-15.2  Effect of Variability of Demand
        14-15.3  Relationship of Availability, Maximum Inventory and Replenishment Quantity
    14-16  Production Quantities
        14-16.1  Economic Order Quantity
        14-16.2  Sudden Unexpected Obsolescence of Output
        14-16.3  Discounting Future Effort
        14-16.4  Decaying Demand Rate
    14-17  One-time Provisionings
        14-17.1  Types
        14-17.2  The "Newsboy" Problem
        14-17.3  Flyaway Kit
    14-18  Planning Production Using Linear Programming

CHAPTER 15  COMBAT
    15-1  Scope
    15-2  Representations of Effect
        15-2.1  Outcomes
        15-2.2  Effect on a Single Target
        15-2.3  Effect on Composite Targets
        15-2.4  Effect on Complex Targets
    15-3  Representation of Survival
        15-3.1  Examples
        15-3.2  Relationships of Survival to Structure
    15-4  Delivery Errors
    15-5  Effects of Trials
        15-5.1  Weapons Effects
        15-5.2  Effective Radius
        15-5.3  Discrete Effects, Total Probability per Trial
    15-6  Sequences of Trials in Detection and in Fire
        15-6.1  Detecting a Target
        15-6.2  Hitting a Passive Attacker
        15-6.3  Attack of Area Target
    15-7  Combat as a Reaction Process
        15-7.1  Lanchester and Force Relationships
        15-7.2  Average Rates of Fighting and of Effectiveness
        15-7.3  Contact Relationships
    15-8  Population Reaction Processes
    15-9  Modes of Engagement
        15-9.1  One Against One Combat
        15-9.2  Contact Other than One Against One
        15-9.3  Ganging
        15-9.4  Area Intelligence Only. Coordinated Fire, Ganged Probing
    15-10  Lanchester Square Law and Its Application to Historical Battles
    15-11  Guerrillas vs. Regulars
    15-12  Stochastic Extensions of Lanchester Methods

APPENDIX
    A  Table of e^-x
    B  Table of Cumulative Poisson
    C  Table of Poisson
    D  Table of Normal (Gaussian) Probability Density
    E  Table of Normal Probability Integral

INDEX

CHAPTER 1
INTRODUCTION

1-1 Situation

In its World War II beginnings, operations research was needed to help solve problems of current operations. But many of the problems solved then have since been discovered to be of such widespread incidence and similar form (sometimes merely disguised by the difference in military locale) that the need to solve them has been overtaken by the need to distribute throughout the military community the capability to identify the occurrence of the problem, and to apply some existing solution, perhaps with a minor modification.

Today operations research is practiced by a great many personnel in the Army in connection with the conduct of all kinds of operations. Not all of this practice, though it be called "research," is research in the way that the original operations research of World War II was. We have benefited from these discoveries and do not have to spend the effort rediscovering their results. Instead we use already developed methods as procedures to obtain the results we need in current problems. Thus what was research earlier is today engineering. In fact, what is commonly designated operations research in the Army might more appropriately be called operational engineering, even if its methods appear to be more investigative than the methods of World War II operations research.

Yet during the same post-World War II period there has been an extensive development in knowledge of the theoretical foundations of operational topics. No end to further developments is in sight. The engineering and science of operations are today in an accelerating state of development, and it may be expected that within a short time operational subjects not touched upon in this pamphlet will have emerged into practical importance. In this pamphlet more emphasis has consequently been put upon the scientific basis of such engineering than the pressure for everyday use of the methods might seem to justify. But the acceleration cited above has increased the priority for distributing knowledge of the science of the relevant subjects throughout the military community.

The objective of operations research has always been to improve action. The material upon which operations research has drawn is the growing body of mathematics of operational topics. As this body of material becomes more substantial, the field passes from being one of mere "quantitative assistance to management decision"-to paraphrase many titles of books that have been written on the subject-into a stage of being a definite, positive and relatively self-contained methodology for the design of action, with less need for constant advice from managers. There is little doubt, for example, that the methods of constrained optimization represent the nature of many decision problems so well that solving them can be automated.
This has in fact already taken place at levels from top to bottom in management, both military and nonmilitary. It seems likely that the subject of the organized designs of operations stands at the beginning of a new period of development.

The need to possess a simple, overall, precise methodology, of high capability for resolution of decisions, is not merely distantly related to the national security. The need is the more imperative because of the power which capability at operational science puts in the hands of the military force possessing it. The spectacular example is game theory: he who does not comprehend the computation of the optimal solution will be defeated in the actual contests. The topics and methods of the volume are quite internationally known, being based in mathematics. The contest between enemies has become, more than before, a contest of Archimedean organizations for operational analysis and mathematical design of operations, as well as a contest of weapons.

The increased technical complexity of operations has made a technical language of operational topics, such as the terminology and measures that comprise much of this pamphlet, an immediate operational necessity. The terminology itself is quite precise, quite suitable for use by the participants in military operations. Nothing less than the terminology is reliable when the operation has been designed upon the mathematical principles to which the terminology refers. The following examples may be cited of operational terms that possess such efficiency: "Bernoulli fire"; "hazard rate"; "hit probability"; "shadow price" (of a resource or requirement); "mixed strategy"; "queue"; "connections" (of an organization); "Poisson arrivals"; "linear programming"; "linearly programmable operations"; "convex payoff"; "redundancy"; "series-parallel defensive designs"; "minimax"; "critical-path"; "force-population vector." Each of the above terms, and others in this pamphlet, has a precise operational meaning and a complete, unique and unambiguous technical definition.

1-2 Objectives

In the writing of this pamphlet, the objectives have been to
(1) Identify the subject matter, especially in its breadth of occurrence throughout all military operations-and especially the uniformity in all of its occurrences that permits the application of the standard concepts-mathematical representations, measures, and strategies of action exemplified in the theoretical material.
(2) Establish technical terminology, and by extensive illustration and reference to increase the extent to which the technical vocabulary available in the subject is employed with both routine and unusual Army operations; and to demonstrate the practical efficiency of the terminology in the design, conduct, and control of operations.
(3) Disseminate the many concepts, terminologies, quantitative measures, and mathematical representations of operational subjects that have in recent years gradually acquired acceptance, within the operations research profession, as standards.
(4) Increase the use, at all levels of Army activity, of those optimal strategies of action whose technical reliability has been firmly established.
(5) Supply the capable reader with sufficient knowledge of the topics of importance to permit him to develop applications of the methods himself, and to evaluate the practice of others; to direct him to the more detailed literature while providing an official reference document from an Army point of view.
(6) Provide an overall structure for relating future developments of the topics covered.

No single military publication has been available with as broad a scope as has been undertaken here in seeking a unifying treatment of all types of operations and all of their operational considerations. With regard to optimal strategies above, the reader's attention is called to the many approaches to the solutions of operating questions provided by the material in this pamphlet, including: optimal offensive and defensive tactical designs; determining the best capacity levels for service units and resources in the design and deployment of organizations; determining optimal allocations of weapon systems and force-units generally; optimal schedules of production; optimal inventories of supplies; optimal policies of inspecting, maintaining, repairing, replacing and selecting equipments.

1-3 Level of Treatment of Content

The choice of the technical and mathematical levels at which to present the subject material in this pamphlet has been deemed to require a balancing of the following specific considerations:
(1) The subject material of this pamphlet is being taught in increasing breadth of scope, and at an increasingly intense level of mathematical statement, to engineering undergraduates in colleges and universities around the country.
(2) While personnel involved in the operations are of course more likely to be of a prior generation, when this mathematics was not even known, much less taught, yet the operations to which it is applied are their operations, the ones which they conduct, manage and/or have the responsibility for.
(3) The performance of the operation can be improved by the use of the material, e.g., the design of missiles for maximum reliability, the correct evaluation (and consequent design) of weapon systems and of composites of force and weapons. Mathematical prowess has gained in importance as an element of military superiority.

An attempt has been made to present every subject covered at two levels:
(1) A nonmathematical statement of the facts and principles. This has been done not merely for the benefit of the nonmathematical reader, but also to aid the mathematical technician in relating formulas to practical situations.
(2) A mathematical statement which is a comprehensive summary, at the omission of the least important variants of the matter described, and which contains the most important optimal strategies, mathematical representations, and methods that are known concerning the topics.

Every attempt has been made to assure the authoritativeness of the content. This pamphlet is at some points intended to be a handbook for the employment of the material presented, particularly in those cases where the operational profitability of the subject has a sufficient history of successful application, and where there is little question about the validity of assuming that the facts in the application would be as assumed in the theory or method. At other points, this pamphlet is intended to serve as a guide for solutions that may have to be developed when the practical need arises. With very few exceptions this pamphlet is mathematically self-contained for the reader with a knowledge of undergraduate calculus and probability. But within the space allotted, there is little surplus of numerical or algebraic illustration.
For example, the formulas for the performance of stochastic service systems in chapter 12 are all derivable, using the principles of chapter 4 and the methods of chapter 6, but only one derivation has been given. Similarly for linear programming, only two numerical illustrations are provided. Realistic numerical illustration in a methodological volume of this sort is sometimes inhibited by the fact that the true values of the numerical contents needed are a matter of military security.

1-4 Scope

In order to cover even the basic methodology which is likely to be needed in every instance of operations research, it has been necessary in this volume to range over what appears to be a very broad field, from combat to administration, to repair shops, to traffic problems, to structural design, to the allocation of resources for the manufacture of weapon systems, to the microanalysis of duels with random hits in combat. Nevertheless, no publication which attempts to deal today with the principles of action, the forms of activity, and the characteristics of systems can do less, except at the risk of not establishing and standardizing the fundamentally simple structure in all these manifold operations and arenas of military activity.

1-5 Organization of Volume

This pamphlet has been so organized as to attempt to indicate the simplicity of the subject in the large, while providing as many practical procedures as possible. Chapters 2 through 7, which for many users may contain familiar material, endeavor to summarize the forms and structures of activity and action. Chapter 11, summarizing some aspects of systems, might also have been included earlier for this reason.

Chapters 8 through 10 of this pamphlet concern decision and action, and the last chapters concern systems. Chapter 8 is a fundamental chapter in regard to decisions and action and their organizational aspects. Chapter 9 merely develops the details in linear activities. The subject of the theory of games (ch 10), involving as it does some new methods in the basic military topic of opposition of strategists, is a topic in itself. Chapter 11 summarizes the common parts of chapters 12 through 15 that deal with systems. Nevertheless, the details cannot be specified in summaries, and must be gone through individually. Chapter 12 deals with service systems, particularly stochastic ones, and principally with their performance. Chapter 13, concerned with the prominent topic of reliability, is actually the more general subject of supporting the capability of a system. Chapter 14, on supply, collects a number of basic principles and strategies of supply that employ new mathematical methods in their design and control. Chapter 15 focuses on combat performance and on elementary mathematical representations of the outcome of battle.

1-6 Organizational Experience

Experience with operations in practice has emphasized two facts as much as any.
(1) For operators, commanders, and researchers to work together as necessary in the solution of problems by operations research generally creates a certain amount of organizational strain in the act. The "operator" represents feasibility. He is the one who determines whether the plans of the activity-designer and the objectives of the commander have been put together in a program that really works.
The researcher's typical methodology and habit is to refer and analyze the matter, to develop a thorough logic for its solution, including cases besides those at hand, and to submit the matter to painstaking, methodical logic and calculation. The commander is interested in effective action, and will move personnel and the situation as rapidly as possible in that direction. If quick of mind, he may guess the approximate value of the optimal strategy while the analyst is still calculating. He is usually prone to feel that the analyst has done too much research. The optimal feasible solution is thus typically a kind of "interior" point in the triangle of extreme points represented by these three participants. In particular, the researcher has had to accept, in the usual military situation, a requirement that is often difficult: a solution developed too late to be used will have no effect upon the action taken. The best it can do is aid future action. (2) The most successful military case histories have demonstrated that the effectiveness of the operation has increased in direct relationship to the precision and intensity of the operational research methodology that was used. Complicated operations always need measures of performance that capture the facts in a simple way to serve as daily progress charts or as foci of concept, measurement, and activity. But simple measures that capture the fact require more than simple-minded concepts of the operation or of what to do about it. The key observation here is that the methodology which the operations researchers of World War II employed had been developed, in science and mathematics, prior to the war. 1-7 Frequent Misconceptions There are some organizational misconceptions which it is hoped this volume can assist in dispelling. One, for example, is that "simulation" and war-gaming are complete methodologies in themselves. Stress has been laid herein upon the fact that computers do not think, that war games simulate only the model of reality that is the program for the simulation. Another is that operations research does not design anything. Competition between engineers and analytical scientists has at times seemed to split their total activity in conflicting programs, with the systems engineers interested only in designing new hardware and the operations analysts interested only in making better use of existing hardware. In this pamphlet, emphasis is on the fact 1-4 that the objective of operations research is to obtain the best design of action. Action takes place within systems. The systems engineer and the analysts are specialists of different sorts, who together can design a good system and a good use of it. Another is that the fundamental topic of operations research is the evaluation of weapon systems, i.e., "systems analysis." This pamphlet does not attempt to be a handbook on that subject; but the logic and fundamental mathematical methods of weapon systems evaluation are provided in this publication, including procedures for choosing between conflicting measures in evaluating such systems. In fact, the great and rapid development in methodologies, and of names for their techniquesPERT, Monte Carlo, search theory, etc-has made it difficult to relate and subordinate the many developments. It is hoped that this pamphlet will clarify these manifold topics. Many forces in the military community tend to separate and overspecialize particular developments. This volume has sought to create a counteremphasis. 
For example, operations research and logistics research are not separate topics, although organizationally they have at times been functionally separated in the Army. From time to time, operations research is variously regarded as the method of "planning" studies, or as the method of weapon system evaluation, or as some other particularly effective method of the moment. These fluctuations in emphasis are not unreasonable in an organization as broad as the Army, an organization involved in so many different types of operations, and sometimes they merely express the effect of active professional competition among different methodologies.

1-8 Source Material

The compiling of this pamphlet has had the benefit of many excellent texts on the theoretical topics, many unclassified Defense Department research and technical reports, and numerous papers in the journal Operations Research dealing with the most important military applications treated. A special bibliography was furnished by the Defense Documentation Center. This bibliography consisted of 50,000 abstracts, which were scanned in an effort to identify nearly the entire breadth of applications throughout the military establishment as reported in the unclassified DDC literature. In addition, a special logistics bibliography was furnished by the Army Logistics Management Center. All operations research groups and professional schools of the Defense Department (not just Army) that were contacted were very helpful in providing lists of their reports and copies of material they employ for training purposes. The Army Management Training Agency has developed a considerable amount of material for its continuing and course programs in operations research. The periodical International Abstracts in Operations Research was also used as a source. The rapid increase in publication of nonmilitary books on operations research has created much overlap. The comparatively few references cited at the ends of the several chapters will give an excellent coverage of the theoretical material. The Morse and Kimball volume, covering the original naval operations analysis of World War II, is still an outstanding treatment of the material that it covers. For Army use, the practical illustrations are, of course, too specialized in detail. But the formal structure and principles of operations and of operational problems are common through all the services. Developments within one service almost always find much usefulness in applications in the other services.

CHAPTER 2 OPERATIONAL USES OF VECTORS

2-1 The Multi-Dimensionality of Operational Quantities

In operational situations, the operational quantities that must be specified are typically highly "multi-dimensional" in that each of a large number of aspects (numbers of troops, their conditions, assignments, current status, deployments, supplies, and, not least, the possible strategies of action) must be individually specified in order to represent the actual situation. For general reference as examples of operational multi-dimensionality, the following catalog is cited.

2-1.1 Sample Catalog

(1) Examples of Military "Populations"

(a) A given battle front consists of a fixed number N of separate positions (grid sections of a battle map, missile sites, etc.). To describe a given force situation and deployment, at least N numbers f1, f2, ..., fN are required, with fi representing the number of our troops at position i, i = 1, 2, ..., N.
More than N numbers would typically be required to detail the operational situation, since the amounts of supplies present, the condition of the troops, their objectives, etc., may also have to be represented. Just as a point moving in three-dimensional space through time can be viewed as a "trajectory" [x(t), y(t), z(t)], with x(t) its x-coordinate at time t, etc., so such a force situation on the front as it develops and changes during time can be thought of as an N-dimensional trajectory [f1(t), ..., fN(t)], i.e., as a point moving around in N dimensions during time.

(b) Imagine a table listing the kinds of military personnel and equipment to be assigned to a TOE. Suppose the total number of kinds is N. Then the intended design of any particular organization or type of organization can be specified as an ordered set of N numbers u1, ..., uN, of which ui is the number of units of the ith kind authorized for the organization. If the capability of an organization can be quantitatively expressed, then to the extent that such capability is dependent upon the TOE of an organization, we should expect to find a functional relationship between the set of numbers ui that characterize an organization of particular type and its capability. The determination of such relationships belongs to the topic of organizational design and is the basis for operational predictions of expected performance of the organization.

(2) Examples of Alternatives of Military Action, i.e., "Tactics":

(a) A player in a game against an intelligent opponent has N different alternatives of tactical action available to him at each given play but can employ only one on any given play. If the identical game were to be played over and over repeatedly, then over a long period of time his best "strategy" would be to employ some alternative i in a fraction fi of the plays, alternative j in a fraction fj of the plays, etc. The best strategy for the given point in the play may thus be represented as the set of numbers [f1, ..., fN]. Any complete strategy of action (complete in the sense that all possible alternatives of action have been considered and assigned a relative frequency of employment) can be represented as another set of fractions [f1', ..., fN']. To structure a proposed game for such analysis, the set of all alternative tactics, N in number (i.e., the dimensionality of the game), must first be identified; otherwise the best strategy may be overlooked or the mixture of strategies to be used over the long run may be unfavorably biased. The "dimensionality" of something (e.g., space) is sometimes termed the number of degrees of freedom of action within it.

(b) A military weapon is to be designed and may be called upon to be used under any of E types of environmental conditions (temperature, humidity, theater, skill of user, etc.). It is estimated that if used in environment e, the effectiveness of the weapon will be fe. The chance that it will be used in environment e is estimated to be pe. If designed to operate most favorably in environment e1, its effectiveness if operated in environment e2 will be f(e2; e1). If a given number N are to be made in all, the number to be made to operate most effectively in environment e is n(e; N). The total mix of design specializations is a "design strategy."

(c) Of a total given quantity of material to be shipped to a given destination, let sr be the amount to be shipped along route r of R available routes. This example typifies a formal problem of the allocation of military effort.
Equivalent formal versions may involve the allocation of reconnaissance effort to regions, the allocation-programming of offensive fire against targets, etc.

(3) Examples of a Double-Multiplicity of Dimensionality:

(a) A front of N positions is to be reinforced (supplied, etc.) from M particular reserve positions (supply points, etc.). Let rij represent the troops (amount of supplies, etc.) which should go from the ith reserve position, i = 1, ..., M, to the jth front position, j = 1, ..., N. Note that we have a number of combinations of "populations" that may be distributed according to a number of "strategies."

(b) An arsenal is capable of making a composite production output consisting of various amounts of each of J products, which may include experimental studies, pilot models, finished parts, assemblies, complete weapons, etc., of various kinds. To make this output will require some corresponding composite input of a total spectrum of I types of resources available to the arsenal, including money, labor of given skill, machinery of given capability and capacity, input materials of given specification, etc. The rate ai at which the ith type of resource is available is given. The rate pj of production of the jth type of product is to be determined. The amount sij of resource i that will be consumed per unit of product j is estimated as accurately as possible. Consequently, the only feasible production programs are sets [p1, ..., pJ] for which each sum Σj sij pj, i = 1, ..., I, does not exceed the rate of availability ai. Such a set [p1, ..., pJ] specifies a momentary output rate of the arsenal, and the corresponding set [a1, ..., aI] specifies a momentary input rate. The arsenal may as a unit be viewed as a "system" which transforms the input rate into the output rate.

2-2 Characteristics of Vectors

2-2.1 Usages of the Term "Vector"

(1) General. Ordered sets of numbers or values (or n-tuples), such as [x1, x2, ..., xn], are termed vectors. The numbers or values of a vector are termed its coordinates. Some specialized uses of the term "vector" are given below.

(2) Linear Vector. Strictly speaking, the full concept of a linear vector requires that the coordinates xi in a vector [x1, ..., xn] be capable of taking on any of a continuous range of numerical values; however, most organized collections of military forces and material are measured in discrete units, and are thus strictly speaking not linear, but "integer." In practice, the methodological difficulty may be avoided by imagining continuous approximations to whole numbers, such as population sizes, with the continuous values properly interpreted in translating the results into application; however, this device does not always suffice.

(3) Geometrical Vector. A geometrical vector is a quantity that has both magnitude and direction. This usage began in physics in the 19th century, and limits the meaning of vector to quantities in mechanics that involve motion or displacement or physical action in space, such as velocity, acceleration, force, stress, strain, momentum, etc. For such quantities not merely their magnitude but their direction must be specified. In 3 dimensions, 3 numbers are required to specify any given vector, typically either the magnitude and two angles, or the individual respective components of the vector along the 3 coordinate axes of space.
(4) State Vector. A state-vector, or eigenvector, is for a probabilistic system a set of numbers each of which is the probability or relative frequency with which the system will be found to be in the corresponding state.¹ Since the concept has considerably influenced the development of a comparable concept in operational analysis, the historical background of state vectors is discussed briefly here.

¹ This usage dates from the birth of quantum mechanics in the 1920's (although, somewhat ironically, the study of "states" of physical systems antedates the subject of vector analysis in the history of physics).

2-2.2 Historical Background of State-Vector

The notion of the state of a system originated in the study of systems of components, with Lagrange, who contributed the idea of generalized "state-coordinates." For a system of n mechanically interacting members (e.g., a set of n bodies joined to each other by springs), Hamilton showed that from the knowledge at any one time simultaneously of all of the 6n numbers that give the position coordinates and momentum components (along the 3 axes) of the bodies, it would be possible to predict the entire future behavior of the system if the bodies obeyed the classical laws of motion. The set [x1, ..., x6n] of the values of these 6n numbers for the system at any given time is then the "state-vector" of the system, completely describing it and determining its future uniquely.

Statistical mechanics was developed to deal with systems composed of a very large number of components subject to a very large number of forces, such as the molecules of a gas, for which the complete specification of a state-vector for the system (the gas) is impractical. Especially as developed by Gibbs,² statistical mechanics showed that it was not necessary to specify the state-vector in order to explain those properties of a thermodynamic system which are essentially statistical, i.e., those which depend upon certain averages of the properties of the individual components. These properties include the important ones of temperature and entropy. To support the "laws of thermodynamics" concerning these important system measures, there is needed only the probability that a given system has a given value as its state-vector. By emphasizing systems of indefinitely large dimensionality, composed of infinitely many behaviorally homogeneous components, a given thermodynamic system could be regarded as an "ensemble" of smaller systems (still infinite-dimensional) that was "in statistical equilibrium," i.e., the fraction p(X) of the total number of systems of the ensemble that were in state X would not change during time. Emphasis on specific dimensionality was thus displaced by emphasis on probability distribution. And since the state of a given system was regarded as a sample of the possible states of the whole ensemble, the state-vector X = [x1, ..., xn, ...] was now replaced in conceptual importance by the probability p(X) that the system was in this given state X, i.e., was replaced by a "probability-state-vector."

² Gibbs used the term "phase" for state.

In modern quantum mechanics, attention is focused on the explicit time-behavior of the probability-state-vector, particularly for systems whose set of possible state-values can be specified in practice, such as the energy levels of an atom or the states of an oscillator, and thus for which the dimensionality of the set of states would be specific, being typically either small or discrete. The probability-state-vector is now once again a vector
[p1, ..., pn] in which pi is the probability that the system is in state i at time t, the vector changing with time and being in principle quite calculable. Quantum mechanics greatly stimulated the development of linear algebra in mathematics, including matrix algebra, since multiplication of a vector by a matrix can represent the performing of some physical operation in connection with the system as a result of which the probability distribution of the state of the system would be changed.

2-3 Definition and Notation

The variety of needs for representing activities as well as states supports the definition of a vector as merely a set of numbers [x1, ..., xn], with special assumptions to be specifically added in particular contexts as needed. The most striking concept is that of the integrity of the entity represented by the whole set of numbers. This tends to epitomize the operational usefulness of the concept of vector, namely the organized fitting of the individual element of a given operation into the operational whole.

An n-dimensional vector is an ordered set of numbers or values [x1, x2, ..., xn] of which the ith, namely xi, is termed the ith coordinate of the vector. The values of the xi may be restricted to some particular set (e.g., the numbers 0 and 1, or the integers, etc.), in which case the vector may be correspondingly identified (binary vector, integer vector, etc.). The number i refers to the ith dimension, sometimes identified as the ith degree of freedom (especially in connection with a system). The vector is the ordered set taken as a whole. The many notations that are in use endeavor appropriately to convey the singleness of the set while yet referring simultaneously to all of the coordinates in their order. Some of the notations are:

(1) [xi]
(2) X
(3) X
(4) [X]
(5) (x1, x2, ..., xn)
(6) [x1, x2, ..., xn]
(7) x1, x2, ..., xn
(8a) row vector [x1, ..., xn]
(8b) column vector

Row and column vectors occur in connection with matrices, which will be considered in chapter 3. The arrow notation is sometimes used for geometrical vectors to emphasize that they have direction. In this volume the notation [x1, ..., xn] will be used, as will the notation [xi] if the dimensionality is clear from the context.

2-4 Concepts Associated with Geometrical Vectors

Motion may occur simultaneously in the three dimensions of space, and the subject of vector analysis of such motion is highly developed in physical mechanics. Many of its concepts can be usefully abstracted to nongeometrical vectors, including those in operational analysis. Hence the basic concepts of geometrical vectors are summarized here for purposes of reference in connection with such abstractions.

2-4.1 Coordinate Systems

Any particular choice of an origin and of three axes (e.g., the three perpendicular x, y, and z axes) for measuring locations in space is termed a "coordinate system" for space. To locate a point in space, three numbers must be given: either its x, y, and z coordinates or an equivalent, such as the radius of the point from the origin and angles or their cosines in two given planes. No two numbers will suffice to specify location. The location of the origin of a coordinate system for three-dimensional space (or space of any dimension) can be changed and the axes rotated, even so as not to be perpendicular to each other (oblique axes), as long as they are not made to lie in one plane. Such changes are called transformations of coordinate systems.
For example, every human presumably carries his own coordinate system around with him constantly and, from a fixed point of view external to him, we would regard him as transforming his system whenever he moved in any way.

2-4.2 Scalars

Of a system under study, a property of it at a given point in space whose numerical value will not change under any change of coordinate system is termed a scalar. For example, the density, pressure, or temperature of a given fluid at a given point are scalars; also such quantities as the potential of an electric field and the density of matter are scalars. These remain invariant under a transformation of coordinate system. Obviously, since they involve direction or directional assignment, vectors do change in value with a change in the system of coordinates.

2-4.3 Trajectories

A moving point P which at time t has position coordinates p1(t), p2(t) and p3(t) may be thought of as generating by its motion a vector process P(t). P(t) would be written out in detail as [p1, p2, p3] if the intent were to emphasize the vectorial character of the process, but would be written out in detail as [p1(t), p2(t), p3(t)] if the intent were to designate a particular value of the process. Any such process is familiarly termed a trajectory.

2-4.4 Velocity

By the velocity of the point P at time t is meant the vector V(t) = [v1(t), v2(t), v3(t)], where vi(t) equals dpi(t)/dt, i = 1, 2, 3. Velocity is thus a vector of the same dimensionality as position. The single number which we term "speed" is merely the magnitude of velocity in the direction of motion but does not signify the direction of motion. The advantage of writing the vector in terms of its coordinates along the three axes of space is that velocity can then be written as a "coordinatewise" differentiation of position with respect to time, since dpi(t)/dt = vi(t). Velocity is thus defined as the time rate of change of position. A velocity vector, for example of two-dimensional motion in a plane, say [v1, v2], is naturally represented as an arrow or ray pointing in the direction of motion and of length proportional to the speed. The ray thus suggests the amount of change of position (displacement) that would occur per unit of time at that velocity and the direction in which such displacement would occur. The coordinates are the lengths of the projections of the arrow on the coordinate axes, as shown in figure 2-1. Since position can be regarded as accumulated displacement, position could also be graphed as a ray from the origin of the coordinate system to the given position, and hence a vector.

Figure 2-1. (A velocity vector V in the plane, with its projections v1 and v2 on the coordinate axes.)

2-4.5 Magnitude

The value of the magnitude of a vector is not prescribed by the definition of vector. The fact that physical space is characterized by the Pythagorean Law of the right triangle means that the magnitude m(V) of a geometrical vector V = [v1, v2], as in figure 2-1, should obviously be taken to be m(V) = √(v1² + v2²); if V is the three-dimensional vector [v1, v2, v3], then m(V) should similarly be taken to be m(V) = √(v1² + v2² + v3²). In this case, speed is then in fact the time derivative of the magnitude of change of position along the path of motion. For nongeometrical vectors, as we shall see, other definitions of magnitude will be more useful.

2-4.6 Direction

Direction in two dimensions is easily intuited by an angular measurement. In three dimensions, it is more difficult to intuit direction in terms of angular measurements in two planes.
However, for the purpose of analysis, it is better to represent direction in space as the proportionality (direction cosines) among the coordinates of the vector along the x, y, and z axes. This method of representing direction is more usefully abstracted to nongeometrical vectors.

2-4.7 Scalar Multiples

If each coordinate of a vector V = [v1, v2, v3] is multiplied by a constant c, the resulting vector [cv1, cv2, cv3] is termed a scalar multiple of V, and is symbolized as cV, with c being a scalar, or a number, and V a vector. The vector cV is "parallel" to V if c is positive, and lies in the opposite direction to V if c is negative.

2-4.8 Acceleration

Acceleration, the time rate of change of velocity, is similarly a vector and is related to the velocity vector analogously as velocity is to position. Thus for the particle at time t, acceleration a = dV/dt = [a1(t), a2(t), a3(t)], where ai(t) = dvi(t)/dt for i = 1, 2, 3. In the presence of a given inertial mass m, acceleration is proportional to force, i.e., f = ma. This equation means that f = [f1, f2, f3], where fi = m ai, i = 1, 2, 3.

2-4.9 Force

Force vectors are customarily represented as rays in the same way that velocities are. When two different forces f and g both act at a point P, the observed effect is the same as if the single resultant force r acted on the point, where, as in figure 2-2, r is the diagonal of a parallelogram the sides of which are f and g pointing in their correct directions. r is thus the vector sum f + g. This parallelogram law of addition applies as well to the addition of velocities and to the addition of positions (displacements). Algebraically the law reads [f1, f2] + [g1, g2] = [f1 + g1, f2 + g2], i.e., like components add. The law might equally be described as one of coordinatewise addition.

Figure 2-2. (The parallelogram law: the resultant r = f + g is the diagonal of the parallelogram whose sides are f and g.)

2-4.10 Analysis of Vectors Into Components

A force vector F = [f1, f2] may be analyzed into the vector-sum of force vectors, or components, in given directions, say in the directions of the vectors G1 = [g11, g12] and G2 = [g21, g22]. To do so, one must find the scalars a1 and a2 for which F = a1G1 + a2G2, i.e., for which the following pair of equations is satisfied: f1 = a1g11 + a2g21 and f2 = a1g12 + a2g22. If G1 and G2 are not colinear, then a1 and a2 can be uniquely found from these equations. The required components are a1G1 and a2G2. Similarly, any velocity vector V may be uniquely analyzed as the sum of scalar multiples of any two velocity vectors V1 and V2 that are not colinear. The required "velocity components" would be a1V1 and a2V2, with V being the vector-sum of these two components. The same would be possible for any position vector P, but is not so often useful. The above analyses may be done uniquely in three dimensions if three non-coplanar components are specified (but nonuniquely if the number of components exceeds the number of dimensions).

2-4.11 Unit Vectors

In figure 2-1, the projection of the vector [v1, v2] along the axes may be represented by the vectors [v1, 0] and [0, v2]. These are the rectangular components of [v1, v2]. By treating v1 and v2 as scalars, we have [v1, 0] = v1[1, 0] and [0, v2] = v2[0, 1]. The vectors [1, 0] and [0, 1] are of unit length and are called unit vectors. Similarly, in three dimensions, the vectors [1, 0, 0], [0, 1, 0] and [0, 0, 1] are of unit length, and lie, respectively, along the three axes of space.
They are the unit vectors symbolized as I1, I2 and I3, respectively. Unit vectors viewed merely algebraically have the especial importance that any vector X = [x1, x2, x3] can be written uniquely as a linear combination of I1, I2 and I3, namely

[x1, x2, x3] = x1 I1 + x2 I2 + x3 I3 = x1[1, 0, 0] + x2[0, 1, 0] + x3[0, 0, 1].

2-4.12 Dot or Vector Product

Many of the operations on geometrical vectors have no obvious extension to nongeometrical vectors. An exception to this is the dot product or "vector-product," X · Y, also symbolized as XY, of two vectors X and Y. Its value is defined to be a scalar, not a vector. Thus,

[x1, x2, x3] · [y1, y2, y3] = x1y1 + x2y2 + x3y3.

In two and three dimensions this product is numerically equal to the product of the magnitudes of the two vectors times the cosine of the angle from the first to the second. Two vectors are perpendicular if their dot product is equal to 0. Any two of the unit vectors are perpendicular, as are, for example, the nonunit vectors [1, 1, 1] and [1, -1, 0].

2-4.13 Time-Integrals of Vectors

Position is simply the time integral of a velocity vector plus the starting position (and velocity bears the same relationship to acceleration). The integration is performed componentwise. If a moving point P has position [P1(0), P2(0)] at time 0, and if at each time t its velocity is a vector v(t) = [v1(t), v2(t)], then the position of the point at time T is P(T) = [P1(T), P2(T)], where Pi(T) = Pi(0) + ∫ vi(t) dt, the integral being taken from 0 to T, for i = 1, 2.

2-5 n-Dimensional Non-Geometrical Vectors

No particular physical geometrical character is intended to be ascribed to an n-dimensional vector x = [x1, ..., xn]. It is customary to refer to it as a "point in n dimensions," and some properties of n-dimensional vectors are assigned geometrical names (see ch 8 and 9). It should be noted that n-dimensional vectors are not confined to Euclidean space. A particular interpretation of a given vector is apt to afford the most natural medium for intuitive grasp of the concept of an n-dimensional vector. Population, activity, strategy, and distribution vectors will be discussed in the paragraphs which follow. Some of the characteristics of geometrical vectors that apply directly to n-dimensional vectors in general will first be noted. The standard mathematical definition of a linear vector carries over to n dimensions two properties that characterize geometrical vectors:

(1) Scalar Multiple. By a scalar multiple of a vector X = [X1, ..., Xn] is meant a vector [aX1, ..., aXn], denoted by aX.

(2) Coordinatewise Addition of Vectors. Two vectors X and Y of the same dimensionality are to be added coordinatewise, i.e., [X1, ..., Xn] + [Y1, ..., Yn] = [X1 + Y1, ..., Xn + Yn].

Any collection of vectors for all of which both properties (1) and (2) hold is mathematically termed a linear vector space. The algebra of linear vectors, important to the subject of linear programming and to the theory of games, is treated in chapter 9. The notion of a vector-function-of-time, an n-dimensional trajectory, and its coordinatewise derivatives and integrals, is quite useful, and carries over intact to n dimensions. Thus if x(t) = [x1(t), ..., xn(t)] is a vector expression whose ith coordinate at time t is xi(t), then the time derivative of x(t), namely [dx1(t)/dt, ..., dxn(t)/dt], is the velocity of the entity described by the vector x; the time derivative of that velocity is the acceleration of the entity described by the vector x; and similarly for higher derivatives.
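As an informal illustration (not part of the original pamphlet), the following short Python sketch mechanizes the coordinatewise operations just described: a scalar multiple, a coordinatewise sum, a dot product, and a finite-difference approximation to the componentwise time derivative of a sampled trajectory. The function names and the sample trajectory are illustrative assumptions only.

    def scalar_multiple(a, x):
        # [a*x1, ..., a*xn]
        return [a * xi for xi in x]

    def add(x, y):
        # coordinatewise addition: [x1 + y1, ..., xn + yn]
        return [xi + yi for xi, yi in zip(x, y)]

    def dot(x, y):
        # dot product x1*y1 + ... + xn*yn; the value is a scalar
        return sum(xi * yi for xi, yi in zip(x, y))

    def velocity(trajectory, dt):
        # finite-difference approximation to the coordinatewise time derivative
        # of a trajectory given as successive position vectors sampled every dt
        return [[(b - a) / dt for a, b in zip(p0, p1)]
                for p0, p1 in zip(trajectory, trajectory[1:])]

    # Illustrative use: a point moving in the plane, sampled at unit time steps.
    path = [[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]
    print(velocity(path, 1.0))          # [[1.0, 2.0], [1.0, 2.0]]
    print(dot([1, 1, 1], [1, -1, 0]))   # 0, i.e., the two vectors are perpendicular
    print(add([1, 2], scalar_multiple(3, [1, 1])))  # [4, 5]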
Similarly, a vector process may be coordinatewise integrated. n-dimensional position (velocity, etc.) is the time integral of n-dimensional velocity (acceleration, etc.).

2-6 Population-Vectors

2-6.1 Definitions

A collection of objects that has some particular integrity may be abstractly termed a population. Obvious military examples are: any given organization (e.g., battalion, task force); the troops, friendly and enemy, that are engaged in a given battle; any given military inventory (e.g., of many items at a given location, or of the amounts of a given item at various locations). Suppose that a given population is for a given purpose analyzed as to the kinds of members of which it is composed. If K different kinds of members have been defined, denote the number of members of the kth kind at time t by Pk(t). We assume that the K kinds exhaust the membership.³ Then the population as a whole at a given time, P(t), may be regarded as the vector [P1(t), ..., PK(t)] or [Pk(t)].

³ If these kinds are mutually exclusive, so that no member can be simultaneously of two different kinds, then the grouping of the population is commonly termed a distribution.

2-6.2 Considerations in Treatment of Population Vectors

(1) Dimensionality. In treating a population as a vector, the number of kinds of members will normally be much larger than three. The vector [P1(t), ..., PK(t)] can be regarded as a point in K-dimensional space, i.e., at all times K numbers are required to specify completely the analytic composition of the population. During the course of time, if the Pk(t)'s change, this point travels around in this space.

(2) Adding Population Vectors. Coordinatewise addition of two population vectors of the same dimensionality conforms directly to the requirement that in adding populations, all and only members of like kind are to be added. Thus if [Pk(t)] + [Qk(t)] = [Rk(t)], then Rk(t) = Pk(t) + Qk(t).

(3) Scalar Multiple. Some populations contain only whole numbers of members of any kind, and in that case a scalar multiple aP may not be meaningful as a population unless the values of a are restricted to be integers or occasionally suitable fractions. This is especially likely for populations which are small or of very discrete design (e.g., a fire-control unit, or a piece of equipment which is a critical assembly of K kinds of parts). Scalar multiples may then be meaningless. The requirement that any scalar multiple of a vector be a vector is essential to the notion of physical vector and also to the modern algebraic theory of vectors upon which linear programming is based. In order to retain these requirements, attention must be limited to two types of populations:

(a) Continuous-valued populations. Certain populations are typically continuous-valued, e.g., an inventory of petroleum, manpower in general.

(b) Continuous-valued approximations to integer-valued populations. Difficulties of measurement and/or economy of concept may make quite acceptable the approximation by continuous values of an integer-valued population. A principal type of example is a large population that tends to change slowly or smoothly with time. Continuous-valued approximations to population sizes are freely exploited by common theories of statistics (e.g., the normal distribution) and are useful for continuous methods of analysis in general.
Example: One version of the Lanchester theory of combat attrition hypothesizes that if E(t) and F(t) are the respective numbers of enemy and friendly troops surviving at time t after the start of a continuous battle, then E(t) and F(t) satisfy the following differential equations: dE(t)/dt = -fF(t) and dF(t)/dt = -eE(t), where e and f are the respective rates at which enemy and friendly troops destroy each other. The battle-population-vector-process [E(t), F(t)] is thus assumed to be continuous-valued, as are firing and destruction rates.

(4) Magnitude. Since the vectors are not restricted to Euclidean space, we are not obliged in connection with population spaces to adopt the square root of the sum of the squares of the components as the magnitude of the vector. For populations it is more natural to use the sum of the components as the magnitude, i.e., the "size" of the vector. Then certain corresponding scalar measures are automatically prescribed for the derivative notions of speed and force.

(5) Velocity. With the velocity of a vector taken to be the vector of the componentwise time derivatives (cf. para 2-4.4), the magnitude of the population velocity vector is also the sum of the magnitudes of its components. This sum might be 0 even while the individual components were changing in size during time, as in the shifting from political party to political party of a vector population of constant size. Note that the velocity of the vector differs from the velocity of its magnitude, when magnitude is chosen as in (4) above, in contrast to the case of physical vectors with magnitude chosen to fit the Euclidean nature of space.

(6) Direction. The notion of direction is abstract only geometrically, not numerically. If each coordinate Pk of a population vector [Pk] is multiplied in size by the same nonnegative constant scale factor a, then the resulting scalar-multiple vector [aPk] possesses the same quantitative proportionalities among its coordinates as did the vector [Pk]. Just as in the case of physical vectors, it may be called parallel, or similar, to [Pk]. Direction may thus be regarded as the joint quantitative proportionality among the coordinates of a vector. This in fact covers the case of physical vectors. As noted before, when the coordinates of a population vector are required to be integers, not every scalar multiple may be a feasible population vector, i.e., have integer coordinates. This is consistent with the interpretation of direction given above.

(7) Infinite-Dimensional Population Vectors. For practical purposes, it may on occasion be convenient to provide for an infinite number of possible kinds of members into which to analyze a population. Commonly, k is then a numerical index of the corresponding kind. Examples: Pk(t) is the number of (a) soldiers of height (weight, endurance index, etc.) k; (b) vehicles of age k since purchase; (c) pieces of manufactured output with k defects. The vector [Pk] is now termed infinite-dimensional.

2-6.3 Illustrations of Population Vectors

(1) A table of organization representing a population of force-units⁴ of various types [P1, ..., Pn];

(2) The numbers of a given type of force-units that are at various locations;

(3) The numbers of a given population of force-units that are in states s1, s2,
etc.;

(4) Let [Ek] represent the number of enemy troops at position k of a battle grid. Then the same relative concentration of enemy strength would be given by any scalar multiple of the vector, for example by [2Ek], or by [Ek/3], etc.;

(5) Suppose that one production schedule calls for making n pieces of a product which is assembled from K types of parts, Pk of the kth type of part being required in each unit of the product assembled. Then another schedule requiring that n* pieces be made would be parallel to the first in respect to the number of parts that will be required to be available to support the assembly operation.

⁴ A force-unit will be used to represent a soldier, weapon, vehicle, piece of equipment, etc., or some combination of these (such as a company or squad).

2-7 Hypervectors

2-7.1 Definition

Ordinarily more than one set of dimensions is needed to characterize military populations. For example, a military population will normally need to be characterized at least by the number and type of force-units that compose it, by the status and condition of units of given type, and by the location of units of given type and status. A useful component of a force vector is thus likely to be of the form Pijk, where i indexes the types of force-unit, j the strength (condition, status, etc.) values, and k the locations. Sub-aspects of each index may be required, for example: types of elements in a given force-unit, strength by the amount of time for which it can be expected to last, location by special battle-grid coordinates. Additional major indexing may be required to account for missions, objectives and tasks belonging to the associated categories of the population, and the amounts of such missions remaining to be accomplished. Additional indexing may be used to detail the environment of the location. Finally, of course, the typical battle situation involves two forces, friendly and enemy. In order to represent not only military, but other types of populations, it is necessary to have a hypervector, which is composed of other vectors. A typical coordinate of a hypervector P can be written as Pn1,n2,...,nM, where ordinarily n1 = 1, 2, ..., N1; ...; nM = 1, 2, ..., NM. The dimensionality of P is thus [N1, ..., NM]. For convenience, P is ordinarily written as [Pn1 n2 ... nM] unless ambiguity results. When the dimensionality is clear, P may simply be written as [Pijk] in, for example, the case of three-dimensional characterization.

2-7.2 Matrices

The simplest example of a hypervector is a matrix, i.e., any rectangular array of numbers vij arranged in rows and columns as follows:

    v11  v12  ...  v1n
    v21  v22  ...  v2n
    ...
    vm1  vm2  ...  vmn

Thus when a hypervector is of the structure [vij], its coordinates may be visualized in such a rectangular array. For hypervectors of more complicated structure, such as the typical military populations cited above, it is not necessary to be able to obtain such a simple ordering, although such orderings may favor the performing of mental analysis. Matrices have two principal uses: (1) as hypervectors, i.e., as vectors; (2) to permit transformations of things of one set of kinds into things of another set of kinds. The second of these two uses is of such importance that it is reserved for treatment in the chapter which follows. To examine a matrix as a hypervector is to examine its structure and properties as a vector. In this respect matrices may be added cellwise, i.e., [uij] + [vij] = [uij + vij]; and a matrix is cellwise multiplied by a scalar to produce a "scalar multiple" of the matrix.
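The two cellwise operations just described can be sketched briefly as follows (Python; a minimal illustration that is not part of the original text, with function names and sample matrices chosen arbitrarily).

    def matrix_add(u, v):
        # cellwise addition: [u_ij] + [v_ij] = [u_ij + v_ij]
        return [[a + b for a, b in zip(row_u, row_v)]
                for row_u, row_v in zip(u, v)]

    def matrix_scale(c, v):
        # cellwise scalar multiple of the matrix [v_ij]
        return [[c * x for x in row] for row in v]

    u = [[1, 2], [3, 4]]
    v = [[10, 20], [30, 40]]
    print(matrix_add(u, v))    # [[11, 22], [33, 44]]
    print(matrix_scale(3, u))  # [[3, 6], [9, 12]]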
Frequent use of matrices is made in statistical testing of populations and in statistical experiment.

2-8 Activity Vectors

2-8.1 Matrix Nature

Organized activity may be regarded as being essentially matrix in form. It consists in organizing an input, which is typically a vector of effort-components (labor, weapon capabilities, geographic positionings, materials, equipment, information), so as to produce an output which is also typically a vector of the products of the activity (casualties, material damaged or manufactured, training accomplished, information obtained, etc.). Often the principal effect of activity is to alter a force-composition vector. The true matrix-nature of activity, i.e., that activity is the performing of a transformation, may then sometimes be obscured by the fact that the familiar name of the activity refers only to some familiar aspect of the activity, but not to the entirety of the activity. Thus, for example, the term "search" tends to connote only the effort, not the product, of a search; the term combat refers to the struggle without itself indicating the logistic preparation required, or the result of combat. The modelling of the major kinds of activity (combat, transportation, reconnaissance, service, manufacture, etc.) is treated individually in later chapters. General structures of activity are treated in chapter 7. The objective of these paragraphs is simply to illustrate and summarize the vectorial aspects of activity.

2-8.2 Combat

Expressed somewhat formally, the objective of combat is to alter the force-composition vectors of both opposing forces in a way that will result in defeat of the enemy. The typical course of a battle or campaign is one of progressive alteration of the force-composition vectors. In the case of the enemy's forces, it is desired to degrade the condition of a rather large-dimensional vector that details his force composition. So far as one is able to choose among alternatives, this is done by selecting, at each point in the progressive conduct of battle, certain aspects of the enemy's force-vector for concentrated attack. The typical rates, and their short-term time-patterns, at which population velocities (changes in the force-composition vectors) occur, may be distinctive of the type of war, e.g., whether it be cold war, nuclear war, guerrilla war, massive land warfare, etc. What happens in any short time interval may be regarded as an exchange of input for output, in which materiel and manpower have typically been exchanged for enemy dead; medical care and training have been exchanged for combat troops; etc. For some purposes, the characteristic course of combat may be indicated by the exchange rates that occur. Current methods of war-gaming to a large extent simply represent such force-composition vectors, with the assistance of high-speed computers, in exhaustive detail. The battle area may be divided up into grids, and the quantity of each type of troop and materiel present in each grid position is kept track of at all times during the war game. Other details may include: condition; readiness (loading, under repair, moving, firing, etc.); mission (direction of objective); and capability considering the nature of the grid position (e.g., the velocity at which tanks can move is made to depend upon the average profile of the grid, denseness of forest, location of rivers, their depth, etc.). In short, the game simply displays the entire combat force-composition vectors in detail.
The playing of the game then follows the changes in these vectors over time. The "Lanchester theories" of combat attrition (para 2-6.2) attempt to summarize the average quantitative effects that may inevitably dominate the nature of development of the battle, simply because of the fundamental importance of such factors as force size and average attrition rate (see ch 15).

2-8.3 Developing Capabilities

Activity may consist primarily in the building up of capability for certain intended future activity. In the case of combat, this capability is measured in terms of combat postures and potentials. In the case of noncombat activity, it may be the rate of tooling, of investment in equipment, or the construction of installations. Formally such buildup may be as complex a vector in design as is the intended resulting posture.

2-8.4 Transportation

Transportation as used here refers not to the en route aspects, or cargo-handling details, of the activity, but rather to the coordinative aspects, i.e., the planning and design of a particular shipping program that may typically involve the coordinated transporting of troops, materiel, or other cargo from a number of origins to a number of destinations. The design of the activity consists in determining the amounts of each type of cargo to ship from each origin to each destination, and the routes and times of shipment. Once the coordinated flow has been designed, the execution becomes primarily a routine physical activity, although it is subject to later redesign in view of unanticipated attrition, costs or other factors. The set of flows is emphatically matrix in form, with sij denoting the average rate at which, for any given item of supply, the item is to be shipped from origin i (factory, arsenal, manufacturing point, regional warehouse, port) to destination j (post, camp, station, port, warehouse, etc.). The design of the total coordinated flow in such a way as to satisfy requirements and constraints, and to optimize given objectives, has been the subject of many analytical studies. See chapter 9, where peacetime and combat forms of the transportation problem are contrasted.

2-8.5 Production

Production is here defined as any type of productive activity. The production center may be an arsenal, or even a department of an arsenal, a factory, a school (producing graduates), a repair shop, a hospital, an administrative headquarters, a communications center, a service center, etc. Typically a center may be important simply because it is capable of performing a substantial variety of work. Thus the momentary input is a multi-dimensional demand vector, each coordinate corresponding to the type of work to be done, and the coordinate value representing the rate at which the work is typically to be done. The output is similarly a multi-product stream, a vector in which the coordinate positions refer to the kinds of products that are made. Production is, in the case of manufacturing, essentially matrix in form, with rij representing the amount of input component i that is going into output component (product) j, rij being expressible as a rate or as a total amount in a definite period of time as appropriate. Such matrices are often on file in factories in the form of parts-explosion lists for assemblies and similar specifications of the input required for a given output. (See ch 3.) The complexity of the form of the matrix [rij] may be an index of the complexity of the production process.
For example, a sparse matrix (one with many 0's and blocks of 0's) may signify considerable subdivisibility of production as a process; i.e., the process can be decomposed into subprocesses that do not share input or output. When, however, decomposition is difficult, then typically the various production subprocesses share resources. The most efficient allocation of resources for the organization as a whole then becomes an important objective of the proper design of the production activity treated as a totality, much as in the case of transportation as discussed above. Selecting the best production matrix by which to transform a given set of inputs or resources into a required set of outputs or products is termed programming. Recent developments in the field of mathematical programming have greatly increased the methodological capability now available with which to design efficiently the total activity of such production complexes.

2-8.6 Sums and Scalings of Linear Activities

Sometimes a process is encountered which is basic in the following sense. When the activity is operated at a given level, it characteristically consumes each of a set of M inputs at rates f1, ..., fM and simultaneously produces a set of N outputs at rates p1, ..., pN. The activity may then be represented as an input-output vector A = [f1, ..., fM; p1, ..., pN]. The only alternatives of operation of the activity are scalings of this vector, i.e., operation at vector rates in which the inputs and outputs are adjusted by the same proportionality constant. Such a process is readily regarded from the formal standpoint as a scalable activity. Examples of this type of activity are found in chemical refining processes and in other process technology. They also occur in manufacturing in the continuous production of a given type of product on some machine or in some center. Sometimes the total activity of the production center can then be represented as a vector [A1, ..., AK] of such activities, each coordinate position corresponding to the conduct of a basic activity at a level whose numerical value is the coordinate value. In this way the total activity of the center is represented as a vector of basic scalable activities. The dimensionality of the total activity is then only the number of basic activities that are active. Transportation represents a rather trivial example of this, in which inputs (amounts dispatched out of the origins) and outputs (amounts delivered at the destinations) are shared in a rather simple fashion. Some examples of the addition of vectors representing linear activities and their multiplication by scalars are given below. This subject is developed further in chapters 7 and 9.

(1) Let the vector S1 = [s11, ..., sn1] represent a search activity in which the ith of a given set of regions is searched at intensity si1 by units of one type of reconnaissance organization. Let S2 = [s12, ..., sn2] similarly represent the searching of the same set of regions by a second type of search organization, which must search the regions in proportions not necessarily the same as those of the first organization. Measures of search intensity are given elsewhere in this volume. If each of the above operations can be constructed linearly, then the linear combination aS1 + bS2 represents the search of the set of regions by a units of the first type plus search by b units of the second type.

(2) Suppose that a certain manufacturing process in a certain arsenal can characteristically make a certain set of N products in amounts p1, ..., pN
by consuming a certain set of M inputs (labor, materials, machine capacities) in amounts f1, ..., fM. Its activity may be regarded as an input-output vector A1 = [f1, ..., fM; p1, ..., pN]. Now suppose, as is sometimes true, that the process can be operated at any time rate kA1, where k is a proper or improper fraction in certain ranges (for example, the process can be operated part of the time only, or during extra hours, etc.). In other words, the process is supposed to be scalable. Now suppose further that the same set of inputs and outputs characterize a process in another arsenal, but in different proportions of the coordinate values. Let A2 denote the corresponding activity vector for the second arsenal, and suppose that it too can be operated at rates which are scalings by a factor k2. Then the linear combination k1A1 + k2A2 will represent the combined input and output of the two arsenals, which, if the ranges of values of the scaling coefficients k1 and k2 are used to advantage, may be made to bring about total combined mixtures of inputs and outputs in proportions of which neither arsenal by itself is capable.

(3) Transportation in both its peacetime and combat forms may be regarded readily and usefully as a sum of activity-vectors. Peacetime transportation is often quite scalable, but combat may not so often represent a scalable activity.

2-8.7 Effects of Timing and Scheduling on the Linearity of Activities

An activity may, for programming purposes, be scaled simply proportionally to the amount of time for which it is conducted at a given rate. This is the commonest way in which activity rates are brought into mutual balance, i.e., one is run for more time than the other in order to match their joint output. With respect to its conduct through time, most activities are in this way quite linear. However, when the transitions that must be programmed from one type of activity to another consume time and effort in themselves, linearity of the input and output of the activities as a group may be lost. In manufacturing, this phenomenon occurs most commonly in connection with "setup." Activity of all types, not just manufacturing, exhibits from time to time the equivalent of a "setup" aspect that interrupts possible linearity of the activity in time. This is discussed more fully in the section on Activity Structures (ch 7).

2-9 Allocations

The term allocation of effort is a common and appropriate one for designating the design of an activity in which xi represents the amount of the ith activity that will be performed. A quantity x is said to be allocated among a set of I respects if the amount xi is assigned to the ith respect and if Σi xi = x. The vector [x1, ..., xI] is the allocation. In the organization of joint effort, allocations play frequent roles. Typically there is allocation of the resources that must be shared. However, there may be an allocation of output, particularly of gains that are made. Examples of allocations are in coordinated fire, where xij may represent the amount of fire from location i that is to be directed against target j; or the rate of search of region j by search unit i, when the mission can be described quantitatively. Transportation typically requires an allocation of the form xijkt = the amount of item i to be shipped from origin j to destination k in time period t.
In fact, what is usually called the "transportation problem" of linear programming could be more generally termed the allocation problem, since its essential structure is merely an allocation of sums. (See ch 9.)

2-10 Assignments

An assignment is typically an allocation in which the quantities allocated are discrete. For example, at any moment an organization may be assigned to a unique mission. If the total time during which the mission is assigned to various organizations is analyzed, the time will be found to be allocated among the organizations to which the mission was from time to time assigned. A significant assignment problem occurs in connection with defense systems, for example in antimissile defense. Here it is desired to assign targets to defensive weapons, in an individual fashion. The problem occurs also in the design of offensive raids upon enemy targets. Quantitatively, an assignment may be representable structurally as a binary matrix of 0's and 1's in which the element aij is equal to 1 if the ith element to be assigned (weapon organization, machine-operator, etc.) is in fact assigned to the jth assignment (target, mission, machine, etc.), but otherwise aij is 0 in value. A form of assignment is routing through a network of nodes connected by branches, such as might be used to represent abstractly a command network, a communications network, or a supply network. Note that if node i is "assigned" to node j, we may set the element aij of an assignment matrix equal to 1, and otherwise equal to 0. The matrix [aij] now represents the structure of connections in the network. "Connection" may stand for "commands," "is attached to for supply," "is to send all communication to," etc. How products of such matrices may be used to analyze flow through such networks is dealt with in chapter 3.

2-11 Vectors in Strategies of Action

In military and other activities, there are frequently alternative courses of action between which choices must be made:

(1) A combat decision may be to choose among a set of alternative routes along which to move, to choose the force-size (if any) to send along each route or to concentrate at each position, and to choose what appearance to give to the enemy.

(2) A production decision may be to choose which of a set of products to make next, to choose the force-size to assign to each of a set of tasks, and to choose which machines to remove from production for maintenance and repair.

(3) A weapon-design decision may be to choose which weapon to develop, whether to develop more than one of a somewhat similar group, how many to develop and how rapidly to develop them, considering the likely enemy counterdevelopmental tactic once the decision is made.

(4) A research decision may be to choose which experiment to conduct next, and how much in funds and effort to invest in a given investigation.

The result of the decision can be a strategy, or a plan consisting of choices to be made for each foreseeable alternative. If the course of action can be opposed, as in combat, then it is sometimes necessary to choose a mixed strategy, in which a given alternative is followed some part of the time or with some probability. Vectors can assist in the representation of strategies. For example, a mixed strategy could be represented as [x1, ..., xn], where xi is the frequency with which the ith alternative is to be followed, and Σi xi = 1. The determination of the frequency with which any alternative should be followed is the subject of game theory, and is covered in chapter 10.
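As a hedged illustration of the mixed-strategy representation just described (not drawn from the pamphlet itself), the following Python sketch treats a mixed strategy as a frequency vector summing to 1, selects an alternative for a single play, and computes the expected payoff against an opponent's mixed strategy; the payoff matrix and the frequency values are arbitrary assumptions.

    import random

    def play(x):
        # choose one alternative for a single play, according to the mixed strategy x
        return random.choices(range(len(x)), weights=x, k=1)[0]

    def expected_payoff(payoff, x, y):
        # payoff[i][j] is the payoff when we follow alternative i and the opponent follows j;
        # x and y are the two mixed strategies (frequency vectors summing to 1)
        return sum(x[i] * payoff[i][j] * y[j]
                   for i in range(len(x)) for j in range(len(y)))

    payoff = [[0, 2], [1, -1]]   # an arbitrary 2 x 2 game for illustration
    x = [0.6, 0.4]               # our mixed strategy
    y = [0.5, 0.5]               # the opponent's mixed strategy
    print(play(x), expected_payoff(payoff, x, y))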
If each of the various alternatives consists in initiating some one of a set of linear activities, then all of the possible strategies may be found as linear combinations of a certain basic set of the activities. (See ch 9.) In this case, vectors are employed to represent the various re sources that may be shared, the products that are to be made (missions accomplished) and the activities that are to be engaged in. As has been indicated by examples earlier in this chapter, linearity of alternatives may well be the case in choices of strategies in transportation and in production. The choice of the design of a weapon or weapon system may sometimes involve both the discreteness of alternative characteristic actions of a game and the option of physical characteristic mixtures of an allocation. For example, once the weapon-design is chosen, commitment to that weapon has been made typically for some period of time; hence the choice of move cannot be varied once the choice is made. To this extent the choice is discrete, the discreteness taking the form of commitment to one design for a period of time. But the particular design may possess a mixture of capabilities as a weapon-degrees of ruggedness, versatility, etc. as measurable in definite ways-that to some extent overcomes the discreteness of the commitment to the particular design. The structure of this action may be contrasted with a linear activity which is a physical mixture in which by definition the numerical mixture can be varied at will during the course of time. Sometimes the very multiplicity of alternatives can tend to discourage careful quantitative analysis because of a scarcity of reliable estimates of the many quantities entailed. But without actual analysis, it may in fact be difficult to judge the effects of these unreliabilities of the in gredients upon the complete strategy. 2-12 Distributions 2-12.1 Quantitative Usages of the Term The term distribution is commonly used in several rather different specific meanings, of which the following are perhaps most familiar: ( 1) Spatial distributions, including geographic distributions of resources, of population, of targets, of activity, of defensive sites, etc. Dispersion of troops in combat falls in this category, as does the design of shrapnel bursts to achieve a given characteristic radial distribution of lethality. The distribution of the amount of heat throughout a gun barrel at any instant is a quite dynamic example of a physical distribution of this general category. (2) Statistical distributions in which probability is not necessarily implied. This category includes the great number of the natural and acquired characteristics of persons, places and things, including height, weight, endurance, and technical capability of people; size, weight, and defects of a manufactured output; and environmental effects such as rainfall and humidity. (3) Probability distributions that specifically imply the effects of independence of values within a given overall distribution of frequency of each given value; for example, the probability that a given soldier will have a given height, or endurance profile, that a given mechanical system will at a given moment be found in a given condition, that a given strategy be selected in a given play of a given game. 2-12.2 Distributions as Vectors Quantitatively, a distribution may of course be regarded as a vector whose coordinates specify the amounts of a quantity that are present in each of a given set of respects. 
The respects correspond individually to the coordinate positions of the vector, and are ordinarily mutually exclusive and collectively exhaustive as a set.
It is not usually stressed that a distribution is a vector, in spite of the formal similarity. Nevertheless vector distributions provide multi-dimensional wholes in terms of which to characterize a given operational situation. When their usefulness in this way is recognized, distributions may provide insight into the nature of the operational phenomena or situation in question, since they tend to specify the net total effect of all of the operational forces that are at work. From this standpoint, two principal categories of distributions may be distinguished:
(1) Non-probabilistic distributions that typically specify the current simultaneous value of the distributed quantities. These include both spatial distributions and statistical distributions.
(2) Probability distributions which arise when the conditions of independence are met. A full development of the uses of probability distributions goes beyond the scope of this paragraph, and is treated in chapter 4.
The significance in a distribution of the vectorial property, in the sense of a multi-dimensional whole, may be referred to as the integrity of the distribution. This integrity is the basis for those statistical laws and other quantitative laws of distributions that can be depended upon operationally for purposes of predicting the future. This is the most abstract meaning of the term vector, yet may be the most useful meaning in dealing with organizations and activities of great complexity. These principles are illustrated in the following examples and discussion.

2-12.3 Statistical Distributions in Ballistics
The Gaussian distribution is frequently employed in exterior ballistics. In what might be referred to as "Gaussian rounds," the lateral deviation, x, of the point of impact of an individual round from the average impact point, when measured in units of the standard error of the deviation, is given by the formula

    f(x) = (1/√(2π)) exp(-x²/2).

This formula refers to some definite population of rounds fired under a given condition.
Consider now two different such populations, differing only in their standard deviations, so that the frequency distribution of deviation for one of the two populations must now be written as

    f₁(x) = (1/(σ₁√(2π))) exp(-x²/(2σ₁²)),

and the frequency distribution for the other population of rounds as

    f₂(x) = (1/(σ₂√(2π))) exp(-x²/(2σ₂²)).

Moreover, consider still a third population of shots which consists of N₁ shots from the first population and N₂ rounds from the second. Then in the composite population the total density N(x) of rounds that will have a deviation of x will be

    N(x) = N₁f₁(x) + N₂f₂(x), where N₁ + N₂ = N.

This latter expression is a form which represents scaling and addition of vectors.
The distribution f(x) refers to a certain assumed population of shots. The shots will be made over a period of time, and the distribution of frequencies expected is compared with that obtained by accumulating the record of each shot into one compilation. If the conditions of shooting do not change during the progress of shooting, then the distribution f(x) is regarded as "characterizing the shooting-situation at any moment." It thus summarizes all of the effects that may be operative in the outcome of each shot. The distribution consists of specifying the frequency of occurrence, the likelihood or probability, of each deviation.
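A brief numerical sketch of the composite density just described (in Python, with hypothetical round counts and standard errors): the two Gaussian densities are scaled by the sizes of their populations and added, exactly as the expression N(x) = N₁f₁(x) + N₂f₂(x) prescribes.

    import math

    def gaussian_density(x, sigma):
        """Frequency density of a lateral deviation x for a population of
        rounds whose standard error of deviation is sigma."""
        return math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

    # Hypothetical populations: N1 rounds with sigma1, N2 rounds with sigma2.
    N1, sigma1 = 400, 1.0
    N2, sigma2 = 100, 2.5

    def composite_density(x):
        """Total density N(x) of rounds having deviation x in the combined
        population: a scaled sum (linear combination) of the two densities."""
        return N1 * gaussian_density(x, sigma1) + N2 * gaussian_density(x, sigma2)

    for deviation in (0.0, 1.0, 2.0, 3.0):
        print(deviation, round(composite_density(deviation), 2))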
If the tendencies that determine the numericalvalues of each of these likelihoods are regarded as being present in the firing of each shot, andeffective in each firing, then the distribution represents the simultaneous "presence" of all of thesetendencies. 2-17 In this respect, the distribution is a multi-dimensional whole of special sort. It is not easy to see why the distribution should be correct, nor to see how the variety of physical forces in effect at each firing is translated into the frequency of occurrence of each possible deviation. Nevertheless for practical purposes the availability of a correct predictor of the frequency of deviation, i.e., the availability of the distribution, is sufficient for practical purposes. For example, hit probabilities can be calculated and ultimately weapon systems can be evaluated with the assistance of these fundamental hit-probabilities. (See ch 15.) 2-12.4 The Design of Lethality of Burst as a Distribution The distribution of lethality characteristics of the burst of a given projectile is as important as the aiming dispersion of shots. The former distribution depends upon the distribution on the one hand of targets around and away from the point of burst, and on the other upon the deliberate design of fragmentation or diffusion of the projectile's charge. In the case of nuclear bursts, which can have enormous lethal radii, one result of the importance of these factors has been a considerable investigation of the geographic distribution of population around typical metropolitan, and other likely nuclear targets. Also important in the discussion of lethality is the variation of reaction to a given lethal mechanism from target element to target element. Changes in weapon yield or changes through time in the characteristic dispersion-pattern of enemy targets may obviously warrant recalculation of the designed spread of the weapons' blast effects. 2-12.5 Deliberate Discursive Unity in Game-Theor-etic Designs In the example of exterior ballistics above, all of the frequencies in the distribution that are realized over time result from inability to control the behavior of each shot, projectile, and ballistic trajectory more precisely. But in game theory the opposite ability to random'ize each play of a game may be used deliberately to realize the full occurrence of all the frequencies, only over time, with the effect of rendering each choice of play unpredictable by the enemy. As described earlier, suppose that in a given game (duel, contest, struggle) each player has a definite set of possible moves, of which only one can be made on a given play of the game. Suppose further that for one of the players the optimal strategy is for him to choose the ith of the moves open to him with probability Pi· Then if his total moves are n in number, the vector [p11 ••• ,pn] of probabilities describes at once his total strategy. That is, although in each play only one move can be chosen, nothing less than this "vector of probabilities" will completely describe the strategic potential that is inherent in the situation prior to the actual choice of a specific move. In determining and designing such probability-vectors of the choice action, evidently a distribution is being designed whose unity is realized only discursively, in principle, i.e., by repeated plays of the game. 
Note that if the game is played only once, we may choose instead to focus attention on any sequence of different games that are played in order to identify the realization of strategies only during the course of time. 2-12.6 Variety There are varied degrees to which a given characteristic may typically be present in a population, whether the population be one of humans, of equipment, supplies, or locations. Still more varied are the degrees to which a given set of joint characteristics may be found present in a given population. Thus equipment of standard specification will exhibit a variety of operating defects and fluctuations from standard, due to rate of use, environment, and maintenance policies. As a result of the above variations, sometimes a frustrating variety of outcome will characterize supposedly identical repetitions of a given posed situation; e.g., successive firings at a given type of ta1·get with the same type of weapon are not uniformly accurate or homogeneous in results; the repetition of a given task produces a different result. Variety can also be used in strategy, in order to confuse and defeat an enemy, when the contest does in fact offer alternatives of action. 2-18 2-12.7 Statistical Laws In dealing with the variety that is typically encountered in experience, the greatest need isoften to be able to represent it in a way that makes it possible to predict it. When the variety canbe represented in terms of a distribution, then the problem of prediction may not be to predict theindividual occurrence, but merely to predict the distribution. That such prediction is possible iscontained in the principle that a distribution may be stable even though the population which it!characterizes is not. A very great many examples of this have been turned up either in theempirical statistical observation of the natural world and in the study of theoretical probabilitydistributions. Such distributions are "statistical laws." Some of the most common are given inchapter 4. 2-12.8 Dimensionality When the dimensions of a vector correspond to levels of a given characteristic, the dimensionality of the vector, i.e., of the distribution, will not be invariable. By choosing to observe,measure or specify the characteristic at a finer level, the dimensionality of the vector may beincreased. To each mesh of measurement of quality differences there corresponds a dimensionalityfor the vector. In the case of probability vectors of the state of a system, the dimensionality may be precise.Specifically, let the vector [p]J ... ,pn] describe the probability that a given system is in any one ofn states, and is in particular in state i with probability p; as given by the vector. Let a matrix[a;i] consist of the probabilities aii that the system will as a result of a given trial be in state jgiven that just before the trial it was in state i. Then the probability vector of the system's statemay change as a result of the trial, and will be given by the product of the vector [p;] by thematrix [a;i]. The product must preserve the dimensionality of the state-space of the system, i.e.,of the possible states in which the system can be. Measurements on the system must be consistentwith the dimensionality chosen; otherwise the state-analysis of the system may be presumed to bein error. This topic is more fully dealt with in chapters 3 and 6.Precise dimensionality is also a matter of concern in constructing game-theoretic probabilitydistributions for strategies to be employed. 
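The product described in the paragraph on dimensionality can be set down directly. The sketch below (Python, with a hypothetical three-state system and made-up probabilities) multiplies the state-probability vector by the trial matrix [a_ij] and confirms that both the dimensionality of the state-space and the unit total of probability are preserved.

    # Probability vector [p_1, p_2, p_3] for a system in one of 3 states.
    p = [0.6, 0.3, 0.1]

    # a[i][j]: probability that one trial carries the system from state i to state j.
    a = [
        [0.7, 0.2, 0.1],
        [0.0, 0.5, 0.5],
        [0.1, 0.0, 0.9],
    ]

    # New state-probability vector: the product of [p_i] by the matrix [a_ij].
    p_after = [sum(p[i] * a[i][j] for i in range(3)) for j in range(3)]

    assert len(p_after) == len(p)              # dimensionality is preserved
    assert abs(sum(p_after) - 1.0) < 1e-9      # the probabilities still total 1
    print(p_after)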
2-19 CHAPTER 3 MATRICES, GRAPHS, AND NETWORKS 3-1 Summary Any particular numerical matrix is simply a rectangular array of numbers, with notation [a;j], asillustrated in hypervectors in chapter 2 and as described in this chapter. Military uses of matrices in operational analysis include two broad categories which are conceptually quite distinct, namely: (1) Hypervectors: i.e., the simultaneous characterizing of something in two different dimensionalaspects. This use of matrices is primarily for structural purposes. Formally, it involvesno new concepts beyond those of a vector itself; practically, it includes many 'importantexamples: man-weapon and man-machine combinations, weapon-target assignments, organization-mission assignments, and transportation links. (2) Transformation of Vectors: This is a more abstract use of matrices than that given aboveand represents the transformation of one vector or situation into another, thus representing action, development, change and transition, and history. This .use of matrices isdistinctive and of the greatest importance. Examples of the practical application of matrix methodology to operational topics are: (1) The design and evaluation of networks for command, communication, supply, and taskprocessmg. (2) The history of the state of a working system (e.g., a weapon system) including the effectsof random developments on system states and histories. By extrapolation, the development of the history of groups of such systems, for example, constitutes the history of abattle. (3) The allocation of military resources in the conduct of multi-dimensional productive activities, including combat, transportation, and manufacturing. This method is especiallyuseful in programming these allocations. (4) Evaluation of the consequences of the repeated making of decisions when decision affectsthe change of state of the participating systems.Repeated transformations can be represented by products of matrices, successive products representing the total changes that occur over successively longer periods of time. Diffusion of the facts ofthe present into the possibilities of the future occurs in such products when the elements of the matrixrepresent diffusion coefficients, e.g., probabilities or alternatives of disposition.The details of these various uses of matrices are developed progressively in this chapter and otherchapters of the text. This chapter is concerned with the central formal aspects of matrices, particularlythose associated with product of matrices and the basic notion of a transformation-its graph and thedevelopment of a network therefrom. The natural order of arrangement of the topic is to consider the use of matrices as hypervectorsafter discussing their use in connection with transformations, since some of the former uses depend uponthe performance of transformation. However, some uses of matrices, which are essentially two-dimen sional compounds of vectors, depend merely upon the presence of a joint dependence or relationshipand not upon the effect of any transformation that may have been founded on this relationship. Manycompound structures seem to be of this sort.A section on basic operations of matrix algebra is included at the end of the chapter. 
3-2 Definitions and Notation
A matrix may be represented by an M x N (read "M by N") rectangular array of elements, MN in number, arranged in M horizontal rows and N vertical columns, thus:

    a_11  a_12  ...  a_1N
    a_21  a_22  ...  a_2N
    .....................
    a_M1  a_M2  ...  a_MN

A matrix such as this is usually represented by the notation [a_ij], whose general element is a_ij, where the subscript i refers to row number and the subscript j to column number. It may be regarded as a row vector of column vectors or a column vector of row vectors, as required.
The mathematical origin of matrices as transformations was primarily in connection with change of coordinate systems or axes. This use of matrices, however, in no way indicates their potential usefulness in operational topics. The paragraphs which follow provide detailed development of how matrices may be used to describe activity and change.

3-3 Matrices and Graphs of Relations
3-3.1 Examples of Relation
Let R denote some given relationship that may hold between an element i of one set of M things and an element j of another set of N things, where the possibility is not excluded that the second set of things is the same as the first set. Examples are:
(1) i commands j;
(2) i is a geographic point from which there is a direct route to point j;
(3) i provides supplies or services to j;
(4) i is used in manufacturing j;
(5) i is a communication point from which messages can be transmitted directly to point j;
(6) i is a state or condition of a given system which can be followed at any time by state or condition j.

3-4 Types of Relations
Various types of relations may be identified, namely:
(1) Symmetric: A relation, R, is termed symmetric if whenever i bears R to j, then j bears R to i;
(2) Transitive: R is termed transitive if whenever i bears R to j and j bears R to k, then i bears R to k;
(3) Bi-partite: R is termed bi-partite if the sets to which i and j belong, respectively, have no members in common.
The above discussion pertains to binary relations, i.e., the relation R joins only two things. Higher order relations may be recognized, e.g., trinary relations that hold between three things at a time. Here, however, emphasis is confined to binary relations.

3-4.1 Graph of a Relation
In connection with a binary relation R, the graph is defined as the collection of things that are connected by the relation, together with a specification of the pairs of these things between which the relation R holds. Geometrically, one may regard the things related as represented by a set of points, with a line drawn from the point i to the point j if the relation R holds from i to j. If the relation R is not symmetric, then an arrow is used on the line to indicate in which direction the relation holds.

3-4.2 Relation Matrices
Corresponding to a given relation R, a binary matrix can be constructed which is the matrix of the relation R. The method of construction is as follows: assign the element r_ij of the matrix the value 1 if the element i of the first set does in fact bear the relation R to the element j of the second set; otherwise assign r_ij the value 0. Then [r_ij] is an M x N matrix. Matrix and graph representations of the same relation are shown in figure 3-1.
(Figure 3-1. A relation displayed both as a 4 x 4 binary matrix, rows indexed by values of i and columns by values of j, and as the corresponding directed graph.)

3-5 Networks
3-5.1 General
We will illustrate this concept by considering the above example where i is a geographic point from which there is a direct route to point j.
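As a small working sketch of such a relation matrix (Python; the points and routes named are hypothetical), the code below constructs the binary matrix [r_ij] for the relation "there is a direct route from point i to point j," in the manner of figure 3-1.

    # Hypothetical set of geographic points and the direct routes between them.
    points = ["depot", "port", "camp", "airfield"]
    direct_routes = {("depot", "port"), ("port", "camp"),
                     ("camp", "depot"), ("camp", "airfield")}

    n = len(points)
    # r[i][j] = 1 if the relation R ("direct route from i to j") holds, else 0.
    r = [[1 if (points[i], points[j]) in direct_routes else 0
          for j in range(n)]
         for i in range(n)]

    for row in r:
        print(row)
    # The relation here is not symmetric (there is a route depot -> port but
    # none port -> depot), so the graph of R would carry arrows on its lines.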
A set of such points may typically constitute a travel network.Each point, or node, is a station of some sort-i.e., message center, supply point, town, port, spacestation, early warning site, etc. Thus the network's properties arise principally in connection with what are termed, in connection with the graph of the relation R, chains, paths, or arcs linking together pointsthat are successively connected by the relation R.The concepts that develop from a detailed formal analysis of such a network have a ready application to other types of networks which can be associated, respectively, with the relations in other examples: networks identified in connection with the fabrication of equipment from parts and the assembly (disassembly) of equipment for repair; with the historical development of weapon systems; and withorganized military units as they pass from state to state during time. 3-5.2 Initial Movements Imagine a single vehicle traveling through a network. If the vehicle can travel from node i to nodej along a direct route-i.e., passing through no other nodes on the way-assign the a;i element of amatrix the value 1. Otherwise, assign a;j the value 0. The matrix A then indicates the relation "j isaccessible from i along a direct route." The accessibility is then the relationship R. The element a;; = 1means that there exists some "circular" route out of i leading back again to i.3-5.2.1 Trips The matrix also can represent how a vehicle can advance from node to node. Let a single trip berepresented by a vehicle advancing from one node to the next and assume that with the making of any such trip there is a definite time period (days, hours) which passes. For purposes of conceptual conve nience, attention can be first confined to the case in which there is some constant value for the lengthof every trip, however, this is not necessary. 3-5.2.2 Changes of Position Consider the position of the vehicle just before any given trip. Its position can be represented by one of theN-dimensional unit vectors, in particular, by a unit vector that has a; = 1 in the ith coordinate position if the vehicle is at position i. In general, denote the vehicle's position prior to the trip by avector P = [Pt, ..., PN] which is in value a unit vector with the vehicle at node i prior to the trip.Now let the vehicle advance by one trip. What vector describes its new position and how can thisnew vector be generated from the old vector and the old matrix [a;i]? To find the answer to the question, 3-3 which is the basic instance of the multiplication of vectors and matrices, several related categories of problems may be considered. This is done in the following paragraphs: 3-5.3, Unique Routings; 3-5.4, Random Routings; and 3-5.5, Further Successive Trips. 3-5.3 Unique Routing If in the matrix [ad for every i, the row vector [a;17 •••, a;N] is a unit vector, then at each node there will be exactly one choice of node to go to next (possibly the same node if a;; = 1) and the routing will be unique. For each node j consider the expression P1a1i + p2a2i + .... + PNaNi· If the vehicle starts at node i, then p; will be 1 in value and the preceding expression will be 1 if a;i = 1; in which case the vehicle will go to node j. If a;i = 0, the above expression will be 0 in value and it can be concluded that the vehicle could not be at node j at the end of a single trip starting at i. 
Consequently, for each j, the above expression correctly gives the value of the jth coordinate in a vector describing the position of a vehicle after a single trip. This position is denoted by the position vector P(1) = [p_1(1), p_2(1), ..., p_N(1)]. Under unique routing, it is an N-dimensional unit vector whose jth coordinate is 1 if and only if the vehicle is at node j after the trip. Methods of forming the matrix and vector products are discussed below.
(1) Scalar Product of Two Vectors
For any two vectors X = [x_1, ..., x_n] and Y = [y_1, ..., y_n] of the same dimensionality, the scalar quantity x_1y_1 + x_2y_2 + ... + x_ny_n is termed the scalar product or product of X and Y, symbolized by X·Y or XY. (See para 2-4.12.) The notation [x_i][y_i] may be used, or even X[y_i] if it is to be emphasized that X is a vector and not a scalar. With this definition of the product, the jth coordinate in the vector P(1), giving the position of the vehicle after one trip, is numerically equal to the product of the position vector P (which for convenience will now be designated as P(0) = [p_1(0), p_2(0), ..., p_N(0)], the position vector after the 0th trip) by the jth column vector in the matrix [a_ij] representing the routing. The vector P(1) is therefore a vector composed of such scalar products.
(2) Product of a Vector by a Matrix
The position of the vehicle after one trip is thus a vector P(1) which can be visualized as the product of the vector P(0) by the matrix [a_ij] in the following sense:

    P(1) = [p_1(1), p_2(1), p_3(1), ..., p_N(1)], where p_i(1) = 1 if the vehicle is at node i and 0 otherwise;

    P(1) = [(p_1a_11 + p_2a_21 + p_3a_31 + ... + p_Na_N1),
            (p_1a_12 + p_2a_22 + p_3a_32 + ... + p_Na_N2),
            (p_1a_13 + p_2a_23 + p_3a_33 + ... + p_Na_N3),
            ...,
            (p_1a_1N + p_2a_2N + p_3a_3N + ... + p_Na_NN)]

i.e., p_j(1) = p_1a_1j + p_2a_2j + ... + p_Na_Nj.
This vector is easily obtained by writing P(0) as a row vector (really a matrix "1 by N," i.e., 1 row by N columns), writing the elements of [a_ij], and performing the matrix multiplication. Reference is made to the end of the chapter for the mechanics of this operation.

    P(1) = P·A = [p_i(0)][a_ij]
         = the row vector whose jth coordinate is the product of the row vector P(0) with the jth column vector of the matrix A
         = the product of the vector P by the matrix A.

This product may be regarded as a vector of vector products, defined earlier. Thus the scalar (dot) product of the two vectors, X·Y, may be regarded as being a product of a row vector by a column vector, i.e.:

    XY = [x_1  x_2  x_3  ...  x_N] | y_1 |
                                   | y_2 |
                                   | y_3 |
                                   | ... |
                                   | y_N |
       = x_1y_1 + x_2y_2 + x_3y_3 + ... + x_Ny_N

If initially the vector Y had been a row vector rather than a column vector, then the indicated product would not be defined as XY but as XY^T, i.e., X multiplied by the transpose of Y.
(3) The Product Viewed as a Transformation
Multiplication in this case is still more abstract or generalized in meaning although, of course, the rule in its calculation is quite precise. Multiplication represents what happens when our vehicle makes a trip, when its position is changed, i.e., transformed. To multiply is thus to transform; in this case, to transform over the period of time required to execute the transformation.
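A minimal sketch of the single-trip product (Python; the four-node network is hypothetical): the position vector P(0) is a unit row vector, the routing matrix A has exactly one 1 in each row (unique routing), and the product P(1) = P(0)·A is the unit vector of the next node.

    # Unique-routing matrix for a hypothetical 4-node network:
    # A[i][j] = 1 means that a vehicle at node i goes next to node j.
    A = [
        [0, 1, 0, 0],   # node 0 -> node 1
        [0, 0, 1, 0],   # node 1 -> node 2
        [0, 0, 0, 1],   # node 2 -> node 3
        [1, 0, 0, 0],   # node 3 -> node 0
    ]

    P0 = [1, 0, 0, 0]   # unit vector: the vehicle starts at node 0

    def vector_times_matrix(p, m):
        """The jth coordinate of the product is the scalar product of the
        row vector p with the jth column of the matrix m."""
        return [sum(p[i] * m[i][j] for i in range(len(p)))
                for j in range(len(m[0]))]

    P1 = vector_times_matrix(P0, A)
    print(P1)   # [0, 1, 0, 0]: after one trip the vehicle is at node 1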
3-5.4 Successive Trips
(1) Random Routing
Suppose now that during a second time period the vehicle makes a further trip. For this second time period let us consider the following revision of the matter: in place of the unique routing matrix [a_ij], suppose now that for each pair of nodes i and j there is merely a definite probability f_ij that if the vehicle is at node i it will advance to node j on the second trip. The probability may be associated with the nature of the day, with the driver's behavior, with the network, or possibly with all three or other factors as well.
(2) Stochastic Matrix
Consider the matrix F = [f_ij]. For each given i, the row vector [f_i1, f_i2, ..., f_iN] is a stochastic vector, i.e., an ordered n-tuple of random variables, each of which represents some aspect of the outcome. The matrix F is termed a stochastic matrix since every row is a stochastic vector.
At the end of the second trip the position of the vehicle now will be indefinite and can only be represented by a probability distribution. Let p_j(2) denote the probability that the vehicle arrives at node j after the second trip. Let P(2) be the vector [p_1(2), p_2(2), ..., p_N(2)]. Then it can be verified simply by inspecting terms that P(2) = P(1)·[f_ij]. Moreover, had P(1) been a stochastic vector (a unit vector is a special case of a stochastic vector), then the resulting vector P(2) would still be a stochastic vector.
(3) Transition Matrix
The matrix F is commonly termed a transition matrix because its elements represent transition probabilities. The matrix A for the first trip is a special case of a transition matrix.

3-5.5 Product of a Matrix by a Matrix
(1) Matrix Products
Let us examine in detail the coordinates of P(2) = P(1)·[f_ij]:

    P(2) = [p_1(1)  p_2(1)  p_3(1)  ...  p_N(1)] [f_ij]
         = [(p_1(1)f_11 + p_2(1)f_21 + p_3(1)f_31 + ... + p_N(1)f_N1),
            (p_1(1)f_12 + p_2(1)f_22 + p_3(1)f_32 + ... + p_N(1)f_N2),
            (p_1(1)f_13 + p_2(1)f_23 + p_3(1)f_33 + ... + p_N(1)f_N3),
            ...,
            (p_1(1)f_1N + p_2(1)f_2N + p_3(1)f_3N + ... + p_N(1)f_NN)]

or p_j(2) = p_1(1)f_1j + p_2(1)f_2j + ... + p_N(1)f_Nj. Now substituting the values of p_i(1) from the expansion in paragraph 3-5.3(2), the jth coordinate of P(2) becomes:

    p_j(2) = p_1(a_11f_1j + a_12f_2j + a_13f_3j + ... + a_1Nf_Nj)
           + p_2(a_21f_1j + a_22f_2j + a_23f_3j + ... + a_2Nf_Nj)
           + p_3(a_31f_1j + a_32f_2j + a_33f_3j + ... + a_3Nf_Nj)
           + ...
           + p_N(a_N1f_1j + a_N2f_2j + a_N3f_3j + ... + a_NNf_Nj)

The right-hand side of the above expression is interpretable as the product of the vector P(0) by a single matrix whose jth column has as its element in its ith row that quantity which is the product of the ith row of the matrix A with the jth column of the matrix F. Denote this latter matrix by:

    A·F = the matrix whose ijth element is the product of the ith row of A by the jth column of F.

Note that this matrix, A·F, is the result of multiplying the matrix A by the matrix F. It may be regarded as a column vector of products, each of which is the product of a row vector by a matrix. Considered in this manner, it is the natural extension of the product of a vector and a matrix; this operation was discussed in an earlier paragraph.
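The two-trip calculation just expanded may be sketched numerically (Python; the three-node network and the probabilities are hypothetical): the position after the first trip is P(1) = P(0)·A, the distribution after the random second trip is P(2) = P(1)·F, and the same P(2) is obtained from the single matrix A·F.

    def vec_mat(p, m):
        """Product of a row vector by a matrix, as in paragraph 3-5.3."""
        return [sum(p[i] * m[i][j] for i in range(len(p)))
                for j in range(len(m[0]))]

    def mat_mat(a, b):
        """Matrix product: the ij-th element is (ith row of a).(jth column of b)."""
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    A = [[0, 1, 0],          # unique routing for the first trip
         [0, 0, 1],
         [1, 0, 0]]
    F = [[0.5, 0.5, 0.0],    # random routing for the second trip
         [0.1, 0.6, 0.3],    # (each row sums to 1: a stochastic matrix)
         [0.0, 0.2, 0.8]]

    P0 = [1, 0, 0]                      # the vehicle starts at node 0
    P1 = vec_mat(P0, A)                 # definite position after trip 1
    P2 = vec_mat(P1, F)                 # probability distribution after trip 2
    print(P2)                           # [0.1, 0.6, 0.3]
    print(vec_mat(P0, mat_mat(A, F)))   # the same result: P(2) = P(0)(A.F)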
We may also arrive at the same result by considering the transformation as the product of three matrices, which indeed it is:

    P(2) = P(0)·A·F = P(0)[A·F]

It should be emphasized that the multiplication of matrices is associative but not commutative, i.e.:

    ABC = (AB)C = A(BC), but
    A·B·C ≠ A·C·B (except in special cases).

(2) Two Successive Transformations Regarded as a Single Transformation
The product AF of the two matrices clearly corresponds to a single transformation representing the net outcome of two successive trips, i.e.:

    P(2) = [P(0)A]F = P(0)[AF]

The product of the two stochastic or transition matrices is thus a stochastic or transition matrix.

3-5.6 Further Successive Trips
(1) Higher Matrix Powers
Mⁿ may be used to symbolize the n-fold product of a matrix M by itself. The outcome of n successive trips by the vehicle, in terms of its position at the end of the nth trip, may be summarized as follows:
(a) PAⁿ represents the definite position of the vehicle under unique routing.
(b) PFⁿ represents the probability distribution of the position of the vehicle under random routing.
(2) Nonstationary Stipulations
Use of matrix multiplication is not altered if it is required that the matrix A or the matrix F be varied from trip to trip, or time-period to time-period, to represent, for example, the effects of weather changes or of other systematic or environmental factors that alter the routing over the course of time in a manner which is independent of the vehicle's path. For the kth trip, the matrix that pertains to a trip taken in the kth time period is used as the kth factor in the matrix product. Thus if M_k represents the transition matrix describing the kth of a successive sequence of transitions or trips, then the n-factor product M₁M₂...M_k...M_n is the matrix representing the total net transition viewed as a single transformation or change.

3-6 Relations Associatable with Matrix Products
Certain relationships that describe what might be termed the "connectedness" of the network can be associated with higher powers of the routing matrices A or F. These relationships are readily identified by examining still a further type of matrix product, namely, a Boolean product.
(1) Boolean Product of Vectors
If X and Y are two vectors of the same dimensionality and are both Boolean vectors, i.e., every coordinate of each vector is either 0 or 1, then the Boolean scalar product of the vectors may be defined by performing Boolean operations. By the rules of Boolean algebra, the product of X and Y will be 0 unless at some coordinate position i both x_i and y_i are equal to 1. If this happens at more than one coordinate position, the Boolean sum of such 1's will reduce to 1. Thus the Boolean scalar product is either 1 or 0.
Return now to the accessibility matrix A of the network. Suppose that in computing the product A², Boolean scalar products of the vectors had been formed instead of ordinary numerical scalar products. Denote the general term of A² by a_ij(2). Then a_ij(2) would be 0 in the case that it would be impossible to go from node i to node j in exactly two trips, but a_ij(2) would be 1 if it were possible to do so. A similar result holds for higher powers of the matrix A, i.e., a_ij(n) is 1 or 0, and is 1 if and only if it is possible to reach node j from node i in exactly n trips.
(2) Connectedness
Thus the node i can be said to be connected to node j if, for some n, the element a_ij(n) of the matrix Aⁿ is 1. Add the matrices A + A² + A³ + ... + Aⁿ + ...
and represent the sum by C, the addition to be Boolean addition. Then C will represent the relationship of "ultimate connectedness" because if the element C;j of the matrix C is 1, thusa way exists of going from node ito node j; if C;j is 0, then no way exists and the point jwould be isolated from point i.Formally, the matrix C may be designated as A/I-A when I represents the identitymatrix whose diagonal elements consist of 1's and all other elements are 0, i.e., a;j = 0 ifi ~ j and a;i = 1 if i = j. This is but one of the algebraic properties of matrices that areanalogous to the properties of single-dimensional quantities. (3) Degree of Connectedness The above Boolean analysis of connectedness does not provide a measure of varyingdegrees of connectedness of a given graph. One measure is simply the value of the numerical power An of the matrix A. Examine the matrix A 2, for example. In the sum of the terms 3-7 in the scalar product that adds up to a;j(2), all terms will be 0 except when for some node k, both a;j = 1 and aki = 1. But in contrast to the Boolean sum, for each such k a contribution will occur towards the sum a;j(2). Thus a;i(2) will represent the number of ways in which the vehicle can travel from point i to point j in two consecutive trips, i.e., a;j(2) is the number of paths of length equal to two trips that join node i to node j. By extending this concept, a;j(n) is the number of different routes, or paths, of length equal ton trips that permit node j to be reached from node i. In some cases the probability of going from node i to node j may represent an appropriate measure of immediate connectedness or adjacency. In the paragraph below on flow through a network, an additional measure of the degree of connectedness appears in the form of the volume of flow that can go from a given node i to each other node j. 3-7 Examples and Applications The usefulness of this analysis of the characteristics of a given network is evident in the design of command networks, communications, and evaluation of their efficiency and reliability as the following examples suggest: (1) Command Hierarchies If the relation R represents "i commands j," then the resulting matrix C would represent the ultimate extrapolation of authority. Presumably then for each pair i and j, either a;j = 1 or a;j = 0, but not both; otherwise a contradiction is possible. Since command is stratified so that there is a command with respect to separate matters along separate chains, Boolean sums of the matrices C now become of interest. Again, suppose that a command is at some time given by node (organization, command) i and the command is to be relayed by all those points j to all those points k for which rik = 1. Then the element a;j(n) of the matrix An will represent the number of times that the command originating at point i will be received at node j after n relayings of the command. (2) Communicative Networks When the relation R means that point i of a communications network can transmit to point j, then the lines may or may not be directed; i.e., depending upon the type of communication, transmission in the reverse direction may or may not be possible. In the design of communication networks, especially for their use under combat conditions, various strategies must be considered for ensuring the connectedness of the network that is needed. For example, in an automatic network nodes might relay all incoming messages. 
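A short sketch of the path-counting use of numerical matrix powers described above (Python; the four-node accessibility matrix is hypothetical): the entry in row i and column j of Aⁿ counts the routes of length n from node i to node j, and a Boolean version of the same power tells only whether any such route exists.

    def mat_mult(a, b):
        """Ordinary numerical matrix product."""
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    # Accessibility matrix of a hypothetical 4-node network (1 = direct arc).
    A = [[0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]

    A2 = mat_mult(A, A)
    A3 = mat_mult(A2, A)
    print(A2[0][3])   # number of 2-trip paths from node 0 to node 3 (here 1)
    print(A3[0][0])   # number of 3-trip circuits from node 0 back to node 0 (here 1)

    # Boolean counterpart: 1 if at least one such path exists, 0 otherwise.
    boolean_A2 = [[1 if A2[i][j] > 0 else 0 for j in range(4)] for i in range(4)]
    print(boolean_A2[0][3])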
Again, if the nodes are mobile units, then the matrix may vary as the dispersion of the units of a given set varies as they are moved around. Also from time to time, nodes or lines will typically be destroyed by enemy action. Against such action by the enemy, a reliability of the network can be defined, i.e., the probability that the communication net work remains connected after an attack. (3) Supply and Transportation Networks Not all of the designable properties of a supply network can be treated simply in terms of the accessibility communications matrix A; however, the latter contains the basic structure of such a network. Consider, for example, the national and even international network of depots, posts, camps, stations, and miscellaneous supply points. The matrix A may represent authorized directions of supply or, more generally, transmittal andjor return of materiel. In fact, whether or not it is feasible to supply along a certain route (line) may, at times, need to be represented by a different network and the two compared. The capability to maintain a sufficiently connected supply network and to rapidly repair breaks in it is of significant value in maintaining adequate supplies on combat fronts. The use of a matrix with the assistance of a digital computer can indicate procedures to improve the reliability of supply organization in a complicated network. To represent the current state of such a network at any moment in the actual course of its functioning, the matrix A and a probability distribution matrix F may require additional concepts. Consider, for example, a theoretical tramp steamer traveling from port(node) to port. The element f;i may well describe the choice of a next destination, namely, random but with certain frequencies. The length of time required for a given trip, denoted by r;j, significantly affects the chance that if an observation is made, the steamer will be found in any given portion of the network. The kind of process represented by the location, in time, of the steamer is known as a semi-Markov process which is discussed in chapter 6. (4) Task Networks for a Processor Consider a machine which is normally used to make one of N products, one at a time, and which typically makes several pieces in succession of each product, once it makes anything of the product. Now let a "trip" from node i to node j represent that the machine is making pieces of product j and had previously been making pieces of product i. When it is finished making pieces of product j, it is then interpreted as being at node j of the network. Possibly there may be a special cost, effort and time required to change the settings on the machine so that it will now make product k. Consequently, the amount of time required to make a given number of pieces of product k will depend upon the fact that previously it was making product j since this time must include the time to change the machine over from product j to k. Frequently the continuing demand for such a processor will make preferable a certain sequencing of the processor through the list of tasks. A rigid sequencing would be represented by a matrix A that represents unique routing. Fluctuations from a rigid se quence might be representable by a probability matrix. Occasionally, however, the sequencing might have harmonic properties with task j following task i only on every nth passage through the sequence. 
In this case the simple matrix must be modified by adding counters that record the number of passages remaining before the processor, upon leaving node i, is to move to task j. Evidently, this treatment of a processor may be applied to many types of service units meeting a continuing demand, such as information clerk, repairman, doctor, and even a weapon in certain respects.

3-8 Routing Continuously Divisible Flow Through a Network
The examples so far have been ones of the traversing of a network by a single unit with movement in discrete trips, e.g., the vehicle, the message, the processor. When a number or mass of units is to flow through a network, one may by contrast consider the routing of flow that is continuously divisible. Examples of such quantities include some types of material flowing through a network of supply points. When quantities are involved that are from a numerical consideration sufficiently large, this type of flow may be useful in representing the flow of such ultimately indivisible magnitudes as combat units.
Abstractly, suppose that the flow occurs simultaneously along the arcs of a network. Divide time into discrete periods. Of the total flow coming into a node i in a given time period, suppose that a fraction f_ij goes out in the next period of time along the arc connecting i to j. Let F denote the square matrix [f_ij]. Suppose that the flow to leave node i in the first time period is a given amount v_i(0), and let V(0) denote the vector [v_1(0), v_2(0), ..., v_n(0)]. It is then readily verified that the flow coming into the nodes in the nth time period is given by the product of the vector V(0) by the matrix Fⁿ (and is thus the flow to leave the nodes in the (n+1)st time period), i.e., is given by V(0)Fⁿ.
The fractions f_ij need not sum to 1, since at any node there may be a net withdrawal or addition of flow to the network. Such flow may be termed nonhomogeneous. In addition, the routing may be nonstationary with respect to time. A theoretical example of nonstationary flow with gains and withdrawals is that of ordinary metropolitan traffic flow, with the nodes located at the points where flow originates, terminates, or is switched. In such a case let A(k) be a vector representing the amount of new flow which originates at the nodes in period k. Then the total flow leaving the nodes in the nth time period will be

    Σ (i = 0 to n-1) A(n-i) F^(i+1).

Any gain or loss of flow may be assigned as taking place at the nodes or on the arcs, as desired, providing that the vectors and the flow coefficients f_ij are accordingly interpreted.
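A minimal sketch of this divisible-flow calculation (Python; the three-node network and the fractions are hypothetical): the vector of flow leaving the nodes is advanced one period at a time by the matrix F, whose rows need not sum to 1 where flow is withdrawn at a node.

    def vec_mat(v, m):
        """Advance a flow vector one time period: the jth entry collects the
        fractions f_ij of whatever left each node i in the preceding period."""
        return [sum(v[i] * m[i][j] for i in range(len(v)))
                for j in range(len(m[0]))]

    # Fraction of the flow leaving node i that travels the arc from i to j.
    # Row sums below 1 represent a net withdrawal of flow at that node.
    F = [[0.0, 0.6, 0.4],
         [0.2, 0.0, 0.7],
         [0.5, 0.3, 0.0]]

    V = [100.0, 0.0, 0.0]        # V(0): flow leaving the nodes in the first period
    for period in range(1, 4):
        V = vec_mat(V, F)        # V(0)F^n: flow arriving in the nth period
        print(period, [round(x, 1) for x in V])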
3-9 Systems' States and State-Transition in Weapon Systems
In the previous examples changes in the state of the system with time have been represented by flows between the nodes of a network. Consider now a weapon system in field use. At any given moment it may be in one of a number of possible states: have just acquired a target; be inoperative because of component failure or damage; be under repair; be in transit; etc. Now make the assumption that there are N possible significant states of the weapon system, "significant" meaning that only in terms of these states are operational quantities such as performance, effectiveness, cost, and future conditions to be calculated for purposes of evaluating or using the system. During the course of time, the weapon system will spend varying amounts of time in the various states.
Divide the time into successive periods of sufficiently short intervals. The passage of a time interval of length Δt is to be regarded as the performing of a "trial" in the history of the system. Now let f_ij be the probability that the system is in state i at the beginning of a time interval Δt and in state j at the end of that interval. Thus f_ij is a state-transition probability. Initially, let the probability distribution of the state of the system be a vector V₀. Then after n time intervals, the probability distribution of the state of the system is V₀[f_ij]ⁿ. As in the case of the previous examples, there is no difficulty in representing the effects of the time of the year on weapon usage, night and day, etc.
For practical use, concern focuses on the matrix [f_ij] and how the numerical values of its entries are to be obtained. Note that some transitions, e.g., from having just acquired a target to being under repair, are impossible in a short time interval. In order to establish the probabilities in detail, it is necessary to go into more analysis than has been done here. For example, the probability of failure of the weapon due to component failure will in some cases depend upon the amount of use which the weapon system has experienced since the last performance of maintenance. This use-record then becomes a part of the weapon system's current state, analogous to the significance of length of trip in the tramp steamer example.
The precise manner in which these notions are to be applied to a given system depends greatly on the nature of the system. Whether the external influences upon the system can be treated as random or not are considerations that are matters of detail (often considerable). Methods for handling the complexities are the subjects of other chapters of this pamphlet. The requirement here has been merely to demonstrate the employment of matrices and their products in the representation of systems through time. Thereby, histories may be developed in terms of which evaluations are calculated and strategies are selected.¹
    ¹ This use of matrices first acquired importance in modern science in quantum theory, in attempting to account for the states of an atom in which changes appear to occur randomly. However, probably more of the use of matrices in this way is developing in other fields today, including specifically the subject area of operations analysis.

3-10 Production Networks
3-10.1 General
The most important general application of concepts of networks is in representing the contributions which the productive efforts of each part of an organization make to the total activity of the total organization. Such applications range from the comparatively "micro-flow" evidenced at production points and repair stations, which act like nodes in logistic networks, to the comparatively "macro-flow" between general sectors of the national economy that have been subjected to study in "input-output" models. The paragraphs below illustrate these uses. Nonsquare matrices now appear and represent physical transformations such as occur in production and other activity that produces physical transformations.

3-10.2 Arsenal Manufacturing
Like any typical factory or plant, an arsenal will ordinarily manufacture a number of products during any given period of time. Focus attention first on those products, e.g., vehicles, which are made of subassemblies, e.g., engines and transmissions, the subassemblies in turn being made of parts, e.g., bolts and castings. Suppose that in some typical time period, e.g., one covered by a progress report
of the arsenal's activities, K different types of products have been made. Let A_k denote the quantity that has been made of the kth product. List the types of subassemblies of which these products are composed and assume that they are J in number. For the pth type of subassembly and the qth type of product, let S_pq be the number of subassemblies of type p that are required in one piece of product q. In many cases, of course, S_pq might be 0. The matrix [S_pq] is called a consumption matrix, commonly known as the subassembly explosion of the products. Note that the consumption matrix will not generally be square.
Let S_j denote the number of subassemblies of type j that are required to produce a given output vector [A_k] of products. Then

    S_j = [S_j1, S_j2, ..., S_jK] | A_1 |
                                  | A_2 |
                                  | ... |
                                  | A_K |
        = S_j1 A_1 + S_j2 A_2 + ... + S_jK A_K.

Consequently, the total requirement for subassemblies may be represented as a column vector [S_j] given by:

    | S_1 |   | S_11  S_12  ...  S_1K | | A_1 |   | S_11 A_1 + S_12 A_2 + ... + S_1K A_K |
    | S_2 | = | S_21  S_22  ...  S_2K | | A_2 | = | S_21 A_1 + S_22 A_2 + ... + S_2K A_K |
    | ... |   | ..................... | | ... |   | .................................... |
    | S_J |   | S_J1  S_J2  ...  S_JK | | A_K |   | S_J1 A_1 + S_J2 A_2 + ... + S_JK A_K |

i.e., [S_j] = [S_jk][A_k]. Premultiplication of the column vector [A_k] by the subassembly consumption matrix [S_jk] converts the product requirement into the subassembly requirement.²
    ² Before the advent of electronic data processing machines, calculating this conversion typically required a great deal of clerical labor, which is now saved by storing the matrix [S_jk] in electronic form, up-dating it as engineering changes are made, and performing the matrix multiplication automatically.
Let P_ij denote the number of parts or the quantity of materials of type i that are required to make one subassembly of type j. Corresponding to the output vector [A_k] of products, the vector [P_i] that represents the number of parts required to produce [A_k] can be computed as the iterated product:

    [P_i] = [P_ij]([S_jk][A_k]) = ([P_ij][S_jk])[A_k].

The latter multiplication is valid because matrix multiplication is associative. The matrix product [P_ij][S_jk] thus represents as a single transformation the consumption of parts by products. Evidently repeated, such multiplication may represent the consumption at successive points "upstream" along a productive network of inputs required for a given output.
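A small numerical sketch of this explosion calculation (Python; the part, subassembly, and product counts are hypothetical): premultiplying the product output vector by the subassembly consumption matrix yields the subassembly requirement, and the parts-per-subassembly matrix then carries the requirement one stage further upstream, with the same answer obtained from the single matrix [P][S].

    def mat_vec(m, v):
        """Premultiply a column vector v by a matrix m."""
        return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

    def mat_mat(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    # Hypothetical consumption matrix S[j][k]: subassemblies of type j
    # required per piece of product k (2 subassembly types, 3 product types).
    S = [[2, 0, 1],
         [1, 3, 0]]
    # P[i][j]: parts of type i required per subassembly of type j (3 part types).
    P = [[4, 0],
         [2, 5],
         [0, 1]]

    A = [10, 5, 20]                      # output vector: pieces of each product

    subassemblies = mat_vec(S, A)        # [S][A]: subassembly requirement
    parts = mat_vec(P, subassemblies)    # [P]([S][A]): parts requirement
    print(subassemblies)                 # [40, 25]
    print(parts)                         # [160, 205, 25]
    print(mat_vec(mat_mat(P, S), A))     # ([P][S])[A] gives the same parts vector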
3-10.3 Output by Fractionation
In fabrication, each unit of output requires the assembling of a certain proportionate mix of inputs at each stage of production. By contrast, in some activities an input characteristically yields a certain proportionate mix of outputs. As in the refining of oil, the process may be abstractly termed one of "fractionation." Let Y_ij denote the number of units of output of type j that are produced by a unit of input of type i. Then the ith row of the matrix [Y_ij] will represent a unit of the ith type of input. If [F_i] is a row vector representing the respective amounts of each of I types of input that are made available, e.g., for fractionation during a given time period, then [F_i][Y_ij] will be a row vector representing the mix of amounts that will be produced. Some examples of fractionation in output are:
(1) Allocation of military funds according to predetermined ratios.
(2) A given type of vehicle may be designed to carry a certain mix of cargo. Then i may refer to a vehicle type and Y_ij to its design.
(3) An ore of a given composition will correspond to such a row vector [Y_ij], the values of j corresponding to the various different materials that can be extracted from the ore.
(4) In human capability the subscript i may refer to types of persons and the subscript j to proclivities or capabilities, with the latter being expressed by the actual invariable activity of the persons. Then [Y_ij] may denote the amounts of such activity that will be engaged in, e.g., hobbies, physical, social.

3-11 Input-Output Matrices: Leontief Models
Models known as input-output, or Leontief, models have been employed for a number of years in an attempt to represent the interrelationships of production and consumption between industries, between economic sectors of a country, and even between economies. The following hypothetical example suggests how a model of this type might be attempted at representing military activity in the large via an input-output matrix. Imagine four oversimplified and fictitious military activities:
(1) Manning the armed forces, or MAF
(2) Producing the military vehicles, or ATAC
(3) Making weapons, or AWC
(4) Preserving the peace, or ALERT
For a given period of time each row in the following matrix shows how the output of an activity is allocated, including to the continuance of the same activity. For example, 25 units of the 150 units of military vehicles produced by ATAC are used up (consumed) in producing military vehicles.

                             INPUTS TO
                      ATAC    AWC    ALERT    TOTAL OUTPUT    DIMENSIONS
    OUTPUTS OF ATAC     25     30       95         150         vehicles
    OUTPUTS OF AWC      15      5       80         100         weapons
    OUTPUTS OF MAF      20     25       15          60         man-years

The table implies that none of the outputs of ATAC, AWC, and ALERT are used as inputs to MAF. ALERT is a final consumption, producing no inputs. The point of this table is not to suggest actual relationships but to suggest the scope to which input-output analysis may be attempted.

3-12 Matrices as Hypervectors
As a hypervector, a matrix characterizes a "logical intersection" as a vector. Something is to be characterized in two respects, one represented by the rows and the other represented by the columns. Each position in the row (column) represents a level of the respect associated with the row (column). The entry a_ij describes the simultaneous occurrence of the row characteristic at the level i and the column characteristic at the level j. In statistics this is often referred to as displaying a "row effect" and a "column effect." If the detailing were done simultaneously in three respects, one would have a hypervector that might be called a "three-dimensional" matrix. Usually most operational uses of vectors for military purposes require that the detailing be done simultaneously in many respects. Nevertheless, certain important categories of military hypervectors are for all important purposes primarily two-dimensional. The use of a matrix as a hypervector is primarily a structural use, in contrast to its use to represent transformation, which may be regarded as a dynamic use.
(1) Populations
P_ij may represent the number of units in a given population or system, e.g., force, inventory, stockpile, with i and j referring to any two of the following:
(a) Location
(b) Condition, e.g., age, operability, etc.
(c) Mission
(2) Velocity and Acceleration
The time rate of change of the matrix is the matrix of the time rates of change of its elements. Examples are the "velocity" of an inventory of many items at various locations and the time rate of change of a battle force in a war game on a battle grid. Acceleration may similarly be regarded as the time rate of change of the velocity matrix.
(3) Activity Rates
Apart from the fact that a typical activity may consist in the simultaneous occurrence of many concurrent activities, the simplest of any of these can be analyzed as an "input-output" structure of some sort.
The input will include effort, materials, resources, time,etc., and the output will include what results in the way of products and consequences.In the simplest activity there are usually several simultaneous inputs and several simultaneous outputs. More significantly, all of the inputs tend to find their way into all of theoutputs, to a greater or lesser degree, according the "decomposability" of the activityinto independence subactivities. A matrix expresses the total dependence. X;j may representthe amount of the ith type of input going into the jth type of output. Examples: (a) Transpurtation: X;j is the quantity of cargo (troops) shipped from source i to des-·tination j. (b) Fire: h is the amount of fire being delivered from point i to point j. (c) Search: S;i is the amount of search effort of type i being devoted to search mission j.All of these instances are to be distinguished from the amount of input of the ithtype that may be required per unit of the jth type of output. This type of matrix wouldbe a transformation, not an activity rate, and may not coincide with the activity rate. ( 4) Assignments Operating units are, of course, often formed of two types of units, e.g., man-machine,worker-task, weapon-target, organization-mission. The structure of an operating unit maybe described by a matrix in which a;j represents the number of units of type i assigned toa unit of type j; or i and j may themselves be "serial" numbers, in which case a;j mayrepresent the fraction of time in which unit i is assigned to unit j. Also, X;j may be aprocess, in time, having its current value 1 if unit i is assigned to unit j and 0 if notassigned. This is a special case of allocations. (5) Allocations In many instances an activity rate or assignment may be a case of allocation of avector. The matrix [a;i] may be regarded then as the allocation into column vectors ofthe row sums, or equally as well, the allocation of the column sums into the row vectors. (6) Payoff in Two-Sided Games In a contest between two contestants-e.g., artillery duel, choice of battle route,choice of time to fire, choice of concentration of battle strength-let one player possessM mutually exclusive courses of action which represent all his alternatives of choice,while the other player possesses N such choices. The game is then played by each playerchoosing exactly one course of action out of his set of alternatives.Associated with the game is the payoff matrix [p;i] which represents the payoff, orgain, to the first player if he chooses strategy i and if the other player uses strategy j.This payoff is formally an output of the contest viewed as an activity, or an evaluationof the output. Games are discussed in chapter 10. (7) Branch Flows in Networks • Of the flow in a network, X;j may represent the amount of flow along the branch(arc) from node ito nodej. The examples in subparagraph (3) above are often of this type.The network then refers to the routing of flow through successive nodes and thus to themulti-stage sequencing of activity. Networks with flow occurring simultaneously in allof the branches may be useful in representing the problems of coordinating such flow. 3-13 Often this coordination must be accomplished under the limitations of branch capacities, i.e., of limitations upon the amount of flow that a branch can handle. The case of sharp restrictions on capacities and of linear cost on the flow is developed in Chapter 9. 
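As a brief sketch of an activity-rate matrix used as a hypervector (Python; the fire-allocation figures are hypothetical): x[i][j] is the amount of fire delivered from position i against target j, and the row and column sums give at once the two marginal allocations, of each position's effort and of the fire concentrated upon each target.

    # Hypothetical fire-allocation matrix: x[i][j] = amount of fire delivered
    # from firing position i against target j (arbitrary units).
    x = [
        [40, 10,  0],   # position 0
        [ 0, 25, 35],   # position 1
        [15,  0, 20],   # position 2
    ]

    positions, targets = len(x), len(x[0])

    fire_from_position = [sum(x[i][j] for j in range(targets)) for i in range(positions)]
    fire_on_target = [sum(x[i][j] for i in range(positions)) for j in range(targets)]

    print(fire_from_position)   # allocation of each position's total effort
    print(fire_on_target)       # total fire directed against each target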
3-13 Graph Representations of Projects ("CPS")
(1) Graphical Representation
Some projects that are to be carried out (e.g., campaigns, construction, operations over a period of time) can be analyzed into a number of tasks, each of which has a rather precise commencement and a rather precise termination, with the tasks standing in some precedence relation to each other; i.e., before certain tasks can commence, certain others must be finished. The project can then be presented as a graph. This type of representation has recently come into widespread use as a means of developing a reliable and efficient schedule for completion of the overall project. This graph-representation forms the basis of "critical path scheduling" (CPS) and of PERT*; the methods have been called by a variety of names. It is important to note that it is possible to have more than one critical path through the network and for the entire network to be critical. These uses are discussed below and in chapter 9.
The manner in which the project is to be represented as a graph is illustrated in figure 3-2. Each task is regarded as a branch, the rearward (left-hand) node representing commencement and the forward (right-hand) node representing completion of the task. If, when some task (for example, task A in the figure) is partially completed, other tasks can begin, then A is first separated into elemental tasks A1 and A2 so that it can always be assumed in the graph-representation that each task in the project must be fully completed before any of its successors can be started. The subgraph with branches E, G, and X illustrates a case in which two concurrent tasks, E and G, can be started together and must both be finished before task R can begin. The task X is then a "dummy" task; it requires only a time interval of zero length to perform and would not resist any efforts to shorten the length of the graph from its start at node 0 to its finish at node 9.

[Figure 3-2. Sample graph.]

*Program Evaluation and Review Technique.

The subgraph consisting of tasks A2, B, J, P, and Q could be aggregated into a single activity, call it U, and that part of the graph replaced by a single branch. Similarly, the total project may be a single branch in some larger network.
(2) Completion Times and Effort
A task may have a normal completion time. This time is, in reference to the graph, a quantity t_ij, where i is the node representing the start of the task and j is the node representing the end of the task. This time t_ij may depend upon the amount of effort devoted to the task; for example, increasing the rate of effort might shorten t_ij.
(3) Critical Path
The length of time required for the total project will then be the maximum of the sum of the task-times along all possible paths leading from node 0 to node 9. Any path along which the maximum total occurs is termed a critical path. (A small computational sketch is given at the end of this subparagraph.)
For any particular project whose scheduling poses a sizable consideration, constructing a graph for the project accomplishes two things: (1) it assures that each task of the total project or job is identified and put into the proper precedence relationship with the other tasks; and (2) assuming that the time to perform each task may be estimated in advance, the graph will reveal the sequence of tasks upon which the time to complete the total project will depend most critically. The graph then provides a method of identifying where additional effort can be most effective in any attempt to shorten the total time.
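Finding a critical path is a longest-path calculation over the precedence graph. The following minimal sketch in Python uses assumed task names, durations, and precedences (they are not those of figure 3-2) and reports the project duration and one critical path.

    # Minimal critical-path sketch.  Tasks, durations, and predecessors are
    # hypothetical; tasks are listed in an order compatible with the precedence
    # relation (Python dictionaries preserve insertion order).
    tasks = {            # task: (duration, [predecessor tasks])
        "A": (3, []),
        "B": (2, ["A"]),
        "C": (4, ["A"]),
        "D": (1, ["B", "C"]),
    }

    earliest_finish, critical_pred = {}, {}
    for name in tasks:
        duration, preds = tasks[name]
        start = max((earliest_finish[p] for p in preds), default=0)
        earliest_finish[name] = start + duration
        critical_pred[name] = max(preds, key=earliest_finish.get) if preds else None

    end_task = max(earliest_finish, key=earliest_finish.get)
    path, t = [], end_task
    while t is not None:
        path.append(t)
        t = critical_pred[t]
    print("project duration:", earliest_finish[end_task])    # 8
    print("critical path:", " -> ".join(reversed(path)))     # A -> C -> D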
Typically the amount of effort required to shorten a task-time will vary with the task. Consequently, there will be a least effort required to accomplish the total project in any given time. When the effort required to shorten the task-time is linear in the amount of reduction, a linear programming problem occurs. This is treated in chapter 9.

3-14 Basic Operations of Matrix Algebra
A matrix is a rectangular array of mn quantities, called an "m by n matrix," arranged in m rows and n columns. These mn quantities are called the elements of the matrix. If m = n, i.e., the number of rows equals the number of columns, the matrix is said to be a square matrix of order n. The element of a matrix that is in the ith row and jth column, where i may have any value from 1 to m and j may have any value from 1 to n, is called the general element of the matrix; a usual notation is a_ij. Matrices, even though without numerical value, can be treated as entities and thus can be added, subtracted, multiplied, or have other operations performed on them. Such arrays offer a particularly convenient method for calculating simultaneous changes in a series of related variables. The mechanics of matrix algebra is illustrated below; a short machine computation of the same operations follows subparagraph (3).
(1) Addition
Addition of any pair of matrices [A] and [B] is possible only if the numbers of rows and the numbers of columns, respectively, are equal. Addition is both associative and commutative, i.e.,
    ([A] + [B]) + [C] = [A] + ([B] + [C]) = [A] + [C] + [B] = [C] + [A] + [B], etc.
Addition is performed row by row; each element in each column of the first matrix is added to the corresponding element in the corresponding column of the second matrix, thus forming one matrix. The process is illustrated:

    [A] + [B] = | a11  a12  a13 |   | b11  b12  b13 |   | a11+b11  a12+b12  a13+b13 |
                | a21  a22  a23 | + | b21  b22  b23 | = | a21+b21  a22+b22  a23+b23 |
                | a31  a32  a33 |   | b31  b32  b33 |   | a31+b31  a32+b32  a33+b33 |

(2) Subtraction
Subtraction is the same as addition except that the corresponding elements are subtracted from each other. Since [A] - [B] = [A] + (-1)[B], the associative and commutative properties of addition carry over, with due regard to sign.
(3) Multiplication
Multiplication is a more complex process in which the two matrices, [A] and [B], need not be the same size. However, the two matrices must be compatible, i.e., the number of columns of the left matrix must equal the number of rows of the right matrix. Thus the multiplication of a 5 x 3 matrix by a 3 x 8 matrix is possible; it results in another matrix, of size 5 x 8, with each element of the product consisting of the sum of three terms. Multiplication is associative but, except for special cases, not commutative. It is emphasized that the matrix on the left multiplies the matrix on the right, i.e., premultiplication:
    [A]([B][C]) = ([A][B])[C], but in general [A][C][B] is not equal to [A][B][C].
In multiplication each term in the first row of the left matrix successively multiplies the corresponding term in the first column of the right matrix; the sum of the resulting products is the number entered into the position at row 1, column 1 of the product matrix. The first row of the left matrix is then used in an identical manner with the second column of the right matrix to find a value for the position at row 1, column 2 of the product matrix. This operation is repeated with the first row of the left matrix multiplying every column of the right matrix, and the entire operation is then repeated with each row of the left matrix. The process is illustrated:

    [A][B] = | a11  a12  a13 |   | b11  b12 |   | a11b11+a12b21+a13b31   a11b12+a12b22+a13b32 |
             | a21  a22  a23 | x | b21  b22 | = | a21b11+a22b21+a23b31   a21b12+a22b22+a23b32 |
             | a31  a32  a33 |   | b31  b32 |   | a31b11+a32b21+a33b31   a31b12+a32b22+a33b32 |

    | 6   1   8 |   | 1  2 |   | 6+3+0     12+4+40 |   |  9  56 |
    | 0  -3   5 | x | 3  4 | = | 0-9+0     0-12+25 | = | -9  13 |
    | 1   2   6 |   | 0  5 |   | 1+6+0     2+8+30  |   |  7  40 |
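The same operations are available directly in most numerical software. A minimal sketch in Python, assuming the NumPy library, reproduces the multiplication example above and illustrates addition, multiplication by a constant, and the transpose discussed in subparagraphs (4) and (5) which follow.

    # Minimal sketch of the matrix operations of paragraph 3-14, using NumPy.
    import numpy as np

    A = np.array([[6, 1, 8],
                  [0, -3, 5],
                  [1, 2, 6]])
    B = np.array([[1, 2],
                  [3, 4],
                  [0, 5]])

    print(A @ B)          # matrix product: [[ 9 56] [-9 13] [ 7 40]]
    print(A + A)          # element-by-element addition
    print(2 * A)          # multiplication by a constant
    print(B.T)            # transpose: rows and columns interchanged
    # print(B @ A)        # would raise an error: B (3 x 2) and A (3 x 3) are not compatible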
(4) Multiplication by a Constant
If a matrix is multiplied by a constant, each element of the matrix is multiplied by the constant:

    k[A] = | ka11  ka12  ka13 |
           | ka21  ka22  ka23 |

(5) Transpose
The transposed matrix of [A] = [a_ij], indicated by [A]' or [A]^T = [a_ji], is formed from [A] by interchanging rows and columns:

    [A]^T = | a11  a21  a31 |
            | a12  a22  a32 |
            | a13  a23  a33 |

CHAPTER 4
MILITARY USES OF PROBABILITY

4-1 Scope
Probability is recognized as a subject in descriptive science. It is less well recognized that in some important cases only the methods of probability can provide objective engineering standards for operational effectiveness and operational effort. Further, it has been less commonly emphasized that when used systematically, probability can afford a precise means of comparing offensive or defensive strategy. Unofficial "tactical" use of probability is sometimes recognizable as "gambling." If it succeeds, gambling is difficult to disapprove. If it is not systematic, gambling will tend to lack effectiveness. Military uses of probability have developed greatly in the past few decades. Examples will be found throughout this pamphlet.
The text which follows is an approximate guide to the treatment of probability in this and following chapters. This chapter covers basic notions of randomness, pseudorandomness, and random variables, including a description of the simplest probability distributions for reference. Chapters 5 and 6 cover random or "stochastic" processes, their concepts, measures, equations, and characteristic types and uses. Optimization under probability is treated in chapter 8; use of probability in game theory, in chapter 10. Processes in which probabilistic factors are important include the following:
(1) Demand processes (demand for capability, for materiel, for the use of a weapon in combat; demand for service occasioned by the unexpected failure of a system);
(2) Utilization and effort processes (the work backlog, and effort at production centers and at service centers of all descriptions);
(3) Performance processes (effectiveness achieved, service rendered, demand supplied);
(4) Finally, combat processes, which include hit probabilities, kill probabilities, damage probabilities, survival probabilities, win probabilities, probabilities for effort and time required, and random models of battle losses and gains and of war alliances.
These numerous probabilistic processes are treated in chapters 11 and 15.

4-2 Randomness
4-2.1 Deterministic Behavior
Suppose that at some time t, the state of a given process of interest (e.g., a battle, a transport operation, a supply situation, a defense posture, etc.) could be completely represented as some value S(t). The process is then regarded as deterministic if, for every t, the changes that occur in S(t) between t and t + dt are found to be uniquely determined by the value of S(t) itself. S(t) might have to be quite complicated and might well have to include within its content a representation of the entire earlier history of the process so far as any aspect of that history directly influenced the current development of the process.
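The distinction drawn in this subparagraph and the next can be seen in a toy computation, not from the pamphlet: the same state update applied with and without a random disturbance. The sketch below assumes only the Python standard library, and the update rule is purely illustrative.

    # Deterministic versus random state development (illustrative only).
    import random

    random.seed(3)
    def deterministic_step(s):
        return 0.9 * s + 1.0                              # next state fixed by current state

    def random_step(s):
        return 0.9 * s + 1.0 + random.gauss(0.0, 0.5)     # change not a function of s alone

    s_det = s_rnd = 0.0
    for t in range(5):
        s_det, s_rnd = deterministic_step(s_det), random_step(s_rnd)
        print(t + 1, round(s_det, 2), round(s_rnd, 2))
    # Re-running reproduces the first column exactly; the second column varies from
    # run to run unless the seed is fixed, as it is here.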
4-2.2 Random Behavior By contrast, the process S(t) is viewed as random if the changes in it between t and t + dt are not completely determinable by the value S(t). In mathematical terminology, the changes would not be a "function" of any such S(t). 4-1 Military experience, no less than any other, testifies that operational phenomena display to a greater or lesser degree: (1) incomplete predictability from available knowledge; (2) unexpectedness; (3) lack of precise correlatability with accepted causes; (4) independence Of result of individual replication from the norm; and (5) lack of controllability in detail. For practical purposes, these are evidence of randomness. 4-2.3 Examples of Randomness The following list of examples show how pervasive and significant randomness is in military processes: (1) Combat Outcomes (a) We have come to accept probabilistic representation of ballistic errors and of the detailed distribution of sprayed fire-e.g., fragments, chemicals, etc.-from a given burst source. Current methods of weapon design utilize the forms of the actual distribution deliberately to design weapons whose bursts will have maximum lethal effectiveness in regard to the probability distributions of likely targets around a typical burst point. (b) Fluctuations in hits, unexpected good or bad luck, and the general combination of many unknowns may accumulate to alter the outcome of an engagement or battle from prior expectation. Current experimental models of battle are designed to generate error-producing factors of realistic type with appropriate randomness. For example, in a simulated tank vs. antitank-gun duel, each shot from either weapon system is classified as a "hit" or "miss" according to a hit-probability estimation based on the weapons's characteristics and the type of duel being fought. Weapon systems are selected for national adoption after evaluation against such deliberately randomized representations of combat. (c) Statistical studies made of historical patterns of alliances between nations support simple theories that allying tends to be random within likely predilections. (2) Demand for a System's Capability; for Materiel and Services Complex equipment is composed of many parts and components. If one fails, the whole equipment may fail. The times of failure, and the operating lifetimes, tend to be unpredictable (cf. ch 12) and create a probabilistic demand for an equipment support effort.Other examples of random demand are combat losses to be replaced and high priority requisitions to be processed. (3) Performance of Equipment, Personnel, and Organizations A weapon or piece of equipment will occasionally unexpectedly fail to perform as it should. As personnel, we all exhibit great variety and fluctuation from any standard of performance either of the group or of our own longer-term average. Consequently, the effort that is required to achieve a given objective-e.g., the number of shots needed to neutralize a target, the amount of time required to complete a job, etc.-becomes a random quantity. (4) Quality of Production The control of the quality of production of materiel for government and military consumption is now based upon accepted theories of probability of random fluctuation of quality from a given desirable level. Thus an occasional unit of the output of a manufacturing process-e.g., a run of ammunition-is assumed to be defective in random frequency of occurrence. The defect may be simple or it may be a random accumulation of smaller defects or impurities. 
(5) Rate of Production, of Accomplishment Apart from the effects of quality upon the rate of acceptable production, the rate of a productive activity-as measured in pieces made, services rendered, etc., per unit timewill be subject to random drags, interruptions, obstacles, breakdowns, etc. 4-2 (6) Traffic Traffic conditions are affected by the random rates at which vehicles enter roads and travel. The randomness of the demand for a support system-service crew, repair shop, transport pool, fire-support unit-also produces a type of "traffic congestion." Demands must wait for service, and queues occur of unexpected lengths and fluctuations. The subject of queues is treated in chapter 12. (7) Environmental Factors (a) The reflection of a radar beam from a target may be distorted by atmospheric disturbances, which make target identification difficult. (b) Messages-whether transmitted through electromagnetic, electromechanical or completely human transmitting systems-are apt to be randomly distorted in the process. Modern error-correcting codes for mechanical systems afford a highlysophisticated example of the use of probability to control randomness effectively. (c) Frequency of occurrence of natural obstacles of the pattern of terrain may be random. 4-2.4 "Chance" Randomness is sometimes attributed to "chance", i.e., to lack of cause or lack of purpose. But the connection is not necessary so far as the practical application of probability is concerned. Deterministic behavior may appear random if it is sufficiently unpredictable. For example, most people probably view the outcome of a throw of a pair of dice as completely caused, but regard the causative influence in each roll as simply too difficult to know in enough detail to predict or to control the outcome.* 4-3 Probabilities 4-3.1 Trials If the changes in a given process S(t) that occur in any interval (t, t + dt) are random, it may be possible to associate probabilities with them in the following way. Regard the state S(t) of the process as including explicitly or implicitly the entire history of the process up to and including time t. Consider now that a conceptual experiment is performed in which each possible change in S(t) during the interval (t, t + dt) is regarded as an outcome of the experiment, and that, the experiment is performed under a definite set of conditions which are uniquely determined by the very value S(t) itself of the process at time t. The experiment is termed a trial. 4-3.2 Outcomes Identify the set of all possible or basic outcomes of the trial. Any characterization of a given outcome will then be at most a logical combination of basic outcomes. Basic outcomes are sometimes called sample points. The set of all such outcomes is termed the sample space. Each performing of the experimentis regarded as selecting a single point from this sample space. Such a point may of course perfectly well be a multi-dimensional quantity (e.g., the three-dimensional location of the point of burst of a nuclear bomb). 4-3.3 "Average Frequency"per Trial The probability of a given basic outcome B is a quantity dependent on the sample space of all possible changes in S(t), and denoted by p[B IS(t)], read as "the probability of B, given S(t)." It will correspond to the average frequency of occurrence of the outcome B, per trial, if the experiment were repeated a great number of times, in the following sense. 
Since the outcome B is uncertain in any performance of the experiment, the very meaning of "rate of occurrence" or of "average frequency of occurrence" of the outcome has to be defined. Accordingly, let the experiment be repeated N times, and in N repetitions let b_N denote the fraction of times that the outcome B occurs, i.e., the average rate of occurrence per trial of the outcome B in the N repetitions. Now b_N does not necessarily approach any limit as N increases, for it is always possible, because of the randomness of each outcome, that a string of indefinitely many successive occurrences, or non-occurrences, of the outcome B will develop at any stage. However, as N becomes large, the possibility that b_N will remain near some value p increases. This is expressed in the law of large numbers.
The "strong" law of large numbers is a theorem stating that with probability 1, the ratio b_N computed for a large number of trials will approach and remain near p in value. Specifically, for any ε > 0, and any sequence of repetitions, the probability is 1 that there will be only a finite number of values of N for which |b_N - p| > ε. This law implies the weaker proposition that as N increases, the probability that the average frequency of successes b_N differs from p by more than any preassigned ε > 0 approaches 0 as a limit. This proposition is known as the "weak" law of large numbers.
The law of large numbers provides the quantitative link between the measurement of frequency on the one hand, and the concept of independence on the other. These three comprise the theoretical concept of probability. Note that the link is itself merely a probabilistic statement; the concept is quite economical logically. Note also that the concept of independence is quantitatively identified with the multiplication of probabilities. Procedures for developing probabilities for practical military use are treated below and at various points throughout this volume.

*In the vocabulary of philosophy, a theory that attributes randomness to essential chance or other behavior of the phenomenon in itself is termed an ontological theory of randomness, while a theory that says that randomness is to be explained by our lack of knowledge is termed an epistemological theory. In practice it is immaterial which theory is correct.
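A quick numerical illustration of the law, not from the pamphlet: simulate N throws of a fair die and watch the observed frequency of a given face settle near 1/6 as N grows. The sketch assumes only the Python standard library; the seed is fixed merely so the run is reproducible.

    # Law-of-large-numbers sketch: frequency b_N of throwing a "six" as N grows.
    import random

    random.seed(1)
    successes = 0
    for n in range(1, 100001):
        if random.randint(1, 6) == 6:
            successes += 1
        if n in (10, 100, 1000, 10000, 100000):
            print(n, successes / n)   # b_N drifts toward p = 1/6 = 0.1667...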
4-3.4 The Example of the Die
The throwing of a die provides the classic illustration of the feasibility of applying probability theory to a completely random phenomenon. Interest in such gambling phenomena stimulated the design of the theory of probability.
The outcome of a throw of a die is sensitively dependent upon (1) the input, i.e., the detailed pattern of spin, momentum, and direction of path that is imparted to the die initially; and upon (2) slight irregularities in the surface that it encounters along the roll, i.e., upon factors that develop during the trial. Such factors are difficult to select, to control, to measure, to predict. They are different in each successive throw. Their joint effect is impossible to calculate, much less to predict. Yet the factors that cause the outcomes result in only six possible outcomes. Throwing the die performs the classification without explaining how it was made. The outcomes of successive trials (throws) tend to be independent of each other. Thus the die is an excellent "randomizer."
But the die is more than an excellent randomizer. It might be termed an excellent "probabilizer." For while the precise selection of factors cannot be controlled nor their effects predicted, nevertheless the frequency of occurrence, defined as the probability of occurrence, can be controlled. The rules of professional gambling control the frequency of selection of the factors (e.g., the die must bounce against the end of a long table) and the die is carefully constructed from materials so as to make it mechanically symmetric. The probabilities are then insignificantly different from 1/6 for each outcome. Thus probability is feasible, even though individual prediction is impossible. While gambling sponsored the creation of the mathematical theory of probability, the insurance business has made it legal and into a great social enterprise.

4-4 Types of Outcomes
4-4.1 Conditional Outcomes
An outcome A occurring under condition C is symbolized A|C (read "A, given C"). Its probability is symbolized p[A|C], or prob[A|C], read "the probability of A, given C." In principle, all outcomes have conditions, but the conditions of experienced outcomes may be difficult to identify. When all outcomes that are referred to in a given discourse have the same condition or set of conditions C, it is common practice to drop that symbol and to read "the probability of A"; but this does not refer to a different entity than A|C.
Two principal categories of conditional outcomes may be identified:
(1) Stochastic: C denotes the conditions under which the trial is performed of which A is the outcome. Historically A is then an event subsequent in time (at least differentially) to the situation C.
(2) Logically Overlapping: A and C may be two different ways of describing a given outcome X. Example: "If the shot was a hit (C), what is the probability that it was a kill (A)?"
4-4.2 Combined Outcomes
An outcome which is both an outcome A and an outcome B is denoted as the outcome (A,B), or just AB if this latter symbolization is clear in context. If either or both of A and B have probability 0, then p[AB] = 0. If the combined outcome AB cannot occur, then p[AB] = 0.
4-4.3 Complement; Contradictory
The complement or contradictory of an outcome E under condition C is symbolized as E'|C, read "not-E, given C." For any E,
    p[E|C] + p[E'|C] = 1   and   p[EE'|C] = 0
4-4.4 Independence
Two outcomes A and B are termed independent under condition C if the occurrence of one does not alter the probability of the other, i.e., if p[A|BC] = p[A|C]. Equivalently, p[AB|C] = p[A|C] p[B|C], or, with the common condition suppressed, p[AB] = p[A] p[B].
In both cases (1) and (2) of paragraph 4-4.1, if p[C] > 0, then p[AC] = p[C] p[A|C]. For any three outcomes, p[A1A2A3] = p[A1] p[A2|A1] p[A3|A1A2], and similarly for larger numbers of outcomes. The outcomes E1, ..., EN of a set of N outcomes are defined to be mutually independent as a set if p[X1X2 ... XN] = p[X1] p[X2] ... p[XN] for every one of the 2^N combinations that can be obtained by choosing each X_n to be either E_n or E_n'.
4-4.5 Alternatives
An outcome which is either an outcome E or an outcome F, or both, is symbolized as "E or F." If E and F are mutually exclusive (EF cannot occur), then
    p[E or F] = p[E] + p[F]
Whether E and F are mutually exclusive or not, since EF', E'F, and EF are all mutually exclusive,
    p[E or F] = p[E] + p[F] - p[EF]
If a set of mutually exclusive alternatives A1, ..., AN is collectively exhaustive, i.e., it includes all possible outcomes, then
    p[A1] + ... + p[AN] = 1
4-4.6 Bayes' Theorem
If an outcome A can be analyzed as the exhaustive set of mutually exclusive alternatives (AB1) or (AB2) or ... or (ABN), then
    p[B_i|A] = p[B_i] p[A|B_i] / (sum over k of p[B_k] p[A|B_k])
This formula is useful in some sequential search strategies. An unreliable "look" into some region revises a previous estimate of the probability that the object of search is in the region, according to the above formula. B_i is then "the object is in region i" and A is "a look (in some specific region) did not discover the object." (A small computational sketch follows.)
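A minimal sketch of that search update, in Python with the standard library only; the regions, prior probabilities, and detection probability are hypothetical and serve only to illustrate the mechanics of the formula.

    # Bayesian search update: an unsuccessful "look" in region 1 revises the
    # probabilities that the object is in each region.  All figures hypothetical.
    priors = {1: 0.5, 2: 0.3, 3: 0.2}   # p[B_i]: prior probability the object is in region i
    p_detect = 0.8                      # probability a look finds the object if it is there
    looked = 1                          # region searched, without success

    # p[A | B_i]: probability the look fails, given the object is in region i
    p_fail = {i: (1 - p_detect) if i == looked else 1.0 for i in priors}

    total = sum(priors[i] * p_fail[i] for i in priors)
    posterior = {i: priors[i] * p_fail[i] / total for i in priors}
    print(posterior)   # region 1 drops to about 0.167; regions 2 and 3 rise to 0.5 and 0.333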
4-5 Random Variables and Their Distributions
4-5.1 Random Variables
When the outcome of a trial can be numerically valued, the outcome is termed a random variable. Since it is drawn from a sample space, it is associated with a definite probability, although the probability need not be known in advance.
Some important special types of random variables (which will be represented by the symbol V for convenience) are as follows:
(1) Representation of a Non-quantitative Outcome by a Random Variable
If the possible outcomes of a trial are not quantitative, but nevertheless are clear-cut, identifiable, distinguishable, and collectively exhaustive, then they may be numerically coded, i.e., numbers may be assigned to stand for them. The simplest case is that of a binary outcome, in which only two mutually exclusive outcomes are recognized, namely either the outcome E or its non-occurrence E'. In that case define V = 0 if E' occurs, and V = 1 if E occurs. V is termed a Bernoulli random variable. The sum of the values of V in a series of trials "counts" the number of occurrences of the outcome E, and the average value of V per trial corresponds to the probability of occurrence of the outcome E.
(2) Discrete Random Variables
The possible values of V are separate and discrete, either consisting of the integers (whole numbers), positive, negative, and/or 0, or consisting of some of them, or else consisting of values that can be completely numbered by the integers. Most organizational quantities are discrete (number of personnel, amount of equipment, number of operations, etc.).
(3) Continuous Random Variables
The possible values of V are continuous, like the real numbers. Examples: the radial deviation of a shot from target center; distance traveled; time required. Two subcases may be identified:
(a) Mixed, With "Impulses"
There is a positive probability that V can take on certain exact values. For example, the deviation of a shot from target will not be measured if it is a miss of sufficient distance; it will then merely be assigned the value "miss." This is a discrete outcome. Since the other outcomes are continuous (exact deviation for hits or near-hits), this discrete "miss" is an impulse, and will have a definite probability. As another example, the wait for service at a center will have a finite probability of being 0, but if the wait is not 0 then the possible values of the wait are continuously measurable. Again, the operating lifetime to failure of a piece of equipment will likely have finite probability of being 0 because of the phenomenon of "initial" failure, i.e., failure upon first use. If initial failure does not occur, then the lifetime will have a continuously measurable value.
(b) No Impulses; Probability Density
This case is usually termed the "continuous" case. The amount of probability associated with any one point outcome in a continuous set of outcome points will necessarily have to be 0, like the "mass" at a point in mechanics. But a probability "density" will usually exist, like the mass "density."
The integral of this probability density between two separated points X and Y will then give the probability that the outcome V falls in the interval (X, Y). Example: the amount of fuel used on a trip, if measured closely enough; the time required to ·kill a target; the average production rate at a work center; these are all continuous and would therefore have probability densities for their various possible values. _ (4) Higher-dimensional Vectors of the Above In most practical cases, the outcome V of a trial is a vector V = [V1, ...] of simultaneous outcomes V;. Some of these component outcomes may be discrete (e.g., the number of components still working in a damaged system) while others are continuous (e.g., the remaining time until blast-off). Thus, the probability "function" for such a vector V will normally be a probability for some coordinate outcome, and a probability 4-6 density for others, as the need may be. The function is then neither a probability nor a density and will, accordingly, be termed generally a probability function. (5) Stochastic Variable Most operational random variables are time-dependent, i.e., the probability distribution of V depends upon time. Equivalently, time is an important condition of any trial.This is acknowledged by referring to the random variables as V(t), or as V 11 according to convenience and clarity. 4-5.2 Distributions The probabilities of the entire set of values for random variables constitute a frequency distribution,or density function. Since, as noted above, the values of V may be continuous, the frequency distributionmay be continuous, such as the well-known normal or gaussian distribution.However, random variables typically occur in operational problems as a result of other randomvariables. For example, inventory arises as the result of demand; the number of casualties in an engagement arises as the result of hits; failure of a system arises from failures of the components; the wait in a queue is the consequence of the rates at which arrivals to the queue occurred previously and how rapidly the were served; and so on. The forms of the above variables are seldom any of the simpler distributions of statistics books. Instead, the distribution of the random variable has more often of necessity to be derived, by specifically finding its distribution from the functional relationship which the random variable bears to the other random variables. Nevertheless certain distributions are of particular importance and usefulness. The forms of some of these distributions are given in paragraph 4-9.1. Some mathematical characteristics are given in paragraph 4-9.2. 4-6 Monte Carlo Methods 4-6.1 General In some cases, the usual numerical methods cannot be easily and directly applied to obtain asolution, and use may be made of Monte Carlo methods.In these, numbers obtained from a table of random numbers or in some other random manner aresupplied, either directly or according to a representative probability distribution, to an experiment representing a problem. Numerical solutions can then be determined.Monte Carlo is frequently used in representations of actual situations, or simulation models. For· example, in a war game the probabilities of various outcomes, such as a target being sighted by areconnaissance aircraft, might be known or assumed. Monte Carlo can then be used to determine thespecific outcomes of individual events. For example, aircraft A2, A4, and A9 were shot down; targetsT19, T36, T38, ... were detected. 
Similarly, the results of fire directed at any of these targets may be obtained by Monte Carlo. Monte Carlo methods may be used also in the solution of other operational problems, such as service queues (see ch 12), and in obtaining approximations of complex mathematical terms.
4-6.2 Procedure
Let r(n), n = 1, ..., be an available sequence of random numbers with rectangular distribution (i.e., equally likely) in the interval from 0 to 1. Let V be a random variable whose distribution function, F(x) = pr[V ≤ x], is given. To "Monte-Carlo V" means to find a sequence V(n) of values of V corresponding to the sequence r(n) and having the following property. Regard V(n) as h(r(n)), i.e., as a function of r(n). Then the property desired is that for every x, pr[r(n) ≤ x] = pr[V(n) ≤ h(x)], where these probabilities are measured over the entire space of possible sequences. There are two useful choices of h (mutually exclusive):
    (1) h(x) = F^(-1)(x), the inverse of the distribution function, or
    (2) h(x) = G^(-1)(x), the inverse of the complementary distribution function G(x) = 1 - F(x).
Both depend upon the following property of a random variable. Regard the distribution function of a random variable as a function of the random variable, so that whenever a value x of the random variable is sampled that value is replaced numerically by the value F(x) of the distribution function at x. Then the property is that in intervals of x in which F(x) is differentiable (so that the probability density exists and the random variable is continuous there) the numbers F(x) are equally-likely distributed in their corresponding intervals of value. At points where F(x) has an impulse (e.g., at all points if the random variable is entirely discrete) the value of F(x) will also have an impulse of the same magnitude. The property is illustrated in figures 4-1 and 4-2. It may be regarded as the fundamental theorem of Monte Carlo. In this statement of the property, "distribution function" may be replaced by "complementary distribution function," and the same property results for it.
Accordingly the following procedure can be used to obtain the V(n):
(1) Plot G(x) = prob[V > x], as in figure 4-1.
(2) For each value r(n) locate V(n), as indicated in the figure, by solving the equation r(n) = G(V(n)) for V(n).

[Figure 4-1. Plot of G(x) = prob[V > x]; for each r(n), V(n) is located by solving r(n) = G(V(n)).]

The distribution G(x) may be a histogram compiled from raw observed data, consisting, for example, of N observations of V, as in figure 4-2. In the data let N(x) be the number of values of V that are less than x, and let G_N(x) = 1 - N(x)/N.

[Figure 4-2. Histogram form of G_N(x) compiled from N observed values of V.]

If raw data are used, then only values observed in the past will be drawn by the Monte Carlo sampling. This may be undesirable if other values may possibly occur in the future in the real situation. In that case, a distribution that interpolates and extrapolates values of the random variable, and their probabilities, should first be fitted to the data before Monte Carlo-ing is done. (A computational sketch of the procedure follows.)
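A minimal sketch of the procedure in Python, assuming only the standard library. The exponential distribution is used here simply because its complementary distribution G(x) = exp(-x/m) inverts in closed form; the mean m = 10 is arbitrary.

    # Inverse-transform ("Monte Carlo-ing") sketch: sample V with mean m from its
    # complementary distribution G(x) = exp(-x / m), so that V(n) = -m * ln(r(n)).
    import math, random

    random.seed(2)
    m = 10.0
    def monte_carlo_v(r):              # solves r = G(V) for V
        return -m * math.log(r)

    draws = [monte_carlo_v(1.0 - random.random()) for _ in range(10000)]
    print(sum(draws) / len(draws))     # sample mean; close to 10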
4-6.3 Pseudorandom Numbers
Pseudorandomness refers to the systematic generation of patterns that, while they may possess statistical properties approaching those of a purely random pattern, are nevertheless quite deterministic, being in fact generated by a definite, although obscure, formula. The principal practical use of pseudorandomness is in connection with tactical employments of probability and Monte Carlo.
Since the numbers r_n of any truly random sequence are random, there is by definition no way of predicting the precise value of, for example, the nth number r_n of the sequence, given knowledge of the values of all of the previous numbers (or even of all of the subsequent numbers as well). The number r_n is in mathematical terms "not a function" of any other part of the sequence. Nevertheless its probability is quite definite and, as random samples, such sequences may perfectly well have definite and reliable statistical properties. If, however, such a sequence r_n is generated by a definite formula, in such a way as to possess desirable "statistics", i.e., in such a way as to appear random and to meet ordinary statistical tests of randomness, then the sequence is termed pseudorandom. The numbers r_n are then called pseudorandom numbers.
Pseudorandom numbers have many uses, especially in Monte Carlo sampling. Their reproducibility (because they are generated by a formula) makes them desirable for statistical inference (though not for cryptography; and indeed the identity of any sequence of pseudorandom numbers employed in war-gaming to select a strategy against an enemy should be kept secret as well). On high-speed computers, certain types of pseudorandom number sequences, in plentiful supply, can be reproduced at very high speed, with but a word or two of memory required.
The "multiplicative congruential" method of generating pseudorandom numbers is given below. Let r_n be the positive integer generated by the recurrence relation
    r_n = a r_(n-1) + c (modulo M),   for n = 1, 2, ...
where the value of r_0 is given. Here r_0, a, c, and M are integers as discussed below. "Modulo M" means that the numbers are to be added and then only the remainder, after subtracting the largest multiple of M not exceeding the sum, is to be retained as the value of r_n. Such addition is termed "congruential." It may be geometrically put into correspondence with rotating a wheel of circumference M, as follows. Distances are laid off around the rim of the wheel, in one direction, and their ending positions labeled. To perform x + y modulo M, start at position x and move around the wheel (or rotate the wheel) a distance y in the positive direction. The answer is the position arrived at. Thus 4 + 5 = 3, modulo 6. The integer M is termed the "modulus" of the addition. (A short computational sketch of the method is given at the end of this subparagraph.)
Evidently such a sequence r_n will repeat itself after at most M steps, and will thus have a period equal to some integer p. Unless r_0, a, c, and M are carefully chosen, p may be quite small. If, however, a careful choice is made, certain statistical properties of the resulting sequence can be assured. Table 4-1 provides sample illustrations of what can be done. Each horizontal line in the table contains the numbers, in sequential order, that occur in the period of the sequence generated by the given selection of values of M, a, c, and r_0. If the period does not include all of the numbers given, those that occur in the period are underlined.
Casual inspection of the sequences given will confirm that there are appearances of randomness mingled with suggestions of regularity. For example, the sequences that have c = 0 and a = 5 select 1 + each multiple of 4 in a somewhat special order that develops variety as M increases. For Monte Carlo purposes those sequences are preferable that have a period fully equal to M and consequently select every integer from 0 to M - 1 inclusive exactly once. They provide a source of "rectangularly distributed" numbers.
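The recurrence is a one-line computation. The following Python sketch uses the parameters of one of the table's rows (M = 32, a = 5, c = 0, r_0 = 1) merely as an illustration; it generates the sequence and reports its period.

    # Congruential pseudorandom numbers: r_n = (a * r_{n-1} + c) mod M.
    def congruential_sequence(M, a, c, r0):
        seen, r, seq = {}, r0, []
        while r not in seen:             # stop when the sequence starts to repeat
            seen[r] = len(seq)
            seq.append(r)
            r = (a * r + c) % M
        period = len(seq) - seen[r]      # length of the repeating cycle
        return seq, period

    seq, period = congruential_sequence(M=32, a=5, c=0, r0=1)
    print(seq)      # [1, 5, 25, 29, 17, 21, 9, 13]
    print(period)   # 8
    # Dividing each r_n by M gives "rectangularly distributed" fractions in [0, 1).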
On a large computer, M might easily be of the order of 235• ...~4-9 Table 4-1. Examples of Periods of Pseudorandom Number Sequences M a c To r1 ...... 8 2 0 1 2 4 Q 1 1 3 1. 2 1 4 2 6 7 1 3 0 1 3 1 1 4 5 0 3 1 6 5 2 5 1 0 5 4 5 0 1 5 1 1 6 7 4 5 2 3 0 3 1 0 3 2 5 4 7 6 4 1 16 3 0 1 3 9 11 1 1 4 13 8 9 12 5 0 3 1 6 5 2 9 14 13 10 5 0 1 5 9 13 1 1 6 15 12 13 2 11 8 9 14 7 4 5 10 3 0 3 1 8 11 10 5 12 15 14 9 0 3 2 13 4 7 6 32 5 0 1 5 25 29 17 21 9 13 64 5 0 1 5 25 61 49 53 9 45 33 37 57 29 17 21 41 13 4-7 General Categories of the Employment of Probability Three separate general categories of the employment of probabilities for military purposes may be identified: (1) Descriptive. Fit the probability distribution that for a given purpose best describes a given random phenomenon. The requirements for this type of task have been discussed by example in the preceding section. Two rather contrasting subcategories may be dis tinguished: (a) Observed frequencies and outcomes are given as data. The task is then primarily the statistical one of testing hypotheses, and of using regression methods. (b) For a proposed weapon system or tactic which is being studied for planning, a probability distribution for the set of likely outcomes is hypothesized based on experience with similar systems or tactics. In both types, analytical understanding of the operational mechanism that produces the randomness in the phenomenon can be more important than excessive statistical methodology. For practical validity, the resulting probability distributions must be reliable as predictors. When possible, resort should be attempted to using fundamental probabilistic distribution laws in order to reduce data requirements and to parameterize distributions in order to facilitate analytical evaluation. (2) Tactical. Accompanying terms for this category are "prescriptive" and "normative." When the outcomes of a situation are to be deliberately designed to be random, the task is to determine the best values of the probabilities with which the outcomes should be made to occur. Two important subcategories are: (a) Constructing a mixed strategy in game theory (see ch 10). Related to this is the deliberate randomization that is used in cryptography, and in tactical feinting. 4-10 (b) The design of physical distributions, such as the dispersive pattern of a projectileburst, with the objective of maximizing the average effect. (3) Decision in the Face of Uncertainty. When there is incomplete information, the outcomesof given actions may be estimated in terms of a probability. Typically these outcomesinclude the rewards of action. That course of action is desired which will maximize theobjective of activity.This category of employment of probability covers not only the subject of decisiontheory, but also that of game theory. (See ch 8 and 10.) 4-8 Random Imitations of Deterministic Outcomes It can be operationally efficient and effective to substitute a random conceptualization for a military process that is deterministic. 
If the process is sufficiently complex-e.g., in time-pattern of intensity and variation-the process even though deterministic may have strong statistical properties.This possibility is illustrated in the following two examples, both of which have to do with using a random imitation of an activity in order to establish standards for the performance of the activity: (1) Recurring and Non-recurring Demand Of the typical peacetime demand on the national inventory of a given item of supply,during any typical period of time, part of the demand used to be termed "non-recurring,"while the remainder was termed "recurring." Recurring demand is typically the morerandom and unexpected portion of the total demand. Mostly it is demand for replacementof material that has failed, being damaged, or worn out. Against the unpredictabilitiesof such demand, it is obviously advisable to have "safety" stocks, in amounts that dependupon the statistical magnitudes of the deviations of the demand from an average. Thisaverage may well be roughly known from average engineering estimates of lifetimecombined with field densities of end items in organizations and indices of military activityof the organizations themselves. The average rate of demand experienced in previoustime periods is generally used as a basis for forecasting future averages, to which safetystocks are then added.Whether safety stock is needed for non-recurring demand is not so simple a question. Much of this demand if roughly forecastable, being due to planned replacement or repair,or due to initial issues to newly formed units. To meet this demand, it would appearthat no safety stock is needed beyond the forecast if such plans go according to schedule.Of course the time pattern of the actual ultimate occurrence of this demand-for example,as measured by the actual placing of requisitions-will be irregular, more having typicallybeen scheduled to occur at one time than another. The detailed forecast of non-recurringdemand may, consequently, appear quite complicated as a function of time. The stockthat must be available to meet this demand will thus not typically be needed to be available at a constant rate through time. In short, then, the. requisite schedule of stock neededfor non-recurring demand is better known, but more complicated in time-specifiability,than is the stock needed to meet recurring demand.To the extent that the fluctuations in non-recurring demand from schedule may notbe so violent as the fluctuations in recurring demand from previous averages, it wouldseem desirable to develop separate coefficients of fluctuations (specifically, coefficientsof variance) for the two categories of demand, and apply them separately to the two forecasts of expected demand to develop total safety requirements for stock. However, asimpler alternative is in fact merely to treat non-recurring demand as if it were recurring,i.e., random around the time-average of its scheduled rate of expected occurrence. Thisalternative is the more recommended by the fact that the scheduled occurrence of suchdemand is at quite an irregular rate in the first place. These fluctuations in expectedintensity are now assigned as part of a total of unexpected fluctuations around an averageintensity. This has in fact been done experimentally in the stocking of Army nonrepairablesecondary items at the depots in recent years (cf. ch 14). 
Although the approach is based upon randomness, it can be superior in both administrative and supply effectiveness to one based upon a more careful, more complicated, deterministic design that can require more effort.
(2) Operating Performance of a Test Facility
On some missile test ranges, only one missile can be flight-tested at a time. The amount of time that will be required for a given test will vary with the type of missile. It will be the more unpredictable because of unexpected test difficulties that will often develop. The unavailability or preflight failure of a scheduled missile may unexpectedly free the range for use by the scheduled following test, but the latter may not be ready ahead of its planned test-time. How can standards for the utilization of the range be determined under such specifications of its use? Evidently one possibility is to schedule tests in careful sequences well in advance, keeping ready as spare tests or tasks those that are of a sort as to require no preparation. In this way the range can be so far as possible saturated with testing, at the expense possibly of extraordinary planning. An alternative is the random one of treating tests to be performed as arrivals at a server, arrivals that occur randomly in time and that each take a random time to perform. The range is then a "server" for a "queue." The theory of queues provides formulas (cf. ch 12) which predict the average wait that a proposed test must endure before being run. This average wait may be made part of a set of standards of range performance. The theory of queues will also provide a formula for the average fraction of time that the range will be "busy," and this fraction is a measure of utilization of the range. If the actual average wait can be reduced by scheduling, then the schedulers and testers may have earned a "bonus." The theoretical wait is a basis for an incentive for them. If the average actual wait is systematically longer than the random norm for such delay, causes for this condition should be sought.

4-9 Notations and Forms of Random Variables
4-9.1 Notations and Terminology
The lists which follow summarize terminology and notations that are used in standard fashion throughout this volume, and also include some useful formulas. They are stated with reference to a stochastic variable V(t). In a given context a non-stochastic random variable may be substituted for V(t) in the notation, by suppressing t in all symbolism. Throughout, the symbols "p", "pr", and "prob" are interchangeable; "pd" indicates probability density. Each entry distinguishes the discrete case, in which the variable of summation n is assumed to run over integer values, from the continuous case.
(1) Distribution function:
    discrete:    prob{V(t) ≤ n} = V(n;t)
    continuous:  prob{V(t) ≤ x} = V(x;t)
(2) Complementary distribution function:
    discrete:    prob{V(t) > n} = V(>n;t) = 1 - V(n;t)
    continuous:  prob{V(t) > x} = V(>x;t) = 1 - V(x;t)
(3) Probability function (density function in the continuous case):
    discrete:    p{V(t) = n} = v(n;t)
    continuous:  pd{V(t) = x} = v(x;t)
(4) Function of V(t): f(V(t))
(5) Expected value (mean value) of f(V(t)), E{f(V(t))}:
    discrete:    Σ_n f(n) v(n;t)
    continuous:  ∫ f(x) dV(x;t) = ∫ f(x) v(x;t) dx + Σ_n f(n) v(n;t), where the sum ranges over all the impulses of V(t)
(6) Transforms:
    discrete ("generating function"):   E{z^V(t)} = v(z;t)
    continuous ("Laplace transform"):   E{e^(-θV(t))} = v(θ;t); if -θ = θ' > 0, this is termed the moment-generating function, or "mgf"
(7) Convolution. If V(t) = A(t) + B(t), where A(t) and B(t) are independent, then
    discrete:    v(z;t) = a(z;t) b(z;t)
    continuous:  v(θ;t) = a(θ;t) b(θ;t)
(8) Moments. The expected value of [V(t)]^n is written E{[V(t)]^n}; for n = 1 this is the mean, E{V(t)}. Let
    v_1(z;t) = v(z;t)   and   v_n(z;t) = z v'_(n-1)(z;t),
the prime denoting differentiation with respect to z; then
    E{[V(t)]^n} = v'_n(z;t) evaluated at z = 1.
(9) Variance:
    σ²(V(t)) = E{[V(t)]²} - [E{V(t)}]² = E{[V(t) - E{V(t)}]²}
(10) Properties of expected values of random variables (and of functions of them):
    (a) E{aV} = a E{V}
    (b) E{V1 + V2} = E{V1} + E{V2}
    (c) If X and Y are independent random variables, then E{XY} = E{X} E{Y} and σ²(X + Y) = σ²(X) + σ²(Y)
(11) Mean of a non-negative random variable:
    discrete:    E{V(t)} = Σ (n = 0 to ∞) V(>n;t)
    continuous:  E{V(t)} = ∫ (0 to ∞) V(>x;t) dx
(12) Formulas concerning transforms for non-negative V(t):
    discrete:    Σ_n z^n V(n;t) = v(z;t)/(1 - z);          Σ_n z^n V(>n;t) = [1 - v(z;t)]/(1 - z)
    continuous:  ∫ (0 to ∞) e^(-θx) V(x;t) dx = v(θ;t)/θ;   ∫ (0 to ∞) e^(-θx) V(>x;t) dx = [1 - v(θ;t)]/θ
(13) Asymptotic formulas concerning transforms of non-negative quantities:
    discrete:    lim (n→∞) p(n) = lim (z→1) (1 - z) Σ_n z^n p(n);   lim (n→0) p(n) = lim (z→0) Σ_n z^n p(n)
    continuous:  lim (t→∞) f(t) = lim (θ→0) θ ∫ e^(-θt) f(t) dt;    lim (t→0) f(t) = lim (θ→∞) θ ∫ e^(-θt) f(t) dt

4-9.2 Form of Some Distributions
Certain distributions are of particular importance and usefulness, and these are briefly listed here.
(1) Binary. V = 1 with probability p, V = 0 with probability 1 - p. The mean of V is p, and all higher moments are also equal to p. The variance is p(1 - p). The generating function is q + pz, where q = 1 - p. To Monte Carlo V, set V = 1 if the random number r is less than or equal to p; otherwise set V = 0.
(2) Rectangular. Within a specified range, from a to b, V has the probability function 1/(b - a). The mean is (a + b)/2. The variance is (b - a)²/12 in the continuous case and [(b - a)² - 1]/12 in the discrete case.
(3) through (5) are connected with Bernoulli trials and are detailed in chapter 5:
                                                  discrete trials      continuous trials
    (3) Time until first success                  geometric            exponential
    (4) Time until kth success                    negative-binomial    Erlang
    (5) Yield: number of successes resulting
        from a given effort                       binomial             Poisson
Both (3) and (4) are distributions of the amount of effort required to produce a given requirement. Tables of the Poisson are given in the appendix.
(6) Demand for capability: compound-Poisson. See chapters 5, 11, and 14.
The following distributions are artificial mathematical distributions that are sometimes useful as numerical or conceptual approximations to the above distributions, or as limits of them. They have no simple operational structure.
(7) Beta. This distribution has been proposed as a model for task-performance times in critical-path scheduling (cf. paragraph 3-13; chapter 9). Its probability density function is
    b(x) = [Γ(n + m) / (Γ(n) Γ(m) M^(n+m-1))] x^(n-1) (M - x)^(m-1),   0 < x < M.
The mean is nM/(n + m) and the variance is nmM²/[(n + m)²(n + m + 1)]. A beta variate is the ratio g_n/(g_n + h_m), where g_n and h_m are two independent gamma variates with means kn and km, respectively, the parameter k being common to both.
(8) Gamma. This is a continuous interpolation of the Erlang distribution. It includes the chi-square as a special case, but the chi-square is more often used in problems of statistical inference connected with sampling from normal distributions.
(9) Normal; Gaussian. The Gaussian is the standard normal variate, with probability density function
    g(x) = (1/√(2π)) e^(-x²/2),
which has a mean of 0 and a variance of 1. Tables of this distribution are given in the appendix. The normal is a linear transformation aG + b of a Gaussian variable G; it has probability density function
    (1/(a√(2π))) e^(-(x - b)²/(2a²)),
a mean of b, and a variance of a².
(10) Weibull.
This distribution is popular in reliability studies. Its properties are developed in Chapter 13. (11) Lognormal. A variable Vis lognormal if the natural logarithm of Vis normal. A lognormal variable can thus represent the product of an indefinitely large number of sufficiently independent random variables. (12) Pareto. In this distribution, pr[V > x] = ( Xxo y, where x ;::: x o > 0, the minimum value x o of V being specified. This distribution has been observed in economics in the distribution of incomes (V) and, in demographic statistics, in the distribution of the number of communities of size equal to V. It arises in order statistics. References 4-1 Handbook of Probability and Statistics with Tables. Burington and May. Handbook Publishers, Inc. Sandusky, Ohio. 4-2 Monte Carlo Methods. J. M. Hammersley and D. C. Handscomb. Wiley. 1964. 4-3 Statistics of Deadly Quarrels. Lewis F. Richardson. Boxwood Press. Pittsburgh. 1960. 4-14 CHAPTER 5 POISSON AND RELATED RANDOM PROCESSES 5-l General 5-1.1 Types and Uses Random processes, also termed stochastic processes, develop the use of probability theory indescribing and designing the development of an operation, activity, or phenomenon through acourse of time.The tactical effectiveness of such employments of random processes depend upon theirdescriptive reliability and versatility. Largely in the years since the World War II birth of operations research, a variety of random processes, some with quite distinctive names such as renewal and birth-death, have been developed. These include (1) Input demand processes to weapons, service systems and supply points. (2) The outcomes of reconnaissance, of search and of battle duels, of production ofpieces in manufacturing, and generally of the trials that occur in processes involving directed effort against many opposing factors. (3) Accidents and exceedingly unpredictable phenomena. (4) Stochastic population processes in which such apparently disparate topics as chemi cal reaction rates, epidemic spreads, combat attrition, and advertising and propaganda are all put into a common structure. (5) Markov and semi-Markov processes to represent systems or situations whose stateor condition changes at random times. These include renewal and other recurrentprocesses useful in representing the typical cycling of randomly failing equipmentthrough periods of operability, wait for repair, idleness in use, etc. (6) Inventory and related processes fashioned to represent the variable dynamic behavior fluctuations in the significant material resources of a continuing military situation, particularly the logistic ones; (7) Decision and control processes, especially those which involve the repeated, possibly even continuous making of decision. 5-1.2 Approach The subject of random processes can be organized in any one of three major ways: ( 1) according to the activity in which they find application-e.g., reconnaissance, combat, communications, supply, logistics, medical support, etc.; (2) according to the type of process in whichthey are employed-e.g., demand processes, effort processes, yield processes, state processes, control processes; and (3) according to the quantitative complexity and quantitative patterns offormal relationships which they possess. In this chapter, the last approach will be followed. Somegeneral properties of random processes will be discussed, after which the Bernoulli, Poisson andrelated processes and their uses will be described. 
5-2 Concept of Random Processes
If a given process is represented as having a certain given specified value (state of the situation, etc.) S(t) at time t, then the change in S(t) is to be a random variable. Since S(t) is thus itself the outcome of random changes, it too is random. Any particular history of the process is thus a sample out of the possible histories that could have occurred. The probability distribution of S(t) in any given sample history of the process may depend upon t, upon S(t), and upon the entire previous sample history.
While such dependence of current change upon present state and upon past history is a concept that has come to be quite acceptable in the case of non-random processes, the theory that history is but a sample of the possible histories that might have occurred is perhaps somewhat extreme. It is a pure concept and may not necessarily be accepted as true of reality. Nevertheless this does not necessarily render the concept potentially less useful for the scientific requirements of operational analysis and even of decision. The test of the adequacy of a concept for these purposes is not its truth, but rather its usefulness as a model of reality.
The possible times t at which the process S(t) can take on values are now appropriately termed trials, since the outcomes are not predeterminable. The occurrence of trials in a given process may be either discrete, i.e., at isolated, separated instants of time, or continuous, or some alternation of the two. As in non-random processes, schemes of discrete trials may be employed to approximate schemes of continuous trials, and conversely. The values S(t) of the process are the outcomes of trials. Numerically, these values may be discrete, integer-valued, qualitatively valued (providing they are specified), continuous, or some combination of these, as appropriate to the particular process.
The probabilities of given changes are termed transition probabilities. Thus if at time t1 the value of the process is S(t1) = x1, then the transition probability p(x2,t2; x1,t1) is the probability that at time t2 the process will have the value S(t2) = x2, given that at time t1 the process has the value S(t1) = x1. Transition probabilities are thus conditional probabilities.
5-3 Independence
5-3.1 Definition
If for each given time t the probability distribution of S(t) is independent of the history of the process prior to t and independent of every part of that history, then the process is termed an independent process. In a discrete process, S(t_(n+1)) is then independent of S(t_n) and of all other outcomes as well. In a continuous independent process, independence occurs continuously: p(x,t; x0,t0) is functionally just p(x;t). Starting and previous conditions have no effect on the probability distribution of S(t).
5-3.2 Relating Non-Independent Processes to Independent Processes
Independent processes may result from the logical union of a large number of non-independent processes if the number of possible outcomes of each trial is small. Examples:
(1) Consider all the light bulbs in a given large building, and discount such phenomena as voltage surges (e.g., due to lightning) that may damage a number of them.
Because of differences between bulbs, and the relatively independent histories of installation and use of each bulb, the failure of light bulbs somewhere in the building will tend to be a process of independent trials, even though, for a given light bulb, failure is not likely until towards the end of its operating-hours lifetime after installation.
(2) The above example applies to most ordinary recurring failures of military equipment of a given type. The number of failures in a given period of time will tend to be independent and have a Poisson distribution. This independence may fail if all the parts are installed at the same time or if, when one is found defective and removed, the others are also removed whether or not they are found defective. This latter type of process is more one of renewal, treated later.
(3) If in the process of search the scanning is performed by a large number of independent scanners, successive trials will be independent. As these are probabilistically efficient in detecting targets, it is important to maintain independence of scanners.
(4) In combat fire from a large number of uncoordinated batteries, the outcomes of all the rounds taken in the time-succession of their firing will tend to be independent.
(5) If S(t) is the demand at time t for supplies at a given depot or supply point, or the demand at time t received for its capability by some system (weapon, service center, repair shop, etc.), then S(t) is often an independent process.
Note however that the sum of an independent process (for example, the cumulative demand up to time t in (5)) is not an independent process, although it may be said to have independent increments.

5-3.3 Other Considerations
Independence should not be confused with lack of statistical relationship. Successive samples from the same distribution will have identical statistics even if they are independently sampled. Since independent trials reduce the need for maintaining functional dependencies, processes composed of independent trials are natural choices as components with which to attempt to construct syntheses of other, more complicated non-independent processes. This pamphlet treats many such synthetic distributions.

5-4 Stochastic Categories of Random Processes

5-4.1 Definitions
If the probability distribution of S(t) is independent of the time t, then the process is termed stationary. If the conditional distribution p(x,t; x0,t0) depends, so far as time is concerned and no matter the choice of t0, only on the time difference t - t0 or on the rate at which trials are being made, then the process is termed homogeneous. Otherwise the process is termed nonstationary, implying that the outcome of a trial is affected by the time at which the trial is performed.
If S(t) is not stationary, but as t - t0 approaches infinity S(t) does become stationary, then the process is said to go to steady-state, and ultimately to be in steady-state. A necessary condition, which is then much used to find the limiting probabilities, is that
lim (t - t0 -> infinity) dp(x,t; x0,t0)/dt = 0.
When S(t) is not stationary, the quantity
[integral from t0 to t of p(x,T; x0,t0) dT] / (t - t0)
is the time-average, up to time t, of the probability that S(t) has the particular value x. If, as t - t0 approaches infinity, this time-average goes to the same numerical limit as does the steady-state limit of p(x,t; x0,t0), then the process is termed ergodic.

5-4.2 Stationarity
Seemingly most processes would have to be nonstationary because of the systematic fluctuations, according to the current time, of all human activity.
Yet many processes are nearly stationary for significant periods of duration, and consequently can sometimes be modeled as onlyoccasionally changing in probability distribution.There are many special devices for avoiding the complexities that frequently attend nonstationary models. Many of these devices have nothing to do with probability theory or randomness per se. One example-the frequency of arrival of telephone calls at an exchange will varysharply with the time of day, yet the number of circuits to be available to calls cannot be varied. Therefore, it is common practice to estimate the adequacy of capacity only at the peak hours ofthe day, and to assume that the calling rate is stationary at the peak rate incident at that timeof day. The attendant model is clearly false but the capacity estimates so computed are conservative; are simpler to present in tables and charts_for engineering use; and can be readily relatedto each other in managerial analysis of performance, advisability of investment, etc.From any independent process it may be possible to derive a corresponding stationary independent process that will be easier to work with. Consider any random process with discrete 5-3 trials and independent outcomes not necessarily binary. Let S (i) denote the outcome of the ith trial. First, one may select for each i the median value of S(i), call it m;, have the property that the probability that S(i) is less than or equal to m; is 1/2 in value. Now classify each term in the process as a success if S (i) is greater than m;, otherwise a failure. If a unique median does not exist for each i, the above procedure cannot be executed simply as described. But instead of the median, any given percentile may be selected, the value of p corresponding to the percentile selected. The procedure is a useful one for making practical rough checks for independence and stationarity in a stochastic process. Note that it generates a stationary process even though the procedure can be useful in monitoring enemy tactics. 5-4.3 Examples of Steady-State (1) For a service system (repair shop, port, supply point, information center) whose demand is a stationary independent process, it is essential that the average demand rate be less than the response capability of the system, otherwise the system is probabilistically certain to become progressively more and more overloaded. In the former case, a steady-state or "equilibrium" distribution of "busyness" of the system exists; in the latter case no such stationary distribution exists. Thus the possibility of asymptotic stationarity affords an important engineering test for adequacy of the system's capability to handle its assigned load. It is equivalent to a stability requirement for the system. " (2) Often we may not know the conditions under which the process started initially. If we inspect it first at time t, and this time is presumed to be long after the process began, then the steady-state limit is a good estimate of the probability of the status of the process at the time that we observe it. When a steady-state limit does in fact exist, the rate at which the time-dependent probability distribution of S (t) converges to the limit as t goes to infinity will depend upon the par ticular process. A lightly loaded queuing process will converge much more rapidily than a heavily loaded one. These topics of convergence are similar to ones in physical phenomena such as heat diffusion, wave subsidence, the decay of electrical transients. 
Often it is sufficient for engineering purposes to deal merely with the steady-state result, and this is typically a simpler mathematical topic than the transient behavior of the process during its approach to steady state. 5-4.4 Ergodicity In actuality few processes are ergodic. But ergodic approximations of them are frequently employed. For example, if S(t) were the sum of a periodic deterministic function and a stationary random variable, then S(t) would not be ergodic. The time-average would have a limit, but in the limit the distribution of S (t) itself would be periodic (i.e., it returns to a given state in regular intervals). This concept is more effectively illustrated in detailed examples as they arise. 5-5 The Basic Independent Processes The following three types of independent processes have a great many operational uses: (1) Bernoulli processes, or as they are more often termed, Poisson processes. The exponential and the Poisson probability distributions arise in these processes. (2) Compound-Poisson processes, in which with each event of a Bernoulli process there is associated a magnitude. These processes have wide employment in representing demand for effort and material, and for random events of significant and possibly randomly varying individual magnitude. (3) The process variously termed Wiener, Gaussian, normal, or Brownian motion process, frequently used to represent random physical phenomena (including electromagnetic) and also serving as a useful asymptotic limit of (1) and (2) above. 5-4 These processes are described in the paragraphs which follow. For the purpose of assistingthe identification of their applicability to a given random process, considerable attention is paid to their detailed structure and nature. 5-6 Bernoulli Processes and Trials 5-6.1 Definition A Bernoulli process is an independent binary process. There are only two possible outcomes for all trials, each trial selects one outcome at random, and the trials are independent. A scheme of such trials is termed Bernoulli trials. Any dichotomy of the outcomes of any process of independent trials will define a Bernoulli process. Often one type of outcome is referred to as a success and the other as a failure. But for the purposes of identifying operational applications, a broader if less suggestive designation is to denote one type of outcome (either as the outcome E and the other as the outcome 1/J, "not-E." This designation is generally used in this volume. A Bernoulli process is the simplest type of stochastic process. Throughout military activity,from combat to logistics to administration, important examples may be cited of all of the following four categories of Bernoulli processes. (1) Discrete trials, stationary process; (2) Discrete trials, nonstationary process; (3) Continuous trials, stationary process; (4) Continuous trials, nonstationary process. 5-6.2 Discrete Bernoulli Trials The trials being numbered as integers, for each given trial number i of a given process the probability of occurrence of the outcome E is specified as some number p (i). Then l-p (i) is the probability of the outcome 1/J on the trial. The set of numbers p (i) for all possible values of i completely characterizes the process from the probability standpoint. If, for all i, p (i) is a constant, say p, then the process is termed stationary. As noted earlier in this chapter, nonstationary processes may be approximated by stationary ones for periods of time depending upon the degree of nonstationarity. 
The quality of the approximation is measured by the numerical difference between the probabilities calculated each way in dealing with the pertinent distribution (e.g., binomial, geometric, negative binomial). Bernoulli processes are common, and the examples listed here are treated more fully in special sections. Continuous trials are perhaps more common than discrete, but these are discussed and exemplified later.
(1) Hits and kills in discrete fire duels. World War II naval operational analysis used nonstationary Bernoulli trials to represent the process, for example, of a bomber attacking an antiaircraft-defended vessel. Each antiaircraft round was treated as a trial, and success was the downing of the bomber, effects of cumulative damage being ignored. For typical attacks, each trial number could be assigned to an approximate range by relating the rate of fire to the typical approach velocity of the bomber. Hence the trial numbers become an index of range, and p(i) is the probability of success at the range corresponding to i. Independence of trials meant that success depended only on the range at which the round was fired, and not upon how many hits might already have been scored on the attacker, or upon any development of psychological reaction in human gunners, or indeed upon any consideration not simply reducible to the effect, at the time of the trial, of the range then existing.*
Analogous uses of Bernoulli trials are to be found in other aspects of combat, in search (a radar sweep is a trial with either a return from a target or none), and in related processes. The approach is widely used in weapon system analysis and in war gaming.
*Fire which can be represented as a process of Bernoulli trials may be termed Bernoulli fire.
(2) Quality of output and performance in discrete production. The quality of each successive unit of the output of a discrete production process (e.g., shot from a gun, missile launching, piece of ammunition manufactured, experiment performed, request serviced, engine repaired, job done by an individual) may represent a Bernoulli process if the quality is simply defined as one of two possible outcomes, e.g., hit or miss, success or failure, satisfactory or unsatisfactory. In many cases the process would normally tend to be nearly stationary, but as time increases (i.e., after much has been produced) the probability of success p(i) decreases due to wear of parts or controls, fatigue, or time of day. The rate at which p(i) changes must not be influenced by the quality of current output, or dependence results.
In textbooks on probability, the classic example of stationary Bernoulli trials is a sequence of successive flippings of a coin, with say heads as success, or the successive spins of a wheel, with odd or even, or black or red, as binary outcomes.

5-6.3 Statistics of Discrete Bernoulli Processes
Starting from any given trial, the number of the trial on which the outcome E (e.g., the first hit, kill, defective, win, dud, accident, success) occurs for the first time will have a geometric distribution. The most frequent number of the trial on which the first success occurs will be the first trial, a sometimes surprising reminder. Starting similarly again, the number of the trial on which the outcome E occurs for the kth time, for any given k, will have the negative-binomial distribution. The most frequent trial number will not be the first, but will not exceed the average. Detailed statistics are given in paragraph 5-8.
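The fact that the first trial is the most frequent trial of first success can be checked with random numbers. The following minimal Python sketch (illustrative only; the success probability p = 0.2, the seed, and the number of replications are arbitrary choices, not values from the text) simulates stationary Bernoulli trials and compares the observed frequencies with the geometric probabilities.

```python
import random
from collections import Counter

def first_success_trial(p, rng):
    """Return the number of the trial on which the first success occurs."""
    n = 1
    while rng.random() >= p:      # failure: perform another trial
        n += 1
    return n

rng = random.Random(1)
p, reps = 0.2, 100_000            # arbitrary illustrative values
counts = Counter(first_success_trial(p, rng) for _ in range(reps))

# Empirical frequencies versus the geometric probabilities (1-p)^(n-1) p.
for n in range(1, 6):
    print(n, round(counts[n] / reps, 4), round((1 - p) ** (n - 1) * p, 4))
```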
The geometric and negative binomial distributions may thus represent the amount of effort, measured in number of trials, that will be required to produce a given number of occurrences of the outcome E. For a given number N of trials, the number of occurrences of the outcome E will have a binomial distribution. Detailed statistics for both the stationary and the nonstationary case will be found in paragraph 4-9.

5-6.4 Continuous Bernoulli Trials
Suppose that the trials of a given stationary discrete Bernoulli process are performed at a very high, definite rate, e.g., at the rate of v trials per unit time. The average rates of occurrence of the outcomes E and not-E are now respectively vp and v(1 - p). As v is imagined to be larger and larger, the individual occurrences of these outcomes become difficult to visualize. As v is imagined to become infinite in value, the trials are performed continuously. They are termed continuous Bernoulli trials. The rates at which the outcomes E and not-E now occur are also infinite. E and not-E occur in such varied patterns that, considering the rate of trial performance, one might describe the phenomenon as a "Bernoulli-vibrator." Clearly there is no bar to the possibility that the probabilities of occurrence of E and not-E may vary with the trial number or, as is now more natural to term it, with time. In that case the trials are nonstationary.

5-7 The Poisson Process; Poisson Trials

5-7.1 Definition
This process occurs as a special limiting form of continuous Bernoulli trials. Suppose that the probability on a given trial that the outcome E occurs is 0, yet the average rate of occurrence of the outcome E is not 0, but some positive rate. As a result, the outcome E occurs only at isolated points in time. It is thus literally an event. In between occurrences of this outcome E, the other outcome, not-E, occurs continuously.
For applications, it is instructive to examine the consistency and detail of these concepts. Consider the limit of the following progressive sequence of stationary discrete Bernoulli processes. Suppose that to perform N trials of a discrete stationary process consumes a total time interval (or equivalent measure of effort) of length T. Each trial may then be regarded as having length T/N. (Note that a discrete process may in reverse be constructed from a continuous process by dividing time into discrete intervals, but then one has to ensure that only one outcome can occur per interval if one wants Bernoulli trials in the limit. This is particularly necessary in Monte Carlo simulations.) Now suppose that T is kept fixed but that the experiment is reconstructed so that the rate at which trials are performed is doubled while the probability p that the outcome E occurs is halved. Then the average number of occurrences of E during the time interval T remains unchanged. Now reconstruct the experiment again, then again, etc., as illustrated in figure 5-1, each time doubling the trial rate and halving the probability of occurrence of E per trial. With each reconstruction the average number of occurrences of the outcome E in the time-interval T remains equal to the same constant, which we may denote by aT. Then a is the rate of occurrence per unit time of the outcome E, and at the outset was established as equal to Np/T. The reverse of the above procedure may be used as a guide in constructing Monte Carlo simulations of continuous-trials processes. Figure 5-1 may be constructed experimentally, using random numbers.
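One such experimental construction is sketched below in Python (a minimal illustration only; the mean aT = 4, the starting N = 8, the seed, and the number of replications are arbitrary choices). It repeatedly doubles the trial rate while halving p, and shows the probability of no occurrence of E in the interval T approaching the Poisson value exp(-aT).

```python
import math
import random

def count_in_T(N, p, rng):
    """Number of occurrences of E in N Bernoulli trials each of probability p."""
    return sum(rng.random() < p for _ in range(N))

rng = random.Random(7)
mean, reps = 4.0, 20_000          # aT held constant through the reconstructions

for doubling in range(6):         # N = 8, 16, ..., 256 with Np = aT = 4 throughout
    N = 8 * 2 ** doubling
    p = mean / N
    zero = sum(count_in_T(N, p, rng) == 0 for _ in range(reps)) / reps
    # P{no occurrence of E in T} should approach exp(-aT) as N grows.
    print(N, round(zero, 4), "limit:", round(math.exp(-mean), 4))
```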
In the limit of reconstructing the experiment an infinite number of times, the trials become performed continuously and may be regarded as a flow of trials. Occurrences of the outcome E now stand out as isolated events. They are separated by time intervals, which may be referred to as inter-event intervals I. During any interval I, nothing happens; i.e., there is an uninterrupted, continuous occurrence of the outcome not-E. The probability per trial of occurrence of the event E is 0, but the average rate of occurrence per unit time of the event E is a. This rate may be regarded either as an average, or as a random quantity whose average value is a. Such a process of trials is termed Poisson. The outcomes E are, in the abstract, merely the occurrences of something that does not occur in the intervals between them. This indicates the most general scope of interpretations and realizations which a Poisson process may have. The intervals between the outcomes E may, if desired, be regarded as "occupancy" or "residence" time in a state, with an occurrence of the outcome E representing some interruption (possibly a change) in the persistence of the state.

Figure 5-1. The progressive reconstruction: discrete trials with p = 1/2, N = 8 and with p = 1/4, N = 16, and continuous trials with p = 0, N infinite, each arrangement having Np = aT = 4.

5-7.2 The Basic Counting Processes
In connection with occurrences of the outcome E, two basic and inverse counting processes may be distinguished:
(1) Length of Time Until the Nth Occurrence of E. From a given point in time, the length T(1) of the time-interval until the next occurrence of the outcome E has the same probability distribution no matter when the given starting point is chosen, because of the independence of the process. The length T(1) has the negative-exponential, or simply the exponential, distribution. From a given point the length T(N) until the Nth following occurrence of the outcome E has the Erlang probability distribution, whose frequencies peak towards the mean. The exponential distribution will be recognized as the continuous analogue of the geometric distribution of discrete trials, and the Erlang as the continuous analogue of the negative-binomial. Statistics of all these distributions, both in the case of stationary trials and in the case of nonstationary trials, are given in paragraph 5-8.
(2) Number of Events Occurring in a Given Time-Interval. For a time-interval T of given length and location, let N(T) denote the number of times which the outcome E occurs within the time interval. N(T) then has the Poisson distribution with a mean equal to aT. This is the reason for terming the scheme of trials Poisson trials.
As in the case of discrete trials, the processes T(N) and N(T) may receive natural interpretations as effort and yield. If for example N hits are required to kill a target, T(N) will measure the time required to kill the target in continuous Bernoulli fire, i.e., in Poisson fire.
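Both counting processes can be exhibited directly by simulating a stationary Poisson process with random numbers. The sketch below (illustrative only; the rate a = 2 per unit time, the interval length T = 3, and the seed are arbitrary, not values taken from the text) builds each history from exponential inter-event intervals and then checks the mean count N(T) against aT and the mean waiting time T(1) against 1/a.

```python
import random
from statistics import mean

def poisson_events(a, T, rng):
    """Event times of a stationary Poisson process of rate a on (0, T)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(a)   # exponential inter-event interval
        if t >= T:
            return times
        times.append(t)

rng = random.Random(3)
a, T = 2.0, 3.0
histories = [poisson_events(a, T, rng) for _ in range(20_000)]

print("mean N(T):", round(mean(len(h) for h in histories), 3), " aT =", a * T)
print("mean T(1):", round(mean(rng.expovariate(a) for _ in range(20_000)), 3),
      " 1/a =", 1 / a)
```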
5-7.3 Examples of Poisson Trials
(1) Reconnaissance and Combat. Scanning in search may be the more nearly continuous as it is simultaneously performed by a large number of independent scanners. Fire from multiple batteries may be potentially so nearly continuous as to be so treated. A detailed summary of Poisson trials in search, detection, and offensive fire will be found in chapter 15.
(2) Arrivals of Demands at Systems. Surveillance and weapon systems, service and traffic systems, and supply points typically service a demand (respectively, the acquisition and analysis of reflections from the environment scanned, the acquisition of targets to be fired at, the occurrence of breakdowns to be serviced, the receipt of requisitions to be filled, the arrival of vehicles, cargo shipments, etc.) that occurs as a process over a period of time. Each individual demand is an additional arrival at the system. In such processes the demands often originate from many independent sources (sectors, customers, etc.) and may then arrive in such close time-succession that trials (i.e., the question "does an arrival occur?") have to be regarded as being performed continuously, with the outcomes as independent of each other as the sources are of each other. Demand of this sort, onto systems, is treated further in chapters 11, 12 and 14.
(3) Events in the Logical Sum of Many Independent Event Processes. This is the principal general category of examples of Poisson processes. Common types include:
(a) The number of casualties in a given period of time, when counted over a large battle sector; the fighting effort will determine only the mean number.
(b) The number of rounds fired in connection with (a); the number of independent targets hit, etc.
(c) The national total of the number of automobile accidents in a weekend; of airplane accidents in a week, or month; of admissions to a hospital.
(d) The number of equipment failures in a (fairly large) installation in a day.
In these examples the word "number" refers to the cumulative count. The trials that produce the increments to the count are the Poisson trials.
(4) As an approximation to Bernoulli trials, especially when the Bernoulli trials are performed in rapid succession. Poisson formulas are simpler than binomial.

5-8 Statistics of the Counting Processes
The summary which follows relates the counting processes to one another. Formulas are given in table 5-1. If the probability of success is the same on each trial, the trials are stationary, and the respective distributions are referred to as the geometric, stationary Erlang, Poisson, etc. If (as is frequent in applications) the probability of success can vary with the trial number (equivalently, the same binary variable is not always being sampled on each trial), then the trials are termed nonstationary, and in this volume the respective distributions are referred to as the nonstationary geometric, nonstationary exponential, nonstationary Erlang, nonstationary Poisson, etc.
First success: in discrete trials, the number of the trial on which the first success occurs has the GEOMETRIC distribution; in continuous trials, the amount of time elapsing until the first success occurs has the EXPONENTIAL (negative-exponential) distribution.
kth success: for given k, in discrete trials the number of the trial on which the kth success occurs has the NEGATIVE-BINOMIAL (or Pascal) distribution; in continuous trials, the amount of time elapsing until the kth success occurs has the ERLANG (integer-valued gamma) distribution.
Number of successes: in discrete trials, the number of successes during a given number of trials has the BINOMIAL distribution; in continuous trials, the number of successes that occur during a given period of time has the POISSON distribution.
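These correspondences, and the approximation noted in (4) above, can be checked numerically. The sketch below (a minimal Python illustration; the parameter values M = 200, p = 0.02, q = 0.1 and a = 0.1 are arbitrary choices) compares binomial probabilities for many cheap trials with Poisson probabilities of the same mean, and compares the geometric and exponential probabilities of requiring more than a given amount of effort.

```python
import math

def binomial_pmf(n, M, p):
    return math.comb(M, n) * p ** n * (1 - p) ** (M - n)

def poisson_pmf(n, A):
    return A ** n * math.exp(-A) / math.factorial(n)

M, p = 200, 0.02              # many trials, small probability of E per trial
A = M * p                     # matching mean number of occurrences
for n in range(5):
    print(n, round(binomial_pmf(n, M, p), 4), round(poisson_pmf(n, A), 4))

# Discrete versus continuous effort: P{V > n} for the geometric law and
# P{V > t} for the exponential law with the same mean (here both equal 10).
q, a = 0.1, 0.1
for n in (5, 10, 20):
    print(n, round((1 - q) ** n, 4), round(math.exp(-a * n), 4))
```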
5-9 Special Properties of Poisson and Bernoulli Processes

5-9.1 Important "Traffic" Properties
(1) The sum of independent Poisson processes is Poisson. This "merging" property may be pictured as two streams, Poisson #1 with rate a1 and Poisson #2 with rate a2, combining into a single Poisson stream with rate a1 + a2.
(2) Random independent forking in a Poisson process produces two independent Poisson processes: a Poisson stream of rate a, split at a random independent fork with probability p, yields one Poisson stream of rate pa and another, independent of it, of rate (1 - p)a.

5-9.2 Contagious and Non-Contagious Shift
A given Bernoulli or Poisson process that is under concerned surveillance may suddenly shift, or may systematically drift, away from its expected (probabilistic) pattern. The purpose of monitoring the process is to take action if the frequency of success, i.e., the value of p(i), suddenly shifts from the value we had previously expected. For example, the enemy may be readying an attack, the weapon's aim control may have developed a malfunction, the machine may be suddenly wearing excessively, the operator may have become fatigued, the service center may need reorganizing, the enemy's political policy may be shifting. To detect a change in the value of p(i) from its supposed sequence of values, we may use the statistics of the supposed values as a null hypothesis, computing the probability of occurrence of what does occur assuming no shift in the p(i). When this probability becomes small enough, we take the action necessary.
From the logical standpoint there are two extreme categories of shift in p(i): (1) noncontagious shift, in which p(i) shifts with i independently of whether the outcome of the ith trial is a success or not; and (2) contagious shift, in which the value of p(i) depends on whether a success occurred in the ith trial. Noncontagious shift is a form of nonstationarity and, if estimates of p(i) are weak, it may be difficult to identify. For example, the enemy camp will reflect more heat by day than by night; radar communication will be more accurate during favorable electromagnetic conditions. In these examples, p(i) is considerably influenced by a natural environment. Another example of noncontagious shift is simple wear, as in the case of the machine, or human fatigue. When the communications operator makes an error, if the fact that the error occurred affects the error rate, then the shift is contagious; similarly, if when one part of a system fails, the remainder tends to fail with a different probability. In accentuated contagion, the process can exhibit an explosive fluctuation. For example, a bad piece may even jam the machine.
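The monitoring scheme just described can be sketched as follows (a minimal Python illustration, not a fielded control procedure; the supposed probability p0 = 0.1, the shifted value 0.3, the trial at which the shift occurs, the alarm threshold, and the seed are all arbitrary assumptions). It computes, under the null hypothesis of no shift, the probability of seeing at least the observed number of successes in the trials so far, and flags the process when that probability becomes small.

```python
import math
import random

def tail_prob(successes, trials, p0):
    """P{at least `successes` successes in `trials` Bernoulli(p0) trials}."""
    return sum(math.comb(trials, k) * p0 ** k * (1 - p0) ** (trials - k)
               for k in range(successes, trials + 1))

rng = random.Random(5)
p0, threshold = 0.1, 0.001        # supposed value of p(i) and alarm level
successes = 0

for i in range(1, 301):
    p_true = p0 if i <= 100 else 0.3    # a noncontagious shift after trial 100
    successes += rng.random() < p_true
    if tail_prob(successes, i, p0) < threshold:
        print("shift suspected at trial", i)
        break
```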
5-10 Independent Gaussian Process; Wiener Process
Suppose that X1, X2, ..., XN are independent observations of a Gaussian process having mean mN and variance sN^2. This is an independent Gaussian process with discrete trials, each observation corresponding to a trial. Suppose that the N trials take a fixed time T to perform. Now conduct the same kind of conceptual experiment that was performed in paragraph 5-7, of letting the rate at which the trials are performed increase indefinitely (N -> infinity) while conserving two averages: (1) the mean, N mN, of WN = X1 + X2 + ... + XN, and (2) the variance, N sN^2. As N -> infinity, mN and sN^2 -> 0, and the result is a normal process with a mean of aT and a variance of s^2 T, where the ratio s^2/a equals sN^2/mN.
The process W, which is the limit of WN as N -> infinity, is the Wiener process. Its increments can be regarded as a continuous set of Gaussian random variables, each with mean and variance of 0, but with positive mean rate and positive variance rate (per unit time), their variance-to-mean ratio being equal to the fraction above. The Wiener process is very useful algebraically and numerically. It is the stochastic equivalent of the fact that the sum of independent random variables tends in the limit to be approximately normal in distribution.

References
5-1 Probability and Its Engineering Uses. 2nd Edition. Thornton C. Fry. Van Nostrand. 1965.
5-2 An Introduction to Probability Theory and Its Applications. Vol. 1. William Feller. Wiley. 1957.

Table 5-1. Counting Processes, Bernoulli and Poisson Trials
The first four of these distributions are effort distributions; the last two are distributions of yield. The formulas for the exponential, Erlang-k and Poisson (i.e., for continuous trials) cover both the stationary and the nonstationary case except where the indication (*) appears; the (*) entries hold for the stationary case. The nonstationary Poisson process underlying these results has a time-dependent rate a(T) of occurrence of events at time T. If two times t0 and t > t0 are fixed, then A(t0,t) = integral from t0 to t of a(T)dT is the expected number of events in the interval (t0,t). In the formulas, A(t0,t) is for simplicity abbreviated by A; in the stationary case a(T) is a constant, a. Nonstationary forms can be developed for the geometric, negative binomial and binomial when the probability of occurrence on the ith trial is p(i); for example, the probability that V > n is then the product of the quantities (1 - p(i)) over the n trials concerned, and the generating function for the binomial is the product of the factors [1 - p(i) + p(i)z] over the trials concerned (cf. ch. 4). The continuous distributions are easier to employ for nonstationary trials (cf., for example, combat duelling, ch. 15). In the Monte Carlo entries, r denotes a random number uniformly distributed between 0 and 1, and C(n,k) denotes a binomial coefficient.

Geometric (tables in Appendix): parameter p; range of V: 1, 2, ...; mean 1/p; variance (1-p)/p^2; prob{V = n} = (1-p)^(n-1) p; pr{V <= n} = 1 - (1-p)^n; pr{V > n} = (1-p)^n; transform pz/[1 - (1-p)z]; Monte Carlo: n determined from ln r / ln(1-p).
Exponential (negative-exponential): parameter a(t); range of V: > 0; mean 1/a (*); variance 1/a^2 (*); probability density a(t)e^(-A); pr{V <= t} = 1 - e^(-A); pr{V > t} = e^(-A); transform a/(a + theta) (*); Monte Carlo: solve A(t0,t) = -ln r for t.
Negative binomial (Pascal): parameters k, p; range of V: k, k+1, ...; mean k/p; variance k(1-p)/p^2; prob{V = n} = C(n-1,k-1) p^k (1-p)^(n-k); transform {pz/[1 - (1-p)z]}^k; Monte Carlo: sum of k geometric variates.
Erlang-k (integer-valued gamma): parameters k, a(t); range of V: > 0; mean k/a (*); variance k/a^2 (*); probability density a(t) A^(k-1) e^(-A)/(k-1)!; pr{V <= t} = sum from n = k to infinity of A^n e^(-A)/n!; pr{V > t} = sum from n = 0 to k-1 of A^n e^(-A)/n!; transform [a/(a + theta)]^k (*); Monte Carlo: sum of k exponential variates.
Binomial: parameters M, p; range of V: 0, 1, ..., M; mean Mp; variance Mp(1-p); prob{V = n} = C(M,n) p^n (1-p)^(M-n); pr{V <= n} = M C(M-1,n) times the integral from p to 1 of x^n (1-x)^(M-1-n) dx; pr{V > n} = M C(M-1,n) times the integral from 0 to p of x^n (1-x)^(M-1-n) dx; transform (1 - p + pz)^M; Monte Carlo: sum of M binary variates.
Poisson (tables in Appendix): parameter A; range of V: 0, 1, ...; mean A; variance A; prob{V = n} = A^n e^(-A)/n!; pr{V <= n} and pr{V > n}: use tables; transform e^(-A(1-z)); Monte Carlo: V is the number k for which Erlang-k <= t - t0 < Erlang-(k+1).

CHAPTER 6
MARKOV, STATE, AND RENEWAL PROCESSES

6-1 Scope

6-1.1 Dependent Processes
A dependent process S(t) is one for which the probability distribution of S(t) may depend upon part or upon all of the past sample history of the process prior to the given time t. This concept of process dependence is not peculiar to random processes; it is in fact the concept of autodependence (or autocorrelation, as it is most commonly called) in any process, random or deterministic.
The category of dependent processes is evidently so broad that it is impossible effectively to classify it merely by the kinds of useful applications, or by any natural structure, or by mathematical difficulty. The important kinds of dependent processes are those which stand out in applicability, in natural structure, and in mathematical simplicity. These processes are discussed in several chapters in this pamphlet, as follows:
(1) Combat force-units and the changes that take place in them during combat (ch. 15).
(2) The condition and performance of individual weapons and task-responsive systems (ch. 11-13).
(3) Inventories of materiel and production systems (ch. 14).
(4) Decision and control (ch. 8, 12-15).
All of the processes referred to in this grouping are connected with the states of systems (their size, their involvements, their condition) and with the design, support, and use of systems. Systems' states tend to represent the accumulation of significant changes in their previous states, so that state-changes (failure, wearout, repair, start-up, damage) are important dependent processes.

6-1.2 Examples
The following discussion of types of examples indicates in general how complexes of dependent processes may be developed in specified ways:
(1) Interim Accumulation of Demand. The current size of an inventory is the difference between the total received up to the moment and the total quantity shipped out or lost up to the moment. The cumulative quantity shipped to date is not a process of independent increments, because of occasional stockouts and because replenishments to inventory are correlated, with a time lag corresponding to the replenishment lead time, with previous inventory levels.
(2) Task Performance. The killing of a target may require an accumulation of hits, just as most tasks take time to do. Whether or not the next hit kills the target depends upon how close to the end things already are. Times (or efforts) required to produce kills (to complete tasks) will thus not typically have exponential distributions, and are thus not Bernoulli processes; they are dependent processes instead, with the "state" of the process defined as the total partial accumulation up to the moment of each trial.
(3) Size of Population. The current size of a population results from the accumulation of additions and decreases. Simple models of attrition of forces in combat suppose that the changes in the two opposed forces are dependent on the current size of both forces, i.e., upon how many troops are effectively engaged in fighting each other; the force-size process is thus dependent, i.e., the joint process consisting of the sizes of both forces at the same moments of time.
(4) Backlog of Work. The size of the current backlog of work is somewhat like an inventory, being the cumulative difference between the demand for work to be done and what of that demand has been satisfied.
(5) Operating Conditions of a System. Demand for a system's capability, and the response to the demand, "wears" the system, and the capability gradually deteriorates with use. The amount of the deterioration up to any point in time corresponds to the amount of demand up to that time. If the demand process is an independent process, as is likely, then the total demand is a sum and is a dependent process with independent increments.
The next two examples trace the exact time-pattern of dependence in greater detail.
(6) Inventory Control. Inventory control deals with the changes in inventory size. Let S(t) be the size at time t of an inventory of an item for which the demand is a process of continuous trials. During the interval delta-t after t, that part of the change in S(t) that is due to the demand that may occur in delta-t, call it D1S(t), will be independent of S(t). But another part of the change DS(t), call it D2S(t), will be any receipt of replenishing stocks. D2S(t) will usually depend upon action taken earlier (earlier by an amount of time L corresponding to the replenishment lead time) to replenish the inventory. That earlier action was no doubt based partly upon the amount of inventory present then, i.e., the value of S(t) at time t - L. Since S(t) is dependent on S(t - L), D2S(t) is dependent on S(t).
Thus DS(t) = D1S(t) + D2S(t) is the sum of one change independent of S(t) and another dependent on S(t).
(7) Combat Micro-Readiness of a System. A single-shot weapon requires a maximum time-interval M between shots for reloading. If it is engaged in a continuous duel, the probability h(t) at any given moment t of the duel that the enemy target will be hit will depend upon the time t1 at which the weapon last fired. If t1 is greater than t - M, then h(t) = 0. As the elapsed time since the last shot progressively increases and reaches M, reloading is completed and h(t) may be restored to h, the basic hit-probability for a single shot.
Evidently most processes are dependent; their value an instant later cannot be independent of their present value. This is most true of states of affairs, but less true of the amount of change that can occur in a given state of affairs. The more common dependent processes arise as the cumulative effects of independent processes, i.e., as various kinds of integrals of these processes. The analogy to deterministic trajectories is useful. Suppose that a particle moves with constant velocity. Then its position at any time is the integral of this velocity. The position at one time t very much affects the position at time t + dt. But the amount of change in the particle's position during any interval dt is quite independent of the particle's position. The random analog to this is the case in which the velocity of the particle at time t is an independent process v(t). Then its position S(t) will be a dependent process, obtained by summing, or integrating, the velocities. S(t) will have independent increments. In a controlled mechanical trajectory, if acceleration is an independent process, then velocity will have independent increments, but position will not. The pattern of autodependence could be termed polynomial if, instead of acceleration, some higher derivative of position were the independent process.
A conceptual illustration of the compounding of independence into dependence is what might be termed the hyper-Bernoulli machine. This machine makes a piece at every trial. The trials are independent, and the piece made is either of good quality or is defective. Whenever the machine makes a defective piece, an operator stops the machine and resets the controls. There are two positions for the controls, A and B. The operator happens to be able to choose one or the other control position only at random, and the process by which the control setting is determined is a Bernoulli process. When the controls are set at position A, the machine makes defective objects with one probability, say pA, and when they are set at position B, with another probability, pB.
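The hyper-Bernoulli machine can be simulated directly. The sketch below (a minimal Python illustration; the values pA = 0.05, pB = 0.30 and the even-odds coin used to reset the controls are arbitrary assumptions) shows that although every individual mechanism is an independent trial, the quality of successive pieces is a dependent process: the chance that the next piece is defective depends on the current control setting, which in turn depends on the past output.

```python
import random

rng = random.Random(11)
p_defect = {"A": 0.05, "B": 0.30}     # defect probabilities for the two settings

setting, output = "A", []
for _ in range(200_000):
    defective = rng.random() < p_defect[setting]
    output.append(defective)
    if defective:                     # operator resets the controls at random
        setting = "A" if rng.random() < 0.5 else "B"

# Dependence shows up as a higher defect rate immediately after a defect.
after_defect = [b for a, b in zip(output, output[1:]) if a]
after_good = [b for a, b in zip(output, output[1:]) if not a]
print("P(defect | last piece defective):", round(sum(after_defect) / len(after_defect), 3))
print("P(defect | last piece good):     ", round(sum(after_good) / len(after_good), 3))
```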
In these trials, events-which are typically the changes in state of some system represented by the processoccur at discrete points in time, exactly like the events of the Poisson process of chapter 5. These processes are termed Markov-Poisson. (1) Markov Processes In a Markov process, transitions in the process S(t) depend only upon the value of the process at timet, and not upon the past history. The principal use of Markov processes is in connection with system states. A Markov process may be viewed in either of two ways depending upon whether attention or importance focuses upon the states of the process regarded currently, or focuses upon the transitions, i.e., the events in the process. In the former emphasis one refers to Markov states and in the latter emphasis to Markov events. As in Bernoulli trials with two or more outcomes, regard the possible outcomes of a trial as coded 1, 2, ... , N. In Bernoulli trials the probability p; that the outcome i occurs is constant on every trial. In Markov trials, the probability is not constant, but is a probability that varies during the trials, being dependent upon the identity of the last trial, or equivalently, being a number that corresponds to the current state. The art of designing such systems, of representing them mathematically, and of measuring their operational performance are connected through the concept of the system. S( ) will typically be a vector. A principal task in representing systems by Markov models is to determine what the identity and composition of the vectorS( ) must be. The components of S( ) in a micro-representation of a system may be the fuel level in a tank, the value of a control setting, the length of time since the system was last maintained, the current backlog of work confronting the system, etc. The details of how this is done are described in chapters 11 through 15. Only one or two of these component variables in the vector may be of primary operational significance-e.g., the ones, as needed, whose values determine whether the system is operable or not (fuel level, force-size). However, the other variables will be required to explain the changes that will take place in these few variables that are of operational significance. The variables of operational significance are thus like marginal variables, or marginal stochastic processes, in a joint stochastic process of greater complexity. The total vector S( ) will be a Markov process-i.e., its future will be a function only of its present value. But the operational variables may not be Markov variables by themselves in isolation; the values of the other variables are also needed to explain their futures. This indicates in general how any process that has an explanation can be described by some Markov process, of which the process to be accounted for is perhaps but a marginal sub-vector. The other variables in the vector are supplementary variables. (This is the typical method in queuing analyses, for example; cf. ch. 12.) (2) Renewal Processes A renewal process is a special kind of Markov process. A popular illustration of a renewal process is the replacing of a light bulb in a lamp, which renews the serviceability 6-3 of the lamp, or the replacing of a worn tool in a (much longer-wearing) machine, or the reloading of a weapon. 
Renewal processes are thus useful in describing recurrent changes that occur in the states of systems, for example (a) Failure of an operating system; the event of completing its repair; (b) Being hit in combat; a system becoming inoperable; (c) Becoming busy (e.g., a machine, or system at ready); being attended to after a wait in a queue; (d) Being maintained, inspected, re-trained, reassigned, etc.; (e) In the case of a system like a force or inventory; being replenished. The simplest type of renewal process is the one-dimensional state process, consisting of the "age" of the process since the last renewal. Because such age is fairly easily recorded, this process is of particular importance in determining the best policies for maintaining systems and for supporting their reliability (cf. ch. 13). Renewal structures are of exceptional importance in providing readily computable models that predict future states and events. With respect to renewal processes, the renewal may be complete or only partial. Replacement of a part in a machine may not completely renew the machine, for there may be other parts still in operation that are worn and will soon have to be replaced. In a partial renewal one component of the system state-vector, corresponding to the part replaced, is renewed; the others are not. Processes of the particular Markov-Poisson type emphasize that the following structure is frequently observable in the detailed histories of systems and their activities: when a change in state occurs, the system enters some new state and remains in this new state for a period of time termed the state-occupancy time. At the end of this state-occupancy time, the state of the system changes again. For example, a machine-doing a succession of jobs-enters a state when it commences a new task; stays in the state while the task is being done; and leaves the state when the task is finished, passing then to some new state corresponding to the next job to be done, to idleness, to failure, etc. (3) Semi-Markov Processes As will be seen below, when the state-occupancy times have exponential distributions in time (geometric distributions when time is measured discretely), then the machine's state at any instant is a Markov process by itself. But when the occupancy times have other distributions, the state alone is not a Markov process, and supplementary variables are then needed to account for its state transitions. When the identity of the state to which transition occurs (when the state changes) is by itself a Markov process, then the entire process is termed a semi-Markov process. An example of a process which is a semi-Markov process but not a Markov process, is what might be termed the tramp-steamer process. The steamer travels from port to port, not picking its next destination until it arrives in a port. A state, for the steamer, consists of being in the process of going from one port to the next. A transition occurs when the next port is determined. Likely, the trip-time durations will not be exponentially distributed for a real steamer. If the probability that the next port chosen for its destination depends only upon the identity of its last destination, then the process is semi-Markov but not Markov. But for the fact that trip lengths for taxis tend to be short, with high frequency, the above process might be called the taxi-process. The general semi-Markov structure is of exceptional importance for the analysis of systems. 
It tends to coincide with just those concepts to which measures can be assigned and for which numerical values can in practice most easily be observed, namely:
(a) How much was produced in that production run? The answers to these questions describe the lengths of the occupancy times and their statistics.
(b) What task was done next? What job order was produced next on the machine? These routings are measured by the numerical frequencies of the transitions from one task (state occupied) to the next.
Ordinary Markov processes are simply semi-Markov processes in which the state-occupancy times are themselves of the simplest kind, namely exponential (geometric), rather than the more empirically frequent instances of normal distributions, Erlang distributions, constant random variables and the like.

6-2 Markov Processes
The topic of Markov processes requires attention to the details of what takes place in the process and to the probabilities of process-transitions. The most effective way to achieve care and accuracy in their employment is to go through these details completely and thoroughly.

6-2.1 Markov Chains
A Markov chain is a discrete-trials process in which the probability distribution of S(t(n+1)) in any trial depends only upon the value of S(tn), being independent of every other part of the sample history prior to tn. That is,
p{S(t(n+1)) | S(tn), S(t(n-1)), ...} = p{S(t(n+1)) | S(tn)}.
Commonly the term "Markov chain" and the above equation refer to discrete-valued chains. However, the concept may be straightforwardly applied when the values of S(t) are continuous. For example, let S(t) represent the level of a liquid inventory being depleted by a compound-Poisson demand process. Then the symbol "p" in the above equation may be interpreted as a probability density, i.e., in precise statement,
pd{S(t(n+1)) = y | S(tn) = x, S(t(n-1)) = x1, ...} = pd{S(t(n+1)) = y | S(tn) = x},
the transition probability density being p(y;x) in the case of stationary transitions and p(y;x;n) in the case of nonstationary transitions.
If the values of S(t) are discrete, then the above equation may be read as a probability equation, the transition probability being correspondingly denoted p(j;i) in the case of stationary transitions and p(j;i;n) in the case of nonstationary transitions. This is called a discrete-valued Markov chain. In addition, mixtures of discrete and continuous values are quite permissible. Reference is made to chapter 5 for the modelling of nonstationary processes.
Because only the present value, but not the previous history, of the sample process affects the transition probability, a Markov process is characterized as a process that is without memory. This nature is sometimes highlighted when applications are analyzed. In a Markov process the future and the past are independent of each other, given the present.
The transition probabilities may now be regarded as the elements of a square stochastic matrix, termed the transition matrix. In the case of stationary transitions, this matrix is a constant T for all trials. In the case of nonstationary transitions, the matrix of transitions changes from trial to trial, and the matrix for the mth trial may be denoted as T(m) or as Tm, as convenient. Reference may be made to the example of such matrices and the properties of their powers discussed in chapter 3. In particular, in the case of stationary transitions, the element s(ij) of the matrix T^n will, for any trial number m, represent the n-step transition probability s(j,m+n|i,m) that S(t(m+n)) = j given that S(tm) = i. In the case of nonstationary trials the corresponding ijth element of the matrix product Tm T(m+1) ... T(m+n-1) is the n-step transition probability s(j,m+n|i,m).
The vehicle travelling around the network (ch. 3) was thus represented as what might be termed a Markov vehicle. The next node to which the vehicle went from any given node had a probability depending only upon the identity of the node being left, not upon the identity of the nodes previously visited.
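These matrix relations can be exercised on a small example. The sketch below (a minimal Python illustration; the three-state stationary transition matrix and the seed are arbitrary choices) forms the n-step transition probabilities by repeated matrix multiplication and checks one of them against the frequency observed in simulated sample histories.

```python
import random

T = [[0.7, 0.2, 0.1],        # arbitrary stationary transition matrix
     [0.3, 0.4, 0.3],
     [0.1, 0.3, 0.6]]

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_power(X, n):
    P = [[float(i == j) for j in range(len(X))] for i in range(len(X))]
    for _ in range(n):
        P = mat_mult(P, X)
    return P

def simulate(start, n, rng):
    state = start
    for _ in range(n):
        state = rng.choices(range(3), weights=T[state])[0]
    return state

rng = random.Random(2)
n = 5
print("T^5 element (1,3):", round(mat_power(T, n)[0][2], 4))
print("simulated frequency:",
      sum(simulate(0, n, rng) == 2 for _ in range(50_000)) / 50_000)
```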
6-2.2 Discrete vs. Markov-Poisson Processes
Before discussing further the properties of discrete-valued Markov chains, a great deal of economy of visualization and of formal results can be obtained by following two types of processes in comparison with each other. The two may be identified as follows. For an ordinary Markov chain as described above, a typical sample history is illustrated in figure 6-1. On the time axis, trials are conducted only at discrete, established times, with no events in the intervals between them (the exact locations of these times have not been emphasized in drawing the figure). A particular state corresponds to an ordinate-value, and the transitions correspond to the dotted vertical lines. Once the sample history enters a particular state, say state i, the history will remain in the same state i on the next trial with probability p(ii) (only the stationary case is discussed; the nonstationary case is handled in precisely parallel fashion). The state value may continue to remain i for further trials. In short, an interval of "occupancy of the state i" commences. The length of this interval will have a geometric distribution. While the state is in progress, Bernoulli trials are being conducted (at discrete time intervals) with probability p(ii) of "failure" (interpreted as staying in state i) and probability q(i) = 1 - p(ii) of "success," i.e., of leaving state i. This is the typical Markov chain with discrete trials.
Now suppose that trials are performed continuously. One special category of trials is then of especial importance, namely the Poisson trials of the preceding chapter. In these, the event "success" (leaving state i) will still continue to occur at isolated moments of time, but now with a rate per unit time. This rate can be established as follows. Let the rate at which the change of state from i to j occurs be denoted by a(ij). Then the sum a(i) of the a(ij) over all j not equal to i represents the rate at which the event "leave state i" occurs. Referring to paragraph 5-6, the mean occupancy time in state i will be 1/q(i) in the discrete-trials case, and 1/a(i) in the continuous-trials case.

Figure 6-1. Sample Discrete Markov Process (state values plotted against time, showing the occupancy time spent in each state and the transitions between states).

A scheme of continuous trials such as that just described may be termed continuous Markov trials or, more precisely, Markov-Poisson trials, since changes in outcome occur at isolated points in time; the distinction from discrete Markov processes is that, in the Markov-Poisson trial, the time interval between trials is not predetermined. They are the Poisson analogs of Markov chains. The continuous-time processes in question are sometimes termed merely "continuous-time Markov processes (or chains)," but this is not specific unless the Poisson nature of the trials is specified. The processes are often described and introduced in terms of the differential equations given in paragraph 6-4. But for the identification of Markov processes, and especially for distinguishing them from semi-Markov processes so as to control applications adequately, it is much better to regard them in terms of the above state-occupancy analysis, which is developed further in the paragraphs which follow. For convenience of reference and comparison, the two classes of processes, discrete-trials Markov chains and continuous-trials Markov-Poisson processes, will be discussed simultaneously, the discrete-trials and continuous-trials forms of each statement being given in parallel.
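The occupancy-time view of a Markov-Poisson process translates directly into a simulation. The sketch below (illustrative only; the transition rates a(ij), the horizon, and the seed are arbitrary assumptions) draws an exponential occupancy time with total rate a(i) for the current state, then selects the next state j with the conditional probability a(ij)/a(i), and records the fraction of time spent in each state.

```python
import random

a = {                      # arbitrary transition rates a(i,j) among three states
    0: {1: 1.0, 2: 0.5},
    1: {0: 2.0, 2: 1.0},
    2: {0: 0.5, 1: 0.5},
}

rng = random.Random(4)
state, t, horizon = 0, 0.0, 50_000.0
time_in_state = {0: 0.0, 1: 0.0, 2: 0.0}

while t < horizon:
    rates = a[state]
    a_i = sum(rates.values())                 # total rate of leaving the state
    stay = rng.expovariate(a_i)               # exponential occupancy time
    time_in_state[state] += min(stay, horizon - t)
    t += stay
    targets, weights = zip(*rates.items())
    state = rng.choices(targets, weights=weights)[0]   # conditional transition

print({s: round(v / horizon, 3) for s, v in time_in_state.items()})
```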
But for the identification of Markov processes, and especially for distinguishing them from semi-Markov processes so as to control applications adequately, it is much better to regard them in terms of the above state occupancy analysis which is developed further in the paragraphs which follow. For convenience of reference and comparison, the two classes of processes, discrete-trials Markov chains and Markov-Poisson processes of continuous-trials, will be discussed simultaneously, using a brackets to allude to the two cases thus ~disct~ete triatls. fl.. ~con muous na1s 6-2.3 Transitions Suppose then that at timet = {~j the process S( ) takes on the value i. Then a scheme of discrete } . f JBernoullit )p f. type IS. establ. h d ( IS e · d ·f S( ) h as unmterrupte · dl h d . tna1s o . or contmue , 1 y a { con muous t ~ msson the value .i) for some {~~;:~s} immediately prior to {~J In the scheme of trials, the value i continues . Jon trial number n = m+kt . . to recur as the process outcome until tat timet = To+x f the value ~ for the first time does . .t. t . 1 .f fwith probability l not occur. In t he case terme statwnary trans1 IOns on any curren tna, ~ .1 t o recur tat rate J d ai s • l L: p,, = 1-p;; = q;) i¢i ~ . . . . j robabilitiestf.;; a., ~ a, fIn the case of nonstatmnary tcanS>bons the corresponding \~tes fare LP;1 (n) = 1-p;;(n) = q;(n)) }i¢, ~ . Jnth 'trial} . As an mstantaneous part of the ltrial at t the outcome J is then rate L a,,(t) = a,(t)( j¢i } selected for the value of{~~~))} with probability{~:; : ~;;:~:}in the case of stationary transitions, and in the case of nonstationary transitions with probability{~::~~).: ~;;~~)j~:~7}}· 6-2.4 Occupancy Analysis The above analysis is based on the assumption that the lengths of ~ts~rin~s tof trilals} during which ~ 1me m erva s . t. I h {geometric l b b"l"t d. .b . I I" any process ou come va ue t I ~ occurs con muous y ave t" If. pro a 1 1 y 1stn utwns. n app I exponen m cations this requirement may be difficult to fulfill. Non-Markovian processes are discussed in paragraph 6-6. Markov process can be specified by either of two alternative sets of specifications: ,-1) . . {probabilitiesl . , Markov the transitiOn t f a;j can be given; or ra es (2) semi-Markov the mean occupancy times{~;=~} can be given and the conditional transi tion probabilities C;; can be given. 6-7 In practice, the latter specifications are likely to be derived more easily or given directly by the recorded values of numerical observations of the process, as discussed earlier. To acquire the transition probabilitiesl b d' . d'ffi h b · 11 h d f a;k y 1rect measurement 1s more 1 cut1 because t e process must e contmua ywatc e {rates · or sampled to detect transitions and determine their frequency. For the semi-Markov specifications, only the mean occupancy-times are needed along with the conditional transition probabilities C;; given a transition. The uses of semi-Markov processes are still under development. One property which enhances the usefulness of such processes is that the steady-state distribution of the process depends only upon the means of the occupancy times (as well, of course, as upon the transition probabilities C;;) but does not depend upon the higher moments of the occupancy times. In effect, formulas for the steady-state distribution of the process are distribution-free with respect to the distribution of occupancy times. The steady-state probabilities are s; = L k1 C;; "S; where k; L k;C,j i >"i 81 being the mean occupancy time for state j. 
6-3 Examples (1) Examples are manifold. The queueing (waiting line) operations described in chapter 12 afford the most popular illustrations. For example, suppose that a process of arrivals occurs at a service station consisting of a single server. Suppose that the arrival process is Poisson, so that the time between arrivals has an exponential distribution with mean a-1• Suppose that units that cannot be served immediately, because the server is still busy with units that arrived previously, must form a single queue to be served in order of arrival. Suppose that the service time for a single unit has an exponential distribution with a mean of S. Consider then the number of arrived units that are still present, either waiting or in service. Let this number at time t be S(t). Then S(t) is a Markov process, with continuous trials and the following specifications: Markov semi-Markov a,. n+! = a, n ~ 0 ao = a, an = a+s, n ~ 1 Cn n+1 = a/(a+s) n ~ 1 a,., n-1 = s = (S)-1 , n ~ 1 1 n = 0 a;; = 0 otherwise Cn,., -1 = sj(a+s) n ~ 1 0 n ~ 0 The steady-state solution given in para 12-11 MIM /1 provides a simple illustrative exercise that may be readily deriverl using the methods of the paragraphs which follow. (2) For an alternative example, see the Poisson duel of chapter 15, between two duelists (or weapons) with stationary hit probabilities (rates) h; and h2, respectively. The state process S(t) is now a binary process with only 0 and 1 for values, 0 if either duelist has killed the other. The transition rates are now ao1 = 0. a1o = h1 + h2. These may now be rangedependent, and for closing or parting duelists will produce nonstationary transitions. (3) The following hypothetical example paraphrases a variety of interesting interpretations. The solution to it is worked out below to illustrate the methods being described through the course of these paragraphs. A rabbit in a closed box with two sections A and B that are accessible to each other, is subjected to a process of shocks. Each shock is a trial, as a result of which if the rabbit is in one section of the box he may jump to the other section, or remain where he is, with given probabilities. The rabbit may be interpreted in many ways, for example: (a) The "rabbit" is a person whose viewpoint is either A or B, (for example, a political loyalty, a preference for a certain type of soap, etc.). The person's strength of view 6-8 point will be seen in what follows. Trials occur when organizations, urging the two viewpoints in competition with each other, direct propaganda at the person. As a result of a trial he may or may not change his viewpoint. (b) A worker changes jobs from time to time. In this case, more than two possibilities A and B have to be recognized, as of course they may be in the previous illustration. For simplicity, only the two-state case will be analyzed. 6-4 Numerical Behavior with Time Either of two symbolic formulas is used, as convenience may recommend, in presenting the numerical behavior with time of a dependent random process in time. One is to discuss the transition probabilities s(x,T[x.,T.) that S(T) = x given that S(T.) = x•. The other is to assume some definite value s(y;O) for the probability distribution of S(T.) and then study the behavior of the probability distribution s(x;T) of the value of S(T). These alternatives are illustrated in the discussion which follows. 6-4.1 Differential Behavior (1) Discrete Trials. Let s;(n) = prob {S(tn) = il and let s(n) denote the vector [s1(n), ...]. 
In terms of the matrix T(n) of transition probabilities on the nth trial, then s(n) = s(n -1) T(n -1). The multi-step transition probabilities s(j,n[i,m) satisfy the Chapman-Kolmogorov equation: for any n ;::: r ;::: m ;::: o s(j,n[i,m) = L: s(k,r[i,m) s(j,n[k,r) k where s(j,n[i,m) = prob {state isj at step n given that the state is i at step m 1. This simply restates the matrix product relationship Tm+t ... Tn = (Tm+t ... Tr)·(Tr ... Tn) n > m (2) Continuous Trials of Poisson Type. Referring to the nature of trials of Poisson type, note that as 6.t ~0, e-a/l)dl rX a .(y) dy for j = i{s(j,t+6.t[i,t) = { Jt+dte_}t ' L: a;k(x) s(j,t+6.t[k,x)dx for j =P if l k ;<' i={1 -a;(t)6.t + terms in (6.t) 2 for j =i} a;1(t)6.t +terms in (6.t) 2 forj =Pi Substitute these relations into the following Chapman-Kolmogorov equation for continuous trials: s(j,T1 [i,To) = 2..: s(k,t[i,T.) s(J",Tlik,t) for any t between Tn and T1 in two different cases, as follows: k (1) forward equations: in the limit as t ~T1 ~s(J",t[i,T.) = -aii(t) s(j,t[i,T.) + 2..: s(k,t[i,To) ak1(t) k;<'j which may be written in matrix form as ~s(t[i,To) = s(t[i,T.)A(t) where A(t) is the stochastic differential matr~x [a;1 (t)] (i.e., its row-sums are zero) in which for i =P j, a;1(t) ;::: 0 and a;;(t) = a;(t) = -L: a;1 (t). (2) backward equations: in the limit as t ~T ., substitution yields ~ s(j,Tt[i,t) = a;;(t) s(j,T1[i,t) -{; a;k(t) s(j,Tt[k,t). 6-9 The two systems, forward and backward differ in scope of usefulness. For example, if steady-state solu tions exist they may be found by putting the~ = 0 in the forward equations. Example of the rabbit (1) Discrete Trials. The case in which the rabbit of paragraph 6-3 is shocked in trials at discrete times is the same type of problem as the example of the vehicle travelling around the network that was dealt with in detail in chapter 3. As a result of any trial, the rabbit jumps from section ito sectionj with probability p;j, i = A,B, j = A,B, where if j = i, the rabbit's position is interpreted as not changed by the trial. Only the transition matrix of a stationary case is illustrated. The jump probabilities can be given by the matrix. T = [1-PAB PAB J PnA 1-pBA . th t T n. th [s(A,nlA,O) srB,nJA,O)J S o a Is en s(A,nlB,O) s(B,nlB,O) . (2) Continuous Trials. In this case the rabbit is subjected to continuous trials, and jumps at the transition rates aAn and anA· Let s.1 (t) be the probability that at timet the rabbit is in section A of the box. The forward Chapman-Kolmogorov equation is SA (t + flf) = SA (f) [1 -a A nflf] + SB(t) aB.1llf where sn(t) = 1 -sA(t). A similar equation can be written for sn(t + llt), but is not an independent equation. One of the equations is thus always dependent on the others since the state probabilities at any time t must sum up to 1. The forward differential equation then is ds~?) = -a,tnSA(t) + anA [1 -sA(t)]. In matrix form this is SA(t)] [SA(f)] [ -aAB UAB] [ ~ sn(t) = Sn(t) . anA -anA -UAB UAB] The matrix A = is a differential matrix, (i.e., the row sums are zero). [ UHA -UJJA 6-4.2 General Formal Integrated Solution In formal terms the integrated solution is: DISCRETE TRIALS CONTINUOUS TRIALS n l1 T; is the matrix of transition probabilities Jot AC.rl a., is the matrix of transition probabilities i= A(x) = [a;i(x)l 1 s(n) = s(O)[T1T2: ... 
CONTINUOUS TRIALS:
  A(x) = [a_ij(x)]; exp(Integral from 0 to t of A(x) dx) is the matrix of transition probabilities;
  s(t) = s(0) exp(Integral from 0 to t of A(x) dx);
  s(j,T_1 | i,T_0) is the ijth element of the matrix exp(Integral from T_0 to T_1 of A(x) dx).

STATIONARY TRANSITIONS:
  T^n gives the transition probabilities;  s(n) = s(0) T^n;
  e^{At} gives the transition probabilities;  s(t) = s(0) e^{At},  A being the matrix [a_ij].

6-4.3 Formal Solution, Ergodic Case

When the process S(t) is ergodic (cf. ch 5) and a steady-state limit exists, then the above solution can be expressed in more revealing form as shown below.

(1) Discrete Trials. The matrix T^n can be represented as a sum of matrices in the particular form T^n = T^inf + V_n, where T^inf is an ultimate component and V_n is a transient component. Associated properties are as follows:

(a) The transient component V_n is a differential matrix of the form Sum over k >= 0 of n^k r^n V_{nk}, the V_{nk} being differential matrices and |r| being less than 1. As n -> infinity, the matrix V_n -> 0. The matrices V_{nk} are the transient matrices associated with the process. They represent geometrically decreasing sequences and are typical of Markov processes. Each row sum of V_n is equal to 0, so that these transient components of the n-step transition probabilities p_ij^(n) may be regarded as perturbations applied to the limiting state.

(b) If the process is ergodic, then as n -> infinity a steady-state distribution exists, i.e., lim T^n = T^inf = a constant stochastic matrix with equal rows (each column thus being a vector of constants). In this case each row of T^inf = lim s(n) as n -> infinity.

(c) If the process is not ergodic, there is a periodic limit for T^n, i.e., in place of T^inf there occurs a matrix sum of T_1^inf, . . . , T_p^inf, where p is the length of the period. Each matrix T_i^inf is a stochastic matrix with equal rows, and the following notational relationship holds: for i = 1, . . . , p-1, T_{i+1}^inf = T_i^inf T, and T_1^inf = T_p^inf T.

The matrix T^inf is found by solving the simultaneous set of equations s(inf) = s(inf) T for the steady-state probabilities s_j(inf) in the vector s(inf).

Examples of the Rabbit. The rabbit's position is given by the vector s(n) = [s_A(n), s_B(n)] = s(0) T^n. The matrix T^inf = [s(inf)] may be found by solving the vector equation s(inf) = s(inf) T for s(inf), the solution being s(inf) = [p_BA/a, p_AB/a] where a = p_AB + p_BA. It can then be verified by induction that T^n can be formed as

  T^n = T^inf + (1-a)^n (I - T^inf).

Equivalently, s_A(n) turns out to be a^{-1} p_BA + (1-a)^n [s_A(0) - s_A(inf)]. The quantity a is fundamental and may be interpreted as the activity coefficient for the rabbit, since if a > 1 the probabilities s_A(n) oscillate between two paths that converge geometrically towards s_A(inf), as in figure 6-2. These probabilities, of course, correspond to the average frequency with which the rabbit occupies the corresponding boxes on the given trial. Note that the values of the probabilities s_i(n) after each trial may be regarded as posing a new value of the vector s(0). The transient behavior of the solution is no different from that of any system of difference equations with constant coefficients. In a typical application, these (conditional) probabilities will be predictions of future location of the rabbit, his present location being given by the initial probabilities.

(Figure 6-2. State versus trials; hypoactive rabbit.)

(2) Continuous Trials. In a fashion quite parallel to the case of discrete trials, the matrix e^{At} can be analyzed into a steady-state component plus transient components, in the canonical form

  s(t) = s(0) e^{At} = s(0) [M + Sum over k of t^k e^{-u_k t} W_k],

where the u_k are constants and each term t^k e^{-u_k t} W_k vanishes for large t. The component M is a stochastic matrix of the limiting probabilities, i.e., each row of M is the vector s(inf) of steady-state probabilities.

(a) Finding the Steady-State Solution. If only a steady-state solution is desired, then the transition probabilities (or rates, in the case of continuous trials) must be stationary. The steady state may then be found by putting the time derivatives of the s(j,t | i,T_0) equal to 0 in the forward equations above, and solving the remaining (homogeneous) system of simultaneous equations for the steady-state probabilities. For this, denote lim as t -> infinity of s(j,t | i,T_0) by s_j. Then the forward equations in the steady state are

  0 = -a_j s_j + Sum over k not equal to j of s_k a_kj,

or, in matrix form, 0 = s A.

An outcome j is a possible sequel to an outcome i if {p_ij > 0 / a_ij > 0}; if j is not a possible sequel to i, note that {p_ij / a_ij} = 0. A set C of outcomes is termed a trapping set if no outcome j, not in C, can be a possible sequel to any outcome i in the set C. Thus once a sample history has an outcome in a trapping set C, all subsequent outcomes are restricted to that set C. If C consists of a single trapping outcome i, then {p_ii' = 0 / a_ii' = 0} for all i' not equal to i. Such an outcome i is termed an absorbing state. If C is a proper subset of the set U of all possible outcomes of a Markov process, then the transition matrix may be (re)arranged so that off-diagonal blocks of 0's occur in the rows corresponding to the outcomes i in C, in those columns j that correspond to outcomes outside the set C. A Markov process and its set U of outcomes are termed irreducible if no subset of U is a trapping set. Every state is then a possible sequel of any state.

In addition to reducibility, degree of recurrence also explains the longer-term transition probabilities. Starting from outcome i on any trial, let {f_ij(n) / f_ij(t)} denote the {probability / probability density} that the outcome j next occurs for the first time {on the nth following trial / at a time t following}. Numerical recurrence-relations for computing these quantities are given in the paragraphs on renewal processes. Let f_ij = {Sum from n = 1 to infinity of f_ij(n) / Integral from 0 to infinity of f_ij(t) dt} denote the probability that the outcome j ever follows outcome i. Then f_ij <= 1.

6-6 Non-Markovian Processes

For a simple mathematical example of non-Markovian processes, let A_n = A(t_n) be a discrete-valued independent discrete-time process, let V_n = A_1 + . . . + A_n, and let S_n = V_1 + . . . + V_n. Then V_n is a Markov process, but S_n is not, since S_n = S_{n-1} + V_n = S_{n-1} + A_n + V_{n-1}, and V_{n-1} must then be known. However, let J_n be the joint two-dimensional process (V_n, S_n). Then J_n is Markov, for

  prob{J_n = (i,j)} = Sum over k of prob{A_n = k} prob{J_{n-1} = (i-k, j-i)},

or, equivalently, prob{J_n = (i,j) | J_{n-1} = (u,v)} = prob{A_n = i-u} if v = j-i, and 0 if v is not equal to j-i. In the terminology of statistics, S_n is a "marginal" process. It is thus necessary to find the (Markovian) joint distribution of V_n and S_n to specify the distribution of the non-Markovian process S_n. This example illustrates that higher sums of an independent process are not Markov processes. For example, let the acceleration of a missile be an independent process; then the velocity is a Markov process but its position (the integral of velocity) is not. Of course, position can still be analyzed. In the above example V_n acts like a "supplementary" variable for calculating the distribution of S_n. Operational examples often require the use of such supplementary variables to explain (predict) the behavior of a given random process variable.
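Before the operational examples, a minimal sketch of the device just described: the distribution of the non-Markovian sum S_n is obtained by propagating the joint (Markovian) distribution of (V_n, S_n) with the recursion above. The increment distribution chosen here (A_n equal to 0 or 1 with probability 1/2 each) is hypothetical and serves only to make the sketch concrete.

```python
from collections import defaultdict

# prob{A_n = k} for the hypothetical independent increments.
a_pmf = {0: 0.5, 1: 0.5}

# Joint pmf of J_n = (V_n, S_n); at n = 1, V_1 = A_1 and S_1 = V_1.
joint = {(k, k): p for k, p in a_pmf.items()}

for n in range(2, 6):
    nxt = defaultdict(float)
    for (v_prev, s_prev), p in joint.items():
        for k, pk in a_pmf.items():
            v = v_prev + k                   # V_n = V_{n-1} + A_n
            nxt[(v, s_prev + v)] += p * pk   # S_n = S_{n-1} + V_n
    joint = dict(nxt)

# Marginal distribution of the non-Markovian process S_5.
s_pmf = defaultdict(float)
for (v, s), p in joint.items():
    s_pmf[s] += p
print(dict(sorted(s_pmf.items())))
```

The supplementary variable V_n is carried along only to make the propagation Markovian; it is summed out at the end to give the marginal distribution of S_n.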
Consider the following examples:

(1) Although the arrival of jobs being queued up at a single server at a work-center may be a Poisson process, if the probability distribution g(x) of the time-length x of each job is not an exponential distribution, then the probability at any trial that a job in process will be completed at that trial will depend on how long the job has been in process. Consequently the number N(t) of jobs unfinished at time t will not be Markov. But if any job is in process at time t, let t - A(t) be the time at which work started on it. Then the joint process (N(t), A(t)) is Markov, where A(t) is left undefined if N(t) = 0.

(2) Controlled inventories are non-Markovian. A typical inventory decreases under demand and increases with receipts. If demand is an independent process, then the decreases are independent increments, and the new value S(t_n) at the end of the nth period of activity = S(t_{n-1}) - [demand in the nth period] + [receipts in the nth period]. Usually the receipts are not independent of the size of the inventory (if receipts were also independent increments to inventory, the theoretical inventory process would be uncontrolled and would show an equal probability of all possible values of inventory from minus infinity to plus infinity). In fact, since the replenishments to inventory typically depend upon the value of the inventory at some previous time (earlier by an interval equal to the replenishment lead-time), inventory is not immediately Markovian.

(3) The question of whether a given process is Markovian or not is ultimately a question of how its distribution is to be calculated numerically. This is related to the question of the mechanism that generates the process. For example, suppose that demand in continuous increments on a single-item inventory is a statistically independent process and is backordered to wait if inventory is exhausted. The inventory is reviewed periodically, at equally spaced time-intervals of length V. At each review there is requested as replenishment for the inventory a quantity exactly equal to the total demand received since the last review. Each replenishment requested is received after a constant (replenishment lead time) delay of L time units. To calculate the probability distribution for such an inventory via a Markov representation, it would be necessary to carry along at each review an n-dimensional vector consisting of the n replenishment amounts requested at the last n reviews, where L = nV + u, u < V. But if g(x,T) is the probability density (pd) that total demand in an interval of length T equals x, then the probability density of the amount y by which inventory is below the maximum at any random instant after the first replenishment is received is readily seen to be

  (1/V) Integral from 0 to V of g(y, L+t) dt.

This illustrates how a direct investigation may be easier than transformation to a Markovian analysis.

(4) By contrast, the following computation requires about five minutes on a high-speed computer. Arrivals at a single-server queue (e.g., an airport runway) are a nonstationary Poisson process with rate a(n) in the nth hour, n = 1, . . . , 24. Assume that to service (land) an arrival takes exactly one minute. The number of arrivals N(k) on hand but not served at the end of the kth one-minute interval, k = 1, . . . , 1440, during a whole day is a simple Markov chain:

  a_{i,i-1+j}(k) = P_j(n_k),  i >= 1, j >= 0
  a_{0,j}(k) = P_j(n_k),  j >= 0

where P_j(n_k) = the Poisson probability [a(n_k)]^j e^{-a(n_k)}/j! and n_k is the hour in which the kth minute falls. The best, and an effective, computational approach is simply to compute the k-step transition probabilities by multiplying the matrices.
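A minimal sketch of that matrix computation follows. The hourly arrival pattern is hypothetical, the state space is truncated at a maximum queue length, and a(n) is taken here as the mean number of arrivals per one-minute interval during hour n; none of these choices come from the pamphlet.

```python
import numpy as np
from math import exp, factorial

def minute_matrix(mean_arrivals, n_max=50):
    """One-minute transition matrix: exactly one landing per minute, Poisson arrivals."""
    p = [mean_arrivals ** j * exp(-mean_arrivals) / factorial(j) for j in range(n_max + 1)]
    M = np.zeros((n_max + 1, n_max + 1))
    for i in range(n_max + 1):
        start = max(i - 1, 0)                      # one aircraft lands unless none are waiting
        for j, pj in enumerate(p):
            M[i, min(start + j, n_max)] += pj      # arrivals beyond n_max lumped into the last state
    return M

# Hypothetical mean arrivals per minute for each of the 24 hours (a midday peak).
a = [0.3 + 0.5 * exp(-((n - 13) / 3.0) ** 2) for n in range(1, 25)]
hourly_M = [minute_matrix(rate) for rate in a]

s = np.zeros(51)
s[0] = 1.0                                         # the runway queue is empty at the start of the day
for k in range(1, 1441):                           # 1440 one-minute trials in the day
    s = s @ hourly_M[(k - 1) // 60]

print(round(float(s @ np.arange(51)), 2))          # expected number on hand at the end of the day
```

Repeated multiplication by the per-minute matrices is exactly the k-step transition computation referred to above; on a modern machine it runs in well under a second rather than the five minutes of 1967.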
6-7 Monte-Carlo Simulation of a Markov Process

In a Monte-Carlo simulation it will almost always be simplest and most efficient computationally to follow the process as a semi-Markov process. For continuous trials, this is the only completely accurate way. As a consequence, the simulation will move directly from one state-change to the next. The value of the intervening occupancy-time is statistically sampled. At the end of the occupancy, the state transition is determined from the probabilities C_ij, j not equal to i. The simulation will generate a sample process such as that illustrated in figure 6-1.

6-8 Statistics of Stationary Renewal Processes

In any given event process, when some particular event of a certain type occurs at some time T, attention may be directed to the length, I = T* - T, of the time-interval until the next event of this same type occurs, at time T*. I is termed the inter-event interval. I may well be a random variable. If its probability distribution depends at most only upon the value of T, then the event process is called a renewal process. Each recurrence of the event is then termed a renewal. If the probability distribution of the interval I is a constant independent of T, then the renewal process is termed stationary.

For given t > 0, let I(t) denote the probability that I > t, and let i(t) denote the corresponding probability density function. (In applications, I is almost never usefully exactly 0, although it may be nearly 0.) It is assumed that i(t) is everywhere defined, i.e., that there is no finite probability that I is 0 or any other single value.

6-8.1 Current Age

In practice, a renewal process is apt to be examined at a time T which falls between renewals. At such a time the current age of the process is the amount of time A_T that has elapsed since the last renewal.

6-8.2 "Mortality" Depending on Age

If, when the process is observed, the current age since last renewal at some time is a, then I_a, the amount of time that will elapse before the next following renewal, is termed the "mortality interval at given age" in applications to failure or death, or more generally the age-dependent interval until renewal. The probability density that I_a = t, symbolized as i(t;a), is

  i(t;a) = i(t+a)/I(a).

i(t;a) is thus the conditional probability density of renewal at time t hence, given that the last renewal occurred time a ago, time being measured from the instant of observation or consideration of the process. Note that i(t;0) = i(t).

6-8.3 Hazard Rate

For a fixed age a, consider i(t;a) for t = 0, i.e.,

  h(a) = i(0;a) = i(a)/I(a).

In applications to equipment failure, h(a) is termed the hazard rate (or function). Knowledge of this rate for every a is equivalent to knowledge of the probability density i(t) itself, either one sufficing to determine the other. To wit:

  h(a) = i(0;a) = -(d/da) ln I(a),

so that

  i(a) = h(a) exp(-Integral from 0 to a of h(t) dt).

This interesting way of writing any probability density function has direct physical interpretation when the renewal interval is the interval between failures of a system. The function h(t) is then the rate at which the system is receiving "shocks" at current age t, i.e., the shock-rate has to be measured in units that may change systematically in relation to the system's age. (Cf. ch. 13.) In combat, this same type of representation occurs for survival during a duel. (Cf. ch. 15.)

In practice the current age of a renewal process may readily be known or determinable at the time of any renewal, so that observations only of the hazard rate can be converted to estimates of the "lifetime" probability density i(t). From the above,

  i(t;a) = h(t+a) exp(-Integral from a to t+a of h(x) dx).

Consequently, it follows that i(t;a) is independent of a if and only if h(a) is a constant h. Then i(t) is of the form h e^{-ht}, the exponential distribution. The way in which the mortality rate depends upon a can be significant for decisions about taking action that is intended to anticipate the recurrence of renewal (e.g., at what age since maintenance should the equipment be maintained again). This is developed in chapter 13.
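As a check on these relations, the following minimal sketch recovers the lifetime density i(t) from an assumed hazard rate h(a) by numerical integration and verifies that a constant hazard reproduces the exponential distribution. The hazard functions used are hypothetical illustrations, not data from the pamphlet.

```python
import numpy as np

def lifetime_density(h, t_grid):
    """i(t) = h(t) * exp(-integral from 0 to t of h(x) dx), evaluated on t_grid."""
    h_vals = h(t_grid)
    # Cumulative integral of the hazard by the trapezoidal rule.
    H = np.concatenate([[0.0],
                        np.cumsum(0.5 * (h_vals[1:] + h_vals[:-1]) * np.diff(t_grid))])
    return h_vals * np.exp(-H)

t = np.linspace(0.0, 10.0, 2001)

# Constant hazard h = 0.5: i(t) should equal the exponential density 0.5*exp(-0.5*t).
i_const = lifetime_density(lambda x: np.full_like(x, 0.5), t)
print(np.max(np.abs(i_const - 0.5 * np.exp(-0.5 * t))))       # small numerical error only

# A hypothetical increasing hazard h(a) = 0.2*a, typical of wear-out.
i_wear = lifetime_density(lambda x: 0.2 * x, t)
print(float(np.sum(0.5 * (i_wear[1:] + i_wear[:-1]) * np.diff(t))))   # integrates to about 1
```

The same routine, fed an empirically estimated h(a), gives the estimate of the lifetime density referred to in the text.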
6-8.4 Initial Renewal

Just as a lifetime may first be observed beginning at some age, so also a given process of recurring renewals may first be observed after its beginning, at some time T that does not coincide with a renewal. Then the interval I_T from T to the first renewal after T will have a probability distribution depending on T. i_T(t) = the probability density that I_T = t must be given as a starting condition of calculations about the further process of renewals after T.

6-9 The Renewal Counting Processes

The two fundamental event-counting processes in the case of renewals are:

(1) t_T(N), the time measured from the time T of the initiation of observation until the Nth renewal thereafter occurs, t_T(N) = I_T + I_2 + I_3 + . . . + I_N, where I_2 through I_N are inter-renewal intervals I.

(2) N_T(t), the number of renewals that occur in the interval (T, T+t), t being > 0.

A basic relationship exists between (1) and (2), namely: N_T(t) < k if and only if t_T(k) > t (e.g., the Poisson-Erlang relationship in para 5-6), so that

  prob{N_T(t) = k} = prob{t_T(k) <= t} - prob{t_T(k+1) <= t}.

The following equations detail the relationship in a way often useful in calculations. Let r_T . . .

. . . For a region R of possible values of x, the total probability of detection D will be

  D = Integral of p(x)[1 - e^{-s(x)}] dx

if p(x) is the probability (density) that the object hunted is at x. If only a limited search effort E = Integral of s(x) dx is available, what value should the search-effort allocation-function s(x) have at each point x, i.e., how much searching should be done at each point so as to maximize D given E? The solution is given in chapter 15 (a small numerical sketch of the form the allocation takes follows this list). Submarine search was the first tactical problem for which a mathematical methodology, search theory, was extensively developed (in World War II naval operations analysis). The applicability of the model does not depend upon the fact that the objects searched for in that development were submarines, nor even that it might be well applied to searching for guerrillas in a jungle. It subsequently became the classic model of allocation of effort.

(7) Dynamic Programming of Sequential Action. Short time horizons for programming activity tend to sacrifice long-term gain, the accumulation of which may be significant or critical. In the past a methodology for sequential decisioning has been lacking. Dynamic programming is a new method which has demonstrated its practicality in a number of problems. Among these are the sequential scheduling of any activity, especially actions whose structure is an investment in some tactic for a period of time. Numerous examples are given throughout the volume. Details of dynamic programming are presented later in the chapter.
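The allocation referred to in the search problem above takes the well-known form s(x) = max(0, ln[p(x)/lambda]), with lambda chosen so that the allocation exhausts the available effort E; the derivation is deferred to chapter 15 as noted. The sketch below is a minimal discretized illustration of that form; the target-location density and effort level are hypothetical.

```python
import numpy as np

def allocate_search_effort(p, cell_size, E):
    """Allocate effort s_i = max(0, ln(p_i / lam)) so that sum(s_i) * cell_size = E."""
    p = np.asarray(p, dtype=float)

    def total_effort(lam):
        return cell_size * np.maximum(0.0, np.log(p / lam)).sum()

    # Bisect on lam: the total effort used decreases as lam increases.
    lo, hi = 1e-12, p.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_effort(mid) > E else (lo, mid)
    lam = 0.5 * (lo + hi)
    s = np.maximum(0.0, np.log(p / lam))
    detect = cell_size * (p * (1.0 - np.exp(-s))).sum()
    return s, detect

# Hypothetical bell-shaped target-location density on [0, 10], total effort E = 4.
x = np.linspace(0.0, 10.0, 101)
dx = x[1] - x[0]
p = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)
p /= p.sum() * dx                      # normalize so p is (approximately) a density
s, D = allocate_search_effort(p, dx, E=4.0)
print(round(D, 3))                     # total detection probability achieved with effort 4
```

The effort concentrates where p(x) is largest and drops to zero where the target is unlikely, which is the qualitative behavior the chapter 15 solution exhibits.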
8-2 The Decision Process

8-2.1 General Procedure

The formal procedure for decision making consists of the following steps:
(1) Identifying possible courses of action and the effort and resources required for each;
(2) Identifying requirements, objectives, and other constraints;
(3) Selecting measures of effectiveness;
(4) Establishing an analytical framework or model in which the courses of action may be compared;
(5) Using the model to measure the effectiveness of alternative actions;
(6) Selecting the best or optimum course of action;
(7) Determining the sensitivity of action to changes in effort and effect.

In the succeeding paragraphs, the above steps will be further identified. However, it can be difficult to perform these steps without regard to each other, whether the decision happens to be made by the above formal procedure or not. For example, the availability of resources and the existence of requirements constitute constraints upon action, and determining the boundaries of sets of alternatives that correspond to constraints is an essential part of the task of identifying all of the alternatives.

The above outline does not presume the critical element of the particular decision, and in any given case some substep of the above may be the most important step of all. For example, in (5) above, whether there is an enemy opposing action, or whether the action is merely an undertaking against nature, may be of critical import in the decision. Again, whether the problem is going to be one of feasibility, i.e., of finding any course of action that meets all of the requirements and does not exceed the resources, or whether the problem will be one of finding the best of many feasible alternatives, may become the most important part of the process of decision in a given case. Again, in a given case the most difficult part of the decision task may be to establish the value of resources for alternative military purposes when these resources will to some extent be consumed by the action.

8-2.2 Mathematics and Alternatives of Action

Not all alternatives of action may be discoverable by mathematics, but not all alternatives of action may be discoverable without mathematics. The following two examples make these points in contrast.

(1) In reference 8-1 it is reported that in order to transport trucks to an airstrip in the interior of New Guinea in World War II, where trucks were desperately needed, the idea was hit upon of sawing the truck frames in half to fit the pieces into the small cargo aircraft available at the time, welding the frames together again after delivery; this worked. It is not clear that this was a primarily mathematical problem, in spite of the arithmetic used.

(2) A jeep is required to cross a desert, leaving a fuel base from which the distance across the desert is twice the fuel-carrying capacity of the jeep. The jeep may establish fuel caches along the way, returning for more fuel, going back to establish farther caches until, by using previously established caches, it has reached the other side of the desert. At what points should the caches be established so as to use the least fuel? This is a purely mathematical problem.

There is no one mathematical way of representing all of the alternative courses of action, nor any best way. For example, in simple games consisting of discrete actions, the alternatives correspond to types of actions and the frequencies with which they are chosen.
In allocation, the variables of decision are the amounts allocated, and these amounts, the alternatives, may be discrete mathematical quantities or continuous as appropriate. Formally, the alternatives of action correspond to the domain of definition of the mathematical variables representing the possible alternative courses. How these variables should be constructed is illustrated in this volume in the various chapters dealing with specialized activities and with the types of mathematical representations that are most effective for them.

8-2.3 Requirements and Objectives

Requirements and objectives are typically stated in terms of what is to be accomplished by action and, of the latter, two categories can be distinguished:

(1) Changes in the situation variables. These changes are recorded by the transformation referred to in paragraph 7-5.2. For example, time goes by, locations change, force-sizes change, capabilities alter, some resources have been reduced, others increased, some or all of the requirements have been produced, etc.

(2) Changes in valuations, attributable to (1). The situation may have improved, deteriorated, developed, etc. If the objective of action was to maximize the increase in a certain given valuation or valuations, then, likely as a result of action, some change in the valuation(s) in question has occurred. In addition, as a result of other actions by other organizations, the valuations of the resources in hand may have changed independently of the effects of the local action. Examples are: a type of equipment being manufactured has been obsoleted by a competitive design; or, in combat, an immediate frontal objective has been rendered worthless by the success of a flanking attack by an allied unit.

Measures of value are frequently sought in terms of such quantities as efficiency, gain, effectiveness, and figures of merit. Mathematical representations of valuations of the changes brought about by action can be regarded as falling into two categories, one a classical mathematical topic, the other a modern one:

(1) The valuation is a single quantity, or ordered set of values, which is an ordinary mathematical function of the changes. Typically it is unconstrained and varies continuously with the output of action. Examples are (a) the dollar-value of the inventory, which is composed of many items at various unit dollar-values, (b) the value, measured in present value (discounted), of the increase in productivity obtained by a given expenditure in training, (c) the value equivalent of escorts in terms of vehicle-cargo escorted, measured in terms of replacement value.

(2) The valuation is negatively infinite if the change in the situation does not lie in a certain region (the required region), but the valuation is 0 elsewhere. In effect the value of the result is not measurable, but compelling. Examples are (a) the principal example is the fulfillment of a command, or order, that a certain thing be done, such as: "Company A is to occupy the hill before noon unless 20 percent or more casualties are incurred"; "Deliver two days of supplies before 1400"; (b) a set of requirements may be a complex of two such commands and conditions, for example: "A cumulative total of S(t) units of a supply item (vector) is to be delivered (troops are to arrive, etc.) at least by time t", where the times t are specified in the statement of requirements. (A requirement is usually determined in terms of expectancies, and should not be overstated as a schedule.
Performance of a requirement in advance is something like creating an inventory.)

The value may be stated directly in terms of specified changes in the situation variables, as in the above examples, or it may be stated in terms of valuations of the situation variables, for example: "Do not exceed a budget of four thousand dollars for all effort expended". Thus a constraint that the change D in action be <= m says that, as an objective to be maximized, m - D is worth 0 if D <= m and minus infinity if D > m. As noted before, an action whose result or value must lie in a region is termed "constrained". It will be seen that the term requirements (as defined in ch 7) is identified with changes in situation whose valuation is given in terms of (2) above. The term objective is used to cover the type of value given in (1).

It should be noted here that the typically sequential nature of productive activity and of action means that a requirement at one stage of the sequence will generate or induce a requirement at earlier or prior stages. The induction can vary from precise transmission to mere consistency, depending upon the variety of ways in which prior stages of production may possess alternatives of capability for meeting the requirements of later stages.

8-2.4 Constraints

Three general categories of constraints may be identified:

(1) Organizational constraints, including those due to enemy action, the prescribed requirements of command, and the limitations of organizational resources. These constraints may alter in a recurrence of the situation or by deliberate redesign of larger organization activity;

(2) Physical constraints that typically cannot be altered except by technological change, including capacities;

(3) Mathematical formulation of problems will sometimes require a third type of constraint, a mathematical constraint (or constraints) that is needed to eliminate extraneous solutions, for example, requiring that the solution to a problem be an integer (or be a positive number) when the physical quantities involved occur in whole numbers (are positive). These constraints will not concern us here.

The principal organizational constraints occur in the form of (1) resource capacities, and (2) minimal requirements. Others may occur, for example, directed limits upon how far action is permitted to be carried. Limitations upon action that can be identified as inherent in the structure of action (a machine cannot be run backwards, for example) are apt to be included under mathematical constraints. Requirements as forms of constraints were discussed in the preceding paragraph. By resource capacities are meant limits upon capabilities, as described in paragraph 7-5.1, and thus typical limits (a) upon numbers of units of given capability that are participating, (b) due to inertial involvements (which may be programmable for gradual change, through the course of several successive actions), (c) due to amounts of supplies available.

Types of physical constraints involving capacity include:

(1) Velocities. Loading-time and minimum firing intervals will limit the firing rate of a weapon and thus the rate of combat (cf. Combat). The speed of operation of a machine will usually have a maximum limit, and even below this limit there may be some exchange of quality of output with speed that puts a maximum on the rate of output of given average quality. The velocity of a vehicle or soldier or force unit in a given environment will limit transportation. Analogous limits will be found in communication.
The speed of which a worker is capable is analogous to machine-speed limits, but can respond markedly to motivation.

(2) Area and Volume. The capacity of a storage area, the volume of a warehouse or tank, the holding capacity of a transportation link.

(3) Density. Area and volume limits will set limits on the density of a given thing in a given area or volume, for example: vehicles on a highway, ships in a canal, aircraft in the air, weapons in a given area, troops per given area, etc. In practice, such density limits are seldom achievable except under undesirable conditions for activity (e.g., all vehicles stalled, the canal clogged, too close for breathing, sanitation, etc.).

(4) Flow. Capacities on flow may appear to arise as a composite of the capacities on velocity and on density, but experience often shows that actual capacities are much lower than such theoretical limits because of neglected factors. For example, vehicular flow on highways tends to about half the theoretical limit. The principal neglected factor is traffic congestion.

(5) Traffic and Congestion. Turbulence may develop in physical flow, or congestion in traffic. When a machine is required to do a large variety of tasks in succession, the flow-rate of tasks through the machine may depend upon the order in which the tasks are done because of typical variations in the amount of time required to change the machine over from one type of task to another. This phenomenon attends all types of task performance. The flow of production through a factory will typically decrease as in-process inventories become so large as to clog passageways. At a service center the jobs waiting to be done may slow the flow of job traffic. In practice, the flow capacity of a transportation link is better taken as sufficiently moderate to ensure satisfactory flow.

8-2.5 Selecting Measures of Effectiveness

It is often emphasized that the most important task in decision is the relational task of relating effort to effect. The measure of this relationship should hold throughout the range of alternatives of action, or hold at least near the obviously best courses of action sufficiently to locate them. No universal forms represent the return function of effectiveness for given efforts. The nature of the effort, and the return, depends upon the activity. Usually the best that can be done is to devise approximate measures of effort and approximate measures of effect. Numerous measures that have proved effective are given in later chapters. The presumption that mathematics is less capable than the qualitative intellect is usually false; measures that cannot be put into mathematical form may not be clear at all.

For some types of action, the development of suitable measures may be a significant task. Most operations research case histories report instances in which such development was necessary, usually because the type of problem being studied had never been encountered before. Persistent examples of particular difficulty that may be cited are: costs of weapon systems; the effects of congestion in the traffic that develops in the transportation of cargo and troops, especially when flow capacities of route elements are stressed; effectiveness over a spectrum of possible uses of any given weapon. When the effort required to develop a measure appears considerable, the question should be raised whether the action has an analogy in some other field of human activity that may provide interdisciplinary assistance.
As specific examples of measures that are particularly effective for their purposes, attention is drawn to:

(1) The measures of survival probability as a function of tactics, force-disposition, and allocation, used in reference 8-2 and in the many examples in chapter 15;
(2) The measure of losses in combat transportation used in chapter 15;
(3) Measures of the reliability of nearly series-parallel structures in paragraph 13-12;
(4) The measures of congestion in the various types of service systems, illustrated in paragraph 12-4;
(5) The measures of obsolescence and of supply shortages in chapter 14 (para 14-8, 14-12, 14-13).

8-2.6 Modelling

The steps in the decision-effort, as depicted in paragraph 8-2.1, consist of making the standard "military estimate of the situation." The way they are depicted places more emphasis than is traditional upon the function of modelling. This is appropriate, since in this volume we are concerned primarily with decisions in which the formal process of decision itself is a substantial one, so that modelling becomes the keystone. Of necessity, the mathematical statement of the problem must represent the effects of the action and the correspondence between input, output and objective. To that extent, the statement is a model of the action and thus of the activity between decisions and even beyond each decision into the future. The various chapters and paragraphs of this volume, dealing with the various types of processes, activities, and systems, stress the requirement for accuracy and realism in modelling, and illustrate mathematical representations that may be adequate.

Nevertheless, models sometimes may not be the best choices to serve as elements in the statement of an optimization problem concerning the activities, in that computation of optima involving them may happen to be numerically difficult. Thus the choice of a model for the purposes of an optimization task is to be made on pragmatic grounds, i.e., it must provide an efficient approach to choosing between alternatives. Application of the model must not constitute a prohibitive task, nor a task which cannot be completed in time to have any effect on the decision. The choice may thus be a compromise between totality of representation of activity and reasonable computation requirements. The important relations must not be obscured by detail. In effect, this is a topic in conceptual approximation, where the objective is to maximize the net gain from action when the cost of deciding is included in the effort of action. The characteristics of the model will in any given case depend upon the general structure of the total activity. Certain important categories of activity are treated separately in the following chapters of this volume, and numerous measures and models are presented in connection with them.

8-3 Types of Optimizations

The requirement of decision can thus be represented in one of the following forms:

(1) Constraining Action. A constraining action may be expressed in the form, "Take any action which meets the (following) set C of restrictions . . . ". Mathematically, this will require finding a solution to the restrictions, or constraints, i.e., finding any set of values of the action variables that satisfies the set of constraints. If there is no solution at all to the set of constraints, they are infeasible.

(2) Optimizing. "That action is to be taken which optimizes (maximizes or minimizes, as given) the following quantity, and there are no restrictions on action."
The corresponding mathematical problem is termed an unconstrained optimization. Most problems in calculus, as it used to be taught, were of this form, e.g., finding the lowest point on a given parabola, etc.

(3) Constrained Optimization. "That action is to be taken which optimizes the following quantity . . . (V, given) . . . and which meets the following set of restrictions . . . (C, given)". Mathematically this is termed a constrained optimization problem. It may be infeasible. Often the optimum will occur beyond one of the restrictions, so that the classical technique of setting derivatives equal to 0 will fail.

It is usually impossible to optimize more than one function at a time. Nevertheless, a number of objectives may be required to be simultaneously attained. To this extent, constrained optimization goes some way towards the realization of a number of desirable ends. These ends are not commonly termed objectives or missions, but constraints, or restrictions. The one function to be optimized (e.g., cost) is termed the objective function. This must not be allowed to cause confusion when compared to military uses of the term objective. Note that while the attainment of requirements can be represented by restrictions, there is still freedom to optimize one function as a mathematical objective.

8-4 Inequalities

8-4.1 Forms

In many operational decisions, the restrictions are of interval or inequality type, i.e., are restrictions of the form

  g_i(x_1, . . .) <= (or >=, or =) R_i,   i = 1, . . .

and constitute a simultaneous set. (In special types of operations, still more specialized forms will occur.) Attention is focused on constraints of the above type in the paragraphs which follow. This in turn focuses attention on the characteristics of the functions g_i and of the objective function. In many cases of analysis it is convenient to use two simultaneous inequalities (x <= y and -x <= -y) to represent an equality (x = y). A requirement nominally stated in the form g(x_1, . . .) >= Q, where g(x_1, . . .) is the output of the action, can be stated as a negative resource constraint, thus: -g(x_1, . . .) <= -Q.

Inequalities can be used to define regions of values of variables and thereby to represent constraints. Simultaneous inequalities, in case many variables (e.g., of action) are involved, can be used to define regions that are complicated in that their boundaries correspond to the simultaneous satisfying of numerous inequality constraints. Examples are: a truck need not be loaded to its full capacity; not all of a budget may be needed; a threat may be met at less than the maximum capacity of effort available; or a combination of these.

A curve y = f(x), figure 8-1, can be used to divide the plane into two regions, one on either side of the curve. Below the curve, y < f(x). Above the curve, y > f(x). Often the curve itself is naturally combined with the region below the curve, the combined region being that in which y <= f(x). As a standard, the curve y = f(x), figure 8-2, may be regarded as being represented by a relation g(x,y) = 0. This is natural in dealing with functions of several variables, as is typically required. Thus, let g(x,y) be a surface in x and y. Then g(x,y) = c, where c is a constant, is termed a contour of this surface. Project the contour into the x,y plane. If it is a single closed curve, as in the figure, then for values of x and y on one side of any given contour g(x,y) = c, g(x,y) will be < c. On the other side of the contour, g(x,y) will be > c.
g(x,y) <= c will define the region on one side of, and including, the contour. The contour need not be closed (for example, the contours might be a hyperbolic pair) and, theoretically, it might not have any simple form. Practically, we are apt to be confined often to contours whose equations can be solved at least numerically. Linear inequalities occur in representing the constraints on linear activities. This extensively developed topic is covered in chapter 9. The systematic solution of simultaneous inequalities has the simplest structure when the inequalities are linear. This is covered in chapter 9.

8-4.2 Examples

The following examples are not only simple mathematically but have frequent operational usefulness.

(1) The interior of a circle (sphere) of radius r centered at the origin is defined by x^2 + y^2 <= r^2 (by x^2 + y^2 + z^2 <= r^2), and of an ellipse (ellipsoid) by x^2 + a^2 y^2 <= r^2 (by x^2 + a^2 y^2 + b^2 z^2 <= r^2). Cf. damage functions under Combat.

(Figure 8-1. The curve y = f(x), with the region y > f(x) above it and the region y < f(x) below it.)

(2) The set of vectors [x_1, . . . ,x_n] for which a_1 x_1 + . . . + a_n x_n > c.

(3) A machine can make any one of N types of objects (a servicer can do any one of N tasks, a capability-unit can execute any one of N types of actions, etc.), one at a time. The machine can make the ith object at the rate m_i pieces per unit time. The machine can be started and stopped at will (not all machines can, of course), and the time required to change the machine over from making one type of object to making another (e.g., retooling) is short enough to be negligible (in many cases it would not be). In a time-interval of length t, the number of objects that the machine can make is any vector [x_1, . . . ,x_N], where x_i is the number of objects of type i, that satisfies the inequalities

  Sum over i of x_i/m_i <= t,   x_i >= 0,

assuming that sufficient supplies and take-off capacity will be available. The first inequality above can be expressed in the standardized form

  Sum over i of x_i/(m_i t) <= 1,   or   Sum over i of l_i x_i <= 1,   where l_i = 1/(m_i t).

(4) A cargo vehicle is to be loaded with a shipment of I types of replaceable parts destined for a military outpost. The ith type of part has a unit weight of w_i and a unit volume of v_i. The total weight and total volume which the cargo vehicle can accept are, respectively, W and V. Let x = [x_i] be a feasible loading of the vehicle, i.e., x_i is the number of pieces of the ith type of part in the loading. Then the feasible alternatives of loadings for the vehicle consist of just those vectors x whose coordinates x_i satisfy the following simultaneous inequalities (a minimal feasibility sketch follows paragraph 8-4.3 below):

  (a) Sum of w_i x_i <= W
  (b) Sum of v_i x_i <= V

8-4.3 Effect of Elasticity

Representation of a capacity by an inequality may seem too simple when the capacity is elastic or variable; for example, the traffic capacity of a road, airport, waterway, etc., depends upon the weather. The capacity of a store may be increased somewhat (perhaps expensively) by careful packing-arrangement of its contents. If the fluctuation in capacity has a relatively small effect upon strategy, a simple inequality will suffice (which may be set conservatively, typically, or even expensively, as appropriate). If the fluctuation is large, then it may be necessary to stratify the representation and recognize the occurrence of any particular level of capacity as itself a situation to be individually modeled.
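The inequalities of example (4) can be checked mechanically for any candidate loading. The following is a minimal sketch; the unit weights, unit volumes, and vehicle capacities are hypothetical numbers chosen only for illustration.

```python
def loading_is_feasible(x, w, v, W, V):
    """Check the simultaneous constraints sum(w_i x_i) <= W and sum(v_i x_i) <= V."""
    total_weight = sum(wi * xi for wi, xi in zip(w, x))
    total_volume = sum(vi * xi for vi, xi in zip(v, x))
    return total_weight <= W and total_volume <= V

# Hypothetical data: three part types, unit weights, unit volumes, vehicle limits.
w = [40.0, 25.0, 10.0]        # pounds per piece
v = [3.0, 2.0, 1.5]           # cubic feet per piece
W, V = 2000.0, 150.0

print(loading_is_feasible([20, 30, 20], w, v, W, V))   # True: within both limits
print(loading_is_feasible([40, 30, 20], w, v, W, V))   # False: weight limit exceeded
```

Each inequality bounds one face of the region of feasible loadings; a candidate loading is feasible only when every face is respected simultaneously, which is the point of the simultaneous set.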
8-5 Extrema

8-5.1 Definition

For a given function g(x_1, . . .), a point x = [x_1, . . .] in a region S for which g(y_1, . . .) <= g(x_1, . . .) for every other point y = [y_1, . . .] in the region S determines a maximum of the function in S, and determines a minimum if the inequality is reversed. In either case x is termed an extreme point of the function in S, and in either case there may be more than one such point. The value of the function g( ) at such a point is termed an extremum. A desired extremum is termed an optimum.

8-5.2 Notation

The maximum and the minimum values of a function f(x_1, . . .) for values of x = [x_1, . . .] in a region S are often denoted by, respectively,

  max f(x_1, . . .)   and   min f(x_1, . . .)
  [x_1, . . .] in S         [x_1, . . .] in S

If the region S is defined by a set of inequalities g_1(x_1, . . .) <= R_1, g_2(x_1, . . .) <= R_2, . . . , the maximum is denoted by

  max f(x_1, . . .)
  g_1(x_1, . . .) <= R_1
  g_2(x_1, . . .) <= R_2

and similarly for the minimum. Other notations, such as equalities, may be included in an obvious fashion. An especially convenient one,

  max{ f(x_1, . . .) | g_1(x_1, . . .) <= R_1, g_2(x_1, . . .) <= R_2, . . . },

is generally used in this volume.

8-5.3 Some Characteristic Properties of Extrema

Maxima and minima have interesting and useful properties, including the following:

(1) The maximum (minimum) of a function f(x) is unique, i.e., there is only one value for the maximum. But the choice of x that produces the maximum or minimum need not be unique.

(2) If for each value of y the function f(x,y) is maximized with respect to x, and evaluated for any value of x that maximizes it, then the result is merely a function of y. Thus in the symbolism max over x of f(x,y) = h(y), the variable x on the left is like a variable of integration: it is not part of the value of the expression, but only part of the scheme by which the value is found. For example, if y represented the amount of a resource available, x represented a variable ranging over possible courses of action, and f(x,y) represented the effectiveness when the course x is adopted, then the function h(y) represents effectiveness as a function of the amount of resource available when the resource has been optimally utilized. Note that since x is not present in the answer, the optimum is simpler than the problem required to identify it. Thus optimizing a complex functional relationship can produce a simpler relationship. This fact contributes to the organizational usefulness of optimal allocation.

(3) Maxima and minima sometimes are shallow, i.e., the value of the function near the optimum may not be markedly different from the optimum.

(4) (a,b) is termed a saddle point of the function f(x,y) if at the same time that f(x,b) assumes its maximum (over x) at the point x = a, the function f(a,y) assumes its minimum (over y) at the point y = b. That is,

  max over x of min over y of f(x,y) = min over y of max over x of f(x,y).

A saddle point may be visualized as the highest point of a pass between two mountain ridges, which is at the same time the lowest point of a profile formed by the ridges rising on each side of the path.

(5) The gradient of a differentiable function is the vector whose components along the axes are the partial derivatives of the function with respect to the variables. When f(x) contains only one variable, its gradient is simply its derivative. When there is more than one variable, the rate at which the function f(x_1, . . .) changes at a point depends upon the direction in which change is considered in the value of x. Let the change in x be a vector dx = [dx_1, . . .]. Then for given dx the amount of the change in x is ds = [Sum over i of (dx_i)^2]^{1/2}. The corresponding amount of change in the function f( ) is df = f(x_1 + dx_1, . . .) - f(x_1, . . .). Thus the rate of change of the function f( ) is the directional derivative df/ds, i.e., the limit of the ratio df/ds in the direction specified. If for some direction the directional derivative df/ds has a maximum, compared to its value for other directions, then the vector in this direction with magnitude equal to df/ds is termed the gradient of the function f( ). At an optimum of an unconstrained differentiable function, the gradient has the value 0. The gradient of f is symbolized by grad f, read "del-f". When f( ) is differentiable, the gradient is

  grad f = [df/dx_1, . . .]

(as can be readily proved by choosing the dx_i for fixed ds so as to maximize df, and then taking the limit as ds -> 0; no vector analysis is needed, though most proofs employ vector analysis). Example: the gradient of the linear form 3x + 4y is [3,4]. The form increases at maximum rate in the direction of this vector, and increases at a rate of (3^2 + 4^2)^{1/2} = 5.

(6) Obviously the optimum may lie on the boundary constituted by one or more of the constraints. The gradient grad f will then not necessarily be 0 in value at the optimum.

8-6 Optimization with Equality Constraints

8-6.1 General

This paragraph and the following describe the requisite relationships for constrained optimization in mathematical terms. They form the basis of the tactical and logistic illustrations in paragraph 8-8 and in the various chapters of the volume. The various principles are, for simplicity of reference, all stated with respect to differentiable functions. The detailed modifications necessary for other cases are not included.

To optimize { f(x_1, . . . ,x_N) | g(x_1, . . . ,x_N) = c }, one could in principle solve for one variable, say x_N, as a function x_N = h(x_1, . . . ,x_{N-1}) of the others. Substituting this for x_N into f(x_1, . . . ,x_N) produces a new function. To optimize it, set

  df/dx_j + (df/dx_N)(dh/dx_j) = 0,   j = 1, . . . ,N-1.

Since g(x_1, . . . ,x_N) = g(x_1, . . . ,x_{N-1},h) = c, then on this contour,

  dg/dx_j + (dg/dx_N)(dh/dx_j) = 0.

So at any optimum, for any fixed j,

  df/dx_j - lambda dg/dx_j = 0,   where lambda = (df/dx_N*)/(dg/dx_N*),

x* indicating the value of x which gives an optimum value of f(x). This set of necessary conditions for a (local) optimum may be summarized by writing

  grad f = lambda grad g,   or   grad(f - lambda g) = 0.

That is, at the optimum, the gradient of f is parallel to the gradient of g and is a scalar multiple of the gradient of g.

8-6.2 Lagrangian Function

For two functions f and g, as above, and for any number lambda, the Lagrangian function, or Lagrangian, is

  L(f,g) = L(x_1, . . . ,x_N; lambda) = f(x_1, . . . ,x_N) + lambda [c - g(x_1, . . . ,x_N)].

The problem "max{ f | g = c }" is equivalent to the problem "max L." Let L* = L(x_1*, . . . ,x_N*; lambda) denote the value of L at the optimum. At the optimum, L* = f*, where f* = f(x_1*, . . . ,x_N*) is the value of f at the (constrained) optimum. The quantity lambda is termed a Lagrange multiplier.

An interesting application of the Lagrange multiplier is in the "Shadow Price" Theorem. If we regard the quantity c as a parameter and let F*(c) denote the value of f at the optimum, regarded now as a function of c, then dF*(c)/dc = lambda. The truth of this may be easily verified. Imputing dimension and value to the quantity lambda can be done also from the fact that

  lambda = (df/dx_N*)/(dg/dx_N*) = dF*(c)/dc.

That is, lambda is the rate of change of the optimum per unit change in the resource c.
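A numerical check of the shadow-price property on a small hypothetical problem (maximize xy subject to x + y = c, for which the analytic answer is x = y = c/2 and lambda = c/2); the brute-force grid search below is only an illustrative device, not an efficient optimizer.

```python
import numpy as np

def constrained_max(c, n=200001):
    """Maximize f = x*y on the constraint line x + y = c by brute-force search."""
    x = np.linspace(0.0, c, n)
    y = c - x
    f = x * y
    k = np.argmax(f)
    return x[k], y[k], f[k]

c = 10.0
x_opt, y_opt, F = constrained_max(c)

# Lagrange multiplier from the stationarity condition df/dx = lambda * dg/dx,
# i.e. lambda = y* (since df/dx = y and dg/dx = 1).
lam = y_opt

# Shadow-price theorem: dF*(c)/dc should equal lambda.
h = 1e-3
dF_dc = (constrained_max(c + h)[2] - constrained_max(c - h)[2]) / (2 * h)

print(round(lam, 4), round(dF_dc, 4))   # both close to c/2 = 5.0
```

The finite-difference slope of the optimal value with respect to the resource matches the multiplier, which is the sense in which lambda prices an additional unit of c.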
To illustrate, if the minimum cost for producing the ith item is determined for a given set of variables, and additional input quantities become available for producing more than i items, the marginal or "shadow price" of the ith item will extend to each additional item within reasonable limits.

The following mathematical illustration depicts the above principles. If we minimize x^2 + y^2 subject to x + ay = C, the Lagrangian is

  L(x,y,lambda) = x^2 + y^2 + lambda [C - (x + ay)].

Set each of the following partial derivatives equal to 0:

  dL/dx = 2x - lambda;   dL/dy = 2y - a lambda;   dL/d lambda = C - (x + ay).

The solution is x = lambda/2, y = a lambda/2, and lambda = 2C/(1 + a^2). At the optimum, df*/dC = lambda.

8-6.3 Multiple Equality Constraints

If there are a number of simultaneous equality constraints, each of the form

  g_i(x_1, . . .) = c_i,   i = 1, . . .

then the (unconstrained) Lagrangian is

  L(x_1, . . . ; lambda_1, . . .) = f(x_1, . . .) + Sum over i of lambda_i [c_i - g_i(x_1, . . .)].

The resulting procedure is readily verified to be an exact extension, repetitively to many variables, of the procedure for a single equality constraint. The necessary condition for the optimum is now a simultaneous set of equalities of the form

  df/dx_j = Sum over i of lambda_i dg_i/dx_j,

which can be written compactly in the form

  grad f = Sum over i of lambda_i grad g_i,   or   grad(f - Sum over i of lambda_i g_i) = 0.

Geometrically, the gradient of f is thus a linear combination, with weights lambda_i, of the gradients of the constraint functions g_i.

8-7 Optimization with Inequalities

The problem of optimization with inequality constraints can be expressed as

  opt{ f(x_1, . . .) | g_1(x_1, . . .) <= c_1, g_2(x_1, . . .) <= c_2, . . . }.

As in the case of equality constraints, when the optimum occurs at the boundary the derivatives need not be 0. If any local optimum occurs on a boundary g_i = c_i, then the ith constraint is termed active, and in that case the local optimum in question can be found by replacing the inequality by an equality. If any local optimum occurs off a boundary g_i = c_i, then that inequality is inactive and can be eliminated. A systematic procedure based upon these two principles can, in theory, be used to find all of the local optima. This involves trying every inequality in all possible combinations with every other inequality to determine which are active and which are not. No computational experience with this procedure has been reported.

The Lagrangian should now be formulated appropriately in each of the various cases of optimization by writing each constraint in the form g_i <= c_i or g_i >= c_i. Slack variables are non-negative variables used to reduce inequalities to equalities. For a resource c_i, the quantity c_i - g_i is a slack variable. For a requirement, the quantity g_i - c_i is a surplus variable. These two should be kept non-negative in forming the Lagrangian. For convenience of reference, the standard form of the Lagrangian for maximization is

  L = f(x_1, . . .) + Sum over i of lambda_i [c_i - g_i(x_1, . . .)].

As in paragraph 8-6.3, regard the optimal solution as a function F(c_1, . . .) of the constraining values c_i. Then at the optimum:

  if the ith constraint is active, then dF/dc_i = lambda_i >= 0;
  if the ith constraint is inactive, then dF/dc_i = lambda_i = 0.

The lambda_i thus serve again as shadow prices.

It should be noted that optimization problems are frequently characterized by duality, i.e., if a problem is stated so that an objective function is maximized (minimized), there can be a dual statement of the problem in which a different objective function is minimized (maximized). A duality theorem, due to Gale, Kuhn, and Tucker, asserts that if either the original problem or its dual has a solution, then the minimum (maximum) value of the dual is equal to the maximum (minimum) value of the original problem (the primal).

8-8 Special Considerations in Optimization

It is emphasized that the procedures based on Lagrangian analysis, while they may provide insight into the nature of the optimization, may not necessarily be the most effective for numerical computing of the optimum in a given case. This is particularly true in linear programming. For the most part the Lagrangian analysis merely gives insight into the results in linear programs. A method based upon optimizing the functions

  H(x, r_n) = f(x) + r_n Sum over i of [g_i(x)]^{-1}

(where r_n is a specially chosen sequence) has been recently under development within the Army. Some details are reported in reference 8-3. A discussion of procedures involving Lagrangian methods in resource allocations occurs in reference 8-4.

In the special case of allocation, the functions g_i typically decompose additively, and this specializes the multiplier structure. For example, when there is only one constraint g of the form

  g(x_1, x_2, . . .) = Sum over j of a_j x_j <= c,

then the fact that at the optimum

  (df/dx_j)/a_j = (df/dx_k)/a_k

is in economics popularly termed the principle of marginal substitutability. It fails when there is more than one constraint, as is usually the fact (cf. para 8-6.3).

Piecewise linearization is used to approximate a curvilinear function by a series of linear segments (fig. 8-3). Thus, if the function f to be maximized (minimized) is concave (convex) and if the set of feasible solutions is convex, then piecewise linearization will in principle make it possible to formulate the problem as a linear program, for which computational capacity is highly developed.

(Figure 8-3. Piecewise linear approximation of a curvilinear function.)

A function g(x) of a single variable x is termed convex on an interval (a,b) if for any two points x_1 and x_2 between a and b, and for any point x-bar between x_1 and x_2, i.e., for x-bar = (1 - f)x_1 + f x_2 where 0 <= f <= 1, the following inequality holds (see fig. 8-4):

  g(x-bar) <= (1 - f)g(x_1) + f g(x_2).

If the strict inequality < holds, then the function is termed strictly convex (e.g., a linear function is convex but not strictly convex). Thus a convex function, when evaluated at an average point, is no greater than the corresponding average of the function, for any weights used in the averaging. The value of the expression on the right-hand side of the above inequality lies at the point above x-bar on the chord in the figure drawn between the values g(x_1) and g(x_2).

(Figure 8-4. A convex function g(x) on (a,b) and the chord joining g(x_1) and g(x_2).)

In the n-dimensional case, convexity is defined in exactly the same way, with a, x_1, x-bar, x_2, and b now interpreted as n-dimensional points. Thus, using y for x_1 and z for x_2, the function g(x_1, x_2, . . .) is convex if, when [x-bar_i] = [(1 - f)y_i + f z_i], then g(x-bar_1, . . .) <= (1 - f)g(y_1, . . .) + f g(z_1, . . .).

The optima are functions, parametrically, of the constraints. In the long run it can consequently be more efficient to solve the problem for ranges of values of the parameters and store the results than to wait to solve it in the individual case. This parametric optimization is emphasized in dynamic programming. An example of practical use of this in Army logistics is described in paragraph 14-16.
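A minimal sketch of piecewise linearization as just described: a concave return function (here the hypothetical f(x) equal to the square root of x) is replaced by linear segments between chosen breakpoints, after which the segment variables can be handled by a linear program. The function and the breakpoints are assumptions for illustration only.

```python
import numpy as np

def piecewise_linearize(f, breakpoints):
    """Return the segment slopes and an evaluator for the piecewise-linear approximation."""
    b = np.asarray(breakpoints, dtype=float)
    fb = f(b)
    slopes = np.diff(fb) / np.diff(b)          # for a concave f these slopes decrease

    def approx(x):
        return np.interp(x, b, fb)             # linear interpolation between breakpoints
    return slopes, approx

# Hypothetical concave return function and breakpoints.
f = np.sqrt
slopes, approx = piecewise_linearize(f, [0.0, 1.0, 4.0, 9.0, 16.0])

print(slopes)               # [1.0, 0.333..., 0.2, 0.142...]: decreasing, as concavity requires
for x in (2.0, 6.0, 12.0):
    print(x, round(float(f(x)), 3), round(float(approx(x)), 3))   # exact value vs. segment value
```

Because the slopes decrease, the segments can be entered in a maximizing linear program as separate bounded variables and the program will use them in their natural order, so no integer restrictions are needed; this is the standard device that makes the concave case tractable as a linear program.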
8-9 Illustrations

8-9.1 A Tactical Illustration: Perimeter Defense Design

Suppose that a compact position is expected to be attacked from two sides simultaneously (on the ground, for example), the enemy's objective being to reach the position. For defense of the position, 20 force-units of a certain homogeneous type are available, of which it is desired to decide the number N1 that should be assigned to defend side 1, the remainder, N2 = 20 − N1, being assigned to defend side 2. The nature of the defensive weaponry is such that on each side the enemy's attack will constitute a point target for defensive fire and will be defeated by a single hit. Each force-unit will fire once, in statistical independence of the others, with a miss-probability of m1 = 0.4 for each force-unit on side 1 of the position and a miss-probability of m2 = 0.7 for each force-unit on side 2 of the defended position. The difference in miss-probabilities is due to the difference in the nature of the terrain on the two sides. The survival probability S of the position is then given by the formula

S = [1 − (m1)^N1][1 − (m2)^N2]

and the constraint is N1 + N2 = 20. For numerical purposes, a solution may be quickly approximated by noticing that S is about equal to 1 − (m1)^N1 − (m2)^N2. By use of this approximation, and by following the procedure of the previous paragraph, the assignment of defensive units is indicated to be N1 = 6.4 and N2 = 13.6. Comparing the two neighboring possibilities of (6,14) and (7,13), the former turns out to be the best. The survival probability S is about .989. An increase of one in the number of defensive force-units available increases S by about .0025, while the amount of increase estimated by the Lagrange multiplier at (6.4, 13.6) is approximately .0027. One more force-unit cuts the position loss rate by about .25 percent.
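A direct enumeration confirms the figures quoted above. The sketch below (Python, added for illustration) evaluates the exact survival probability S for every integer split of the 20 force-units, and for a 21-unit force, using the miss-probabilities m1 = 0.4 and m2 = 0.7 of the example; the continuous value 6.4 quoted in the text comes from the approximation, not from this enumeration.

# Perimeter defense example: S(N1) = (1 - m1**N1) * (1 - m2**(N - N1)).
m1, m2 = 0.4, 0.7

def survival(n1, total):
    n2 = total - n1
    return (1.0 - m1 ** n1) * (1.0 - m2 ** n2)

best20 = max(range(21), key=lambda n1: survival(n1, 20))
best21 = max(range(22), key=lambda n1: survival(n1, 21))

print("best split of 20:", (best20, 20 - best20), "S =", round(survival(best20, 20), 4))
print("S at (7,13)     :", round(survival(7, 20), 4))
print("best split of 21:", (best21, 21 - best21), "S =", round(survival(best21, 21), 4))
# The best 20-unit split is (6,14) with S near .989; a 21st unit raises S by
# roughly .0025, in line with the multiplier estimate of .0027 quoted above.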
8-9.2 Value of Weapon Systems

Optimal constrained allocation can be illustrated graphically in the case of two activities, say a choice between two weapon systems (or teams), A and B. Suppose that a fixed amount E of effort (e.g., a budget) is available. Let E be allocated into E_A + E_B, where the amount E_A is allocated to the use (production, deployment, support, etc.) of weapon system A and the amount E_B to weapon system B. For this allocation, suppose that measures having values x(E_A,R) and y(E_B,R) can be identified for the effectiveness of each activity (e.g., firepower). For fixed E, plot the joint values x and y as in figure 8-5, assuming that they produce a continuous curve, for each possible value of E_A from 0 (the point at the y-axis end of the curve) to E (the point at the x-axis end of the curve). The curve is then a contour of the surface E(x,y), the surface representing the least effort at which the values x and y can be obtained. The contour is an exchange curve of effort. Suppose that this surface is monotonic increasing in x and y. Then all of the pairs of values below a contour are feasible at the effort represented by the contour, and those beyond the contour represent combinations of weapons effectiveness that are infeasible at the effort represented by the contour. Now suppose that another surface V(x,y) can be drawn representing the value or military worth of the choice x,y. A contour V(x,y) = V of this surface is an indifference curve. Suppose that this surface is also monotonic increasing in x and y. Then for any given effort E, the optimal allocation (maximizing V) will be at the point of tangency of the contour with an indifference curve, i.e., in terms of gradients, ∇E = λ∇V at the optimum, where the Lagrange multiplier λ is the rate at which V will change as E is increased.

(Figure 8-5 appears here: an exchange curve of effort, plotted with the effectiveness measures x and y as axes.)

8-9.3 Cost Effectiveness of Weapon Systems

For an alternative illustration, let E(x,y) represent the total effectiveness that could be obtained with activity A conducted at level x and activity B conducted at level y, x and y being now the numbers of each weapon employed. A contour E(x,y) = E is now an indifference curve. Now let the surface V(x,y) represent the total cost of engaging in the two activities at the rates (weapon numbers) x and y. A contour of this surface V(x,y) is now an exchange curve. Then for given effectiveness E, the optimal allocation (minimizing the effort V) occurs again at the point of tangency, i.e., where ∇E = λ∇V. Again, the Lagrange multiplier λ is the rate of change of effectiveness with respect to cost. Note that the optimal solutions for various values of E and V can be combined into a path, or function, a relationship between E and V that is of lower dimensionality than the unoptimized problem. It must be emphasized that in each of the above illustrations the procedure applies only when there is but one constraint. When there are more constraints, as in fact there will be, then the formulas of paragraph 8-6.3 hold. An extensive discussion of the use of such methods for planning at the national level, based upon its use at the RAND Corporation, is contained in reference 8-5.

8-10 The Place of Decision in Action

Decision initiates, renews, or continues action. An extended program of action contains in its very structure the effects of the times and results of the decisions that directed it. This fact tends to be intuitive in mere personal decision. In organizing large activities, however, deliberate provision has to be made for the occurrence of decision. Roughly it coincides with the conducting of reviews of the progress of activity, and to this extent it ultimately belongs under the subject of control of activity. Every decision should thus include setting the time of the next review of progress, which tends to become the first time at which decision will next occur if developments in the meantime do not exceed certain expectations. Thus, whether military or not, decision tends to set bounds on expectations of developments and bounds on the time interval until the next review. In combat, the frequency of review is apt to be high. The next decision may be precipitated before the scheduled review by some breaking of the bounds of expectations. At times of scheduled review there is the tacit assumption that action can be reorganized, redirected, or redesigned. Certainly this is expected to be the result of decision at emergency times. Changes in action that can be attributed to the decision can be termed the "response". Inertial factors in the activity at the time of decision may be overlooked, resulting in an uneven, or perhaps unstable, process; but even when the inertial factors are considered, actions which constitute the response of decision to quantitative developments may also produce oscillations of subsequent action. As the requirements for decision have developed and become more capable of being technically stated, so at the same time the decision process itself has tended to become more substantial, with an increase in complexity and ramifications of the action selected by decision.
The occurrence of decision within the pattern of the progress of action, and in relation to action, can be identified in relation to the following respects: (1) outcomes may or may not be significantly random; (2) there may or may not be an opponent. If there is, the activity becomes a game. Either side may randomize its action and, as a result, create random outcomes; (3) a decision may be a programming of a single action, or it may be concerned with activities that are sequent or dynamic. In the latter case, the outcomes of a decision are considered as influencing a subsequent decision. In such cases the objective of action is necessarily longer-termed than is the action selected by a single decision. To some extent all decision is dynamic, but in practice the effects of decision in some situations may decay so rapidly as to have disappeared in significance by the time the next decision is made. The remaining part of this chapter is devoted to case (3). The classical mathematical topic in this connection is the calculus of variations. The modern subject of dynamic programming, as developed by Bellman, reference 8-6, has added considerably to practical capabilities for sequential optimization.

8-11 Dynamic Programming

8-11.1 Definition

Dynamic programming refers generally to problems involving sequential optimization. It is, consequently, especially relevant to the conduct of action over a period of time. In many cases it affords the most natural representation of future decisions when representing a present decision, and it is that representation which dynamic programming in its most specific form emphasizes. The consequences of present decisions will be the situations of future decisions. In what is referred to as Bellman's principle of optimality, dynamic programming is a recursion of the following form: the total value of action under an optimal time-sequence of actions is the optimum of the combined value of the value increment contributed by the choice of immediate action now, together with the value contributed by an optimal time-sequence of subsequent actions. This principle, realized in specific numerical problems of design of action and of systems, is employed in a number of illustrative cases at various points in this volume. These are summarized below.

8-11.2 Sequential Decisions

Two formal classes of problems involving sequential optimization can be identified:

(1) Finite horizon. A definite time-pattern of requirements is to be met which extends over a specifically prescribed period of time, or consists of a terminal objective (in the military sense of the term, e.g., a point is to be reached by travel). Typically, the prescribed requirements are to be met with least effort or with maximal contribution to some objective function.

(2) Infinite horizon. A repetitive process of decision is foreseen that extends indefinitely into the future, e.g., a continuing pattern of demand for an item of supply, manning the armed services, advancing the capability of a given weapons class. Long-term gains are now the typical objectives.

In principle, all problems of type (1) might be considered to be suboptimizations of type (2). In practice, problems of type (1) are more common, either because they are a sufficient approximation of an infinite-horizon process or because they are organizationally prescribed.
8-11.3 Sequential Programming of Non-sequential Decisions

One of the practical employments of dynamic programming is in the computation of the solution to problems which are not problems of sequential decisions, but are problems in which the possible strategies can be sequentially ordered in a way that makes dynamic programming effective. This category appears as merely a computational one, but it nevertheless includes a number of problems, illustrated in this volume, of great tactical importance and usefulness, to wit:

(1) The assignment of weapons to targets through a single action;

(2) The optimal design of the capability of a system or process: (a) the number of servers at a service center; (b) the optimal amount of redundancy to design into an unreliable automatic system, e.g., warning, fire-control, missile, communications (a numerical illustration is treated in paragraph 13-12); (c) the amounts of supplies with which to provision a patrol or expedition.

In each of these examples, a search for the optimum can be made, i.e., organized, sequentially over the possible values of the design variable, the capacity to be optimally programmed, with considerable numerical efficiency in many cases. Here again, the computational method typically produces much by-product information, typically in the form of the optimal capacity under slightly different conditions than those specified, so that the method is computationally a good investment even if conditions in the problem may change later. The method has considerable cumulative value, when employed with adequate storage of results, in future recurrences of the problem.

(Figure 8-6 appears here: a travel network of intermediate nodes and alternative routes between an origin A and a destination B, used in the example below.)

8-11.4 A Network Example

A quite simple prototype example is that of finding the shortest way from a given point of origin A, through a travel network consisting of intermediate nodes and alternative routes, to a specified destination B, as in figure 8-6. As analogues, the points may be replaced by the states of a system. The least-effort path between the two system states may be a route of a technological development or a tactical plan, for example. In the case of the network, suppose that the distances from nodes to adjacent nodes are known. For any node P, let the distance from P to each point a(P) that is adjacent to P be represented by d(P,a(P)). Let s(A,B) represent the shortest distance from point A to point B of the network. Then the principle of optimality is, in this problem, stated in the following equation:

s(A,B) = minimum over a(A) of [d(A,a(A)) + s(a(A),B)]

The above equation is recursive, in that the unknown function s( ) occurs inside what is to be minimized. If s(x,B) were known for every x other than A, then s(A,B) could be computed by the above equation. In some cases this is in fact the method of computing the optimum sequential action. Note that it supplies the answer to the question of the shortest distance between many pairs of points besides those two, A and B, that are asked for in the problem, namely, at least for all those pairs of points that lie on the shortest path from A to B, and typically for many other nearby pairs as well. This production is characteristic of the method, and the efficiency of the method is consequently greatest when such by-product information is useful, e.g., in parametric studies. For the example in figure 8-6, there are 7 nodes, and consequently the shortest path will not contain more than 6 segments, one for each branch traveled. A search for the optimal path may, therefore, be completely organized by enumerating over the number of branches in the optimal path. Suppose that the optimal number of branches in the path from A to B is N. Then the above equation of optimality can be replaced by a conditional principle of optimality as follows. Let S_N(X,Y) represent the shortest distance from point X to point Y in the network, given that the shortest distance is along a path consisting of N branches. Then the above equation can be replaced by the following:

S_N(A,B) = minimum over a(A) of [d(A,a(A)) + S_{N-1}(a(A),B)]

The recursion is now finite, for it is never necessary to pass through a given node twice.

Numerical Illustration. For the example in figure 8-6, the shortest path is A C D E F B, which happens to require 5 of the branches; the distance is 11. To find the solution, first solve the above equation for N = 2 for all possible pairs X,Y. Then solve it for N = 3 using the results established for N = 2, and so on until N = 6. The numerical work required in this problem is typical of that required in similar problems involving the allocation of weapons to targets; the allocation of additional redundant stages to an otherwise less reliable mechanical or electronic system; and, not infrequently, problems involving the determination of the optimal capacity to build into a system.
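The recursion can be carried out mechanically once the branch distances are tabulated. Because the distances of figure 8-6 are not reproduced in this text, the sketch below (Python, added for illustration) uses an invented 7-node network with nodes labeled A through G; the distances are hypothetical but were chosen so that the shortest route agrees with the answer quoted in the numerical illustration above (A C D E F B, length 11). Only the form of the computation, organized by the number of branches, is meant to correspond to the text.

# Conditional principle of optimality, computed as "shortest distance from x to B
# using at most n branches" for n = 1, 2, ..., 6 (a convenient variant of S_N).
INF = float("inf")

# Hypothetical symmetric branch distances (not the data of figure 8-6).
d = {("A","C"): 2, ("C","D"): 3, ("D","E"): 1, ("E","F"): 2, ("F","B"): 3,
     ("A","G"): 4, ("G","B"): 9, ("C","G"): 2, ("E","B"): 7}
d.update({(q, p): v for (p, q), v in d.items()})
nodes = {"A", "B", "C", "D", "E", "F", "G"}

S = {(x, 1): d.get((x, "B"), INF) for x in nodes}          # one-branch paths
for n in range(2, 7):                                      # at most 6 branches
    for x in nodes:
        adjacent = [a for a in nodes if (x, a) in d]
        S[(x, n)] = min([d[(x, a)] + S[(a, n - 1)] for a in adjacent] + [S[(x, n - 1)]])

print("shortest distance from A to B:", S[("A", 6)])       # prints 11 for these data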
References

8-1 General Kenney Reports, by George C. Kenney. Duell, Sloan and Pearce. 1949. pp. 197 ff.

8-2 "Planning by Resource Allocation Methods-Illustrated by Military Applications," by James E. Taylor and John E. Walsh. Opns. Res. 12. 1964. pp. 693-706.

8-3 "The Sequential Unconstrained Minimization Technique for Nonlinear Programming, A Primal-Dual Method," by Anthony V. Fiacco and Garth P. McCormick. Management Science 10. 1964. pp. 360-366.

8-4 "Generalized Lagrange Multiplier Method for Solving Problems of Optimum Allocation of Resources," by Hugh Everett III. Opns. Res. 11. 1963. pp. 399-417.

8-5 The Economics of Defense in the Nuclear Age, by Charles J. Hitch (Comptroller of DOD, 1961- ) and Roland N. McKean. Harvard University Press. 1963.

8-6 Applied Dynamic Programming, by Richard E. Bellman and Stuart E. Dreyfus. Princeton University Press. 1963. A RAND Research Study.

CHAPTER 9 PROGRAMMING LINEAR ACTIVITY

9-1 Scope

This chapter reviews some of the characteristics of linear activities and how they may be represented mathematically. The characteristics of linear programs and their solutions are described, and the simplex method is described in some detail.

9-2 Linear Activities and Combinations

9-2.1 Characteristics of Linear Activities

As defined earlier, a linear activity is one that can be multiplied by a scalar, and linear activities can be added. The physical quantities and scalars involved in activities will ordinarily be assumed to be non-negative. Even when the elements of an activity complex are not linear in the small, two statistical effects tend to produce linearity in the large, at least to first order for purposes of planning: (1) considerable repetition of the activity; (2) aggregation or simultaneous replication of the activity at many places or by many people. These statistical effects tend to make any kind of activity potentially eligible for linear programming, including combat, training, research, purchasing, medicine, servicing, production, transportation, and others. Reference should be made to Chapter 2 for representations of activities by vectors, and especially by input-output vectors.
Virtually any activity that can be treated by linear programming can be meaningfully represented in terms of input-output vectors of the form

A = [f1, ..., f_M, p1, ..., p_N]

where inputs to the activity are consumed at rates f_i, i = 1, ..., M, and outputs of the activity are produced at rates p_j, j = 1, ..., N. Such a vector may be regarded as a scalar activity if scaling the input rates f_i by any feasible constant r scales the output rates p_j by the same constant. Then every level of operation (conduct, occurrence) of the activity is an operation characterizable as the vector

A = rA*,  A* = [f*1, ..., f*_M, p*1, ..., p*_N]

where A* is operation of the activity at unit level. A total linear activity may consist of many scalar activities, say K in number. The level A_k of the kth of these activities, in terms of the logical union of all of the inputs and outputs, is

A_k = r_k [f*_k1, ..., f*_kM, p*_k1, ..., p*_kN] = r_k A*_k

A level of the total activity is thus representable as a vector A = [A_k] of levels A_k of such activities, that have common input and output elements, by setting f's and p's equal to zero as needed. The representation may in an individual case be simplified; e.g., many activities are fully represented by their products alone, others by their inputs alone. If a single activity produces a single physical output product, so that N = 1, then when the unit vector A* is so chosen that p1 = 1, the quantity f_i is sometimes termed a "technology" coefficient.

9-2.2 General Concept of Linear Programming

Linear programming is concerned with scalar activities which, when conducted jointly, are additive in the inputs consumed and additive in the outputs produced. For each activity, the input needed (output produced) is thus proportional to the level of that activity. The total input (output) is additive in the amounts required for the separate activities. In other words, if A1, ..., A_K are the simultaneous (vector) levels of the individual activities, then

Σ_k r_k f*_ki is the total amount of input i that is consumed, and

Σ_k r_k p*_kj is the total amount of output j that is produced.

Simply for purposes of computing total input and total output, the activity as a whole is then

A = A1 + A2 + ... + A_K = r1 A*1 + r2 A*2 + ... + r_K A*_K

i.e., A is a linear combination of the vectors A*_k. (A small computational sketch of such a combination follows the examples of paragraph 9-2.3 below.) Linear programming is thus concerned with the case in which joint conduct of the activities corresponds to adding the activities as vectors of inputs and outputs. The activities being scalar, the total activity is a linear combination of activities.

9-2.3 Examples of Linear Combinations of Activities

(1) Target j is to be attacked from source i, the unit intensity being A*_ij = [f_i, p_j] in terms of ammunition consumption and expected units of fire delivered. If the attack rate is scalable, combined attack from all sources is a linear combination of activities. There are M(N − 1) independent activities if there are M sources and N targets, and if total ammunition consumptions are fixed at each point. (The firing at the last target from each point is dependent on the firing at the other targets.) If the total fire delivered at each point is also prescribed, there are M + N − 1 independent activities.

(2) A manufactured product is typically an assemblage of parts, each assembly requiring a certain number of each type of part, the set of these numbers being commonly termed the "parts explosion" of the assemblage.
This explosion is then the composition of the input vector of parts, f_i being the number of parts of type i required per unit of the product manufactured. The output may now be quite simple, namely just a single number p representing the amount of the product produced. The assembly operation is then quite scalar, for an input of r[f_i] will produce an output of rp. The parts needed for assemblies are one input of the production activity. The linear analysis will extend at least roughly to other resources required to make the assemblies: the labor required, the amount of time required on special machines needed in the assembly operation, power, etc. The analysis can extend to the continued making of the product during several time periods, and to the simultaneous making of other parts and materials which may compete for the same resources. Linearity may be threatened when there are limits on resources, intermittency of production due to setup costs, or other losses of linearity of yield. However, if production is sufficiently aggregated, the total effect may nevertheless be sufficiently linear for approximate programming purposes.

(3) An alloy of given specification is composed of specified amounts of a specified list of component metals. This is the output, with p_j the amount of the jth metal per unit of the alloy. The alloy may be made by mixing in as input amounts of various materials, e.g., metals of other compositions. Each such input is a vector [f_i] of metallic components. A given process for making the alloy from these materials will specify the amounts, in relation to each other, of each of the input materials, so as to achieve the required output [p_j]. The r_k are the amounts needed of each input material. Prices of materials may also be linear inputs. Reference 9-1 reports an early arsenal use of this.

(4) The diet problem has long been used as a subject of linear optimization. A unit amount of food, or feed, of type i contains f_ij units of a vitamin, mineral, or other nutrient of type j, j = 1, ..., N − 1, and costs f_iN per unit amount. If the nutrient value extracted and the costs are linear, any diet and its costs can be described linearly.

(5) Chemical and petroleum processes can afford striking examples of scalar activity. The input may be the joint feed rate of a certain mix of materials, and the output the joint production rate of separate chemicals. Chemical balances are carefully maintained. Even when it is not possible to alter the input proportions and maintain a useful output, the output is scalar over time, i.e., by accumulation of performance of the activity or by replication of the process in replicate chemical facilities in the plant or in another plant. Some but not all of a product produced may be used in making other products. An excellent illustrative example is given in reference 9-2.

(6) Search may be linear in input, although not in momentary output. A balanced search effort will typically require so much fuel, food, equipment, hours, etc., likely in proportion to one another. The information produced momentarily is not linear, but some linearity may be available through aggregation, especially in non-saturating search.

(7) Network flow designates a general category of linear activities that includes a number of the above cases. Flow occurs on the branches of a network, the amount of flow on the ijth branch being f_ij. In a linear network, the corresponding effort (or effect, as the case may be) is e_ij f_ij, where e_ij is a constant for the branch. In combat transportation, there is flow with gain (loss). The manufacturing of materiel in one period so that it will be available (by being held in inventory) for a future period is, like peacetime transportation, flow without gain. Reference 9-3 is an extensive development of detailed methods of formulating and solving such problems. Many other examples, too numerous to mention, are cited in reference 9-4, an early bibliography.
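The sketch promised in paragraph 9-2.2 follows (Python, added for illustration; the two activities, their input-output rates, and the levels chosen are invented rather than drawn from the examples above). It simply forms A = r1 A*1 + r2 A*2 and reads off the total inputs consumed and output produced.

# Two hypothetical unit-level activities, each written in common coordinates as
# [input 1 rate, input 2 rate, output rate].
A1_star = [2.0, 1.0, 1.0]   # consumes 2 of input 1 and 1 of input 2 per unit of output
A2_star = [1.0, 3.0, 1.0]   # consumes 1 of input 1 and 3 of input 2 per unit of output

def combine(levels, unit_activities):
    """Total activity A = sum of r_k * A*_k, as in paragraph 9-2.2."""
    total = [0.0] * len(unit_activities[0])
    for r, a_star in zip(levels, unit_activities):
        for i, rate in enumerate(a_star):
            total[i] += r * rate
    return total

r = [4.0, 2.0]                           # levels r_1, r_2 of the two activities
A = combine(r, [A1_star, A2_star])
print("total input 1 consumed:", A[0])   # 4*2 + 2*1 = 10
print("total input 2 consumed:", A[1])   # 4*1 + 2*3 = 10
print("total output produced :", A[2])   # 4*1 + 2*1 = 6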
9-3 Linearly Programmable Activities

9-3.1 Definition

A linearly programmable activity is a linear activity A whose values are subjected to inequality, or interval, limits. Its most important coordinates correspond to total resources consumed and total requirements produced. These totals are generated out of the fact that A is typically a linear combination of basic activities. The limits imposed are simply inequality restrictions on the coordinates of A; i.e., if a_i is the ith coordinate of A, then the restriction is of the form

m_i ≤ a_i ≤ u_i

so that the entire set of restrictions to be imposed can be written as

M ≤ A ≤ U, where M = [m_i], U = [u_i]

In practice, the most important restrictions (cf. para 8-2.4) will be of two types: (1) resource capacities, maximal amounts available of the resources to be consumed; (2) minimal requirements to be produced. In some cases, the exact amounts of resources to be consumed or requirements to be produced will be specified, and the restriction will be of an equality form.

9-3.2 Example

A simple illustration of a linearly programmable activity can be made in the case of a machine which can do any one of N tasks, one at a time, and can do each task repetitively. Suppose that when the machine is doing task i, its output is a physical product, which the machine then makes at the rate of p_i pieces per unit time. Suppose the machine is to be operated for a period of time of length T. By the machine's activity will then be meant any possible combination of quantities of the various products that it can make in this time T, the basic resource input. (Figure 9-1 appears here: the triangle of feasible output combinations for two products, discussed below.) Materials and other requirements for operation of the machine are ignored here, but they would be handled similarly. Let t*_i be the time required to make one piece of type i. The unit-level ith activity A*_i is A*_i = t*_i, a simple one-dimensional vector. If x_i pieces of type i are to be made in the length of time T, then the machine's activity may be represented as

A = x1 A*1 + ... + x_N A*_N = x1 t*1 + ... + x_N t*_N = t1 + ... + t_N

where A*_i = t*_i and t_i = x_i t*_i, the representation here being the canonical one of paragraph 9-2. Then the basic restriction posed by the limitation of time T is A ≤ T, i.e., Σ_i x_i t*_i ≤ T, or equivalently, Σ_i t_i ≤ T. To say that the machine is linearly programmable is to say that any combination of quantities x_i of products made which satisfies this last restriction can in fact be made on the machine in the amount of time T. The machine's activity is thus represented as being linear within the limit. For example, if only two products are involved, then the set of possible outputs of the machine consists of all the points inside the triangle in figure 9-1, each point corresponding to a possible schedule. Note that points which lie inside the triangle do not use the machine for the whole time T; they leave it idle for part of the time. The above illustration may perfectly straightforwardly be translated when the machine is replaced by a group of machines; the capacities simply increase in proportion to the number of machines if they are identical in capability.
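A brief sketch (Python, added for illustration; the unit times and the trial schedules are invented) shows the feasibility test that defines the machine's linearly programmable activity: a proposed schedule [x_i] is feasible exactly when Σ x_i t*_i ≤ T.

# Machine example of paragraph 9-3.2: unit production times t*_i, operating period T.
t_star = [0.5, 1.25, 2.0]     # hypothetical hours per piece for three products
T = 40.0                      # hours available

def time_used(x):
    return sum(xi * ti for xi, ti in zip(x, t_star))

def feasible(x):
    return all(xi >= 0 for xi in x) and time_used(x) <= T

for schedule in ([40, 10, 3], [20, 0, 15], [30, 10, 8]):
    print(schedule, "uses", time_used(schedule), "hours ->",
          "feasible" if feasible(schedule) else "infeasible")
# [40,10,3] uses 38.5 hours (machine idle 1.5 hours); [20,0,15] uses exactly 40;
# [30,10,8] uses 43.5 hours and is infeasible.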
If other resources are limited, and possibly requirements as well, then for each such limitation there will be an inequality of the type occurring here for T.

9-3.3 Possible Nonlinearities

In fact, few machines are exactly linearly programmable. Linearity is lost, for example, if a significant amount of time is required to change the machine over from one task to another, often referred to as "setup." A significant changeover-time requirement would render the point C in the figure infeasible as a program of activity. An inequality restriction is of rather sharp nature. If, for example, the restriction corresponds to the presence of an upper limit on the amount of inventory that can be stored, the question may still be raised whether or not "a little more space" can be found. Similar questions may be raised with respect to other possible restrictions: Can the work be done any faster? Can any more cargo be carried? Can the requirements be shaded a little? It may thus be observed that nonlinearities due to efforts required in setup, or changeover, tend to make an inequality restriction too generous, while elasticity of capacities tends to make inequalities too conservative. Consequently the two effects can tend somewhat to counterbalance one another, particularly if the bounds upon which the restrictions are based are drawn at a good safe average point. Quality may be strained by stretching a limit. (Note that very different speeds of operation of a machine should be treated as separate activities.)

9-3.4 Linear Inequalities

In a linear program each restriction will be given in terms of a linear inequality, so that there will be a simultaneous set of linear inequalities of the form

a*_i1 x1 + a*_i2 x2 + ... + a*_iN x_N ≤ c_i

where a*_ij is the amount of the ith resource consumed per unit operation of the jth activity. If the restriction represents a minimal requirement, it will typically be of the form

a*_i1 x1 + a*_i2 x2 + ... + a*_iN x_N ≥ c_i

where the coefficients a*_ij in such inequalities represent the amounts of the ith requirement produced per unit operation of the jth activity. An inequality of ≥ type can be converted into one of ≤ type by multiplying both sides of the inequality by −1. A resource is thus a "negative" requirement, and conversely. This is, in fact, done in computational procedures.

9-4 Algebra and Geometry of Vectors and Inequalities

9-4.1 Purpose of Paragraph

A linear activity is evidently a linear vector. Vector algebra consists of the purely formal properties of vectors, and these are useful for notation and for stating in mathematical terms the problems and equations of linear programming. Vector geometry contains useful intuitive terminology that provides a precise vocabulary in terms of which to discuss the operational aspects of linear activities. The solutions to linear programming problems involve the properties of linear inequalities in essential ways, and it is efficient for operational purposes to be able to identify these properties as purely formal characteristics of action that are decided upon by means of linear programming. Therefore, before proceeding to specific methods of linear programming, it is desirable to review some of the major principles of vector algebra and geometry, with specific attention to the treatment of linear inequalities.

9-4.2 Linear Independence

For any given dimension N, the following vectors, N in number, are termed the unit vectors of the vector space:

U1 = [1,0,0, ..., 0], U2 = [0,1,0, ..., 0], ..., U_N = [0,0,0, ..., 1]

Any N-dimensional vector x = [x1, x2, ..., x_N] may be written as the following linear combination of the unit vectors of the space:

x = x1 U1 + x2 U2 + ... + x_N U_N

Any two distinct unit vectors are perpendicular to each other, since the product U_i · U_j is always equal to 0 for i ≠ j. If a vector [x] can be represented by a linear combination of the vectors [y1, y2, ..., y_K], then x is called linearly dependent on those vectors. If [x] cannot be written as any such linear combination, then [x] is termed linearly independent of the y's. Thus any vector of given dimension N is linearly dependent on the unit vectors of the space. The unit vectors of the space are linearly independent of each other.
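A quick computational test of linear independence is to row-reduce the vectors and count the non-zero rows that remain (the rank). The sketch below (Python, added for illustration; the sample vectors are invented) applies this to confirm that two unit vectors are independent, that a vector built from them is dependent on them, and that a third independent vector completes a basis.

def rank(vectors, tol=1e-9):
    """Rank of a list of equal-length vectors, by Gaussian elimination."""
    rows = [list(map(float, v)) for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > tol:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

U1, U2 = [1, 0, 0], [0, 1, 0]
w = [3, -2, 0]                            # w = 3*U1 - 2*U2, dependent on U1 and U2
print(rank([U1, U2]))                     # 2: U1 and U2 are linearly independent
print(rank([U1, U2, w]))                  # still 2: w adds no new direction
print(rank([U1, U2, [0, 0, 5]]))          # 3: a basis for three-dimensional space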
9-4.3 Other Characteristics of N-Dimensional Space

Although ordinary physical space stops intuitively at three dimensions, geometrical names can be assigned to characteristics of vectors of dimensionality higher than three that are merely algebraic extensions of characteristics of vectors of dimension three or less.

(1) Dimensionality. Basis. To find the dimensionality of a given vector space S, find a set B of vectors b1, b2, ..., which are vectors of the space S and in terms of which every vector of the collection S can be expressed as a linear combination. The smallest number of b's which is sufficient is called the dimension of the space S. Any smallest set B of such vectors b1, b2, ... is called a basis or spanning set for the space S.

(2) Axes. Position. Any n-dimensional vector [x1, x2, ..., x_n] may be thought of as specifying a position in n-dimensional space with coordinates x1, x2, ..., x_n. The collection of all scalings of a given unit vector U_i defines a "line," which is the rectangular coordinate axis of the space that corresponds to the unit vector. Thus an n-dimensional vector space has n rectangular axes.

(3) Parallel. If a vector x is a scaling ay of the vector y, then x and y are said to be parallel if a is positive, and opposed if a is negative (i.e., opposite in direction).

(4) Oblique Axes. In two-dimensional space, for example, the vectors [1,0] and [1,1] could serve as unit vectors, even though the axes obtained from their scalings are not perpendicular. These two vectors are oblique to each other and are linearly independent. Accordingly, they suffice as a basis for the space of two-dimensional vectors, even though they may not often be a useful basis. The members of a basis do not need to be perpendicular to each other; independence should not be identified with perpendicularity.

(5) Locus of Linear Combinations. Two Dimensions. Let U = [u, mu] and V = [v, pv] be vectors graphed as line segments PQ and PR as in figure 9-2. Then any vector X that starts at P and terminates on the line that goes through the points Q and R can be written as the linear combination X = aU + (1 − a)V of U and V, as follows: if a < 0, then X falls in the region of vector X1; if 0 ≤ a ≤ 1, then X falls between Q and R, e.g., X2; if a > 1, then X falls in the region of vector X3. (The reader should verify this by showing that if the first coordinate of X is au + (1 − a)v, then the second coordinate must be equal to amu + (1 − a)pv.)

9-4.4 Convex Mixtures

If U and V are N-dimensional vectors, then in an analogous fashion the locus of the linear-combination vector aU + (1 − a)V may be regarded as the "line" in N-dimensional space through the points U = [u1, ..., u_N] and V = [v1, ..., v_N]. "Between" U and V in N dimensions now means that for every i, the value au_i + (1 − a)v_i lies between u_i and v_i, where 0 ≤ a ≤ 1.
Each particular point aU + (1 − a)V, with 0 ≤ a ≤ 1, is a convex combination or mixture of the points U and V (cf. para 8-9). When U and V represent two different activities, then aU + (1 − a)V is a mixture of the two activities if 0 ≤ a ≤ 1. Mixture and convex combination coincide. When U and V are mutually exclusive strategies, played in a game, then aU + (1 − a)V is a mixed strategy, a mixture in which strategy U is employed with relative frequency a (measured in fraction of plays or probability of selection) and strategy V is employed with relative frequency 1 − a. (See ch 10.) If a > 1 (or a < 0), then au_i + (1 − a)v_i is at a value beyond u_i in the numerical direction from v_i to u_i (beyond v_i in the direction from u_i to v_i). Setting a = 0 or a = 1 adopts an extreme point, i.e., everything is committed to one activity and nothing to the other. If there are K points u1, ..., u_K in any space, then the combination a1 u1 + ... + a_K u_K is a convex combination if the a_i's each lie between 0 and 1 and if their sum is 1. The resulting point is between all of the points. (Formally this happens to be the same as a probability mixture, whether the a_i's happen to represent probabilities or not; a probability mixture is thus a convex combination.) For a given set of such points u1, ..., u_K, the set of all such convex combinations is termed the convex hull of the points. The points u_i are the extreme points of the hull.

9-4.5 Simultaneous Linear Inequalities

A single equality, of the form a11 x1 + a12 x2 = r1, is represented by a line, R1, as in figure 9-3. The inequality a11 x1 + a12 x2 ≤ r1 consists of all the points (x1,x2) lying on the side of the line R1 in the direction indicated by the arrow; similarly for R2.

(Figure 9-2 appears here: the vectors U = PQ and V = PR issuing from a point P, with the linear combinations X1, X2, X3 of paragraph 9-4.3(5). Figure 9-3 appears here: the lines R1, R2, and Q1 in the (x1,x2) plane, with arrows marking the feasible side of each, together with an infeasible requirement line q and a superfluous inequality r.)

R1 and R2 represent typical resource restrictions. Q1 represents a typical requirements restriction, namely that the required values of x1 and x2, which will be the amounts of the two activities to be engaged in, are restricted from below, i.e., involve minima. Any tacit restrictions that the x_i be ≥ 0, as in the figure, are not necessarily present in every problem, depending upon how each activity happens to be numerically defined. They correspond to the regions on the appropriate sides of the axes. In the figure there are many feasible solutions. But if Q1 occurred where the line q appears, there would be no feasible solution.

9-4.6 Convex Polyhedron

The region enclosed by the solid-line inequalities in the figure is convex, since for any two points U and V in the area, any point aU + (1 − a)V, with 0 ≤ a ≤ 1, which will be between them, is also in the region. The region is thus a convex polygon. If more than two activities are involved, the region would be more than two-dimensional. In that case the region is termed a polyhedron, and in particular a convex polyhedron. The locus of an equality in N variables x1, ..., x_N (activity rates), i.e., one of the form a_i1 x1 + ... + a_iN x_N = r_i, is then termed a hyperplane. The set of all N-dimensional points [x1, ..., x_N] satisfying an inequality of the form a_i1 x1 + ... + a_iN x_N ≤ r_i is then termed a half-space. The points at the corners of the convex polygon in the figure, and at the corners of a convex polyhedron in the N-dimensional case, are extreme points, or vertexes. They cannot be represented as being between other points of the convex set.
The region in any triangle is a two-dimensional simplex. In N dimensions, a convex polyhedron that has one more vertex than the number of dimensions is termed an N-dimensional simplex. The set of feasible solutions to a linear program is a convex polyhedron. Some of the inequalities may be superfluous; for example, an inequality corresponding to r in figure 9-3 is superfluous. If the feasible solution is one constrained by M inequalities in N variables, with M > N, then there will be (M choose N) = M!/(N!(M − N)!) possible intersections, each an N-dimensional point. Of these, many may be superfluous; the feasible region will consist of points lying inside a convex polyhedron formed between some (maybe all, but not likely) of these intersections. The feasible region lies within the convex hull of the (M choose N) intersections. To determine its precise vertexes is in practice equivalent to solving the linear program numerically.

9-4.7 Gradient

The locus of an inequality may be related to the properties of the gradient of the linear form involved. Consider an equality ax + by = R, in two variables x and y. If R is allowed to vary, this equality will represent various lines in the plane, parallel to each other, as in figure 9-4, where R1 and R2 indicate different possible values of R. A line perpendicular to the line ax + by = R, passing through the origin, can be regarded as a ray and written as the locus of vectors P where P is, for some m, equal to m[a,b]. The larger m, the farther P is from the origin. When P falls on the line ax + by = R, then m = R/(a² + b²), so that the distance of P from the origin is R/√(a² + b²) (compare the identical formula of analytic geometry for the distance from a point to a line). The vector [a,b] is the gradient of the linear form ax + by, and its magnitude is √(a² + b²). In the N-dimensional case, the corresponding "line" would be a1 x1 + ... + a_N x_N = R. The gradient of the linear form a1 x1 + ... + a_N x_N is then [a1, ..., a_N], and its magnitude is √(Σ a_i²).

(Figure 9-4 appears here: the parallel lines ax + by = R1 and ax + by = R2, with the perpendicular ray through the origin in the direction [a,b].)

9-4.8 Location of the Linear Optimum

Let the linear objective function to be optimized be V = v1 x1 + ... + v_N x_N, and the constraint set be Ax ≤ B. If this linear program has a feasible solution, then the constraint set has a solution set X0. The optimum of V will then occur at some vertex of the convex polyhedron X0, or possibly along an edge (face) as well. As a matter of analysis, contour V (make parallel displacements of V) by setting V = C and varying the value of C. Consider the inequality V ≥ C. For each value of C, add this inequality to the set Ax ≤ B, forming an augmented set, and solve this augmented set simultaneously. Do this for every value of C. Then for some uninterrupted interval of increasing values of C, there will be a last value of C for which this augmented set of inequalities will have a feasible solution. This value is the optimal value of the linear program. Other more significant operational characteristics of the optimum are given in the paragraphs which follow.

9-5 Characteristics of the Solution to Linear Programs

9-5.1 Duality

In linear programming, as in other optimization methods, the characteristic of duality appears, i.e., a problem of maximizing one linear function of a set of variables may be regarded as equivalent to the dual problem of minimizing another linear function.
The dual problem may be related to the primal by Lagrangian multipliers (see paragraph 8-6.2) as follows. Form the Lagrangian associated with the primal problem, namely:

(A) L(x1, ..., x_N, X1, ..., X_M) = Σ_j v_j x_j + Σ_i X_i [c_i − Σ_j a_ij x_j]

The quantity c_i − Σ_j a_ij x_j is the slack variable S_i corresponding to the ith resource. It is the unused portion, if any, of that resource. S_i occurs explicitly in the simplex algorithm as the slack variable for the ith resource. If, at the optimal solution, S_i > 0, then an increase in c_i will not increase the value of the objective function V. Hence, if S_i > 0, X_i = 0. Thus in either case, X_i S_i = 0. In (A) above, L can be written, by regrouping terms, as

(B) L(x1, ..., x_N, X1, ..., X_M) = Σ_i c_i X_i − Σ_j x_j [Σ_i a_ij X_i − v_j]

This expression may by inspection be regarded as the Lagrangian for an optimization problem that is dual to the primal, namely:

PRIMAL (A): maximize vᵀx, given Ax ≤ c, x ≥ 0.

DUAL (B): minimize cᵀX, given AᵀX ≥ v, X ≥ 0.

9-5.2 Saddle Point Theorem

A saddle point is defined in paragraph 8-5.3(4). The saddle point theorem can be useful in establishing the fundamental relation between the primal and dual problems, as indicated below. Any pair of feasible solutions, x* = [x*1, ..., x*_N] for the primal and X* = [X*1, ..., X*_M] for the dual, are optimal solutions if and only if

L(x1, ..., x_N, X*1, ..., X*_M) ≤ L(x*1, ..., x*_N, X*1, ..., X*_M) ≤ L(x*1, ..., x*_N, X1, ..., X_M)

for all feasible x and X. The common optimal value of the primal and the dual problems is

L* = L(x*1, ..., x*_N, X*1, ..., X*_M)

The central problem can thus be regarded as that of finding a saddle point for the function that is the common Lagrangian for both the primal and the dual problems. At the optimum, it should be noted, either X_i or c_i − Σ_j a_ij x_j is zero, and either x_j or Σ_i a_ij X_i − v_j is zero.

9-5.3 Interpretation of Lagrange Multipliers

Suppose that one organization owns the resources, and that another organization engages in the activity and produces the objective. Then the Lagrange multipliers are the marginal or shadow prices which the latter should pay the former per additional unit of the resources, should the former make them available. (See para 8-6.2.) Conversely, the amounts x_j of the activities engaged in are the equivalent of shadow prices if the former organization should be able to vary the value coefficients v_j in the objective function.

9-6 Simplex Method

9-6.1 Introduction

A linear programming problem consists of a set S of simultaneous linear inequality constraints and an objective function. The objective function is the equation that is to be optimized. Algebraically, let the (non-negative) unknowns be:

(A) x1, x2, ..., x_N ≥ 0.

Let the number of inequalities in the set S be M. The ith inequality will be of the form

(B) a_i1 x1 + a_i2 x2 + ... + a_iN x_N ≤ c_i

where the quantities a_ij and the c_i are given constants. The objective function is a given linear combination of the unknown variables x1, ..., x_N, namely V = v1 x1 + v2 x2 + ... + v_N x_N, to be maximized, where the v_i, i = 1, 2, ..., N, are constants. Minimization of a function h can be replaced by maximization of the function −h. An inequality can be transformed to an equality by introducing a slack variable x_{N+i} ≥ 0 as follows:

(C) a_i1 x1 + ... + a_iN x_N + x_{N+i} = c_i.

The sets of values of x = [x1, ..., x_N] for which (A) and (C) are satisfied are the same as those for which (A) and (B) are satisfied, namely, the space of feasible solutions. A linear program can then be formulated in the simplex format as follows:

(A*) non-negativity restrictions: x_i ≥ 0, i = 1, ..., N, N+1, ..., N+M;

(B*) equality constraints: a_i1 x1 + ... + a_iN x_N + x_{N+i} = c_i, i = 1, ..., M;

(C*) objective function: max V = v1 x1 + ... + v_N x_N.

In vector notation the problem can be conveniently summarized as max {vᵀx | Ax = c, x ≥ 0}.
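To make the simplex format concrete, the sketch below (Python, added for illustration) takes the small example that is worked by tableau in paragraph 9-7 (maximize 3x1 + 4x2 subject to 2x1 + 5x2 ≤ 10, 4x1 + x2 ≤ 8, x1 ≥ 1), writes it in the equality form Ax = c with slack and surplus variables, and finds the optimum by brute-force enumeration of basic solutions rather than by the simplex rules. The artificial variable of the Big-M device used in paragraph 9-7 is not needed for this check; the enumeration is far less efficient than the simplex method, but it shows what a basic feasible solution is.

# Equality form of the paragraph 9-7 example:
#   2*x1 + 5*x2 + x3           = 10
#   4*x1 +   x2      + x4      =  8
#     x1                  - x5 =  1     (x5 is a surplus variable)
# all five variables non-negative; maximize V = 3*x1 + 4*x2.
from itertools import combinations

A = [[2.0, 5.0, 1.0, 0.0, 0.0],
     [4.0, 1.0, 0.0, 1.0, 0.0],
     [1.0, 0.0, 0.0, 0.0, -1.0]]
c = [10.0, 8.0, 1.0]
v = [3.0, 4.0, 0.0, 0.0, 0.0]

def solve3(M, rhs):
    """Solve a 3x3 linear system by Gaussian elimination; None if singular."""
    m = [row[:] + [b] for row, b in zip(M, rhs)]
    for k in range(3):
        piv = max(range(k, 3), key=lambda r: abs(m[r][k]))
        if abs(m[piv][k]) < 1e-12:
            return None
        m[k], m[piv] = m[piv], m[k]
        for r in range(3):
            if r != k:
                factor = m[r][k] / m[k][k]
                m[r] = [a - factor * b for a, b in zip(m[r], m[k])]
    return [m[k][3] / m[k][k] for k in range(3)]

best = None
for basis in combinations(range(5), 3):              # choose three basic variables
    cols = [[A[i][j] for j in basis] for i in range(3)]
    sol = solve3(cols, c)
    if sol is None or any(s < -1e-9 for s in sol):
        continue                                     # singular or infeasible basis
    x = [0.0] * 5
    for j, s in zip(basis, sol):
        x[j] = s
    V = sum(vj * xj for vj, xj in zip(v, x))
    if best is None or V > best[0]:
        best = (V, x)

print("optimal value V =", best[0])                  # 31/3
print("optimal x1, x2  =", best[1][0], best[1][1])   # 5/3 and 4/3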
9-6.2 Basic Solutions

If, in a set of M equations in M + N unknowns, any set of M linearly independent columns (i.e., a basis) is chosen and the remaining variables are set equal to zero, the solution to this set of equations is called a basic solution. The variables which can be different from zero in the basis are called basic variables. A basic solution that also satisfies (A*) is called a basic feasible solution. The simplex procedure consists of finding a sequence of basic feasible solutions, each increasing the objective function. The set of variables x_j selected to be the basic variables at any stage in the sequence, say the nth, is termed the basis at that stage. Denote it by B(n). To say that "x_j is a basic variable at stage n" we write x_j ∈ B(n), or just j ∈ B(n) if more convenient. The basis B(n+1) will differ from the basis B(n) by exactly one variable. We will use R(n) to denote the subscript of the variable x_{R(n)} that is a basic variable at stage n but is removed ("R" for "remove") from B(n) in forming B(n+1), and I(n+1) to denote the subscript of the variable x_{I(n+1)} that is brought into the basis in forming B(n+1). At stage n, c_i(n) denotes the current value of the basic variable appearing in the ith row of the equations, a_ij(n) the current coefficient of x_j in the ith row, and the non-basic variables have the value zero. Thus the value V(n) of the objective function at the nth stage of computation is

V(n) = Σ_{j=1..M+N} v_j x_j(n)

(A schematic tableau appears here: the tableau at the nth stage lists one row for each basic variable, showing the coefficients a_ij(n) of all variables x1, ..., x_{M+N}, the solution values c_i(n), and, along the top and bottom, the objective coefficients v_j and the values of the variables in the objective function.)

9-6.3 Improving the Solution

If the solution [x_j(n)] does not maximize V, then at least one of the variables, say x_k, can be changed in value from the value x_k(n) so as to increase the value of V above the value V(n). Any change in value will have to consist in increasing the value of some non-basic variable from zero to some positive value, and at the same time in setting one of the basic variables equal to zero and converting it into a non-basic variable. To determine which is the best choice among the non-basic variables, calculate the quantity

v_k − z_k = max_j [v_j − Σ_i v_{B(i)} a_ij(n)]

where the sum runs over the rows i, v_{B(i)} denotes the objective coefficient of the basic variable in row i, and the value j ranges only over subscripts of non-basic variables x_j(n). If the maximum occurs for more than one value of j, choose I(n+1) to be any one of them. If v_k − z_k ≤ 0, an optimum has been reached; if v_k − z_k = 0, there are alternate optima. In that case proceed to paragraph 9-6.4. Otherwise, proceed as follows. As x_{I(n+1)} is now changed, say by an amount Δx_{I(n+1)}, all of the solution values of the variables x_j(n) have to be kept non-negative (to satisfy (A*)), and the equalities (B*) must remain satisfied. Accordingly, x_{I(n+1)} can be increased only so far, until one or more of the other basic variables becomes zero. No further change in x_{I(n+1)} that will increase the value of V can then be made without x_{R(n)} becoming negative and thereby violating (A*). The determination of the variable that leaves the basis is made by calculating the quantity

θ = minimum over {i : a_{iI}(n) > 0} of c_i(n)/a_{iI}(n)

The variable that corresponds to this minimum ratio leaves the basis.

9-6.4 New Stage

To execute the advance, solve the pivot-row equation for x_{I(n+1)} and then eliminate x_{I(n+1)} from every other equation. As a result, writing I = I(n+1) and R = R(n),

(1) for any row i other than the pivot row R and any variable x_j, a_ij(n+1) = a_ij(n) − a_{iI}(n) a_{Rj}(n)/a_{RI}(n), while in the pivot row a_{Rj}(n+1) = a_{Rj}(n)/a_{RI}(n);

(2) for the solution values, c_i(n+1) = c_i(n) − a_{iI}(n) c_R(n)/a_{RI}(n) for i ≠ R, and c_R(n+1) = c_R(n)/a_{RI}(n); all of these remain ≥ 0 by the choice of the minimum ratio θ.

9-6.6 Simplex Tableau

The computational work of the previous paragraph can, for manual computation, be helpfully arranged as a sequence of tableaus of the coefficients and the calculations, one for each computational stage n. In the tableau, each quantity is entered at the place where its symbol appears. For simplicity in the following tableaus, the computational-stage numbers "(n)" have been suppressed; thus a_ij refers to a_ij(n).

(A schematic tableau appears here: the top row carries the objective coefficients v_j for all variables x1, ..., x_N, x_{N+1}, ..., x_{N+M}; each following row carries the objective coefficient and name of one basic variable, its coefficients a_ij, its solution value c_i, and the ratio c_i/a_{iI}; the bottom rows carry z_j and v_j − z_j.)

9-7 Examples

Simplex Solution

Maximize: 3x1 + 4x2

Constraints: 2x1 + 5x2 ≤ 10; 4x1 + x2 ≤ 8; x1 ≥ 1

Adding slack variables (x3 and x4) to the first two constraints and surplus (x5) and artificial (x6) variables to the third, we have:

2x1 + 5x2 + x3 = 10
4x1 + x2 + x4 = 8
x1 − x5 + x6 = 1

The objective function is augmented (penalizing the artificial variable) as follows: Max V = 3x1 + 4x2 − 100x6.

First Tableau

v_j                3      4      0      0      0    -100
Basis     Sol.    x1     x2     x3     x4     x5     x6      Ratio
x3  (0)    10      2      5      1      0      0      0      10/2 = 5
x4  (0)     8      4      1      0      1      0      0       8/4 = 2
x6 (-100)   1      1      0      0      0     -1      1       1/1 = 1   <- outgoing variable (x6)
z_j             -100      0      0      0    100   -100      V = -100
v_j - z_j        103      4      0      0   -100      0
                   ^ incoming variable (x1)
As a result (1) for any basic variable x;(n+1) and any non-basic variable Xj(n+1). a;;(n+1) = aii(n) -ail(n) a11 (n)a11r(n) 1 (2) for any basic variable x;(n +1), fCR(n)/aRI(n) i = I 0 ::::; X;(n+1) = c;(n+1) = ( ai(n) i ,c. I c; 1 9-6.6 Simplex Tableau The computational work of the previous paragraph can for manual computation be helpfully ar 9-13 ranged as a sequence of tableaus of the coefficients and the calculations, one for each computational stage n. In the tableau, each quantity is entered at the place where its symbol appears. For simplicity in the following schematic illustration of the tableau, the computational-state numbers " (n)" have been suppressed; thus a;i refers to a;i(n). v. vl VI VN J VN+l VN+R VN+M v. Sol. . . . c . 0 ~ xl . . . XI ~ ~+1 . . . X.N+R . . . ~+M J v X all. . . ali. . . 1 . . . 0 . . . 0 cl alN c/anN+l N+l . XN'+2 a21" . . a2I" . . a2N 0 . . . 0 . . . 0 c2 cz!a2I . . . . . . . . . . . . . . . . . . . . . . . 0 . . . 1 . . . 0 1HR aRl" aRN CR lcR7aRpn X . . ·& . . • . . . . . . . . . . . . v aMl. . . . . aMN 0 . . . 0 . . . 1 N+M ~+M ~· eM ciaMI V.-Z. . v -z vl-zl. . vi-z!. J J n n 9-7 Examples Simplex Solution Maximize: 3xi + 4x2 Constraints: 2xi + Sx2 ::::; 10 4xi + x2 ::::; 8 XI ;:::: 1 Adding slack variables (xa and x4) to the first two equations and surplus (xs) and artificial (xs) variables to the third equation, we have: 2xi + Sx2 + Xa = 10 4xi + x2 8 XI 1 The objective function is augmented as follows: Max V = 3xi + 4x2 -100xi First Tableau v. J 3 4 0 0 0 I -100 v i Sol. xl x2 x3 x4 x5 x6 c. J 0 0 0 I x3 x4 2 4 5 1 I I I 1 1 10 8 5 4 -100 I xl 0 -1 1 1 1 ~Outgoing Variable z. J -100 I I 0 0 0 100 -100 -100 (x6) ' v. J -z. J 103 I 4 I 0 0 -100 0 100 tincoming Variable (x ) 1 9-14 Then use the steps in paragraph 9-6.4 to find the next tableau. Second Tableau v. 3 4 0 0 0 -100 J v. Sol. c. 0 xl x2 x3 x4 x5 x6 ~ J 0 x3 0 0) 1 +2 -2 8 8/5 ~Outgoing, Variable 0 0 1 1 4 -4 4 4 (x3) x4 3 1 -1 1 1 xl z. 3 0 0 0 -3 3 3 J v. J -z. J 0 I 4 ! 0 I 0 i 3 -103 I -3 ' trncoming Variable (x ) 2 Iterate again using paragraph 9-6.4. I i v. 3 4 0 ' 0 0 -100 i I J t-·--+ v. Sol. c. 0 xl x2 x3 x4 x5 x6 ~ J 4 0 1 1/5 0 2/5 -2/5 8/5 4 x2 0 0 0 -1/5 1 8 -18/5 12/5 2/3 40utgoing x4 I Variable I 3 xl 1 i 0 I 0 0 -1 1 1 (x4) i [ z. 3 4 4/5 ! 0 -7/5 7/5 47/5 I J ! i v. -z. 0 0 ! -4/5 0 +7/5 1 -507/5 ! -47/5 J J 1' Incoming Variable (x) 5Iterate again using paragraph 9-6.4. v. 3 4 0 0 0 -100 J v. Sol. c. 0 ~ xl x2 x3 x4 x5 x6 J 4 x2 0 1 10/45 -1/9 0 0 4/3 0 x5+1 0 0 -1/18 5/18 1 -1 2/3 '".·· 3 xl 1 0 -1/18 5/18 0 0 5/3 z. 3 4 39/64 +7/15 0 0 31/3 J X. -z. 0 0 -39/64 -7/15 0 -100 -31/3 i J J ! 9-15 Since all Vj -z1 ::; 0, the optimum solution has been obtained. Thus Xz = 4/3, x5 = 2/3, and x1 5/3. Max V = 3(5/3) + 4(4/3) = 31/3 9-8 Transportation and Network Flows Several linear programs have come to be known by special names. These include~ (1) the transportation problem (2) network flow (3) the assignment problem (4) critical path scheduling, also known as PERT (Program Evaluation and Review Technique) Many other special types of linear programming problems have received names, for example the "caterer problem", production scheduling. However, for each of the four listed above a special computational algorithm has been developed that is particularly effective for the problem. References to these are provided at the end of the chapter. References 9-1 AD 227 086 Linear Programming Applied to a Foundry Cost Problem by Gideon I. Gartner December 1958 Watertown Arsenal. 
9-2 Linear Programming and Economic Analysis, by Robert A. Dorfman, Paul A. Samuelson, and Robert M. Solow. McGraw-Hill. 1958. A RAND Research Study.

9-3 Flows in Networks, by L. R. Ford, Jr., and D. R. Fulkerson. Princeton University Press. 1962.

9-4 Linear Programming and Associated Techniques, A Comprehensive Bibliography on Linear, Nonlinear and Dynamic Programming, by Vera Riley and Saul I. Gass. Johns Hopkins Press. 1958.

9-5 Readings in Linear Programming, by S. Vajda. Wiley. 1958.

9-6 Linear Programming and Extensions, by George B. Dantzig. Princeton University Press. 1964.

9-7 "Critical Path Planning and Scheduling: Mathematical Basis," by James E. Kelley, Jr. Opns. Res. 9. 1961.

CHAPTER 10 THE SIMPLEST MILITARY GAMES

10-1 Background of Games as an Analytical Technique

A game occurs when each participant in a situation has an objective which may not coincide with the objectives of other participants, and when each of the participants controls some, but not all, of the controllable variables of the action and the outputs. In principle, most if not all human activity is game-like. Even when the action of military personnel is programmed in terms of a common objective, not all of their individual activities may be fully specified by the resulting program; they may, consequently, find game opportunities on the side. Conflict between opposing military forces is the outstanding instance of a game. Reference 10-1 contends that the minimax principle of the mathematical theory of games that has developed in this century coincides with the official United States military doctrine of basing decisions upon enemy capabilities. The French mathematician Borel noted the quantitative aspects of games in the early 1920's, including the distinction between pure and mixed strategies. The first general theory is due to von Neumann, who proved the minimax theorem. The book The Theory of Games and Economic Behavior, reference 10-2, co-authored by von Neumann and O. Morgenstern, was a historic volume which established the subject without stressing its military applications. Much of the military application has been developed under the sponsorship of the RAND Corporation since World War II. Most of what has been developed has use far beyond Air Force operations, extending into every phase of military operations. While games may be played involving more than two contestants, only two-person games are considered here.

10-2 Types of Two-Person Games

10-2.1 Matrix Single Game

This type of game occurs when the contestants each must choose one of a discrete set of alternatives or strategies. As an example, let us suppose that the commander of Blue forces must choose one from a number of possible routes to pass through an area defended by Red. Red, in turn, can defend only one route. It is obvious that the "payoff" to Blue will be greater if the route he selects is not the same as the one that Red has selected to defend. The payoff for any choice of route also may be influenced by such factors as the length of the route or obstacles to movement. The joint choice of moves or strategies by Blue and Red can be represented by a square matrix in which v_ij is the payoff to Blue if he chooses Route i (i = 1, ..., n) and Red chooses to defend Route j (j = 1, ..., n). It should be noted that the payoff to Blue can in some cases have negative values.

10-2.2 Allocation Game

In the allocation game, each opponent may simultaneously allocate his forces (or other resources) among more than one course of action.
For example, Blue can attack a Red force from either of two directions, or, by dividing his force, can attack from both directions. Red knows the directions and has the option of dividing his force so as to defend in both directions. If Blue allocates x units to direction 1, and Red allocates y units to direction 1, then the payoff to Blue (or loss to Red), v(x,y), may be evident to both opponents. Similarly for other allocations, the payoffs may be known, and Red and Blue will each select a strategy. If Red's defensive design must be established first, and will become known to Blue, the game is the minorant type. If both sides must allocate simultaneously in ignorance of the other's allocation, the game is the majorant type.

10-2.3 Sequential Game

The most complex type of two-person game is one in which there is a sequence or continuum of strategies to be followed by each opponent. Pursuit, evasion, and continued conflict are typified by this type of game. Differential games are also included in this category. Sequential games will not be discussed in this chapter.

10-3 Solution to Game Problems

10-3.1 Graphical Solution of a 2 x 2 Matrix Single Game

Suppose that Blue is capable of successfully defending either of two of his installations, but not both. Blue then has two "pure" strategies, Blue 1 and Blue 2, corresponding to the number of the installation he decides to defend. Red is capable of attacking either installation, but not both, and will also have two strategies. Three general types of this 2 x 2 game situation are shown in figure 10-1.

Case (a). In this case, the minimum payoff to Blue is at point d if he follows his Strategy 1 and at point a if he follows Strategy 2. The maximum of the minimum payoffs (the maximin) is at point a. As for Red, the maximum payoff he must give is at point a if he follows his Strategy 1 and at point c if he follows Strategy 2. The minimum of his maximum payoffs to Blue (the minimax) is at point a. When the maximin and the minimax are at the same point, this is called the "saddle point" of the game, and it represents the best strategy for both Blue and Red. Blue's Strategy 2 is a "dominant strategy"; i.e., no matter what Red does, this strategy always has a greater payoff than Strategy 1.

Case (b). This is no different from case (a), except that if Red uses Strategy 1, it will make no difference which strategy Blue employs. The point e now represents a saddle point.

Case (c). This case is the only difficult one to decide. It is the more likely case in practice, since it is typically better for Blue to have defended the installation that Red chooses to attack.

(Figure 10-1 appears here: three panels (a), (b), and (c), each plotting the payoff to Blue for Blue strategies 1 and 2 against Red strategies 1 and 2; in panels (a) and (b) the maximin and minimax coincide at a saddle point, while in panel (c) the Blue 1 and Blue 2 lines cross at the point M between Red 1 and Red 2.)

For case (c) a policy is needed. Suppose that Blue and Red are individuals, units, or armies that will engage in repeated games with each other. Very likely a number of these repetitions might consist in (other) games in which the payoff relationships were numerically like (c). If so, then each would gradually acquire knowledge of the other's strategy. For example, if Blue always defended the more valuable installation, this would gradually become apparent to Red. One way to investigate the question of the best strategy, therefore, is to suppose that this particular game is played over and over.
On repetitions of the game, it is not always necessary for either player to make the same choice every time. The figure indicates that it would be best for each to vary his choice. The graph of the matrix itself suggests what would be a good long-run policy for Red. Consider the distance from 0 to the point labeled 1 to be of unit length. For any point on this axis, let r be its distance from the point 0. Then regard r as the fraction of the time that Red chooses his alternative 2. Because of the particular numerical values of the vij, r will have some particular numerical value at the point M, which value will be denoted by r*. If Red chose his alternative 2 in a fraction of the cases greater than r*, then Blue, by favoring his alternative 2 the more, would have an average payoff higher than if Red used alternative 2 in the fraction r* of the cases. Similarly, Red should not choose his alternative 2 in a fraction less than r*. To choose his alternative 2 in a fraction r* of the plays is thus the best strategy for Red. The strategy is termed a mixed strategy; and a problem is how to achieve the mixture in fact, over time. One solution is to randomize the choice on each play of the game, setting the probability of choosing alternative 2 at r*. An exactly similar reasoning will establish the fact that Blue should also use a mixed strategy. A separate figure showing the argument above for Blue instead of for Red may be visualized.

10-3.2 Numerical Solution of a 2 X 2 Game

As a numerical illustration, suppose that installation 1 is worth three times as much to Blue as is installation 2. Then the solution obtained by applying the above procedure is as follows:

                          Red 1    Red 2    Row minimum    Blue's use probabilities
Blue 1                      4        3          3                   3/4
Blue 2                      1        4          1                   1/4
Column maximum              4        4
Red's use probabilities    1/4      3/4

The value of the game is the average payoff to Blue if both opponents use their best strategies. If there is a saddle point, the value of the game is the value of this point. In the case of a mixed strategy, the value is determined by calculating the payoff for either the Red or Blue mixed strategy against either of the opponent's pure strategies. Thus, using Blue's mixed strategy against Red Strategy 2, the value of the game is determined to be

[3(3) + 1(4)] / 4 = 3.25

10-3.3 Graphical Solution of a 2 X m Game

When one opponent has only two choices, but the other has m possible choices (m > 2), the game becomes a 2 X m game. A graphical method is useful in evaluating the m alternatives so that all but two are eliminated and the problem is reduced to a 2 X 2 game. Assume the following matrix of payoffs to Blue:

                      Red Strategies (j)
                      1     2     3     4
Blue Strategies 1    -4     7     3    -5
                2     3    -4     2     6

These values are plotted as shown in figure 10-2. The lowest linear path represents the minimum payoff to Blue for any Red strategy. Of the lines that make up this path, the two that intersect at the maximum value (labeled M on the chart) are the two strategies (in this case, 1 and 2) that should be followed by Red. Considering only these two strategies, this is now a 2 X 2 game, from which we can proceed to determine the best pure strategy (if there is a saddle point) or mixed strategy.

10-3.4 Basic Relationships in Determining the Value of a Game

If Blue plays his strategy 1 with probability b and if Red plays his strategy 1 with probability r, then the value V(b,r) of the game will be

V(b,r) = b r v11 + b(1-r) v12 + (1-b) r v21 + (1-b)(1-r) v22

This can be put into the standard form

V(b,r) = -A(b - b*)(r - r*) + V(b*,r*)

where A is a constant.
This shows that for all b and r,

V(b,r*) ≤ V* = V(b*,r*) ≤ V(b*,r)

(Figure 10-2. Payoff to Blue for each of Red's four strategies, plotted as straight lines between Blue strategy 1 and Blue strategy 2 on vertical scales running from -5 to +5; the maximum M of the lowest path identifies the critical Red strategies.)

Identifying coefficients,

A = v11 - v12 + v22 - v21
b* = (v22 - v21)/A
r* = (v22 - v12)/A
V* = V(b*,r*) = (v11 v22 - v12 v21)/A = |P|/A

where |P| is the determinant of the payoff matrix.

10-4 Development of a Game in an Actual Military Situation

Reference 10-1 traces two examples from World War II records of how military situations that required a critical decision as to the disposition of forces and assignment of missions developed into the essence of a game.

(1) In the Pacific campaign in February 1943, intelligence reports indicated that a Japanese (Red) convoy would soon go from Rabaul at the east end of New Britain, westward along New Britain to Lae (beyond the west end of New Britain on the mainland of New Guinea). The convoy would have a choice of two routes, each requiring about three days of sailing. One route was the north coast of New Britain, for which bad weather was forecast for the first two days. The other was the south coast, for which the forecast was for clear weather. The U.S. Air Force command (Blue) was ordered to intercept and inflict as much damage as possible on the convoy. Blue's choice was between which of the two routes to concentrate most of his reconnaissance aircraft on. Once the Red convoy was spotted, on either route, bombers could strike it. The Blue commander estimated that the following outcomes would occur:

Blue North against Red North: the convoy would be discovered by the second day, permitting an estimated two days of bombing.
Blue North against Red South: the convoy would be discovered by the second day, permitting an estimated two days of bombing.
Blue South against Red North: the convoy would be discovered on the third day, permitting one day of bombing.
Blue South against Red South: the convoy would be discovered immediately, permitting three days of bombing.

In days of bombing, the payoff matrix is therefore:

                 Red North   Red South   Row min.
Blue North           2           2          2
Blue South           1           3          1
Column max.          2           3

Using days of bombing as the measure of payoff, we have a 2 X 2 game in which there is a saddle point (Red North, Blue North), with a value of the game of two days of bombing. In fact, Blue and Red selected these strategies. Reference 10-1 points out that the game was correctly considered to be a zero-sum game, since the outcome judged good by one commander was judged bad by the other in corresponding amounts.

(2) According to General Bradley's report, the following situation developed when the Allies (Blue) had just broken out of their beachhead on the French coast through a narrow gap by the sea at Avranches, exposing the west flank of the German 9th Army (Red). This Army was already being contained by a frontal attack by the U.S. 1st Army. Red had two choices: (1) attack westward and try to cut off the gap; (2) withdraw substantially eastward to a better defensive position. Blue's key decision concerned what to do with an uncommitted reserve of four divisions located just south of the gap. Three mutually exclusive courses of action were considered: (1) order the reserve back to defend the gap; (2) send the reserve eastward to press Red; (3) wait one day. Blue estimated that if Red attacked but the gap held for one day, later reinforcement would be unnecessary. The probable outcomes are not quantitatively as simple as those in example (1) above. Bradley develops them at some length (with no explicit reference to game theory or its terminology), pointing out the importance, as events subsequently proved, of the decision to the Red force.
The row minima are: for Blue (1), the gap is not cut for Blue (2), the gap is cut for Blue (3), weak pressure on Red's withdrawal, gap may be cut The column maxima are: for Red (1), the gap is not cut and Red may be defeated for Red (2), strong pressure on Red's withdrawal General Bradley (Blue) adopted course of action (1), the maximin. The German (Red) commander (von Kluge) is reported to have adopted course of action (2), the minimax, but to have been countermanded by Hitler. The Germany Army attacked westward and was badly defeated after which von Kluge committed suicide. References 10-1 "Military Decision and Game Theory" by 0. G. Haywood, Jr. Opns. Res. 2 1954 pp. 365-385. 10-2 Theory of Games and Economic Behavior. John von Neumann and Oskar Morgenstern. Princeton. 1953. 10-6 CHAPTER 11 SYSTEMS 11-1 Scope A unit or organization of units that is engaged in routinely carrying out its mission (s) is in this chapter regarded as a working system, or operating system. The term system is in rather broad common use to refer to such units in their "systematic" operation. A gun while it is firing is a system. The telephone system handles telephone traffic. A heating control system is a system that is different from a telephone system only in emphasis. Similarly for highway systems, fire alarm systems, weapon systems in general, and even systems that are actually mostly procedural --for example, promotion systems. This chapter deals with some common elements and with a common logic for certain kinds of systems. The chapters which follow deal with special emphasis of certain such systems. In chapter 12, emphasis is on the performance of stochastic service systems; in chapter 13 emphasis is upon the support of a system that can fail and need maintenance; in chapter 14 emphasis is · upon the particular formal nature of systems like, and including, supply systems; in chapter 15 emphasis is on system performance in combat. 11-2 Some Considerations in System Evaluation 11-2.1 The Need for System Evaluation Careful methods of evaluating proposed systems have become indispensable as technological development has speeded up. The scale of probable use and of typical cost of proposed systems is often so great that there is a great value in avoiding having to guess about the following and other related questions: (1) What utilization the system may have; (2) Whether the characteristic detailed time-patterns of demand will overload it; (3) How the typical frequencies of occurrence and duration of down-times for repair will affect its response; ( 4) How large a supply reserve of spare parts should be maintained to cover unpredictable failures of the system; ( 5) What net shift in cost-effectiveness will result from a given increment of redundancy. 11-2.2 The Place of Concepts As noted in chapter 7, it is frequently necessary to establish a concept of activity as a prerequisite for its evaluation. One type of military activity is apt to be so different from another that in order for any particular system concept to have widespread scope, it seems likely that the concept would of necessity have to be too abstract to be useful. But much greater formal similarity exists between military activities of apparent dissimilarity than is always recognized. Careful distinction must be made between a system conce7Jt and a particular recdization of a given system concept. 
Formal logics of operating systems can be helpful in the following ways: • (1) To command, in the intuitive management of the operation of any system; 11-1 (2) In standardizing ·the nomenclature and measures of performance and of support of operating systems of all types; (3) In the automating of systems, e.g., in the introduction of computers to program and control systems, to route demand, and supplies; (4) In the operational design of operating systems, in integrating considerations of performance and reliability, and in obtaining an economic balance between maintenance and redundancy in design; ( 5) In the planning of larger military structures, for which it is necessary to have as simple and completely parameterized representations of systems as possible; (6) For on-line operational analysis and trouble-shooting of systems. The more formal concept of a system developed in this volume includes standard measures for the input and the output of operating systems, measures of the reliability, availability, response-delay and other measures of system performance, and measures for the effort expended in supporting the operation of systems. With these general concepts and measures, overall representations are possible for rather complicated military systems such as repair shops, medical centers, command networks, and even perhaps an entire task force on assigned duty. Appropriately identified, the above concepts may be applied in principle to all types of military systems other than weapon systems. To make the identification, it is often necessary to focus on form of operation, rather than upon structural content. To cite an example dealt with more fully below, the demand for weapons, service and supply systems alike tends to be compounded of the occurrence of requests for the system's output (demand times) and the individual amounts of output requested or required per demand (demand size). Again, concepts of work load, overloading, backlog, downtime, etc., tend to be quite common to all performing systems, and the typical dynamical and statistical details tend to be just as similar. Measures of such quantities are often much more highly developed for certain types of systems than for others, illustrating often unevenness of methodological progress more than real difference between the systems involved. Quantitative measures whose use has been developed only in very recent years represent realistically the variety of experience which the operation of the system is bound to include. 11-2.3 Cornp1·ehens·iveness ·in System Concepts It will be evident from the following chapters that the subject of systems is developing, but by no means completely developed. For example, the theoretical subject of queues has been actively developed in recent years to the exclusion of service systems that break down. Again, the theory of reliability as so far developed has not combined considerations of the effect of traffic load with unreliability upon total performance and support. The theory of weapon systems analysis, as treated in the open literature, appears to emphasize the performance of systems in combat, even their performance in some supposedly single combat encounter, to the exclusion of considerations of the support of such systems as supply, maintenance, repair, and return to service. Yet the fact is perfectly well known that the logistic effort is many times the combat effort which it supports. 
Similiarly, inventory theories have to date largely ignored the options of repair and maintenance as alternative strategies for maintaining a supply of field equipment. Systematic application of the principles of this chapter may be of use in putting together more comprehensive concepts of system operation, from design to retirement. 11-3 General Examples of Operating Systems 11-3.1 Combat Systems ( 1) A gun is designed to fire when triggered, triggering being part of an external input which will normally occur time and again during the weapon's working life. Input to the weapon may include abuse and damage by enemy action. Maintenance is justified by the high value of reliability. 11-2 (2) On active alert or patrol, the weapon and its user are a weapon system (not so automatic), which processes sensory inputs, perhaps identifies the occasional signal of a target that is embedded in that input, and fires, perhaps hitting the target. Too many targets may saturate the system and kill it. The system may seek support. The exact time-occurrence of targets will be difficult to predict, and may be represented by a stochastic model, particularly in the short term for purposes of computing the traffic load, i.e., ammunition required and need for service. (3) A combat reserve or support force held in readiness is a complex "system" whose reliability and versatility are important. If the occurrence of demands has a statistical composition, most demands (like fires to be fought in a community) may be small while a few are very large. The force will need maintaining, renewing, careful design to match the input of requirements. (4) An automatic fixed-installation detection and/or defense system will have a certain reliability of performance and total effectiveness in its total mission. It must be kept ready. A complex of such units will need to be replenished between attacks. Attackers that arrive in too great numbers may saturate the system and overwhelm it. 11-3.2 Service Systems (1) A maintenance man is a server who responds to a typically random pattern of demand (requests for service) from a typically large and usually only institutionally-known source. The busier he is, the longer a demand waits to be serviced. Unmet demands do not threaten his existence as seriously as they do a weapon system but if they overflow the system which he constitutes, they are apt to result in complaint. (2) A machine is to do some tasks, some long, some short, that are needed from time to time and are requested at unexpected instants. Policies of optimal maintenance, and best strategies of machine repair and replacement, must be regarded as in effect part of the degree of service which the machine achieves. (3) A service center keeps ready a capability (typically complexly structured in the form of certain skilled persons, instruments, machines, etc.) in definite numbers to fill a time pattern of requests that it may be expected to receive. The possibility of illness or machine breakdown can be provided for by redundancy of capability, and redundancy can be increased through individual ,versatility. 11-3.3 Supply Systems An inventory is a stock of items kept in supply in order to meet a demand that is expected to occur recurrently from a population of anticipated users (customers). Demand is compounded of requisition times and requisition quantities, and is typically statistical in pattern of occurrence and amount. The inventory has to be controlled and replenished. 
Availability of stock for stochastic demand will not be 100%, but availability is a simple measure of performance. A customer will regard the system as reliable according to availability he encounters. The inventory will typically consists of many different types of items including spare parts for other systems. Some are in frequent demand and others in very infrequent demand (but the value of availability then may nevertheless be high-the "horseshoe nail"). Some items may be substitutable for others. If stock is depleted, a demand may be referred to another part of a supply system or complex, to a production point "upstream" from the inventory, or it may be backordered and become part of a queue awaiting service. The number of pieces of a given item that are kept typically in stock (including on reorder) corresponds to the redundancy of the system, which is momentarily reduced as demand occurs. Reviewing stocks and obtaining replenishments maintains the system, and system effectiveness is mucl: dependent on the maintenance policy. 11-3 A combat inventory is subject to enemy attack, and a peacetime inventory is subject to insidious or catastrophic effects that reduce its redundancy and-in the absence of maintenance, restoration, and replacement-ultimately destroy its capability. 11-4 General Representation A comprehensive representation of an operating system will include the following fundamental considerations : (1) The typical time-pattern (s) of external input to the system must be described. The input includes the frequency, magnitudes, and variety of demands for ihe system's output. It also includes potential attack and environmental intrusions upon the system's capability, which may destroy its ability to perform. (2) The mission of the system will be to respond to this external input in a prescribed fashion (e.g., upon call) to the extent of which it is capable. Measures of this response will be measures of its effectiveness, e.g., standby readiness can be the principal achievement of a warning system. (3) A useful output of the system will be included in the input to another system "downstream" from the system. The relationship between the systems must be established sufficiently to determine any feedback from its output. ( 4) Continual responding to demand will decrease the response capability of nonadaptive systems and components, e.g., as ammunition supplies decrease or as the system wears physically. A representation is required of the relationship between system output and the decline of system capability. (5) When the system fails, it normally becomes a demand upon some other system to have its capability restored. A representation is needed of the typical response of that support system to demands upon it in order to establish time for repair. (6) In terms of the above measures, a representation may be developed of the system's typical cycles of operability, of inoperability, of being busy with a backlog of work in progress, of being idle for lack of work input, of intervals of reliability prior to failure, of intervals of survival before being put out of operation by enemy attack. (7) When the system has failed or is busy, demands upon it which cannot be backordered or queued until it resumes serving them, may be referred to other cooperating systems in a supersystem or system-complex of which it is possibly a somewhat .redundant component; or the demands may be referred to a competitive system. 
The demand pattern may thereby be somewhat modified, especially that portion which consists of such "overflow" traffic. (8) Operational strategies for managing the system will include alternatives of design, of maintenance, and of employment. (9) So far as the design, performance, and support of the system can be quantitatively parameterized, the task of fitting the system into a larger military structure may be correspondingly easier for the efforts of design of planning, and of operation itself. The paragraphs which follow develop some of these points. · 11-5 External Input 11-5.1 Categories Significant external input to a system can be classified into two categories (1) Demand for the system's capability. (2) Attack on the system's capability. Demand and attack on a system may occur simultaneously. For a weapon system, much of the demand for the system's capability may be precisely in defense against an attack on its capability; and much of the attack will occur as demand. Service, production, and inventory systems 11-4 also may be attacked with weapons. Under attack may be included all accidental damage originating from sources external to the system. 11-5.2 Compound Demand Processes Demand and attack will occur in discrete increments, although sometimes at such rapid rate as to appear continuous. The time at which an increment in demand (or an arrival) occurs is termed a demand time. The interval between two consecutive arrivals is termed the inter-arrival interval. The amount of effort required of the system to respond to the demand is termed the demand size. The quantitative sum of all the demand over a given period of time is termed the total demand for the period. The set of all possible demand histories is termed the demand spectrum. Thus demand is likely a compound process, compounded of a process of demand times and demand sizes. The pattern of demand sizes may vary during the passage of time in a way different from the simultaneous time-development of the distribution of demand times-e.g., demand times may become less frequent if the customer population of users at a given rate is decreasing; demand sizes may decrease if usage, but not the size of the customer population, is decreasing. Both processes may deserve probabilistic representation. Demand at high rate may be approxi mated as a continuous flow if convenient. 11-5.3 Examples of Compound Demand Processes (1) The time pattern of fire-bursts of the weapon will depend on the time-pattern of occurrence of targets, and the lengths of bursts upon the nature of the targets and effectiveness of fire, which will fluctuate from burst to burst. (2) Requests at a service center will typically arrive in unsynchronized sequence, and the variety in the amounts of service requested will reflect the variety of requirements of the customer population to be serviced by the center. For example, the lengths of telephone calls tend to be exponentially distributed. • (3) The randomness of the times of occurrence of requisitions is well established. Demand sizes-i.e., customer order sizes-will typically range from quite small to very large, in decreasing frequency of occurrence as the order size increases. 11-5.4 SpecialForms of Demand Supply and service centers and systems are designed to respond to prior demand processes. Often the demand has been referred from the point and time of failure to the location of a supply service center at a later time. 
During such referral the demand may join other demands so that a "traffic process" of demand occurs at the center. Demand frequently originates with the occurrence of a change in state of some system or population of systems. The form and time-patterns which such demand may take, consequently, depend upon the way in which the system originally may change state. Demand that originates may induce a secondary demand. The form of the secondary demand process will then depend upon the mechanism of induction.

11-5.5 Statistical Representation of Demand

By demand from a sector is meant demand of which

(1) The demand times tend in the short term to be random and collectively independent (so that occurrence of demand is a Bernoulli or Poisson process);

(2) The demand sizes are random variables of some probability distribution characteristic of the demand in question, and are statistically independent individually of each other and of the demand times.

The number of demands in a given period of time thus has a Poisson distribution, so that an alternative name for this type of demand is compound Poisson. The term "public" is a more familiar term to suggest the characteristic independence of the increments of demand in non-military instances. While military demand from a given sector upon an authorized supply point may tend to be supposed as not so unpredictable, no history kept of the complete details of the occurrence of such demand has failed to display a Poisson character in the occurrence of demand. Figures 14-1 and 14-2 show some observed data on demand for items of ordnance supply that confirm this fact.

The compound Poisson demand process has received considerable development as an ideal input process not merely to inventories, but also to queues and service centers generally. Insurance records have been reported to indicate that the process describes the occurrence of accidents, the frequency of their occurrence being Poisson and the demand size being the magnitude of the individual accident. It seems likely that the occurrence and duration of emergencies of any kind might well be represented by this stochastic process, including combat alerts and minor engagements during extended combat.

This type of input is equivalent to simultaneous input from a great many sources in parallel. It is to be contrasted with serial input, an increment of which usually blocks the occurrence of further input for an immediate interval, as, for example, in the following of each other at safe distances by the vehicles in a single lane of traffic. Aircraft converging on an airport from several directions and at several approach altitudes, it has been found, constitute input in parallel.

The traffic characteristics of compound Poisson processes are simple and linear. The parallel composition of input means that the merging of two independent compound Poisson processes will be a compound Poisson process. If a given compound Poisson process is randomly divided or forked into two processes, each demand going entirely into one of the two dividing streams, then each of the two processes will be compound Poisson, and they will be statistically independent of each other. Further, if the individual demand sizes of a compound Poisson demand process are scaled or transformed, not necessarily linearly, either randomly or by a constant factor, then the result is a compound Poisson process.
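These closure properties are easy to examine empirically. The following Python sketch is not part of the pamphlet's own material; the demand rates and the size distributions used are illustrative assumptions only. It generates two independent compound Poisson demand streams, merges them, and then randomly forks the merged stream, so that the observed rates over a long interval behave as stated above.

import random

def compound_poisson_stream(rate, t_end, size_sampler, rng):
    """Generate (time, size) demands on (0, t_end) with Poisson demand
    times at the given rate and sizes drawn from size_sampler."""
    t, demands = 0.0, []
    while True:
        t += rng.expovariate(rate)          # independent inter-arrival intervals
        if t >= t_end:
            return demands
        demands.append((t, size_sampler(rng)))

rng = random.Random(1968)
T = 1000.0
# Two independent sectors with assumed (illustrative) demand rates and size laws.
stream_1 = compound_poisson_stream(0.4, T, lambda r: r.randint(1, 5), rng)
stream_2 = compound_poisson_stream(0.7, T, lambda r: 2 + r.randint(0, 3), rng)

# Merging: the pooled stream again looks compound Poisson, with rate 0.4 + 0.7.
merged = sorted(stream_1 + stream_2)
print(len(merged) / T)            # observed rate, close to 1.1

# Random splitting: route each merged demand to one stream with probability 0.3.
fork_a = [d for d in merged if rng.random() < 0.3]
print(len(fork_a) / T)            # close to 0.3 x 1.1 = 0.33

Over a long interval the merged stream shows the summed rate, and the randomly forked stream again shows Poisson-distributed counts, as the paragraph above asserts.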
The compound Poisson process is conceptually the same whether the demand be stationary or nonstationary (whether the statistics of demand change systematically with time or not). The demand times constitute a continuous Bernoulli process with rate a(t), where a(t) will be a constant a if the demand is stationary. The mean number N of demands in a time-interval (T1,T2) will be

N = the integral of a(t) dt from T1 to T2, in the nonstationary case
N = a(T2 - T1), in the stationary case

A demand of size S(t) occurring at time t has a probability function s(x;t) = prob {S(t) = x}. The average probability function of demand size over the interval is the corresponding rate-weighted time average of s(x;t). Attention can thus be focused on this time-averaged demand size, itself a random variable, which can be denoted by S(T1,T2), or just by S if no imprecision threatens. The mean demand in an interval (T1,T2), symbolized by q(T1,T2), is

q(T1,T2) = (mean number of demands in (T1,T2)) X (time-average of the mean demand size) = N X S

When the demand size is stationary, the variance of the total demand in the interval is N times the mean square of the demand size. Additional statistics are given in tabular form in paragraph 14-5.1.

11-6 Disposition of Demand

Here we are concerned merely with the formal disposition of demand that cannot be immediately met, not with the physical way in which the disposition may be executed. The following are common methods by which unfilled demand may be treated:

(1) The demand may be modified to a form acceptable to the system for immediate response, e.g., substitution of stock, spare part, food, weapon. Such modification may require a substantial effort. Modification may range from complete evaporation (e.g., via cannibalization) through all degrees of fractionization.

(2) Demand may be stored by the system until capability becomes free for assignment. In a supply system such demand is said to be backordered; in a service system, it is queued. The actual storage capacity of a queue will be part of the capability of the system, since the larger the capacity the less demand is refused (see Erlang's loss formula, para 12-11). Storage may be a significant phase, amounting to the work of a special system or subsystem performing only that function. While waiting, demand may become impatient, may alter in priority, or may modify. The waiting demand may be invisible, as in the case of automatic telephone systems.

(3) Demand may be routed, or route itself, elsewhere. A given system may be a redundant component of a larger system, e.g., one depot in a military supply system. If the system is a cooperative one, the selection of one among several desirable alternative systems may be a significant decision for a central routing control. This control is a switching system. If the components are mutually competitive (e.g., commercial inventories), the demand will typically be lost by the system at which it originally arrived.

11-7 Maintaining Capability

11-7.1 Maintenance of a System

The processing of a demand usually reduces in some way the capability of the system to process further demands. A machine wears a little from being used. An inventory is reduced by supplying a demand. A human server may become fatigued although, unlike the machine and the inventory, its capability to perform may be increased by performance due to its adaptability. From time to time it will be necessary to replenish the system's capability, to restore at least to some extent the accumulated degradation. Such restoration of capability may include maintenance, repair, replenishment, and renewal. Maintenance is action taken in advance of breakdown of the system.
Renewal may vary from partial, i.e., the replacement of a component with a new component, to complete replacement. The measure of renewal is not in terms of what is done, but in terms of the level to which the system's capability is restored and in terms of the set of factors on which the system's future behavior is now made to depend.

In the simplest cases, system capability can be regarded as a (vector) function of time, c(t), which is at a maximum when the system is new. With use, wear, damage, and time, c(t) declines. The rate of decline is dependent on use, and also on the offsetting effects of maintenance. The rate m(t) of maintenance should depend upon present and forecast c(t). The greater the maintenance rate, the less the excess of decline over some minimal decline. Both repair and replacement raise c(t) abruptly. The strategy for maintenance is based partly upon knowledge of use of the system, and partly upon the effect and cost of a maintenance program. Maintenance may be intermittent because of economies, i.e., continuous maintenance would increase capability only at an uneconomical effort.

In complex types of machines, there may be two or more primary components of decline: (1) short-term rapid decline of repairable or replaceable parts that increases the probability of system failure (preventive maintenance can restore such declines by repairing or replacing the parts, by lubrication, etc.), and (2) long-term, slower, and inexorable system decline. Preventive maintenance conducted to oppose short-term declines at best merely assures that the system declines no faster in the long term than at the long-term rate. The system cannot decline more slowly. Both declines are representable as the momentary increasing of the probability of failure, even though the machine's performance may still be representable as binary, i.e., as satisfactory or not.

Long-term decline can be represented as a decline in output rate, although it is sometimes taken only as an operating cost (which may thus be an inaccurate representation of all that is happening), especially when the main loss of quality is an increase of the high-frequency component consisting of the failure of replaceable parts. The decline in output rate may be quite complex to represent, in that the versatility of the machine decreases and different capabilities decline at different rates. Any given system or type of system can thus be regarded as a particular set of decline trajectories, any one of which can be selected by a maintenance policy.

11-7.2 New or Replacement Systems

A given capability can be restored by a new or replacement system. Any competitive advantage constituted by the machine's special capability when it was first technologically new, e.g., radar or any new invention or process, declines probabilistically in time. The enemy learns of the capability, often by the very fact of being threatened with it. The machine's obsolescence rate thus increases with time. A choice between two alternatives of new or replacement systems is, so far as preserving effect-capability goes, a choice between two alternative sets of capability-decline trajectories, and for each a further choice of maintenance policies.

11-8 Cycles

At any given point in time during its operating life, a system will be found in one of a number of states. The notations introduced here are used throughout the following chapters. Measures of these states are not developed here, but in those chapters.
Figure 11-1 shows the relationships of the system states that are considered:

(1) At any given time the system is either going (G in fig. 11-1) or not operable (U in fig. 11-1). By going is meant either performing its function(s), or able to do so. By not operable is meant not performing any such function and not able to do so.

(2) The state going (G) thus divides into two substates: either (a) idle, meaning available on demand, corresponding to the interval A in the figure, or (b) busy, already committed to some piece of work, corresponding to the interval B. The extent of the commitment is measured in chapter 12. If demand arrives during this interval, it cannot be immediately and fully responded to by the system. The system may cycle from B to A and back to B any number of times within the interval G. The interval G will terminate while an interval B is in progress, and an interval U will begin, if the system has failed from wear, overload, or attack. The interval G may terminate as an interval A only if the system is deliberately pulled off-line for scheduled maintenance, repair, modification, or replacement.

(3) The interval U (not operable) has three subintervals: (a) an interval W of waiting for service (maintenance, repair, etc.), which may be 0 on some cycles; (b) an interval S of being serviced; (c) a storage interval I. If a testing of the system at the end of the interval S proves the system not yet operable, it may face another interval W. When restored to serviceability, a system may first have to go through the interval I, of being "in inventory" ready for issue. This interval I is part of an interval A for the larger system of which this inventory is a component. I may be 0.

(4) The system's operating life begins typically with an interval I and ends with an interval G.

(Figure 11-1. Principal system states: the operating life alternates between intervals G (going) and U (not operable). Within G, the system alternates between A (idle) and B (busy); G ends either with a B, by failure, or with an A, by deliberate termination. Within U, the subintervals are W (waiting for service), S (being serviced), and I (in inventory), any of which may be 0 on a given cycle.)

11-9 Measures of Effectiveness and of Cost

In each of the three following chapters, measures are presented of the performance of the system in question (service in chapter 12, support in chapter 13, and supply in chapter 14), and also of the effort involved in performance. In addition, special solutions are presented for certain tactical situations, solutions which involve the use of the measures presented to determine an optimal strategy of action. It is emphasized here, however, that there is no single measure of effectiveness or of cost which should be employed in every situation involving any such system. As described in chapter 8, restrictions upon resources available or restrictions of requirements are based upon the value of these elements of the situation for other purposes, in other situations. Much optimization is necessarily approximate. When these values change sufficiently, what was a restriction may become an objective instead of a mandatory matter. Consequently, in every case the measures chosen must be carefully selected.
In addition, measures must be selected with a view to their tractability in the task of optimizing; a good approximation to a best strategy that is arrived at by a hasty if imperfect model of effectiveness may be operationally much preferable to a more accurate model, the optimization of which may require more computational effort than is available. Considerable expertness may be required in the operation, in modelling, in command, in coordination, and in selecting the measures to be used in any given case.

CHAPTER 12
STOCHASTIC SERVICE SYSTEMS

12-1 Scope

Although much of the subject matter of this chapter has historically been associated only with the topic of queues and only with kinds of work that tend to be characterizable as the rendering of service, it is nevertheless important for discovering and exploiting applications to recognize that the topics of the chapter can apply to the performing of any kind of work or the exerting of any kind of effort by any kind of organization, from support to combat, from administrative to fire-control. The general interpretation of queue is backlog of work, and of service, work. The general interpretation of server is worker, working organization, or facility. The general nature of the flow of work to, through, and from the facility is traffic, whether it be arriving, internal, or departing. The length of the queue, or of the wait, that will typically be the result of such traffic is a direct measure of the effectiveness of a working unit of given capability for units of work.

The specific topics of the chapter are the traffic and stochastic characteristics of

(1) The momentary work flow (as opposed to the general planning of flow, on the assumption that it will be smooth, which is the subject of linear programming);

(2) The momentary delay, not merely as a side feature of performance, but also in the way it may affect the efficiency of work-loading;

(3) The stochastic characteristics of the way in which the system is preoccupied with work, its utilization, idleness, workload, etc.;

(4) The work dispatched, or the work-overload referred, to other units.

In order to determine these characteristics, an a priori statistical approach is seldom adequate; instead, one must identify the structure of the particular activity and working unit, its capabilities, the source of demand, the ongoing routing and control of work, etc.

12-2 Summary of Operational Aspects

Many military activities of unrelated identity can be regarded as constituting a random flow of units through some operating point at which a service of some sort is performed. The means for performing the service is a stochastic service system, at which there arrives a stochastic demand for some capability possessed by the system. The amount of a given type of work required of the system by a demand is in the simplest case a service time for the system. More generally the service may be a vector of service times or a schedule of support that may even be intermittent. If (as is typical) the system is committed when a unit of demand arrives, the demand may have to wait, joining a queue, unless it has priority over work in progress. The arriving demand may refuse to wait (balk), or may bypass because the queue is full or cannot be maintained. The demand may renege after waiting, out of impatience; or it may evaporate. The capability of the service point is typically multiple.
Itmay be organized into a number of servers, fed by a common queue, or by separate queues. Groups of arrivals may be serviced in bulk. The servers may gang to service a demand or work separately as individual channels. The servers may be common in capability or be differentiated into special purpose units. The queue may be organized to provide a definite order of service among waiting units. Priorities 12-1 may be employed, sometimes in very sophisticated modes, for the purpose of minimizing the value lost in waiting. The wait represents a loss of effectiveness, but the economic wait time in some systems is several orders of service time in magnitude. The presence of a queue may constitute a pressure tending to reduce the service time below a relaxed performance-level of lightly loaded service. Queues of excessive size may cause hypercongestion, reducing the efficiency of service. Times during which some or all of the service capacity is idle represent necessary non-utilization. Because of the typical randomness of arrivals, the traffic volume of work that can be assigned to the system must be less than its capacity. Either a backlog of work or idleness is the normal situation for the service unit. Measures of performance and effort are thus dependent on random value. Design and operational questions include: what volume of traffic to "aim" at the system (assign to it routinely); how many servers will be needed; should servers be specialized in capability; what priorities and order of service should be used? 12-3 Characteristics of Service Systems Capability to perform work is associated with a machine or piece of equipment, with a person and his particular talents, with a facility based on some design of nature, such as a harbor, but above all with systems composed of facilities and individual and joint capabilities. The facility may be stationary or mobile. The facility may be directed remotely (e.g., a fire-support unit or a maintenance pool reached by radio or telephone) or it may be locally commanded (e.g., an infantry squad on patrol). As a matter of abstract terminology, any such system will be referred to as a service system even though its work may be other than what is sometimes implied by the term" service." The term suggests, quite appropriately in this day of cooperative activity, that the system does its work mainly in response to demands from other cooperating systems. The work may be individually requested or it may be automatically induced by the larger organization and assignment of unit missions. Typically the capability of any such system is multiple, to perform many different kinds of work (types of demands that can be responded to, weapons that can be used, tools that can be employed, etc.), but the work required at any particular moment is a momentary selection of its capabilities. This availability of multiple capability in the system makes it an effective strategy to give it a general assignment to respond to any one of a general class of demands. These may come from any one of a general group of military units in some assigned sector of the military community. This strategy is effective in spite of the fact that the exact time, duration, and nature of the work that will be demanded of the unit cannot be predicted in advance; and, as a result, each demand typically has to wait for service. In other words, the availability of help at slight delay is preferable to the non-availability of help. 
The following tend to be random: (1) Time interval for arrivals; (2) Whether a particular service point is busy or not; (3) Lengths of queues; (4) Waiting time to service. Service times may also vary, depending on the nature of the service performed. In this case, the times of leaving a service system may also be random. Consequently, it will be impossible to design any measure of performance of the service operationor of the entire operation, including the original assignment of missions-the momentary increments of which will not also be random. In short, any evaluation of the operation will of necessity have to be statistical and probabilistic in concept. Similarly in the design of service systems, it is necessary to recognize that the queue sizes, waits, and service loads actually experienced during any intended period of operation will differ probabilistically from their expected values. 12-4 Examples 12-4.1 Combat (1) A helicopter support group is called upon to fly support missions that arise as a result of battle. The missions will not occur in regular order and will be unequal in amount of effort 12-2 required. Demands may exceed capabilities over a given period of time, and not all demands will be able to wait. (2) At a combat-support vehicle-repair shop, equipment arrives for repair. A tool crib, from which needed tools must be drawn, is an internal stochastic service system within the stochastic service system represented by the entire shop. 12-4.2 Traffic (1) Vehicles seeking to cross a main road must queue up at a stop sign. Service time (for the front unit crossing the main stream) includes the interval until in the main road traffic there occurs a gap long enough to permit crossing. (2) At an airport runway, aircraft waiting to take off and land are service demands on runways. A specially designed queuing representation of this runway operation is employed by the Federal Aviation Agency to compute effective traffic capacities and economic delays for commercial airports and to serve as a criterion for runway design. (3) When vehicles merge from two lanes to one on a highway (e.g., around a wreck) the passageway in the remaining lane acts as a service-point through which each vehicle must pass. The faster a vehicle travels through this point, the shorter its service-time. Consequently, two queues should not be allowed to form right at the merge point or to merge at slow speed. Instead merging should be forced to occur at high speed "upstream," and the traffic velocity past the blockage kept high. 12-4.3 Public Facilities (1) The telephone industry has a long history of systematically employing queuing theory to determine its requirements for circuits during the peak hours of telephone use. Much of the early development of queuing theory occurred within the telephone industry in the first decades of this century. (2) Each copy of a library book is a server, the service-time being the length of time for which the borrower keeps it. A library is a collection of rather special purpose servers. The probability distribution of books in frequency of demand may remain more stable through time than may the demand rate for any one title. Consequently, it may be possible to calculate the total budget needed to provide service at a specified level for each server from the distribution of demand-rate alone. 12-4.4 Manufacturing A multi-purpose machine does one kind of work at a time, typically for some extended duration. 
For example, in mass production the machine may be used to produce an economic lot size of some object, consisting of a number of like pieces. Upon completion of one such run, or job, or task, the machine is switched to some other task. This task may have resulted from the fact that the inventory of a product has just dropped below a predetermined level. If the machine can make different kinds of objects and if the demand for these is independent between them, then some demands will arrive while others are still being met. In such cases the new demands will have to wait. Note that the wait will increase the amount of safety stock of the object that would otherwise be required to prevent depletion of inventory from occurring during the wait.

12-5 Measures of Performance

(1) For compound demand, the arriving traffic arrives at an arrival rate of a arrivals per unit time, and each arrival requires one unit of the service capacity of the system for a time interval S, the service time. The quantity s = 1/S, the reciprocal of the average service time, is the average service rate per server.

(2) The traffic intensity q is the product of the traffic offered the system and the probability that the system is open. q thus equals the volume of traffic handled by the system if no uncontrolled backlog develops. The traffic intensity q may be measured as the average amount of work, measured in units of the time that will be required of the system to perform it, that arrives at the system per unit time. The unit of traffic intensity is called the Erlang. For example, if demand is a compound Poisson process at rate a and with service time S, then the traffic intensity is the product aS. For parallel servers, the intensity must be less than the number of servers.

(3) The wait W of an arriving unit. The wait of a unit arriving at time t is a random process W(t), typically with considerable autocorrelation. The average wait will be symbolized by W(t); the probability that W(t) ≤ x by W(x;t); and the probability density that W(t) = x by w(x;t).

(4) Service-availability is the probability that the wait is 0. This availability is symbolized by A(t) at time t, and in steady-state representations by A, i.e., A = W(0;∞).

(5) The service backlog is the amount of time that will be required to complete all work that is in process or is waiting. When arrivals are a Poisson process, the wait W of an arriving unit coincides with the backlog in the case of a single server.

(6) Queue-size is the number B of units that are waiting for service. It is not operationally equivalent to wait, since physical limitations of space in which to hold a queue can limit queue-size, and impatience of arriving units at the wait that will be required of them may separately affect the arrival rate and thereby queue-size. The size of the queue at time t is a random variable (process) B(t), with average B(t). The quantity prob {B(t) ≤ n} will be symbolized by B(n;t) and the probability function, that B(t) = n, by b(n;t). Queue-size is related to wait as follows: B = (average arrival rate) X W = aW (cf. para 12-8).

(7) The number of units present P is the number of units waiting plus the number being serviced. It is often termed "system size." In the simplest arrangements, P = B + E, where E is the number of servers engaged or busy. The value of P at time t is symbolized by P(t), and its distribution function by P(n;t), the probability that P(t) ≤ n. The probability function is symbolized by p(n;t).

(8) Output consists of discharges upon completion of service.
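For the simplest single-server case, with Poisson arrivals and exponentially distributed service times, these measures take well-known closed forms. The Python sketch below is a minimal illustration only; the arrival rate and mean service time used are assumptions for the example, and the formulas are the standard steady-state ones rather than anything tabulated in this pamphlet.

def mm1_measures(a, S):
    """Steady-state measures for a single server with Poisson arrivals
    at rate a and exponential service with mean S (the M/M/1 case)."""
    q = a * S                    # traffic intensity (Erlangs)
    if q >= 1.0:
        raise ValueError("unstable: traffic intensity must be < 1")
    u = q                        # utilization of the single server
    A = 1.0 - u                  # service-availability, probability of no wait
    W = u * S / (1.0 - u)        # average wait of an arriving unit
    B = a * W                    # average queue size (B = aW)
    P = u / (1.0 - u)            # average number present (waiting + in service)
    return {"q": q, "u": u, "A": A, "W": W, "B": B, "P": P}

# e.g., 4 arrivals per hour and a 10-minute average service time
print(mm1_measures(a=4.0, S=1.0/6.0))

With these assumed figures the traffic intensity is 2/3 of an Erlang, the probability of no wait is 1/3, and the average queue holds about 1.3 units, illustrating how the measures above fit together.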
12-6 Measures of Effort

The principal measures of effort are

(1) The maximum service capacity c provided at the service facility. For parallel servers (channels), c is the number of servers, and for compound demand c/S = cs is the average system service rate. The service rate per server is then s.

(2) The number E of servers (or units of service capacity) that are busy (unavailable) at any moment. The value of E at time t is symbolized by E(t); the probability that E(t) ≤ n, by E(n;t); and the probability that E(t) = n, by e(n;t). The average value of E at time t is symbolized by E(t) and in the steady state by E. In the case of c parallel servers, P(t) = B(t) + E(t).

(3) The utilization of the system, denoted by u in the steady state, and by u(t) under non-steady-state conditions. This is the fraction of time that each unit of service capacity (e.g., server) is busy, on the average. The utilization is thus equal to the ratio of traffic intensity to service capacity:

u = q/c = E/c

In the case of c parallel servers and compound demand, u = a/(cs). In the case of only one server, u = 1 - A.

(4) The effective processing capacity E* of the system will be less than c, perhaps substantially so, and corresponds to a utilization u* that is acceptable or optimal, a utilization which gives acceptable and/or feasible delays and operation. Thus in practice, E* = u*c < c.

(5) During a busy period, the full capacity of the service unit (e.g., all servers) is busy without interruption, and an arriving unit will have to wait. The length C of a busy period is a random variable, typically of large variance. Various kinds of busy periods can be identified, depending upon the number of servers busy.

(6) The period alternate to a busy period is an idle period, and this also may be further specified in terms of the number of servers idle. (In an inventory, stock is available to meet demand. Each unit of stock corresponds to an idle server; if stock is exhausted, demand will have to wait.) Thus the probability of no wait, A, coincides with the probability that some server is idle. In the case c = 1, the average length I of an idle period is related to the average length C of a busy period by

u = C/(I + C), and A = 1 - u

(7) The maximum queue capacity, symbolized as Bmax.

12-7 Classifications

The usual method of classifying the general nature of a queued service operation is by the type of arrival process, the type of service time, and the number of servers. Symbols employed are:

Type of arrival process (symbol): Poisson (M); Renewal (GI); Erlang inter-arrivals, type k (Ek); Constant inter-arrivals (D).
Type of service time (symbol): Exponential (M); General (G); Erlang, type k (Ek); Constant (D).

Thus an M/G/1 queueing problem is a single-server queue with a Poisson arrival process and general service time. If nothing else is specified, the simplest ideal case is assumed, namely an infinite queue capacity Bmax, service in order of arrival, and steady-state conditions. By use of this classification, numerous references to books and papers on queues will be found in reference 12-1. The above classification applies only to arrivals that in effect come from an infinite source. Additional classification is required to specify the type of priority, the queue-discipline (e.g., queue-shifting), whether service is bulk or not, whether arrivals are batched or not, whether arrivals always join, never leave, etc. Of special importance, but difficult to classify, is the effective nature of the access of arrivals to servers.
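For c parallel servers with Poisson arrivals and exponential service (M/M/c in the classification just given), the probability that an arriving unit must wait, and hence the average wait, can be computed from the standard Erlang delay (Erlang C) formula. The Python sketch below is offered only as an illustration of how such a computation might be set up; the numerical inputs are assumptions, not values from this pamphlet.

def erlang_c(c, a, S):
    """Probability that an arrival must wait in an M/M/c system with
    arrival rate a, mean service time S, and c parallel servers."""
    q = a * S                       # offered traffic in Erlangs
    u = q / c                       # utilization per server
    if u >= 1.0:
        raise ValueError("unstable: traffic intensity must be less than c")
    b = 1.0                         # Erlang B, built up by the usual recursion
    for n in range(1, c + 1):
        b = q * b / (n + q * b)
    return b / (1.0 - u * (1.0 - b))

def mmc_average_wait(c, a, S):
    """Average wait of an arriving unit in the M/M/c case."""
    return erlang_c(c, a, S) / (c / S - a)

# e.g., 3 servers, 10 arrivals per hour, 15-minute mean service time
print(erlang_c(3, 10.0, 0.25), mmc_average_wait(3, 10.0, 0.25))

With the assumed inputs, the utilization per server is about 0.83, roughly 70 percent of arrivals must wait, and the average wait is on the order of a third of an hour; such figures are the raw material for choosing c in the tactical questions of paragraph 12-9.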
12-8 Additional Operating Characteristics

Specific operating characteristics of service systems will vary considerably depending upon the type of operation, since the tactical management of queues requires considerable comprehension of the particular random characteristics of queues. Much of the general behavior can be illustrated by the M/G/1 queue, which consequently serves as a useful conceptual reference. Some operating characteristics of queues are as follows:

(1) Queues that are caused by the statistical independence of arrivals and service times are distinguished by a surprising property analogous to the sensitivity of potentially explosive phenomena, namely: over the long term, the average arrival rate of demand must be definitely less than the average servicing rate, otherwise a queue of larger and larger size will surely gradually accumulate. The explanation is simple: time lost when the servicing operation is idle for want of arrivals to serve is never ultimately regained, because of the statistical independence of arrivals and service times. Thus, for steady state, u < 1.

(2) The average wait is not determined entirely by the utilization of the service facility, but depends separately and strongly upon the coefficient of variation of the service time, i.e., upon σ(S)/S, the ratio of the standard deviation of the service time to its mean. The dependence is simple in the M/G/1 case. The stationary average wait W of an arriving unit, measured in units of time equal to the mean service time, is given by the formula

W = average wait per arrival = [u/(2(1 - u))] [1 + (σ(S)/S)^2]

Thus, of all possible service-time probability distributions, constant service time for all units will occasion the least wait, because σ(S) = 0. Note in particular that u may be very low and yet the average wait W may be very long if the coefficient of variation of service time is sufficiently large. Thus eliminating the occurrence of any excessively long service times may most sharply improve a queued operation.

(3) At any moment the units in the queue are generating "waiting" at a total rate proportional to the queue size. This total waiting rate can also be generated as the product (average number of units arriving per unit time) X (average waiting per unit arriving). Hence the equation B = aW.

(4) The lengths of busy periods fluctuate with sharply increasing variance as the utilization u increases. In the M/G/1 case, the time required to reduce P (units waiting and being serviced) to 0 is the sum of P busy periods if service has just begun. The variance of C is then proportional to (1 - u)^-3. Obviously the extremes of queue-fluctuation, and of wait, can be avoided if the service rate is queue-responsive.

(5) The tendency to queue can be markedly aggravated if the presence of a queue actually impedes service. A familiar practical instance occurs in traffic when two traffic lanes are required to merge into one. If traffic is light, or if the merging is tightly controlled and executed well upstream of the blockade, merging may be achieved at no loss in flow along the highway nor, perhaps, any loss of velocity of individual vehicles. But if merging is delayed until the blockade point is reached, then sooner or later randomness will result in two cars arriving there simultaneously and having to slow down to resolve their positions. Other vehicles arriving may continue the merging at slow speed. This drop in merging speed constitutes the drop in highway flow at the point, and the lower flow (service) rate tends to be maintained as long as a queue continues.
Queues of this sort, once started, may even persist after the clogging obstacle is removed because of the fact that the flow rate obtained at a point on a traffic route acts like a servicing rate at the point, arrivals being the approaching vehicles. Sometimes the head of the queue will now drift upstream or downstream, depending upon the characteristic propagation speed and direction of velocity waves along the traffic stream. Only an ultimate chance cavitation of sufficient length in the arrival input will dissolve the queue. This phenomenon is not confined to vehicular flow. Sometimes it is physically possible to observe and measure only the occurrences of output from service. For example, in the M jG/1 case, the time interval T between two successive completions of service has the complementary probability function prob IT> tl = prob IS> tl + (1-u)e-atj' eaxs(x)dx 0 If service time has a maximum Smax, then a plot of this function based upon the observed values can be used to estimate sln:IX• The output of an M/M/c queue with Bm,.x = ro is Poisson in the distribution of the number of discharges in any given period of time. For compound demand, the sets M1 of times at which units arrive, and M2 of times at which servicings are completed, constitute event-processes that are "embedded" in the total continuous-time process. So does the union of M1 and Mz. This latter embedded process is the natural one to simulate if possible. For Poisson arrivals, the value of P just before times of the set M1 has the same 12-6 distribution as at arbitrary times under steady state conditions, and has also the same distribution as the value of P just after times of the set M2. ·(8) Efficiencies of size. For the same utilization u, a service unit consisting of many servers will have a shorter average queue than a service unit of few servers. For example, suppose that M machines-each with exponentially distributed running times of average length a-1-are serviced when they stop by one of a group of c servers, the service time being exponential with mean equal to 0.1a-1• By use of the formulas given in paragraph 12-11 (machine repair), the following results are easily calculated for comparison with each other. The probabilities are given for the various possible values of P, the number waiting or being serviced. p 0 1 2 3 4 5 6 7 8 9 10 11 12 P/M Bjc M=4} .65 .26 .08 .01 .002 (P does not exceed 4) .116 .32 C=1 M=12} .32 .38 .21 .07 .02 .006 values are negligible .085 .36 C=3 P/M, the fraction of time a machine is out of operation (being serviced) decreases and B/c, the fraction of time a repairman is busy, increases as M increases for the same utilization rate. 12-9 Tactical Aspects 12-9.1 General Tactical aspects of service systems include alternatives of design and of operation, Alternatives of design include number of servers, traffic intensity, maximum feasible queue size, specialization, and excess. Alternatives of operation include priority, routing of overflow, any possible scheduling. The paragraphs which follow are illustrative of the considerations in choosing between alternatives. 12-9.2 Capacity of Service and of Queue The traffic intensity q should correspond to the system's effective capacity E*. Because of the sharp increase in queue size and wait when the utilization u is not significantly less than 1 in value, E* may be much less than c. 
For example, the practical capacity of an airport will be one in which Bmax is seldom exceeded, for when B > Bmax the queue cannot be controlled and for safety reasons incoming traffic has to be diverted. In many cases the formula in paragraph 12-8(2) above can be employed to approximate E*. Formulas in paragraph 12-10 will be useful in determining the number of servers required to handle a given traffic volume at given utilization. 12-9.3 Costs: Wait vs. Idle Capacity. Maintenance M machines are to be serviced by c repairmen when they fail. Each repairman has a value per unit time of v, and the value of a unit of machine running time is m. The objective is then to determine the value of c which maximizes m(M -P) -vc, P being the average number of machines not running because they are either waiting or being serviced. Note that it is not equivalent to sub~titute Qfor P. Alternatively, the objective may be to keep the output from M machines up to a fraction f per machine per unit time. Thus, f = 1-PjM. How many repairmen care required? Paragraph 12-10 (machine repair) provides formulas. For a given traffic input, determining the value of c which minimizes the sum of the value lost in idle waiting and the value lost by under-utilization, is an instance of provisioning in the face of randomness of the type identified in paragraph 8-1.3(5). It is analogous, as a problem of optimal structuring to the problems of optimal redundancy in paragraph 13-12 and of optimal inventory in paragraph 14-7. 12-9.4 Specialization of Servers Delay may be used to measure the effectiveness of specialization of service. For example suppose that six different kinds of jobs occur as the Poisson arrivals to a center, one of each type arriving every 12-7 two hours, and that each job takes 1% hours on the average, the actual time being exponentially distributed. The following table summarizes the alternatives of assignment and the corresponding delays. Number oj server types Jobs Iserver type Averagedelay (hr) 1 6 1.0 2 3 2.3 3 2 3.8 6 1 7.8 Thus complete specialization is, in this case, nearly eight times worse (measured only in terms of delay) than complete non-specialization. 12-9.5 Access to the Queue The variety of kinds of human and mechanical service includes a great variety of forms of physical access and approach of a given free server to the queue or queue(s) of waiting units. Many of these have been analyzed in studies conducted with the telephone industry, mostly for exponential or Erlang service times. For example, a typieal subscriber has access only to certain of the circuits of a telephone exchange. Many details will be found in reference 12-2. The design of the queue and of Qmax are related questions. 12-9.6 Priorities Priority is order of service based upon the identity of the arrival, or its priority class. Order of service may also be based upon known length of service time. Priorities are employed either to deerease the total value of all waiting, or, in some cases, to establish a feasible and reliable order of service. Priority based on service time can affect the average wait of all units. This fact can be easily seen in case some arrivals have a service time of 0. Permitting these arrivals to go first will not increase the wait of other arrivals, but will decrease the wait of these 0-service-time arrivals. When service time is precisely predictable, service in order of least service time will minimize the average wait in the ease of Poisson arrivals. 
If service time is not exactly known, but if there is a set of classes numbered i = 1, ... , finite or infinite in number, such that each arriving unit belongs to exactly one of these classes, and if the class of an arrival is identifiable when the arrival arrives, and if the service time of a unit of the ~"th class is a random variables; with means;, and if the value (cost) per unit time incurred per unit of class i that is waiting is equal to v;; then the total cost V of all waiting that is incurred can be minimized under Poisson arrivals by assigning priorities to the classes i = 1, ... so that the higher the quantity v;s;, the higher the priority that should be assigned to class i. This rule may be seen from the fact that if two units are ready for service, one of class i and one of class i +1, the total cost experieneed with respect to these two units alone during the period of time required only to serve them both will be v1+1S; if the unit of class i goes first, but would be visi+l if the unit of class i +1 went first. Thus the higher the value of V;S;, the lower the cost incurred. In case all of the v/s here are equal to 1, the quantity V is just the total waiting so that waiting may be minimized by service in the order of lower service time. Conversely the assigning of priorities, if proper, is equivalent to imputing a cost of waiting in relation to service time and to the service time of other units that are being served at the same time. Service in order of arrival or "head of the line" priority may be the only physically feasible or advisable order, as for example the order of handling of road traffic or of transport of cargo. An example is the landing order of high-velocity aircraft at airports. A special type of priority is preemptive priority in which a unit of higher priority upon arrival dis places a unit of lower priority from service. When the displaced unit returns to service, the service may be resumable from the point at which it was stopped or it may be necessary to repeat the service from the beginning. In the latter case, when variability of service time is due primarily to the server rather than to the arrivals, the service time may be chosen all over independently of the length of any previous choice of it. 12-8 12-9.7 Scheduling If it might seem that scheduling arrivals and service times would relieve a given queue, then first it must be determined that there is a feasible schedule, which may not be likely. Second, it must be determined that the effect of scheduling is not merely to conceal the queue instead of revealing it; for example, the schedule of appointments may simply itself be the queue. Third, the effort required to schedule must be compared with the cost of waiting. A more likely resort may be to schedule service capacity and to adjust capacity currently to the current queue size. When service is performed simultaneously by several statistically identical servers, the capacity may be increased by increasing the number of servers; this procedure is familiar at toll booths and store checkout counters. Sometimes the servicing rate per server is observably elastic, increasing in response to a large queue and decreasing (for rest or recovery) when the queue is small. This is the normal pattern of human effort, the rate of which has some elasticity. 12-9.8 Networks The design and operation of networks of service centers is a subject of current research in the communications field and in the design of automatic conveyors. 
In communications networks, alternate routing is a principal design topic (ref. 12-3). 12-9.9 Special Considerations of Size In planning large centers, the efficiencies of size illustrated in paragraph 12-8(8) and of specialization in paragraph 12-9.4 are balanced against geometries of access, against vulnerability, and against the travel time required by a unit to reach the center. 12-9.10 Statistical Independence The assumption of independence may be a conservative tactic in design. If the arrival and service processes are statistically dependent-tending to minimize the extremes of queue-size and wait (e.g., the longer a unit waits the shorter its service, the longer the queue the faster the current servicing)then mathematical estimates of the operational probabilities tend to become more difficult even than if independence of arrival and service processes can be assumed. Since the effect of the dependence is to smooth the queue-size and to decrease the wait without changing the utilization of the server (e.g., if the latter now tends to serve more slowly when the queue is small-"resting up"), then estimates obtained by assuming independence will tend to be conservative, i.e., they will overestimate queue size and the service capacity required to handle the arrival traffic. In general, to assume such independence may be a good tactic if it (1) makes rough lower bounds easily calculable, (2) serves to offer an incentive to the operation by providing a standard which mere random conduct should at least attain (e.g., it may be possible for the servers to schedule the service to reduce delays in ways too complicated to model), and if (3) the mission-assignment does not provide authority to schedule arrivals. 12-10 Computation Methods 12-10.1 General Methods The computing of solutions to queuing problems has attracted a great deal of applied mathematical effort in the last decade. Much of the literature deals with the generating functions of the probability distributions of queue sizes and the Laplace transforms of the probability distributions of delay and of length of server busy-periods. These results are often needed as preliminaries to computation, but not always. Their principal usefulness is to group apparently diverse types of problems, and also in the calculating of the moments (means, mean squares, variances, etc.) of operational quantities. But in any given case, considerable computational effort may be necessary to invert such transforms to find numerical values. For practical needs one has three main alternatives which should be compared in each case: (1) Find and use available formulas in attempting an analytic solution (para. 12-11); (2) Estimate the answer by Monte Carlo simulation; (3) Bound the solution using known formulas for cases that approximate or bracket the operational specifications of the problem. The remainder of this paragraph describes several analytic computational methods as well as some '~, resorts of approximation in modelling that can make available formulas applicable. 12-9 12-10.2 Methods of Treating Nonstationarity of Demand In most cases the demand upon a service center is not stationary. Except in the case of constant service time, the transient probabilities become quite complicated to calculate if the arrival process is nonstationary. 
When service time is constant, and arrivals Poisson, the number of units present at time intervals of length equal to the service time forms a simple Markov chain; reference 12-4 reports how the probability distribution of the number of aircraft in the queue at a commercial airport at each minute of a 24-hour day was quickly computed on an electronic machine by simple multiplication of the transition matrices of the chain. Arrival rates of aircraft were 25 times as great at the peak hour as at the slack hour of any date. To overcome computational difficulties, resort may be had to: (1) Monte Carlo simulation; but for high accuracy this may require large samples. (2) Assuming that the demand is stationary at or near the peak rate. This approach has been employed for years by the telephone industry in calculating the number of circuits that should be provided in an exchange. (3) Assuming that demand is stationary at a rate equal to the time-average. Especially when there is elasticity in service capability, i.e., service times tend to shorten as the queue increases. This assumption may give good results. Resorts such as these that make computation easy can facilitate prompt engineering when the alternative may be no systematic engineering. 12-10.3 Methods When There Are Several Servers Except when service time is exponential, analytic calculations involving several servers can be difficult. No simple formulas for the wait are known in the M /G Ic case. The following approximations may be useful. (1) It may be possible to imitate a c-server unit by a single server one, at least asymptotically for large queue, with a service time that has a mean of Sjc and a coefficient of variation also equal to c-1 times the coefficient of variation of S. Thus for Poisson arrivals an approximation to the average wait is w 2c2[1 -aS/c] (2) The M/G/c case may also be approximated by M/M/c. In this case, the variable P(t) is a continuous-time Markov process with transition rates P-.P+1=adt P --. P -1 = g min[c,P]dt Steady-state formulas are given in paragraph 12-11 (5). (3) Approximation by M/D/c. An embedded Markov chain at intervals of the service time can be readily calculated by computer matrix multiplication if P does not become too large, as described in paragraph 12-10.1. (4) Approximation by MjGjc, Bmnx = 0. (Paragraph 12-11 (1)). 12-10.4 Erlang's Phases Suppose that a service time can be approximated by an Erlang type k random variable with density function (kg)ktk-le-k"t -----· (k -1)! Service time is then the sum of k statistically independent values of an exponential random variable with density function kge-k" 1Let each exponential correspond to a phase of work. A service begins by • 12-10 passing into the 1st phase, from there to the 2nd, etc., and is completed upon leaving the kth phase. These phases were identified by Erlang. For a service in progress, the number of the phase currently in progress is a state variable for completion of service; so is the number of phases remaining to be completed. The state of any service in progress is thus an integer from 1 to k, with transition rate kgdt. An arrival adds k phases to the server backlog. The method of phases may be extended to phases not merely in series, but also in parallel, and even to networks of such phases. Phases in parallel correspond to hyperexponential distributions which have coefficient of variation greater than 1. 
Use of such exponential phases eliminates the differential-equation in service time in favor of difference-equations in the discrete variables that represent the phases of service. The result will not always be a computationally simple scheme. For the M/Ek/c case, (kt~~1) -k+2 simultaneous equations have to be solved. One of the most effective uses of phases is in Monte Carlo simulations. The exponential distribution ce-ct can be easily Monte Carloed, namely, merely by calculating the product of c-1 and the logarithm of a rectangularly distributed (pseudo) random number. The scheme of exponentials~series, seriesparallel, network, etc.~can be carried simply in any computer program. Phases can be readily used to study dependence, reneging, jockeying, and other non-elementary queuing operations. The topic is covered extensively in reference 12-5. 12-10.5 Representation of P by Differential and Integral Equations When service time is not exponential or when arrivals are not Poisson, P~the number in the system ~is not a state variable. Either for analysis or simulation, supplementary variables will be needed to provide a state vector. The necessary methods can be illustrated in theMjG/1 case. Arrivals are Poisson, but service time S has a general distribution function S(x) with probability density function s(x) = pd {S=x}. For a state-vector at time t, if P(t) > 0, the state-vector can be the vector [P(t), Y(t)], where Y(t) can be chosen to be the length of time for which the service in progress at timet has either been in progress or will yet have to be in progress until the service is completed. In the former case, service commenced at t-Y(t); in the latter case service will terminate at t+ Y(t). The latter is simpler since Y(t) is the total service time when service begins, and after that Y (t) decays constantly and nonrandomly with time. When Y(t) is the remaining service time, then the sum of Y(t) and the service times of all of the units waiting in the queue gives the server backlog, measured in time units, and is the wait W (t) which a unit arriving at time t would experience. For analysis alone, a system of difference-differential equations represents the forward ChapmanKolmogorovscheme,asfollows: Letj(O;t) = prob (P(t) = Ol andletj(n,y;t) = p {P(t) = n, Y(t) = y} for n ;:::: 1. The function j(n,y;t) is a mixed probability, probability density function. For the case in which S(O) = 0 (i.e., service times of 0 length do not occur), the following are the equations from which the corresponding differential equations are readily obtained: J.(O;t + dt) = j(O;t)(l-a·dt) + j(1,0;t)dt j(1,y;t + dt) = j(O;t)a·dt·s(y) + j(l,y + dt;t)(1-a·dt) + j(2,0;t)s(y)dt n > 1: j(n,y;t + dt) = j(n-l,y;t)a·dt + j(n,y + dt;t)(l-a·dt) + j(n + 1,0;t)s(y)dt An explicit solution for any time t can be obtained in terms of the distributions of P(O) and Y(0) using standard methods, i.e., the generating function on nand the Laplace transform on y. The solution as t-> ro is given in par. 12-11 (3). For simulation and possibly for analysis, the above differential scheme is not necessarily the most 12-11 useful. For analysis, the system of integral equations below is of renewal nature. Note that it "integrates" the above completion of service to the amount t: j(O;t) = f j(l,O;t -x)e-axdx + j(O;O)e-at j(l,y;t) = j(1,y + t;O)e-at + a [ j(O;t -x)s(x + y)e-a·"dx (ax)n-1e-ax j(n,y;t) = J(n,y + t;O)e-at + )o a)(O,t-x) (n-1)! dx • { 1 • 1 n (ax)n-1-ie-ax + 1o t; aj(i,O,t -x) (n-)! 
s(x + y)dx 1 This system can be solved explicitly by first letting y = 0. 12-10.6 Lagrangian Simulation A Lagrangian type of simulation is usually most effective. In it, one proceeds directly from one event to the next, where an event is either an arrival or a completion of service, or of a phase of either. At each arrival the time-interval until the next arrival can be determined if the input is of renewal type (a Poisson input is of this type). If arrivals are impatient, the time-interval until the arrival will next "review his progress" can be used instead. When a service (phase) starts, the time-interval until its completion can be determined. The relative times of events so determined can be put into a future-event table in order of occurrence. Sample history is then advanced from event to event. The values of P, W, E, etc., so generated are recorded. Their averages can be quickly estimated by exponential smoothing, as can also the occurrence of values in given portions (e.g., percentiles) of their distributions. 12-11 Formulas The categorization of queues given in paragraph 12-7 is employed here to specify the type of queue and types of probability distributions to which the formulas apply. (1) MjGjc, Bmax = 0. In this case there is no waiting. P = E. qo = aS is the traffic intensity offered, and q = qo(1-p(c)) is the traffic intensity handled. Pis a truncated Poisson random variable with parameter equal to Qo, i.e. 1 q0 ne-11o c Qo'je-IJo p(n) = K --,-, K = L --· n. n~n n! System availability is thus 1 ',--p(c). p(c) is known as Erlang's loss formula, and was first established by Erlang for the fraction of telephone calls failing to find a free circuit. (2) M jGjc, Bmn > 0. No general formula is known which does not depend on the form of the service time distribution. The case Bmax = 0 can provide a useful approximation when waiting is quite undesirable. One then sets c high enough to keep p(c) in (1) above sufficiently low. Examples: hospital beds, aircraft gates, harbor berths. 00. (3) M /G/1, Bmax = Service in order of arrival. u =aS A = P(O) 1-u I= a-1 c S/(1-u) I + C = a-1/(1-u) tl dt 1 - ax(e) f'e-8xprob {W > xl = ~ f"c:er+1 prob {P > nl (1-z) y(a(1-z)) E{zBl = E{e-aO-z>Wl = (1-u)-'y(O) = E{e-ss:.y(a(1-x)) -x E{e-8cl = y(e + a[1 -E{e-8clJ) d dt W(t) = u(t) -[1 -A(t)] Special Service Times: 2 Erlang K: W -u (1 + 1) -2a(1-u) k 2a(1-u) = ~ the wait when service is exponential (4) Birth-Death Queue This is si_mply any continuous time Markov process for the variable P in which the transition rates are an, n+l = an fwhere a,,n ;::: 0, and s,,n ;::: 1 are given an, n-1 = Sn [_positive constants The steady-state distribution of P is then IL-l a· 00 p(n) = p(O) II r;, where r; = -', L p(n) 1 ~0 s~l n= This is easily computed by hand if Pmax is not too large. Formulas for a number of special cases involving exponential distributions can be obtained byreference to this birth-death scheme. Examples: (a) c=1, and each arrival joins with probability a/(n+1) if it finds P = n. Then P has a Poisson distribution with mean of q. (b) C=1,Bmax = B un(1-u) p(n) 1-u8 H (5) M/M/c a q =aS u = q/c I s 12-13 Let En = min[c,n]. In the birth-death scheme put an = a, Sn = sEn. Then "" I: p(n) = 1 n=o The probabilities p(n) start out like a Poisson distribution, but for n > c they decline geometrically. W(O) = P(c-1) = 1 -P(c) 1-u W(t) = [1 -P(c-1)](cs -a) e- x l termed the reliability function or survival function, perhaps a density function r(x), a mean or expected valueR, and a variance u 2 {Rj. 
The reciprocal of the mean, "R-r, will be denoted as h. By keeping records, from operating logs or of laboratory life-testing, it may be possible to observe or estimate the distribution of F. Figure 13-1 presents several such distributions. It is based upon data in reference 13-1. 13-3.3 Reliability When the specifications of a system's performance require it to operate without failing for a specified interval of time of length T, the reliability of the system is defined as the probability that such a system (when manufactured according to manufacturing specifications, and when put into specified operation) will in fact operate for the interval T without failing. 13-3.4 Readiness The interval G of continuous system operation or availability will be random because it is sometimes unexpectedly terminated by failure. When the maintenance policy being used terminates G at a definite time, probability will be concentrated there. In the formulas below, g(t) is used as the probability function for G, but it is emphasized as a particular detail of computational requirements, that a Stieltjes integral will be required to cover concentrations of probability. When the system fails or when it is withdrawn for programmed maintenance, an interval of unavail ability U commences. If repeated repair is typical during time, then the ratio -G = is termed the G+U readiness factor. Over a longer period of time this ratio is the average fraction of the time during which the system is in operation or available for operation. Availability may be defined as the probability that F(t) = 1, denoted as A(t). The readiness factor for an interval (a,a+T) is~ J."+TA(t)dt, the "interval availability." If the process F(t) is ergodic, the interval-readiness factor, the availability, and the readiness factor will coincide in numerical value as t ---+ oo. The interval U is an" effort-interval," G being an" effectiveness interval." U is composed of three subintervals (cf. para 11-8): (1) W, an interval of wait for repair or service (in some queue, or until depleted stocks of repair parts are replenished). (2) S, the service interval, service here including the performing of whatever programmed maintenance, unprogrammed maintenance, repair, replacement, etc. is called for at the time. (3) /, an inventory or storage interval. 13-3.5 Maintenance Rates In the case of independence, or approximate independence, the occurrence of failure and restoration are renewal processes, and several of the above quantities satisfy renewal-type equations. Let u(t) denote -~ ', 13-3 ...... ~ I ... Exponential Distribution (i'lotting the ordinate of 1.0~~this graph on a log scale 0.81-·,u~~ ] produces a straight line.) ANALYSIS OF OBSERVED 100~----------------~ ~~~ RELIABILITIES 0 ·6 f-... ~ RADAR B .., •• ~R:8UADRON I ...J 0.4 \ ~x._~ (Abscissas are Reliability Intervals) 80 5 ~ ·~ xx en 0 3 e, o,c)' ..J . 'e 'X.:"...... ~ 0 ' RADAR B ~% . -- " \ "' Ill 60 .=_.0.2 'e SQUADRON 2 'o ~15 TIME 1.&1 RADAR A ~ a: z :::J ...J e, ...., \ ~ ~ :::J OBSERVED 40 •\ DISTRIBUTION &... z 0 O.l 5 10 15 2 a: I 1.&1 OPERATING TIME -HOURS z Cll 20 DIFFERING FAILURE RATES ~ ~10 :::J (ALL EXPONENTIAL) z ::t v < THEORETICAL 1.&1 0 DISTRIBUTION 100 200 300 400 500 600 700 800 z OPERATING TIME TO FAILURE -HOURS Ill RADAR VACUUM TUBES : FAILURE 1.&1 a: < In the Failure Rates plot, constant failure probabilityj 5 is indicated by the linear relationship between the logarithm of the fraction surviving, and the reliability &... intervals (time). 
The two different squadrons were deployed to widely separated areas, one in combat and the"" 0 other in training; the close agreement of data indicates failure rate to be more characteristic of equipment and a:.., its immediate environment than of the operational Ill situation. :l: The same plots agree closely with the values predicted :::J by an equation of the form rtotal=rtT+rpP, where rt z and r are the average failure rate constants for vacuum 600 700 800 900 tubespand other parts, and T and P are the number of tubes and other parts. TIME .... MINUTES FLASHLIGHT DRY CELLS : WEAR-OUT Figure 13-1 the probability density that U = t; s(t) denote the probability density that S = t; and let dW(t) denote the (Stieltjes) differential of the probability W(t) that the wait is ~t (the Stieltjes notation simply covers at once the case where W = 0, as may frequently happen). Let a(t) denote the average time rate of onset of the interval U, at time t, and let m(t) be the average time rate of completion of the interval U, and thus onset of the interval G. Denoting G + U by C, let C(t) = prob (C~x} and let c(t) = pd{C=x}. If at time 0 any one of the above intervals is specified as being in progress, denote the specified initial distribution of its length by subscripting the corresponding symbolism. thus: Go, f!o(t), etc. Then under independence, interval U u(t) = { s(t-x)dW(x) cycle length c(t) = { g(t-x)u(x)dx maintenance rate: m(t) = U 0 (t) + { m(t -x)c(x)dx = Ua(t) + { a(t-x)u(x)dx at arrival rate: a(t) = aa(t) + { a(t-x)c(x)dx time t = aa(t) + { m(t-x)g(x)dx availability: A(t) = prob{Go > t} + { m(t-x) prob{G > x}dx 13~3.6 Amount of Time System is Unoperable in a Given Interval Let u(x;t) denote the probability function that in the interval (O,t) the amount of time for which the system is unoperable equals x, assuming that the system is put into service at time 0. Then simply counting possibilities: u(x;t) = g(t-x) {" u(y)dy + t g 1, a oo, the amount of down time becomes normal with a mean 13~3.7 Residual Lifetime When a system's lifetime has survived to age x, the conditional or residual lifetime, can be denoted by R!x. r(x+t) R(t+x) pd{Rix} = t} = R(x) prob{Rix 2:: t} R(x) 'd ll'f . . !."' R(t)dt The mean res1 ua 1et1me 1s c..::·r___ I R(x) 13-5 13-4 Failure Rate and Hazard Function 13-4.1 Types of Hazard Functions For systems that are renewed by repair or maintenance, complete insight into the behavior, and into the best of certain types of support strategies, is provided by hazard function h( ) associated with the interval R. This term is variously called the failure rate, instantaneous failure rate, mortality rate, or unreliability rate. For a given time-to-failure R, the rate is r(x)h(x) R(x) when the system has survived for a time x (and thus when the ultimate lifetime R will be at least x) This rate is thus given as age-dependent. Three cases stand out: (1) decreasing hazard rate. For all x, h(x) decreases as x increases. Then system support action can only increase the failure rate if it lowers the age; leaving the system alone will lower the failure rate. (2) constant hazard rate. Only one probability distribution, the exponential, in which r(x) = he-hx where h = R 1, has a constant hazard rate. No support action can affect reliability for -the future; present reliability is independent of past support action. (3) increasing hazard rate. For all x, h(x) increases as x increases. 
In this case support action may sooner or later be an economical strategy, depending upon the costs entailed in failure and in support. 13-4.2 Standard Representation of a Lifetime Distribution in Terms of its Hazard Function Recall (cf. renewal processes in ch 6) that any complementary cumulative distribution function R(t) and its density function r(t) can be represented in standard form in terms of the associated hazard rate function h(t) by R(t) = exp [-{ h(x)dxJ and r(t) = h(t) exp [-f h(x)dxJ i.e., as an age-dependent exponential. The quantity H(t) = { h(x)dx may be regarded as the total threat, or stress that the system receives in time-interval (0-t). For example, if a system has a history oft hours of operation, s hours in storage, and k tests; then the total threat might be representable as 13-4.3 Hypothesizing a Hazard Rate In the absence of other criteria for hypothesizing a distribution of lifetime for a system whose structure is not a known synthesis of components, the following considerations are pertinent: (1) Through prior damage or defect or by not being the specified system, a unit may fail immediately or "initially." Experience reminds that a definite probability attaches to this kind of mistake. The hazard rate is then h(t) = ro for t = 0 and h(t) = 0 for t > 0. (2) Wearout may effect failure in many ways. By definition (as opposed to wearin) it cannot increase the hazard rate. A system having nonrandom lifetime R will have a hazard function h(R) = ro, otherwise h(t) = 0. Wearout may not be ultimately entirely separable from chance failure, as indicated in (3) below. (3) If this is particularly environmental in source of cause of failure, it may be representable as the occurrence of an input that is "too much" for the system. The process of occurrence of inputs of sufficient such size may well be Poisson, except that the threshold or vulnerability level defining too much may change systematically as the system wears out. (Human 13-6 susceptibility to disease changes with age at rates and directions dependent on the disease and the human.) The hazard rate h(t) may be regarded as the average rate of occurrence of inputs of such lethal magnitude in such a (nonstationary) Poisson process. Thus, the probability that the system survives to age t is precisely Note that the preponderance of abstract considerations does not ordinarily favor a decreasing hazard rate, although to design a system having such a rate might be an excellent objective considering the support cost. The subject of hazard rate is continued in paragraph 13-6. 13-5 Some Lifetime-to-Failure Distributions 13-5.1 Exponential (cf fig. 13-1) The exponential distribution r(t) = he-h 1 , h Tl-1 may characterize the lifetime to failure for one or more of the following reasons: (1) The principal cause of failure is a chance effect from the environment. (2) A large serial system-i.e., one which fails when any part fails-will have exponential lifetime to failure if the parts and their failures are independent and if repair times are negligibly short. (3) There may be many independent external possible cause of failure that tend simultaneously and continuously to threaten the system. (4) If failure can be associated with environmental forces exceeding a particular level (of strength, e.g., turbulence, strain, etc.) 
and if the value of the level impinging upon the sytem through time is a Gaussian process, then the first passage times are approximately exponential in probability distribution, first passage corresponding to first crossing of the level that produces failure. (5) Since the distribution is simple, for computation, it is thus useful whenever the hazard rate is approximately constant. Design data can be combined in a simple form. The combined effect of (1), (2), and (3) above can be summarized as follows. If an operating system has N components, and if N is very large, and if the components are sufficiently independent; then "failure of some component" is a Bernoulli process. Consequently if R.rv is the average operating time until failure of each of N nearly identical such components, then is the probability that the system's operating time to failure exceeds t. This decreases rapidly, exponentially, with N, t, and R-;/. 13-5.2 Weibull The Weibull distribution may be regarded as the special age-dependent exponential, in which h(t) = avtv-1 fincreasi_ng i! v > 1, and then unboundedldecreasmg If v < 1 so that r(t) = h(t) exp [-{ h(x)dxJ= avtv-1e-atv Weibull probability paper is available for plotting data. The integral { h(x)dx may as usual be regarded as the total expected threat, measured in expected number of lethal shock(s) up to time t. If lethality is determined by some vulnerability threshold of the system, then if v = 1 this threshold remains constant through time (and the distribution is exponential), whereas if v < 1 (v > 1) the threshold drops (rises) with time. Instead of time, some other measure of accumulating threat may be employed, for -~ \ example, physical stress of given magnitude in strength-testing in reliability studies of materials-fatigue. 13-7 Under the random variable transformation x = tv or t = vtia, the Weibull becomes r(x) = ae-ax, so that lifetime is exponential in t to the power v. This transformation may be required, or may be particularly appropriate, when geometry is involved in the failure mechanism or process. 13-5.3 Gamma The gamma distributionj(t) = bktk-te-btj(k-1)! includes the Erlang when k is an integer. A gamma lifetime has increasing hazard rate if k > 1, decreasing if k < 1. The gamma may be regarded as the sum of k independent random variables (k may be discrete or continuous) each of which has the exponential distribution be-bt. (Thus if k = 1, the gamma is the exponential.) The distribution may consequently seem appropriate for failure which is caused by a definite sum of "exponential shocks." The failure rate is not a simple expression, but tables of the incomplete gamma function exist and so do standard computer routines for tabulating the gamma function for nonintegral values of k, and thus for the gamma distribution. The gamma has simpler moments in terms of the parameters band k than does the Weibull. 13-5.4 Comparisons All of the above distributions may be regarded as firstpassage type of random variables and, in that way, especially appropriate to failure. If the primary cause of failure were an accumulation of shocks and if there were a compound Poisson exposure of the system to shocks (the increments of the compoundPoisson process being the sizes of the shocks), such a process might be considered, especially where the individual shock-size is to be considered. Of course, an inventory is depleted (fails) as the first passage of a demand process that is in reality often compound Poisson in structure. 
The above distributions offer structures to which it may be possible to fit observed data on components or systems (cf. fig. 13-1). Paragraph 13-7 discusses some simple system-structures in which the above distributions may serve as elements. 13-5.5 Predicting Lifetime Obviously, much of the study of reliability consists not merely in discovering the lifetime processes that occur, but rather in aggressively designing the lifetime processes that are to be achieved as a result not only of the design but also of the policy of maintenance support. At the present time, the knowledge of such deliberately designed processes is in a state of development. The complete task of predicting the numerical values of parameters of these distributions in any given case-e.g., in the design of a missile-involves several considerations: (1) prediction of input. The relevant paragraphs of chapters 11, 12, and 15 contain some schemes for representing the total input complex and spectrum. (2) structural synthesis. Paragraph 13-7 discusses some simple structures in which the above distributions may serve as elements. (3) predictive synthesis. The number of combinations possible when (1) and (2) are combined may be very large. It will consequently be very important to parameterize the space of possibilities as much as possible. (4) empirical correlation. Field reliability may correlate better with itself and with laboratory observed reliability than with reliability predicted by (3). But (3) may serve as the best regression curve for observed field reliability. Reference 13-2 presents a method of estimating reliability growth. 13-6 Monotonic Hazard Rate If the hazard rate is monotonic i.e., if for all t either h(f) increases or decreases---then various specific and interesting facts result that are useful in determining maintenance strategies. Many of these follow the equations in paragraph 13 4.2. A number of those listed below are established and summarized in reference 13-3. This recent reference includes numerical tables of bounds on monotonic hazard rates, and is the first available summary of relevant mathematical knowledge about reliability. (1) The residual time to failure, R[x, of a lifetime that has survived to age x, has hazard rate of the same sign (+,-)as the unconditional lifetime R. 13-8 _t;-) {decreases} 'f H(t) . t . {increasing} (2) v R(t . I , IS mono omc d . . mcreases ecreasmg (3) If h(t) is {increasi~g} then (JCJ') is {:::; 1} decreasmg ' R ~ 1 (4) An important lower bound, established in reference 13-3, has several useful numerical consequences. If h(t) is monotonic increasing, then for any integer k, for t :::; hk = (Rk) 11 k, R(t) ~ e-11 "k(f(k+1)) ~ Thus, for example, R(t) ~ e-t!R -= e-~ R, where 1 -R y(t) = e-t u eM,G* will satisfy the relationship (Eq. 1) h(G*) GiG* + R(G*) For this age G*, G* 2: R eE/eM, and the support rate is e(G*) = = h(G*)(eE -eM). (Eq. 2) GIG* 13-9.2 Maximum-Availability Age-Renewal • In the case considered in paragraph 13-9.1 the support efforts are applied at equal time rates.However, if the average durations UE and UM of the intervals U of inoperability encountered for emer 13-11 gency and preventive renewal, respectively, are appreciable, then the average length of an interval U will be 7 R(T)] -UMR(T) U(T) = i , U Er(t)dt + UMR(T) = U E[1 l 'l b'l' . ffi . . . h . U(T) Th . . . h ces to mm1m1ze t e ratiO -=. e expressiOn T o maximize t e ong term average ava1 a 1 Ity, It su UIT for U(T) here is formally identical with that in Equation 1 for e(T). 
Hence the optimal value G* of T is obtained by substituting UE for eE and VM for eM in the Equations 1 and 2. 13-9.3 Least-Support Rate Periodic Renewal If preventive renewal is done at intervals of length T regardless of what failures occur, then the average total support effort per interval will be e(T) = eM + eEME(T) MB(T) being the average total number of emergency supports in the interval T. The value T* which minimizes the efl'ort time-rate satisfies T*ME(T*) -ME(T*) = eM eE The optimal effort rate is then eEmE(T*). Here mE(T) is the time rate at which emergency supports occur. More details of the policies in paragraphs 13-9.1 through 13-9.3 are given in reference 13-3. 13-10 Variations of Supporting Action The policies of paragraph 13-9 assume that a support action simply effects renewal. For a complicated system, kinds of maintaining (supporting) must be represented corresponding to numerous possible state-values of the system, from simple service to complete replacement of the system. When the system state can be represented as a Markov process, dynamic Markov programming may be used to decide upon specific maintenance actions, whether to repair and/or replace, etc. Methods of this type are currently under development and trial. If the decrease in capability is in each time period a random decrease, then it is as if the operating age, or effective age, of the system is not the same as its chronological age. In the simplest case effective age may constitute a Markov process, corresponding to unexpected use and effects upon the system's state. For purposes of computation, time may now be measured discretely. The effective age of the system at time n, denoted by T ,, will have a transition probability dependent upon T" and upon the maintaining (support) policy that is followed. The effectiveness of the system will depend upon the age (it may even be used as an index of age) and the age will depend upon the support policy followed. Formulated in this way, the problem is one of dynamic programming against uncertainty. In the simplest type of system replacement model, a system of"age" T since renewal will have a net effectiveness rate E(T) expressible as the difference between the value of yield and the cost of maintenance at that age. The system can be replaced at age T, at a cost (effort equivalent) V(T), equal to the cost of a new unit minus the salvage value of the replaced unit. If age T is clock age, the best replacement age can be determined by maximizing the ratio t1' E(x)dx -V(T) --~-----~ T V (T) will typically increase corresponding to the least decline in system capability, and to maintain this least decline will require the cost element in the quantity E(x). 13-12 13-11 Optimal Support Capacity, Crews and Spares 13-11.1 Support Servers The models of paragraph 13-9 assume that fJ is not a decision variable. From chapter 12, the wait W, within the interval U, will depend upon the amount of support capacity scheduled; e.g., the number of servers, proximity of support station, etc. The optimal value of W may, consequently, be jointly programmed with the optimal support frequency. 13-11.2 Replenishment of Spare Parts An inventory of spare parts maintained to support a system may be designed and itself supported in ways described in chapter 14. Two primary classes occur: one-time provisioning and continuous provisioning. In one-time provisioning, a system is to be provided with spares for an interval of time T, during which replenishment of the spares will be impossible. 
This case is sometimes termed "standby" redun dancy. (cf. para. 13-12). For a system composed of N identical components with exponential lifetimes to failure, reference 13-6 gives selected tables and graphs for the number k of spares that minimize a cost function of the form c1(N + k) + c2 [the expected number of failures] + c3 [1 -R(T)]. Continuous provisioning of spares is desirable if repair of the system will be recurrent over an indefinitely long period of time, and if the spares inventory can be replenished according to design. The size of inventory and its frequency of replenishment should be programmed jointly with the frequencyof system support. Design of the inventory, for given support rate, is a special case of the topics of paragraph 14-16. 13-12 Programming of Redundancy in System Design In a parallel system, the redundancy corresponds directly to the number of parallel components. Redundancy is constrained, typically, by cost, weight, bulk, etc. If inadvertent component operation can cause system failure, then effectiveness may not continue to increase with indefinite increase in redundancy. In a series-parallel system of independent components, there can be (parallel) redundancy, with N 1 ;:::: 1 being the number of redundant components at stagej of the series, j = 1, ... , K. In the simplest case, let Ci be the unit cost of a single component at stage j and let W i be its weight, etc. The optimal strategy is a vector N* = [Nil. The reliability, R = RK([N1]) will be of the form K K When stage redundancy in a series system is subject to a set of I constraints each of the form K C;: L C;JNj s C;, i~I then the optimization problem is typically either of the form max R I [C;] (Eq. 3) or else one of the constrained sums is .selected as objective, and the problem is minimize L CakNk I [C; ,i=O], R;:::: R*1 k where R* is a required reliability. In simple cases the optimum redundancy may be easily computed using dynamic programming, 13-13 sequentially over the number of stages. Reference 13-7 reports that only five minutes of computer time was required to solve the following program forK = 5, I = 2, c1 = 100, Cz = 104: j eli Czi Ri N*-. was forced to satisfy the first constraint. The column N*k: is the optimal solution, the optimum reliability R* is .984. As usual, the method of dynamic programming is most effective for parametric studies, varying c1 and Cz. Reference 13-8 contains detailed computation procedures for solving Equation 3 by hand when K = 1, using dynamic programming. This reference reports that the procedures have been applied to nonrepetitive analyses of the major components of large radar, communications, and data-processing systems. For Poisson demand caused by failures of various components of various systems the above computation-e.g., an inventory or budget constraint-can be readily performed on a desk calculator if 1=1. References 13-1 Naval Operations (Operations Analysis). Command .Department, U. S. Naval Academy. Annapolis, 1963. 13-2 "Estimation of Reliability Growth in a Complex System with a Poisson-Type Failure", by Herbert K. Weiss. Opns. Res. 4 (1956) pp. 532-545. 13-3 Mathematical Theory of Reliability. Richard E. Barlow, Frank Proschan, and Larry C. Hunter. Wiley. 1965. 13-4 "An Optimum Policy for Detecting a Fault in a Complex System" by Brian Gluss. Opns. Res. 7, (1959) pp. 468-477. 13-5 "Optimum Search Routines for Automatic Fault Location" by Sidney I. Firstman and Brian Gluss. Opns. Res. 8 (1960) pp. 512-523. 
13-6 "Cost Functions for Systems with Space Components" by Donald F. Morrison. Opns. Res. 9 (1961) pp. 688-694. 13-7 "Dynamic Programming and the Reliability of Multicomponent Devices" by Richard Bellman and Stuart Dreyfus. Opns. Res. 6 (1958) pp. 200-206. 13-8 "Least-Cost Allocations of Reliability Investment" by John D. Kettelle, Jr. Opns. Res. 10 (1962) pp. 249-265. 13-9 "On Optimal Redundancy" by Gus Black and Frank Proschan. Opns. Res. 7 (1959) pp. 581-588. 13-14 CHAPTER 14 SUPPLY SYSTEMS 14-1 Scope Abstractly, any activity is "production," any requirement for the result of an activity is "demand," and anything which has been produced in advance or excess of requirement is "inventory." Thus, while most of the topics of this chapter refer specifically to inventories of militarymateriel-nevertheless, for the purpose of identifying and exploiting applications-it is important to recognize that, abstractly, any inventory is simply the tangible mismatch not just betweensupply and demand, but between activity and requirement for activity. Thus inventory may bepositive or negative, with negative inventory commonly represented as "backordered" demand.The essential function of an inventory is to act as a buffering control device between unpredicted developments in demand and requirements on the one hand, and irregularities of production on the other. Inventories accomplish their special function at the cost of the special furthereffort required to maintain any given inventory in good condition for its intended purpose. Thiscost is termed the holding effort. · Peacetime inventories may be characterized by the fact that it is usually optimal to exertfficient holding effort so that physical losses from inventory are negligible. For wartime invenries, especially combat inventories, no amount of holding effort is apt to be sufficient to preventsignificant physical loss of stock from inventory, especially by enemy attack. A much sharperrise in the optimum holding effort per unit inventory will accordingly tend to occur.Economies of supply effort tend to vary inversely with the size of inventory. Economies ofholding effort tend to vary directly with inventory size. Supply effor-t includes the effort connected with obtaining materials, entering them into storage, and taking them out. If inventories are not adequate, there are two types of loss to· the total system. First, thereis the loss of system effectiveness through the system's unreadiness. Second, there is is the costof uneconomical demands on the production system. These costs are indirectly part of the supplyeffor·t. 14-2 Development of Analytical Methods of Evaluating Supply Systems 14-2.1 Some Uses of Analytical Methods Since World War II, a considerable amount of operations research has been directed to theconduct of supply operations, especially to the effective and economic control of the availabilityof stocks of materiel, and to determining optimal policies for stock replenishment and production. Much of this research has been directly stimulated by problems of the peacetime management of military inventories. 
At the same time that electronic data-processing machinery hascome into being, providing an unprecedented capability for maintaining supply records, the entiresupply situation has grown tremendously in complexity and size as measured by the number ofitems supplied, the intricacy of use-interrelationships of items in the fabricating of equipment ---,\ 14-1 complexes out of component parts, the increased extent of international deployment of miltary forces and supplies, and the general intensity of supply activity. Because of these developments, the need has been accentuated forwhich, by exploiting computer data (1) statistical methods of demand analysis capability, would simplify and even improve supply availability, (2) quantitative representation of the relationships between supply effort (e.g., cost), supply effectiveness (performance), and the variables of supply action. Such quantitative representation would make it posible mathematically to identify optimal strategies of suppy action-inventory sizes, controls, locations, schedules and policies of replenishment and production-readily and with precision, whether with computer assistance or manually with graphs and tables. Of the developments in supply methodology, this volume is not concerned with those improvements that are due merely to the increase in data capability, except insofar as computers make it feasible to employ more sophisticated strategies of supply and of supply-control. Some im provements have been of this sort. For example, stochastic models of demand that have been developed can be used to calculate the probabilistic components of requirements with greater accuracy if the statistics of actual demand are currently "tracked" in greater detail than would be manually possible. This includes tracking individual items of supply order by order, at individual locations, and by different categories of types of customers. More important for the design of supply operations, whether they be computer-assisted or not, has been the development of realistic measures of supply performance and effort. When fluctuations in demand from its forecast can be probabilistically characterized, then the most costeffective schedules of production and of replenishment can be calculated as expected-value strategies. Strategies may then be specified and uncontroversially evaluated in terms of the quantitative representations of effort and of performance. This is of particular importance in military supply because a single supply operation typically involves the participation at one time or another of so many contributions of effort by so many organizations that there are systematic dangers of suboptimization. In formal operational respects, the problems of peacetime military supply are not greatly different from those of nonmilitary supply. Military peacetime supply emphasis is upon (1) satisfying peacetime demand without endangering supplies reserved for major war; (2) dollar economy; and (3) being able to supply immediately the occasional requirements constituted by unexpected military alerts and international incidents. Formal nonmilitary analogs of such requirements include fire-protection, disaster-protection, and efficiency in the extensive geographic supply networks that arise in these and even in the commercial distribution of products. 14-2.2 Special Considerations in Wartime Supply Wartime economy is measurable more in terms of the values of labor and resources in alternative uses than in universal terms like dollars. 
The "price" of something in war is its greatest value for some alternative use, and the price may fluctuate sharply. Since the basic elements of supply effort occur as much in wartime supply as in peacetime supply and the same measures of supply performance exist, the methods that have been developed-some of which are presented here-may be more generally useful in analyzing and designing wartime supply operations than they may seem at first glance. Major war usually poses unprecedented supply shortages. Feasibility of production sched ules and of delivery schedules then tends to outrank economy as a management objective. The advent of computers has greatly increased supply management capability to design and imple ment such schedules (cf. para 14-18). 14-2 14-3 Summary of Operational Considerations 14-3.1 Types of Inventories There are various ways in which inventories may be classified. These include in-process inventories, production and manufacturing inventories, distribution inventories, mobilization reserves, supplies carried on the back, inventories of capability, and of money. An inventory to be managed may consist of stock at one location or of the sum of stocks at several locations in parallel, in series, or both, i.e., a system of inventories. The stock of an inventory may be of a single item. More typically, it will consist of a large number of items of which most are low in usage rate. In connection with the control of an inventory, various "stock accounts" are identifiable, the sizes of which correspond to the state variables of the inventory including stock on hand; stock due in (on order) from suppliers; stock due out (to customer), i.e., back orders; and certain combinations of these. 14-3.2 Demand The claim that "a production schedule has no more worth than the demand forecast upon which it is based" has some truth. Understanding of the characteristics of demand also is essential to effective inventory planning. Of the principal operational considerations concerning demand, the following are covered in later paragraphs: (1) uncertainty; represented by probability distributions and random processes. (2) source-structure or the distribution of demand among the customer population and the effect of this distribution upon inventory safety-stock requirements. (3) the expected time-pattern of future demand including durations, stationarity, obsolescence. ( 4) the demand transmission effected upwards through the echelons of a supply sys tem by the methods of forecasting and the policies of replenishment that are employed at the middle echelons. (5) useful statistical methods of forecasting demand. 14-3.3 Effect It was noted earlier that both holding and supply effort are needed in connection with inventories. Because they often occur at different supply points or echelons, even in connection with a single supply operation, serious suboptimizations can occur. Functional relationships for a representing the elements of effort are covered in paragraphs 14-7 and 14-8. 14-3.4 Storage: Reliability and Losses Materials in inventory will be subject to damage and loss, to deterioration, to obsolescence, and in wartime to attack. The rates at which such losses occur can depend upon the countereffort expended (invested) in the storage operation. Methods are presented in paragraph 14-8 for representing the time-occurrence of such losses, and their amounts, as well as for relating functionally the rate of loss to the rate of supply (counter) effort invested. 
14-3.5 Performance or Effectiveness Numerous measures of performance of an inventory may be identified, ranging from the relatively simple ones of availability (the probability that there is stock on hand) and average back-orders (or equivalently the average length of wait per unit of materiel demand) to the more complicated one of the time-function of response, by actual delivery, of the inventory to a given requested schedule of future deliveries. The size of inventory itself is a special aspect of performance because of physical considerations and because of the investment that it represents in space and funds. The various measures of inventory performance tend to be somewhat inde pendent of each other and to be associated with separate efforts. (Cf. para 14-9.) 14-3 14-3.6 Elements and Alternatives of Inventory Design (1) The principal structural elements in design of inventory policies are (a) Choice of item to be stocked; where; when; (b) Choice of supplier and assignment of customer(s). Typical instabilities of this structure are noted in the case of low-usage items, due to relatively large statistical fluctuations in usage rate. (2) The principal elements of control are (a) Whether to review the situation continuously-e.g., when any activity occurs -or at discrete intervals of time and, in the latter case, how frequently to review; (b) Whether to replenish at discrete times or continuously; (c) At what level to set the maximum size(s) of inventory; (d) At what value to set the reorder level for replenishing inventory, and equivalently, how frequently to replenish the inventory and what average stock to try to maintain; (e) How much to smooth demand history in basing a forecast upon recent usage. Choice among the alternatives of design and control will be made in terms of the relationship between the effort required and the performance obtained. Best designs are based upon providing given relationships between performance and effort. Complicated objectives and requirements may be expressible mathematically, for example, maintaining a given probability of availability of a distribution inventory at an outlying supply point that is replenished at discrete times, and maintaining the availability at least effort. (3) Design problems tend to group according to the total nature of the inventory operation. The following designs are dealt with individually later in this chapter: (a) One-time provisioning. Newsboy and flyaway-kit types of problems. (b) Discrete replenishment. Continuous vs periodic review control designs. (c) Unreliable demand. The use of discounting in establishing supply control designs intended to accumulate a long term objective. These are illustrated· in various designs, including situations that involve decaying demand or demand possessing a high threat of obsolescence. (d) Multi-time-period scheduling of multi-product inventories. Linear programs, particularly the transportation type, offer a means of representing alternatives of action when production and transportation are linear activities, and for selecting the best course of action when effort and performance are linear or can be represented as linearities. 14-4 General Characteristics of Demand 14-4.1 Uncertainty Uncertainty is the principal problem connected with demand. Probabilistic methods afford a way of overcoming the effects of the unpredictability of demand. 
Chapter 4 contains a discussion of the use of a random process to imitate programmed demand of great complexity of time-pattern, at least the operational aspects consisting of the exact quantities and time of their occurrences during the replenishment leadtime. This procedure is presently in use at the depot level in the Army (cf. para 15-5.2). Items of low usage rate tend to oscillate on and off supply lists on which membership is based upon recent demand history because of the relative fluctuation of demand around its average. Demand at low rate tends to be Poisson. As an example, an item whose average demand rate is one piece per time period will have a probability of nearly 0.4 of showing a history of no demand at all during another time period of the same length. 14-4 14-4.2 Demand Source Structure The distribution of activity among the customer population will have a direct effect upon the distribution of customer order-size for an item supplied from an inventory, and hence upon the required inventory safety-levels. Most customers will likely be small and order in small quantities, while a few will order in large quantities. A statistical law may be discoverable, dis tributing the order-size among the customer ordering-frequency. 14-4.3 Expected Time-Pattern Examples later in the chapter illustrate how unexpected obsolescence may be represented as a chance event, and how economic lot-sizes may be calculated when the demand of an item being exhausted from supply is decaying over time. Demand which will be for the one occasion only gives rise to one-time provisioning, typified by the so-called "flyway kit" situation ( cf. para 14-17.3). Other demands may be continuous or periodic. 14-4.4 Compound Demand Processes In a compound process there are two distinguishable component processes: (1) Demand-times, the times at which the need for the thing occurs (equipment fails, usage takes place, etc.) or at which the thing is requested (ordered, requisitioned, etc.). Occurrence of these times may be nonstationary. The rate of occurrence of these times is termed the customer-arrival rate, or demanding rate, and is to be sharply distinguished from the total rate at which the thing is used or requested, which is ordinarily termed the demand rate and is measured in units of the thing demanded. (2) Demand-sizes, the amount of the thing needed per occurrence of demand, e.g., the "order-size." the size S (t) of a demand at time t is assumed to be an independent process, stationary or not, with a specified probability distribution for S (t). 14-5 Compound-Poisson Demand 14-5.1 Properties Formulas used in representing compound-Poisson demands are given in table 14-1. The following additional properties, using the notation of the table, are important to note: (1) A demand may typically be for more than one thing at a time, in which case S (t) may simply be interpreted as a vector of random demand-sizes with independent components. (2) If n, (T,,TJ is large, then Q(T,,TJ is approximately normal. (3) Since the demand is an independent process, the variance-to-mean ratio V(t) is a constant V, independent of time, unless the demand-size is nonstationary. Its momentary value is independent of the customer arrival rate. Additional properties of compound-Poisson demands are (4) Any linear-combination of independent compound-Poisson demand processes is a Compound-Poisson demand process. 
For example:
(a) Additive over independent items: The sum of demand for two different items if demand for each is independent of the other.
(b) Geographically additive: The sum of demand from two independent sources for one item.
(c) Scalings: The total demand for an item expressed in weight, or cubage, or dollar-value, provided the unit weights (cubages, dollar-values) are constant or an independent random process themselves.
(d) Demand additive: The total value (weight, etc.) of all demand for all items.

Table 14-1. Formulas for compound-Poisson demand

                                         Stationary process          Nonstationary process
Time interval of concern                 T in length                 from time T1 to time T2
Occurrence rate                          a                           a(t), at time t
Total number of demands (random
  variable)                              N(T)                        N(T1,T2)
Average total number of demands          E(N(T)) = n1 = aT           E(N(T1,T2)) = n1(T1,T2) = INT[T1,T2] a(t) dt
Demand-size (random variable)            S                           S(t), at time t
Average (mean) demand-size               E(S) = s1                   E(S(t)) = s1(t)
Average square of demand-size            E(S^2) = s2                 E(S^2(t)) = s2(t)
Total demand (random variable)           Q(T)                        Q(T1,T2)
*Average (mean) demand rate              q = a s1                    q(t) = a(t) s1(t)
Average total demand                     E(Q(T)) = qT                E(Q(T1,T2)) = INT[T1,T2] q(t) dt
*Variance of total demand                σ^2 = a s2                  INT[T1,T2] a(t) s2(t) dt

[Figure 14-1. t = time between orders in days. Figure 14-2. s = order size in units.]

This same study showed that any nonrandomizing effects of establishing scheduled days for each station to order (to level the requisitions-received load on a depot) are indiscernible. A special official category of demand studied at the time was "nonrecurring" demand, which concept was intended primarily to cover predictable demand - for example, spare parts for initial issues and for scheduled maintenance and repair programs - as opposed to demand generated by unpredicted failure of parts in the field. The latter was termed "recurring" and was assumed to be of a random nature. But, in fact, the time-patterns of nonrecurring demand were quite indistinguishable in their irregularities and statistics from the time-patterns of recurring demand. Thus it appeared an easier supply strategy to allow spare stocks for nonrecurring demand as if it had the variance of a random demand, than to attempt to keep such close track of its day by day occurrence (assuming it could be so closely scheduled) as to eliminate any unpredictedness in it.

14-5.3 Approximations to Compound-Poisson
The compound-Poisson process is a likely candidate as the best representation of demand, especially at distribution inventories. As approximations to it in special cases, the following may be useful: (1) the Poisson, when the demand rate is low or the unit price is high; (2) the normal, also the gamma, when the demanding rate (customer ordering frequency) is very high; (3) the Poisson itself, in units of the mean order size.

14-6 Statistical Time-Series Methods of Forecasting and Control
14-6.1 Nature of the Problem
It is not unusual for the VMR (variance-to-mean ratio, see table 14-1) of a compound-Poisson demand to be more than twice the mean demand-size, which is the value of the VMR when order-sizes are exponential in distribution. When demand can fluctuate to this extent statistically, the average may be fairly well concealed in a time-series of actual values of demand in successive aggregating periods (days, weeks, etc.).
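As a concrete illustration of the moments in table 14-1, the following sketch (not part of the original text) simulates a stationary compound-Poisson demand and compares the observed mean, variance, and variance-to-mean ratio of period totals with a s1, a s2, and s2/s1. The arrival rate, the geometric order-size distribution, and the period count are assumptions chosen only for illustration.

    import numpy as np

    # Hypothetical parameters, for illustration only.
    rng = np.random.default_rng(1)
    a, periods = 2.0, 50_000       # customer-arrival (demanding) rate per period
    p = 0.25                       # geometric order-size parameter; mean size s1 = 1/p

    n = rng.poisson(a, size=periods)                            # demands per period, N
    q = np.array([rng.geometric(p, size=k).sum() for k in n])   # total demand per period, Q

    s1 = 1.0 / p                   # E(S)
    s2 = (2.0 - p) / p**2          # E(S^2) for this order-size distribution
    print("mean :", q.mean(), "vs a*s1 =", a * s1)
    print("var  :", q.var(),  "vs a*s2 =", a * s2)
    print("VMR  :", q.var() / q.mean(), "vs s2/s1 =", s2 / s1)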
Such fluctuations may even have been aggravated by attempts to smooth them. (See para 14-6.5.)

14-6.2 Methods
(1) Running Averages. To estimate the average rate of a given demand, it is common practice to employ running averages of recently observed demand. Thus if x_i denotes the total demand observed in period i, then at the end of period n a running average of base length B is

    x̄_n = [x_n + x_{n-1} + ... + x_{n-B+1}] / B.

If the hypothesis is that demand is stationary with mean x, then x̄_n is the estimate of x that has been made at the end of the nth period.

(2) Regression Methods
(a) General. A more sophisticated (and laborious) estimate of x would be provided by, for example, a least-squares regression against past values of x_i, perhaps against all of them; such regression being from time to time updated against additional subsequent observations x_n. Hypotheses other than of stationarity - e.g., linear trends, exponential growths and decays, seasonals, cycles, etc. - can in principle be straightforwardly fitted by regression.
(b) Exponential Smoothing. Any (infinite) time-series y_n, y_{n-1}, ..., going backwards into time, may as it is progressively generated at each new stage n be exponentially smoothed by calculating

    S_1(n;y) = b y_n + b(1-b) y_{n-1} + b(1-b)^2 y_{n-2} + ... = b y_n + (1-b) S_1(n-1;y).

A kth order smoothing S_k(n;y) can be obtained by calculating

    S_k(n;y) = b Σ_{i≥0} (1-b)^i S_{k-1}(n-i;y) = b S_{k-1}(n;y) + (1-b) S_k(n-1;y).

This kth order smoothing has the value (verified by induction)

    S_k(n;y) = b^k Σ_{i≥0} (1-b)^i (k-1+i choose i) y_{n-i}.

If the time-series is not infinite in history, initial values of the S_k may be provided. Exponential smoothing of order k requires that at each stage only k numbers be stored, the smoothings of orders 1 through k. (Ref. 14-2 is an extensive treatment of the topic.)

(3) Polynomial Fitting by Geometrically Decaying Least Squares. Given a time-series x_n, x_{n-1}, ..., suppose that it is desired to fit to it a polynomial p_n(i) = Σ_j c_j(n) i^j of degree N which minimizes the quantity

    Σ_{i≥0} (1-b)^i [x_{n-i} - p_n(-i)]^2.

As the data x_n are progressively generated, it will be desired to refit the polynomial at each new stage n. For each i, the time-series p_n(i) may then be recalculated progressively by kth order smoothing of it, using the following equation: set

    S_k(n;p) = S_k(n;x),   k = 1, ..., N+1.

Example: If N = 0, a straight line (stationary forecast) is then fitted to the data. p_n = c_0(n) is then the estimate of the line's ordinate made at the nth stage. It is given by

    p_n = b x_n + (1-b) p_{n-1}.

14-6.3 Comparison of Running Average and Regression Methods
Running averages are smoothers with equal weights of finite past extent. The average weight is concentrated at the point corresponding to (B-1)/2 time-periods into the past. The average weight of an exponential smoother is concentrated at the point corresponding to (1-b)/b periods into the past. By equating these points, the value of b that in this way corresponds to a given value of B (that, for example, which had been previously in use) is

    b = 2/(B+1).

14-6.4 Statistical Control
The purpose of such averages and regressions is of course not to fit the past data, which are already known, but to estimate future values of the x_i. The aim is thus to track and predict. Note that the effect is also to exert control on the future, for action will be based on the predictions; e.g., how much to produce, or ship, or store, to meet the predicted demand. Thus, the methods are methods of statistical control.
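The following minimal sketch (not part of the original text) places the running-average estimator of paragraph 14-6.2(1) beside first-order exponential smoothing, with the smoothing constant matched to the base length by b = 2/(B+1) as in paragraph 14-6.3. The demand history, the base length, and the function names are hypothetical, chosen only for illustration.

    def running_average(xs, B):
        # running average of base length B (para 14-6.2(1))
        window = xs[-B:]
        return sum(window) / len(window)

    def exponential_smooth(xs, b, s0=None):
        # first-order exponential smoothing S1(n;x) (para 14-6.2(2)(b))
        s = xs[0] if s0 is None else s0
        for x in xs:
            s = b * x + (1.0 - b) * s
        return s

    history = [4, 7, 2, 9, 5, 6, 8, 3, 7, 6]   # hypothetical demand by period
    B = 6
    b = 2.0 / (B + 1)                          # correspondence of para 14-6.3
    print(running_average(history, B), exponential_smooth(history, b))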
In practice a good predictor should meet the following tests: (1) convergence: if the hypothesized pattern of demand is correct-e.g., that E(x,) = a + en (linear trend)-then the predictions x should converge rapidly upon this estimate; (2) response: if the hypothesized pattern of command should change-a trend that did not previously exist suddenly develops -then the predictions Xn should respond rapidly to the change without excessive lag; (3) stability: the predictions should not develop increasing oscillations or departures from the actual 14-9 series being tracked (as can happen in over-control or under-control) ; and (4) smoothness: to the extent that the X; merely fluctuate randomly around a smooth average, the predictor should be designable to provide a given degree of smoothness of values of the Xn's. While often only the total quantity demanded-i.e., the increments in Q (t) -is tracked, this has been largely because no systematic procedures have been available for establishing safety stocks based upon explicit recognition of variance in demand. For maintaining these safety stocks, the variance rate of demand must be estimated so that a second time series must be tracked in addition to total demand. In the case of compound demand, especially compound-Poisson demand, the two processes to be tracked may be chosen to be (1) q, the total quantity demanded, as a rate per time period (2) S2 , the square of the order-size. The second process need only be tracked when a demand occurs. From the two, the variance then can be estimated for any time period using the formulas in table 14-1. As an alternative to (1) and (2), one may track: (1') q, as above (2') I q -the forecast of q I where I I symbolizes the absolute value of the quantity. Evidently, should it be necessary, an entire probability distribution which was hypothesized to be changing over time could be tracked by making estimates of its parameters from the data available in each time-period. Unless other than at most 2-parameter distributions are hypothesized for demand, such distribution tracking would not be necessary. 14-6.5 Supply System Effects Functionally, a multiechelon supply system is a network linking producers with consumers. The nodes represent major producing points such as factories and arsenals, major transfer points such as warehouses and depots, and major supply distribution points. Dispersed around these nodes are the consumers who do not perform enough important transfer of supplies to ~•be represented as nodes. In the network, particularly for military supply, there are two primary flows: (1) the flow of stock from producer towards consumer, and (2) the flow of information from consumers towards producers. Often the flow of each occurs along exactly the same branches of the network, in opposite directions. That this is not necessarily a good design for the flow may be seen from the analysis which follows. Consider a typical supply point S which is in the middle of the network. With respect to any given item of supply, the point S typically receives stock from one other node (at least most of the time for any consecutive period of time) which may be termed S's "supplier." In turn, S supplies perhaps a number of other nodes with the item, including all consumers assigned to S for supply. From the population that S supplies, it receives requisitions. When S in turn needs supply, it requisitions from its supplier. 
Thus, supply of the item flows in one direction, towards customers; while requisitions, or information concerning stock needs, flow in the opposite direction. Unfortunately, the order placed by S upon its supplier tends to be an inaccurate and exaggerating indicator of the demand which S receives. Specifically, suppose that S reviews its stocks, on hand and due in, periodically at regular intervals. The total demand placed upon S by all of its customers is then a discrete stochastic process, and the total quantity demanded by these customers in the nth period of time can be denoted by x_n, which will fluctuate randomly in the typical case. We may, for illustration, assume that x_n is a stationary and independent series with a mean x̄ and a variance σ^2.

At the end of a given period, say the nth, the supply point S will reforecast the future rate of the demand per period. If a running average of the demand in the last b periods is used to forecast, as is typical procedure, then the forecast made at the end of the nth period will be

    f_n = (x_n + x_{n-1} + ... + x_{n-b+1}) / b.

If the series x_n is stationary, then f_n is just the estimate of x̄ made at the nth period. Corresponding to any given forecast f_n of the demand rate, the point S is typically authorized a requisitioning or asset maximum proportional to the forecast, say p f_n, where p = 1 + the number of review periods in the lead time for the point S to be replenished by its supplier. The point S will now requisition from its supplier the quantity

    y_n = p f_n - p f_{n-1} + x_n

to bring the quantity at S which is on-hand-plus-due-in from its supplier less due-out to its customers up to the level p f_n.

Now let X_k denote x_{m+1} + x_{m+2} + ... + x_{m+k} = total demand on S in some k consecutive periods of time during continuing supply operations (i.e., m is any particular integer); let Y_k = y_{m+1} + ... + y_{m+k} = the quantity requisitioned by S from its supplier at the ends of these same periods. It is then easy to verify that the average value of Y_k is equal to the average value of X_k, i.e., that the point S transmits on the average just the demand that it receives. However, the variance of Y_k is significantly greater than the variance of X_k. In fact, as may be verified with a = p/b,

    I_k = σ^2(Y_k) / σ^2(X_k) = 1 + 2a(1 + a)        for k ≤ b
                              = 1 + 2(1 + a)p/k      for k > b.

Thus I_k can be substantially greater than 1. For a typical example in which S is a post supply, p might be authorized to be 45 days; and b, 6 months. In that case I_k = 1.625 for periods as long as 6 months at a time. This represents a standard fluctuation of almost 30% in Y_k around X_k.

The matter does not improve by cascading the transmission. Let S* denote S's supplier. S*, of course, will receive a number of such input demand processes y_n from points in addition to S. Each such y_n is now not an independent random process, but dependent. If S* forecasts and orders from its supplier in a fashion similar to S, S* will in turn transmit upwards a process that can be labeled z_n. Let Z_k denote the sum of k consecutive values of the z_n, and let a* denote the ratio p*/b* for the point S*. Typically, b* will be larger than b. For a time period of length k < b, b*, b* - b + 1, the increase in σ^2(Z_k) over σ^2(X_k) is multiplicative, i.e.,

    σ^2(Z_k) = [1 + 2a(1 + a)] [1 + 2a*(1 + a*)] σ^2(X_k).

In order to prevent the worst effects of the above phenomenon, intermediate supply points should not revise their requisitioning objectives quickly based upon demand experience.
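A minimal simulation sketch (not part of the original text) of the amplification ratio I_k derived above follows. The normal demand, the parameters p = 1.5 and b = k = 6, and the replication count are hypothetical assumptions, chosen so that the theoretical value is 1 + 2a(1+a) = 1.625.

    import numpy as np

    rng = np.random.default_rng(0)
    p, b, k, reps = 1.5, 6, 6, 20_000
    a = p / b

    pairs = []
    for _ in range(reps):
        x = rng.normal(100.0, 20.0, size=b + k)              # stationary demand on S
        f = np.convolve(x, np.ones(b) / b, mode="valid")     # running-average forecasts
        y = x[b:] + p * (f[1:] - f[:-1])                     # orders S places on its supplier
        pairs.append((y.sum(), x[b:].sum()))                 # (Y_k, X_k)

    Y, X = np.array(pairs).T
    print("simulated var(Y_k)/var(X_k):", Y.var() / X.var())
    print("theoretical 1 + 2a(1+a)    :", 1 + 2 * a * (1 + a))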
The best procedure is to transmit demand not in return for receiving stock-i.e., not by means of the requisitioning process-but directly from field consumption to the production (or procurement) echelon, bypassing all intermediate echelons. Instead of establishing demand-responsive requisitioning objectives at each echelon, a single base stock for the entire system can be determined based upon the consumption demand forecast. Intermediate stocks are then simply allocated at echelons. Such a system is termed a base stock system; it treats the entire set of supply echelons as but a single supply stage between production and consumption. 14-6.6 Response.s of Predicto1·s The general subject of the response characteristics of predictors is an important new topic beyond the scope of the present volume. Nevertheless one obstacle to automatic smoothinge.g., using a computer to exponentially smooth data-is that the computer program, if it is not 14-11 critical, will smooth any data fed to it even if the data happen to be a transmission error. The equation for exponential smoothing can be written in the form: fn-fn-1 = b(x,.-fn-1) showing that the change made in the forecast is directly proportional to the difference between the old forecast and the new data. This responsiveness may not be sufficiently critical of possible data errors. A simple improvement can be made by making b itself be a matter of choice with a value dependent upon the difference between x" and the old forecast. For example, b could be set equal to c (p + [x" -f~~-1] 2), where the constants c and p are then chosen to make the response coincide with that of a predictor that uses the simple constant b when the values of the data Xn are in a reasonable range. 14-7 Supply Effort 14-7.1 Vertical Nature of Effort As noted, supply, including storage, is an exceedingly vertical kind of military operation. The various elements of effort typically are supplied by different commands and locations, at non simultaneous times, and, where expenditure of money is required, are covered by different financial accounts and budgets. As a consequence, the optimizing of all considerations that affect the supply and storing of any item will appear for each participant in the effort to be nonoptimal. 14-7.2 Effect of Frequency of Supply on Effort Required; "Set-up" The principal considerations in economizing supply effort is frequency of supply. In recurrent military procurement, economy occurs as a quantity discount plus the fixed administrative costs of procurement actions. When the supply rate is continuous and constant, economy is lost by acceleration of demand, which amounts to a change in supply frequency at given rate of supply. Most such economies of scale trace back to economies of the intermittence of activity and of the learning that is produced by continuance of an activity. For intermittent supply, the linear case is that in which, when the quantity supplied at a given time is Q, the cost incurred (effort required) at that time is representable as E + eQ The element represented by E corresponds in manufacturing to the setting up or changing over of a machine or the organization of a production team. In transport, it corresponds to the cost of transporting the conveyance itself. For some physical processes, it represents loss of physical yield of product while production comes up to quality, e.g., through adjustment periods of tool-settings on machinery. 
It may represent production lost by humans during the learning or remembering that occurs when a job must be done once again. A more specific representation can in such cases be obtained by modelling the machine or productive facility that is being set up or changed over, modelling the time required as time lost by that facility. For meeting capacity restrictions, this modelling is not optional, but necessary. When the element E corresponds to gradual learning, it may represent the area over a function of the form p(t) = [p-p(O)] e-bt + p(O), between the function and the t-asymptote p, where p (0) is the production rate immediately after set-up, p is the production rate after a long time (of learning), b is the learning rate, and p(t) is the production rate at time t. That integral then represents lost production, the loss in effect being incurred not when production is started but when it is interrupted. 14-7.3 Acceleration of Effort When production is at a fairly continuous rate but with fluctuations, the cost of the fluctuations corresponds to an acceleration cost. If this is symmetric in the actual rate around an aver 14-12 age, the cost or effort structure is termed quadratic costs. For linear programming, which is the customary form to which the computation of such production programs reduces, quadratic approximation may be replaced by any convex production cost ( cf. para 8-9). But note that linear programming cannot be employed to handle set-up or changeover costs since these are concave cost functions for the effort. 14-7.4 Effort When Storage Is Bypassed-Unavailability Costs Bypassing of storage occurs frequently in the direct supplying of demand when inventory is 0 in physical magnitude (and thus likely is negative if the demand has had to wait). Supply points that merely take orders for items, but do not stock the items, operate entirely out of negative inventory. Thus only that fraction of demand which corresponds to availablility of inventory is supplied out of inventory and therefore enters inventory. Consequently the effort of supplying demand directly from replenishment, bypassing inventory, may be grouped with any imputed costs of unavailability. 14-8 Inventory Losses and Holding Effort 14-8.1 General Stock held (stored) to be available for demand is formally analogous to a system whose ini tial condition is to be maintained if possible, or whose decay in condition is to be minimized so far as possible. Three developments must be distinguished: ( 1) physical loss of stock dur ing storage, (2) decrease in the value of stock during storage, and (3) holding effort expended to restrain (1) and (2). Recent doctrines of inventory-analysis, possible because they have been more concerned with peacetime inventories than with wartime inventories, sometimes appear to lump all of these together into a cost of holding material in storage. For commercial inventories no doubt this is sufficient, but effort and physical stock may not be completely exchangeable in a combat inventory. 14-8.2 Physical Losses While stored, some stock may be destroyed by natural accident, through human error, by deliberate enemy attack, or may unaccountably disappear. Destruction and loss represent reduction in the amount of inventory at rates additional to the rate of demand. The rate will depend upon the holding effort. Instead of being destroyed, the stock may be merely damaged. 
In addition, the condition of items stored may deteriorate with time, at rates typically dependent upon environmental condi tions in storage and upon the kind of storage effort that has been devoted to packing, maintenance of stores, etc. Damage and deterioration represent some reduction in the usefulness of the material, as measured either in reduced useful lifetime after issue or reduced rate of effective ness in use after issue, or both. For some types of items, deterioration in storage is unavoidable -e.g., perishable stock-the deterioration rate varying sharply with the type of item (e.g., foodstuffs vs. batteries vs. tires). 14-8.3 Loss of Value-Surprise Obsolescence Sudden unexpected obsolescence of previously useful material is a constant, if intangible, risk for an inventory, and is the principal kind of unexpected loss of value of the inventory. (1) The main instances of obsolescence are of the following types: (a) change in (combat) mission of the inventory, especially the time and location it should possess. (b) technical innovation (i.e., a new product or design). Obsolescence causes a reduction in value that may warrant disposal of the inventory. The total loss of value due to obsolescence will increase with the size of the inventory. Because of the unpredictability of occurrence of obsolescence, the only defense against it is to keep inventories small. 14-13 (2) Losses due to obsolescence may be representable in two principal ways: (a) For national inventories of many items of materiel, losses due to obsolescence may be amortized as a part of the holding effort, i.e., in addition to all other holding effort. The amortization rates should be in direct proportion to the estimated obsolescence rate of the type of item stocked-e.g., high for missile and other items of new technology, low for automotive, etc. (b) For a particular inventory, the specific occurrence of obsolescence can be represented as a specific random future event. The event may be Bernoulli in type if the obsolescence rate is stationary, or may more generally be a renewal process. The obsolescence rate corresponds to the reciprocal of the average "lifetime to obsolescence." An analytic illustration occurs in paragraph 14-17.2. When obsolescence is represented as a specific event, the estimated salvage value of the stock that remains may be entered in specifically when programming an action to commit stock to inventory. The programming may then be structurable as a one time provisioning, as illustrated in paragraph 14-18. 14-8.4 Holding Effort (1) Elements The principal elements of holding effort are (a) guarding; fire-protection; inspection; control of heat, humidity, exposure; procurement of the space needed. (b) in combat, special site preparation, including camouflage and defensive action. Effort can always be measured in terms of the commitment required-of men, equipment, and the time for which they are committed-so as possibly to reveal the comparison of the value of alternative commitments of them. All such elements of effort should be carefully distinguished from elements that attend supplying but not holding. For programming the size of peacetime inventories, measures are needed of the effort required to minimize losses. Standard Army cost accounting may provide a record of peacetime holding cost effort; so does the cost experience of comparable nonmilitary activities. 
However, because of the universal tendency of accounting systems to pro-rate fixed costs over variable costs, standard costs have sometimes to be further analyzed before being employed. For example, the cost of entering an item into storage is often grouped by accounting with the cost of holding the item in storage. For programming the size of combat inventories, measures are also needed of the relationship between holding effort expended and the corresponding loss rate resulting. Extreme values of the relationship are well known in the advisability of camouflaging and the importance of packaging. No general relationships appear to have been experimentally confirmed. If the effort rate required to restrain the loss rate to a given relative fraction of inventory is proportional to the size of the inventory, then the holding is termed linear. Loss to attack or because of lack of camouflage should be assumed to increase with the size of inventory, due to increased detectability. Thus survival should be exponential in form and at rates increasing with the holding effort. In fact, the struggle to maintain supplies that are under combat attack is equivalent to a problem in combat itself (cf. Combat, ch 15).

(2) Examples. A transportation inventory occurs when cargo is loaded into carriers for transport. Reference 14-3 assumes that when such an inventory of size I is escorted through a region of possible enemy attack, the total losses will be of the general form

    a I exp(-e/I)

where a is a direct measure of enemy effectiveness (proportional to route length, for example) and e is a direct measure of escort effort and effectiveness. Some stationary storage configurations may be similarly representable. Reference 14-4 develops detailed representations of the storage losses that could be expected to result when nuclear weapons are stored in hardened sites that are programmed against enemy attack (the attack being based upon the hardness programmed). Among the loss functions examined are the linear (when supplies cannot be organized to their own defense) and, when supplies can be organized to their own defense and are stored at uniform areal density u^(-1) around a central launching site, a loss function of the form

    I - (2/(u a^2)) [1 - (1 + a √(uI)) exp(-a √(uI))]

where a is a direct measure of enemy attack effort. If the density of launching sites is inversely proportional to the storage radius from launch, then the loss function becomes of the form

    I - c [1 - exp(-I/c)].

14-9 Supply Effectiveness
14-9.1 Inventory as a Process
To control inventory effectively can require action that takes time for its effects to be felt. As examples of time requirements may be cited the time-interval needed to establish a forecast base and the replenishment lead time. Consequently, it can be important to regard inventory explicitly as a continuing process taking place over a period of time. In fact, there are several essential element-processes to be dealt with individually:
(1) H(t), the amount of stock physically on hand at time t
(2) B(t), the amount of stock due out to customers (backordered) at time t
(3) I(t), net stock, or inventory, = H(t) - B(t); I(t) may be negative
(4) D(t), the amount of stock requested from replenishment sources by time t but not yet received into stock
(5) J(t) = I(t) + D(t) = net assets at time t
(6) P(t), cumulative receipts into inventory to time t
(7) Q(t), cumulative demand (filled and backordered) to time t
Thus I(t) = P(t) - Q(t). Usually B(t)H(t) = 0.
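A small bookkeeping sketch (not part of the original text) of the element-processes H, B, I, D, and J follows, under a continuous-review policy of the kind treated in paragraph 14-13. The demand distribution, the parameter values, and the event loop are hypothetical assumptions made only to exhibit the accounting identities.

    import collections, random

    random.seed(2)
    c, Q, L = 60, 20, 3            # maximum stock, replenishment quantity, lead time
    H, B, D = c, 0, 0              # on hand, backordered, due in
    due = collections.deque()      # outstanding replenishments as (arrival_period, qty)

    for t in range(1, 25):
        while due and due[0][0] == t:          # replenishment receipts arrive
            _, qty = due.popleft()
            D -= qty
            fill = min(qty, B)                 # receipts first clear backorders
            B, H = B - fill, H + qty - fill
        d = random.choice([0, 0, 1, 3, 8])     # compound demand this period (assumed)
        issue = min(d, H)
        H, B = H - issue, B + (d - issue)
        I = H - B                              # net stock, may be negative
        while I + D <= c - Q:                  # reorder in whole multiples of Q
            due.append((t + L, Q))
            D += Q
        J = I + D                              # net assets
        print(f"t={t:2d}  H={H:3d}  B={B:2d}  I={I:3d}  D={D:3d}  J={J:3d}")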
Figure 14-3 portrays the theoretical fluctuation in discretely replenished inventory in response to a typical random compound demand pattern. Increases in inventory on hand H(t) correspond to the arrival of replenishment quantities. Increases in inventory (on-hand plus on-order) J(t) occur when replenishments are requested. Decreases in each inventory occur when demand is received, the amount of each decrease corresponding to the quantity demanded in each demand (e.g., requisition). Replenishment lead time is L, and replenishment quantity is Q.

[Figure 14-3. Inventory Processes. Legend: I(t) = net stock = H(t) - B(t); J(t) = I(t) + stock on order; L = lead time; c = maximum stock; c - Q = 1st re-order level; c - 2Q = 2d re-order level; etc.]

14-9.2 Measures of Supply Effectiveness
The principal measure of supply performance is the way demand is satisfied. The following quantitative measures, listed in approximate order of use, can be identified:
(1) The quantity supplied.
(2) Average delay in meeting demand, W̄, measured as the average length of time per unit of materiel demanded by which delivery exceeded the delivery requirement date.
(3) The quantity backordered at time t, B(t). For an inventory recurrently replenished to supply a (roughly) stationary demand rate q, the average quantity backordered B̄ satisfies the relationship B̄ = q W̄.
(4) Availability (A), defined as the fraction of the demand requirement that is met without delay.
(5) Delay response spectrum. Subparagraphs (1) through (4) above are summary measures. More detailed measures can be had. A single demand is typically a request that delivery of various quantities be made at various requested times in the future. The order-size S is thus a schedule S(T) of quantities to be delivered, cumulatively after time t and before time T. Let P(T) denote the actual delivery achieved. Then the ratio P(T)/S(T) represents the fractional response. For each T the probability distribution of P(T)/S(T) represents the probabilistic response; conversely, for each possible value of the ratio, the time T required to attain that ratio may be probabilistically represented.

14-10 Intermittent Replenishment of Inventory
14-10.1 Conditions Affecting Replenishment Quantity
The importance of correctly balancing inventory holding effort and replenishment frequency may be very great. Combat effectiveness of the front may be seriously affected if combat inventories at the front are either insufficient or excessive. Excess inventory creates exposure, immobility, and excessive holding effort. Replenishment that is too infrequent can make it possible for enemy attack between replenishments to deplete supplies and gain victory. Some of the factors that influence the quantity of replenishment are discussed here.

Ships, aircraft, and other conveyances have minimum feasible sizes. Sizes above the minimum physically feasible size may be determined by considerations of obvious economy, the effects of nature being random. Often these considerations are determined not merely by the item whose frequency of supply is currently to be calculated, but rather by the great many other uses which the vehicle has in addition to supplying this particular force-unit.
In wartime, the feasibility and the effectiveness of escort size may then dominate the factors that determine shipment size. For example, naval analyses in World War II (ref. 14-5) showed that the larger the size of a North Atlantic Convoy, the more effective it was in defense against submarine attack. (cf. combat inventory losses, para 14-7). Machines that. produce objects can usually profitably produce objects faster than the rate at which the objects are needed. The resulting intermittent production can result in intermittent replenishment of inventories. Obviously, if the machine is designed to be a multi-product machine, it can then supply many items, permitting a variety of demand to be satisfied. The amount of use of a machine influences inventory replenishment policy. If replacing a worn part in a machine will require stopping the machine, then in the design of the machine an attempt may be made to lengthen as much as possible the intervals between replenishment. Replenishment of the supply of fuel or of lubrication material of a machine (motor, reactor, etc.) is in many cases most economically designed to be done at discrete intervals. Sometimes the machine carries the fuel or material around with it like an inventory. Procurement typically affords a common example of intermittency. Other factors besides production speed capacity now affect the frequency of replenishment. One of these is the amount of effort which the government is required to exert to advertise its need for supply, to evaluate competing bids, and thus to "set up" the occurrence of production by some manufacturer. 14-10.2 Replenishment Quantity and Frequency For the quantity by which the inventory is then .replenished, various names are in use depending upon the type of inventories. For inventories created directly out of production, the terms lot-size, batch, run-length may be natural names of the replenishment quantity. For distribution inventories, the term order-quantity, or reorder quantity, is common. The symbol EOQ is widely used to denote economic order quantity. For national procurement to replenish military stocks, the term procurement cycle is common. In this paragraph the replenishment quantity will be symbolized by Q. Note that it may very likely be a vector, composed of amounts of each of various types of items. This is particularly likely for distribution inventories, the replenishment vehicle carrying replenishment supplies of more than one item. For recurrent production, the length of the period of time between two successive replenishments is then termed the replenishment cycle, and the reciprocal of its average length is the replenishment ft·equency. The product of the replenishment frequency and the replenishment quantity (Q) equals the stationary demand rate q. The cycle typically has two phases, a period of production or actual increase of inventory during replenishment, followed by a period of decrease. Sometimes the replenishment rate is so high compared to the demand or drawdown rate of inventory that for practical purposes it can be approximated as being infinite, and this phase of the cycle can be suppressed. Being replenished at discrete intervals, inventories have an oscillating or "saw-tooth" component, called the working or cycle inventory, which is higher just after replenishment ( cf. fig. 14-3). For stationary recurrent production, average cycle inventory is proportional to average cycle length and inversely proportional to cycle frequency. 
In general,

    average cycle inventory = (demand rate q) / (2 × replenishment frequency).

Consequently, factors in effectiveness that favor low inventories oppose factors that favor infrequent cycles, and conversely. Optimal frequencies are often a balance of costs that increase with cycle frequency against costs that increase with inventory. Production may constitute an intermittent demand upon its own resources, tending to induce an oscillation in the inventory of resource materials. In fact, the terminology and concepts of intermittent productive activity are most highly developed in the case of the production or supply of physical material.

14-10.3 Replenishment Lead Time
If replenishment is desired at some time t, it may be necessary to initiate replenishment no later than time t - L, in which case L is termed the replenishment lead time (see fig. 14-3). The greatest increases in replenishment effectiveness can be obtained by shortening replenishment lead times. They are the time-equivalents of supply lines. In most cases discrete replenishment does not mean instantaneous replenishment. For example, a production inventory increases gradually during replenishment as illustrated in figure 14-3. For a distribution inventory, replenishment is more nearly instantaneous unless the time required to unload, unpack, and arrange for disposition of stock is substantial. In national procurement, deliveries from a national manufacturer will typically be spaced considerably over time, so much so that replenishment is nearly continuous. However, accounting inventories - e.g., stock on hand, capital - tend to be replenished instantaneously, as for example when the signing of a procurement contract obligates a large sum of money all at once.

14-11 Inventory Review
Of the policies by which an inventory can be reviewed and replenished at discrete times, two different procedures, each effective under appropriate conditions, can be examined analytically.
(1) Periodic review and replenishment, the replenishment quantity requested (reordered) at each review being approximately equal to the demand experienced since the last reorder.
(2) Continuous review, with replenishment by a constant quantity Q triggered not by the clock but by the inventory's reaching a reorder level.
Both of these schemes employ a roughly constant value c for the maximum amount of inventory that could be on hand. This maximum setting is very much part of the control. As the demand rate changes, it may be advisable to adjust this maximum. For continuous review, the corresponding variables, Q the replenishment quantity and c the maximum inventory, are the two fundamental independent design variables of any policy of control of the inventory. The quantity c - Q is customarily termed a reorder level or, more generally, a replenishment level. Other terms for it include "reorder point" and "reorder point warning level." The level is not a physically significant level of stock, and the policy requires that a perpetual record of inventory be kept in order to detect the fact that the level has been reached. The effort required may well offset the other advantages of the continuous review policy over periodic review (mainly, somewhat less inventory for the same supply effectiveness). The quantity (c - Q - qL) is customarily termed safety stock, especially when its function is explicitly to protect against unexpected excesses of demand above the average demand during the lead time L.
It is not an action level, but is a useful concept for analyzing the behavior of inventory, especially as the proper value depends upon the typical rate of variance of demand. Unfortunately, in many inventories the variance of demand is not carefully measured or estimated.

In periodic review, review is made (and replenishment initiated if needed) at the end of a review interval of length T. This length, plus the value of c, the maximum inventory, constitute the two fundamental independent design variables of any periodic policy of review. A periodic review policy may be complicated by the addition of a minimum replenishment quantity Q_min to avoid the inefficient reordering of small quantities. The resulting policy is then characterized in the open technical literature on methods of inventory control as being of "(s,S)" type, the replenishment rule being "reorder if stock level J(t) (on hand and on order) is below s, in which case reorder enough so that J(t) is made equal to S." In terms of the symbols used here, the policy would be symbolized as (c - Q_min, c). The paragraphs which follow go into the details of control of the inventory using either a periodic or a continuous control. These details provide the precise computational base to be used in control.

14-12 Periodic Review and Replenishment under Constant Leadtime
14-12.1 Definitions
The value of inventory (net stock) is examined at equally spaced times t_1, t_2, ..., and at each such review a quantity S_i is reordered sufficient to raise the asset stock J(t) to an accepted maximum value c. When c does not change in time, S_i is just the additional demand received since the last review. The reorder arrives at time t_i + L. If the length of the time interval between successive reviews is a constant T, then in general L will be equal to kT + x for some nonnegative integer k and for some number x between 0 and T.

14-12.2 Time Behavior of Inventory
For any value of t_1 between 0 and T and for any integer n, the basic inventory equation can be seen to be

    I(nT + L + t_1) = J(nT) - ΔQ(nT, nT + L + t_1)

where ΔQ(x,y) symbolizes the total demand received between time x and time y. When J_max is stationary at the value c, the inventory is thus

    I(t) = c - ΔQ(t - L - t_1, t).

Averaging over the values of t_1,

    Pr{c - I(t) = x} = (1/T) INT[0,T] Pr{ΔQ(t - L - t_1, t) = x} dt_1.

Thus, for stationary demand with mean rate q and with variance rate σ^2,

    Ī = c - q(L + T/2)    and    σ^2(I) = σ^2 (L + T/2) + (qT)^2 / 12.

14-12.3 Approximate Control Specifically of Availability, Backorders
When control specifically of availability is desired, especially a specified high degree of availability, the simplest approximation is obtained by considering the inventory at t_n^+ and t_n^-, the times just after and just before replenishment respectively, t_n being (n + k)T + x for n = 0, 1, .... The basic relationships are

    I(t_n^+) = c - ΔQ(t_n - L, t_n)    and    I(t_n^-) = c - ΔQ(t_n - L - T, t_n).

To illustrate, for stationary demand let q(x;y) denote Pr{ΔQ(t - y, t) = x} and let Q(>x; y) denote Pr{ΔQ(t - y, t) > x}. Then

    Pr{B(t_n^-) = x} = q(c + x; L + T)

and

    B̄(t_n^-) = INT[c,∞] (x - c) q(x; L + T) dx = INT[c,∞] Q(>x; L + T) dx

and the availability at this time (t_n^-) is

    A(t_n^-) = Pr{I(t_n^-) ≥ 0} = 1 - Q(>c; L + T).

Analogous expressions for times (t_n^+) just after replenishment are obtained by replacing L + T in the above by L.

Example. When demand is normally distributed with rate q and variance rate σ^2, the value of c^- can be found which will bring about a specified value A of the availability by referring to the normal tables for the value of
    x = [c^- - q(L + T)] / (σ √(L + T)),   the value for which   (1/√(2π)) INT[-∞,x] e^(-y^2/2) dy = A.

Finding a value c^+ also, for which x = [c^+ - qL] / (σ √L), the values of c^- and c^+ can be simply averaged. The calculation of optimal backorders is illustrated in paragraph 14-14 since it covers continuous review as well.

14-13 Continuous Review - Constant Leadtime and Replenishment Quantity
Under this type of control, the stock J(t) is kept from going below a reorder level of c - Q by reordering a replenishment quantity Q each time that J(t) reaches the level c - Q, thereby instantly raising J(t) by the amount Q to the level c. Equivalently, if at the time that a reorder is placed total demand Q(t) has reached some value x, then the next reorder is placed when Q(t) reaches x + Q. Equivalently, in the case that Q is very small, the drawdown upon inventory caused by a unit of demand is replaced into inventory at a constant timelag of L, the leadtime. In compound demand, a single customer order-size that causes J(t) to reach the level c - Q will typically cause J(t) instantaneously to overshoot the level downwards. In the following analysis it is assumed that stock is not reordered at the time of overshoot unless it exceeds another multiple of Q. Note that continuous review is especially likely to be warranted for a critical item in very short or difficult supply. Q is then typically equal to 1. Two equations describe the inventory as a process:

    (1) net stock      I(t) = J(t - L) - ΔQ(t - L, t)
    (2) slack assets   c - J(t) = [c - J(t - L)] + ΔQ(t - L, t)   (mod Q)

where the expression a + b (mod Q) means that the sum a + b as ordinarily computed is then to be reduced to the nonnegative remainder over the largest multiple of Q that is less than the ordinary sum a + b; thus 3 + 4 = 2 (mod 5). By application of the initial conditions of inventory at time t = 0, the above equations can be solved to yield

    c - J(t) = [c - J(0)] + ΔQ(0, t)   (mod Q)
    I(t) = c - { ([c - J(0)] + ΔQ(0, t - L)) (mod Q) + ΔQ(t - L, t) }

permitting explicit computation of the probability distribution of inventory at time t. c - J(t), slack assets, is analogous to the position of the pointer on a roulette wheel of circumference Q which is spun at random times (the demand times) by revolutions each equal in total arc to the customer order-size. For large t, c - J(t) thus becomes rectangular in probability of value. In the case of stationary demand, let J and I denote the limiting random variables J(t) and I(t), respectively, as t approaches infinity. Then J is rectangularly distributed in the interval (c - Q, c), and

    I (net stock) = J - ΔQ(L)

where ΔQ(L) symbolizes the demand in any time interval of length L. As before, let q(x;L) stand for the probability density that ΔQ(L) = x. Then

    pd{I = x} = (1/Q) INT[c-x-Q, c-x] q(y;L) dy

and

    J̄ = c - Q/2    and    Ī = c - Q/2 - qL.

Referring to paragraph 14-12.2, the variance of inventory is less under continuous review than under periodic review, but the means are the same when the average replenishment quantities, qT and Q, are equated. This is as expected since less control effort is expended in the periodic review case, with the result of less control. If Q is specified, then an approximate control of availability and/or backorders can be readily calculated in a fashion that uses the procedures of paragraph 14-12 for periodic review. The analogous relationship is the fact that I = c - Q - ΔQ(L).

..., only one cycle of production is the optimum.

Table 14-2. Fraction of Total Requirement to Make at Beginning of Each of the n Cycles
 n    a_n - a_{n+1}    b_n     fraction made at the beginning of cycles 1, 2, ..., n
 1       .0741          0      1.00
 2       .0280         .667    .56  .44
 3       .0149         .783    .39  .34  .27
 4       .0093         .838    .30  .27  .24  .19
 5       .0063         .870    .24  .23  .21  .18  .14
 6       .0046         .892    .20  .19  .18  .16  .14  .12
 7       .0035         .907    .18  .17  .16  .15  .14  .12  .09
 8       .0028         .919    .16  .15  .14  .13  .12  .11  .10  .09
 9       .0022         .928    .14  .13  .13  .12  .11  .11  .10  .09  .07
10       .0018         .935    .13  .12  .12  .11  .11  .10  .09  .08  .07  .06
11       .0015         .941    .11  .11  .11  .10  .10  .10  .09  .08  .08  .07  .05
12       .0013         .946    .11  .10  .10  .10  .09  .09  .09  .08  .07  .06  .06  .05

The methods of the last example may be readily extended to the case in which the demand rate is assumed to decay exponentially in time. The demand rate is initially assumed to be 1 (by choosing a suitable unit-time scale), and time is now measured forward into the future since the demand rate itself becomes the decision variable, indexing the time. In this case the solution for given n is that

    E_n(t) = nF + Σ a_n e^(-t),   with a_n = 1 - e^(-(a_n - a_{n+1})).

14-17 One-time Provisionings
14-17.1 Types
In some types of provisioning the stock provided to support a given military action or activity will be relatively useless once the requirement for stock has passed. Such provisionings are termed one-time provisionings. Examples include (1) provisioning of perishable supplies which, because of their deterioration, will be useless as future stock; (2) terminal supply actions, i.e., the item supplied will not be supplied again; (3) expeditions in which the relative value of the supply to the expedition far outweighs consideration of the value recoverable from disposing of amounts of the material supplied that may be left over at the end of the expedition; (4) analogous to expeditions, actions in which a negligible physical usefulness attaches to any inventory that may remain at the end of the period of time for which the supply action is intended to provision; (5) the military action being supplied will not be repeated.

14-17.2 The "Newsboy" Problem
In such a supply action, total demand for the provisioning period may vary from forecast and may be representable as a random variable. A frequently cited illustration is that of the newsboy on the corner who can only get one supply of papers. In this type of problem, because of the unpredictability of demand, demand may fall short of supply or exceed it. If demand falls short of supply, the excess stock at the end may have some nominal salvage value (e.g., reduction to scrap); but if demand exceeds supply, the excess will not be supplied, and effectiveness is computable only upon the demand that can actually be supplied out of the stock provided. Formally, the attendant decision problem is of the uncertain requirements type. Let e(Q) be the alternative-effectiveness-equivalent of the effort required to provide the quantity Q. Let v(y;Q) be the effectiveness if demand amounts to y when Q is provisioned (supplied). Let f(y) be the probability function that demand does amount to y. Then

    V(Q) = the expected effectiveness if Q is supplied (provisioned) = -e(Q) + INT[0,∞] v(y;Q) f(y) dy.

The terminal inventory I_T at the end of the operation will be 0 if y ≥ Q, but will be Q - y if y < Q.
The case when demand can be approximated by a continuous random variable and when efforts and effectiveness are linear may be characterized by the following:

    (1) e(Q) = F + eQ
    (2) v(y;Q) = v1 y + v2 [Q - y]    if y < Q
        v(y;Q) = v1 Q - v3 [y - Q]    if y ≥ Q

Here v1 is the effectiveness coefficient for the amount of the requirement that is met, v2 is the equivalent in effort of the salvage value of any amount of the output provided that exceeds the requirement that emerges, and v3 is the coefficient of the ineffectiveness penalty that attaches to the amount of the requirement that emerged but was not satisfied by the output, being in excess of it. Then

    V(Q) = -F - eQ + INT[0,Q] [(v1 - v2) y + v2 Q] f(y) dy + INT[Q,∞] [(v1 + v3) Q - v3 y] f(y) dy.

Let q denote the average requirement, and let G(y) denote the probability that the requirement is greater than y; then V(Q) may be simplified to

    V(Q) = -F - (e - v2) Q - v3 q + (v1 - v2 + v3) INT[0,Q] G(y) dy.

This may be analyzed as follows:

    V'(Q)  = -(e - v2) + (v1 - v2 + v3) G(Q)
    V'(0)  = -e + v1 + v3
    V''(Q) = -(v1 - v2 + v3) f(Q).

Thus if there is a definite optimal strategy it will be to supply that quantity Q* for which

    G(Q*) = (net supply effort) / (total effectiveness at stake) = (e - v2) / (v1 - v2 + v3).

This simple solution can be readily found from a probabilistic forecast of the demand. The resulting value of V(Q*) may or may not be negative. In the nonmilitary economic case, expected profit may be negative even though a break-even analysis, based on the average demand rate q being certain, indicates a profit. Nonrandom break-even analysis is thus inadequate to the case of random demand.

14-17.3 Flyaway Kit
In the prototype problem of this name, not just one type of item as in paragraph 14-17.2, but N items are to be provisioned. The strategy is then a vector Q = [Q1, ..., QN] in which the quantity Qi of the ith item is provisioned. In addition, there are constraints which restrict the set of feasible selections of the Qi so that they cannot be chosen independently of one another. The constraints may express restrictions on total weight, total volume, total cost. The general problem is one of the general type identified in paragraph 13-12. Many variants can occur. A major category consists of the case in which demand for the items will be statistically independent between items and in which Lagrange methods can be used to introduce the constraints. In that case the total problem can be expressed as

    Vi(Qi) = -Fi - (ei - v2i) Qi - v3i qi + (v1i - v2i + v3i) INT[0,Qi] Gi(y) dy

    maximize over [Qi]:   V([Qi]) = Σ_i Vi(Qi) + Σ_k λk (ck - Σ_i cki Qi)

where cki is the amount of the kth constraint consumed per unit of the ith item provisioned.

14-18 Planning Production Using Linear Programming
When production is sufficiently linear, linear programming may be used in a straightforward way to schedule the production and storage of a number of products that compete for resources. The standard problem that submits to this treatment can have the following specifications:
(1) requirements for the ith of a set of items: A quantity q_ij is required to be produced during or before the jth time period.
(2) resource capacities convex in cost: A unit quantity p_kj of the kth resource can be made available in the jth time period at the following increasing rate of effort:

    Rate                               Effort
    p_kj ≤ c_kj1                       e_kj1 p_kj
    0 ≤ p_kj - c_kj1 ≤ c_kj2           e_kj2 (p_kj - c_kj1)
    etc.                               etc.

(3) storage, at a cost.
A quantity P_{ij} of the ith item that is made in the jth time period in excess of the cumulative requirement for the item through the end of that time period can be stored in inventory until needed. Storage capacity itself can be treated as a resource with convex cost if needed.

(4) objective: Meet the schedule of requirements at least effort.

The transportation linear program (ch 9) can be used, with sources corresponding to the levels of production cost and the time periods in which made, and with destinations corresponding to the time periods in which the increments in the requirements occur. The availability of efficient computer routines has made it possible to handle large production plans and to discount future costs when desired. The formulation is not suitable where substantial nonlinearities of setup and changeover are present. No good all-purpose computer routines are available for such nonlinearities in large problems. Dynamic programming, recursively over time, provides a systematic computational procedure for the nonlinear case.

CHAPTER 15
COMBAT

15-1 Scope
The activity of combat requires the putting together of the many activities discussed in the earlier chapters that support combat. In addition, combat consists of a particular process in which the objective of each side is to reduce the capability of the other. A great deal of material concerning many specialized topics of combat is available. The best of the methodology in the unclassified literature has been published in the international operations research literature. New material is continually being developed. To cover all of this material comprehensively has been beyond the time available for this pamphlet. It has been possible here only to touch on the outlines of the subject. The primary objective of this chapter has been to illustrate various basic quantitative methods that may be useful in connection with combat processes and actions, and to suggest by references to the literature where further detailed studies may be found. Even a moderate bibliography of this material has been beyond the resources of the volume; it would include many official reports and studies. Evidently the numerical values of the various constants and parameters (hit and kill probabilities, vulnerabilities, firing rates, etc.) that are referred to at various points in this chapter are a matter of military security.
The material touched upon in this chapter is the following:
(1) representations of effect, hit and kill probabilities, the randomness of performance and effort;
(2) time patterns of firing, especially Bernoulli and Markov detection and fire;
(3) single-attack survival probabilities as a function of force structure and design;
(4) Lanchester-type models of the approximate average dynamics of combat;
(5) probabilistic corrections to the Lanchester models based on (2).

No complete tactical strategies are given in this chapter; the above elements of action are merely the basis for constructing strategies. Reference is made to the examples in chapters 8 and 10 of optimal tactics and strategies. The processes covered in this chapter are prerequisites to Monte Carlo methods of war gaming, and explicit formulas can be produced for all quantities of interest. Monte Carlo war-gaming is simply a method of calculating by statistical sampling the numerical values of random processes of combat that cannot be quickly, explicitly, or as efficiently handled by formula.

15-2 Representations of Effect

15-2.1 Outcomes
In combat fire, the simplest trial is an individual round. An entire attack upon a complex target can also constitute a trial, and for macro-planning it is necessary to treat it so in order to economize planning effort. In general, the outcomes produced by a trial may vary enormously depending upon the nature of the target, the type of projectile and weapon, and many other factors, not omitting psychological ones. To describe damage, there is needed a set of elementary units in terms of which to represent the kinds and amounts of damage. In addition, there is needed a concept of the occurrence of the fire that produced the damage. The ultimate total damage, an obvious example being prolonged area bombardment, may arise as some accumulation from these elements.

The following examples illustrate the variety needed in the concept of damage:
(1) No effect. An outcome of this type in no way alters the state of the target. In "noisy" duels, of course, the target gains the knowledge that a shot has been made and a trial has occurred. Thus in a duel based upon time of reloading, for example, every shot may have some effect. Again, when a land mine might have detonated, but did not, a trial occurred.
(2) A hit on a soldier that only wounds him changes him from an active fighting unit to a demand on the casualty-handling system.
(3) A hit on a vehicle control system may reduce its fighting capability, the vehicle being still effective against certain types of force-units. In the case of a tank, the variety and grades of damage by which the tank's capability may be reduced by an antitank hit can be fairly well enumerated and related to the actual direction and other aspects of the hit.
(4) A hit may destroy all capability of a combat unit; a nuclear hit on a metropolis may have an enormous variety of effects whose ultimate value may be difficult to imagine.

To represent the effect upon the target of fire delivered at it "discretely" means to classify the effect into one of a set of distinct, mutually exclusive values, of which each unit of fire produces exactly one. One possible effect must obviously be no effect. Two obvious classifications of effect are "hit" and "kill." Here, the symbol e(I) is used to denote the probability that a unit of fire produces the effect e, given an impact point I. If the effect is classified as a hit, the symbol h(I) will be employed.
If the effect is classified as a kill, the symbol k(I) will be employed. The point I is located in a coordinate scheme that includes the target's location and extent, if any.

15-2.2 Effect on a Single Target
For a given effect, the function e(I) is commonly termed the damage function, or the effect function. For a given target, the region v of points I for which e(I) > 0 is termed the vulnerable region. The integral

v = ∫_v e(I) dI

is termed the effective vulnerable area (or volume, for 3-dimensional representations) of the target for the effect. In the case of k(I), the area (volume) is called the lethal area (volume). If T denotes the target area (volume), then the ratio v/T is the average target vulnerability. For example, in World War II operations analysis, a torpedo hit on a merchant ship had a probability of about 1/3 of sinking the ship (reference 15-1), so that the effective vulnerable area was 1/3 of the ship's area and the average target vulnerability was 1/3.

15-2.3 Effect on Composite Targets
A typical composite target is an area or a force of units. The area itself may be an index of the amount of damage that can be done, in which case the fraction of the target area destroyed provides a continuous measure of damage. But the meaning of "destroyed" may need to be defined precisely in terms of the way that military capability is affected. A target may be made up of area subtargets, or discrete points, or both. The subtargets may belong to different vulnerability classes.

A city under nuclear attack affords almost the extreme in compositeness of target. The weapon itself contains several separate effects, including radiation, blast pressure, and thermal ignition, which operate to very different radii from the burst point. The city will contain elementary targets of great variety, so much so that a statistical description of the city appears necessary. Reference 15-2 quotes reports that for many U.S. cities the density p(r) of population at radial distance r from the center of the city, in persons per unit area, is given by

p(r) = p(0) e^{-br}.

For a city with circular symmetry, the number of persons P(r) within a radius r of the center can be approximated by

P(r) = P(∞)[1 - (1 + br) e^{-br}],

where P(∞) is the population of the "standard metropolitan area" surrounding the center point. Over a period of time, the values of p(0) and b have tended to decrease gradually. If a city is not circular, P(r) may be corrected by not integrating over 360°. For nuclear warheads, reference 15-3 uses

(0.5)^{a y^{2/3}/(h^b E^2)}

for the probability that a target will survive a nuclear burst of yield y, where h is the target's hardness (in pressure units), a and b are constants, and E is the circular probable error of the impact point in nautical miles.

Evidently the systems analysis of large nuclear targets can be a substantial investigation. Subtargets of various vulnerability classes have to be identified and measured, and their capabilities evaluated. There are a great many attack variables, including the attack height, weapon characteristics, etc., that will all contribute to the representation of the effect of the attack. Effects of nuclear attack are difficult to predict. For example, mass fires that may occur in metropolitan areas may have unpredictable consequences. The distribution of the location of the populace will change during time, and meteorological conditions may significantly randomize the outcome.
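The exponential population-density model quoted above can be exercised numerically. The sketch below assumes purely illustrative values of b and of the metropolitan population, and checks the closed form P(r) = P(∞)[1 - (1 + br)e^{-br}] against direct integration of the density; it is only a demonstration of the formula, not an analysis of any particular city.

import math

b = 0.25          # decay constant per unit distance (illustrative)
P_metro = 2.0e6   # population of the standard metropolitan area (illustrative)

def pop_within(r):
    """Closed-form population within radius r of the city center."""
    return P_metro * (1.0 - (1.0 + b * r) * math.exp(-b * r))

def pop_within_numeric(r, steps=20000):
    """Numerical check: integrate p(s)*2*pi*s, with p(0) chosen so the total is P_metro."""
    p0 = P_metro * b**2 / (2.0 * math.pi)
    dr = r / steps
    return sum(p0 * math.exp(-b * (i + 0.5) * dr) * 2 * math.pi * (i + 0.5) * dr * dr
               for i in range(steps))

for radius in (5.0, 10.0, 20.0):
    print(radius, round(pop_within(radius)), round(pop_within_numeric(radius)))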
15-2.4 Effect on Complex Targets
A typical complex target is a communications network with alternate routing, or any redundant system. Damage changes the state of structure of the system; e.g., how much redundancy is left, at each stage of a serial system, by a hit. Markov models will afford practical representations. For example, the operating components of a tank (driver, commander, gun, track, etc.) can be individually treated in compiling damage probabilities and in modelling the outcome of a hit. This is done in war-gaming micro-models of combat. Representations of effects on complex targets fall into two categories:
(1) single-trial outcomes. For example, an entire attack upon a missile complex, a defensive position, an objective, or upon any coordinated force may be treated as a single attack.
(2) dynamic sequences of effects, e.g., in a long battle, struggle, or campaign. In these, what survives one attack are the fighting units of the next trial. The important effect is the final outcome of the sequence.

15-3 Representation of Survival

15-3.1 Examples
Measuring the effect of the attack in terms of what survives is a common methodology in systems analysis. The literature has become replete with plots of survival probability as a function of the operational variables. Reference 15-4 is an early and classic summary of the methods in connection with aerial operations. It is likely that the various types of attack configurations, computer programs, and numerical results that have been computed are so numerous and broad in scope that great numbers of problems might find approximations already done. This is true even though the analyses may have been naval problems when what is now wanted is an Army problem, or the analysis was an aerial attack when what is wanted now is a ground attack. The problems will tend formally to resemble one another for reasons discussed more fully below. Yet the steady increase in automatic computer capability has meant that it is becoming easier to compute the solution anew than to retrieve the original study and results, even if a systematic catalog of the latter were available, which is not the case. The following are but isolated illustrations of ways in which combat survival has been modeled.

(1) Defense of a position by an infantry squad. The following representation of one method by which a squad can attack a position defended by another squad is reported in reference 15-5. In the method, an attacking squad advances in stages as a group, without firing, dropping under cover at the end of each advance. Each advance produces a volley from the defenders, in which each defender selects at random an attacker as target and fires one shot, the defenders all firing in ignorance of each other's selection of target and outcome. A hit eliminates an attacker, the probability of which was estimated numerically under the conditions. A(n), the number of attackers surviving after n volleys, is a Markov chain under the assumptions. Detailed transition probabilities and numerical illustrations are given in the reference. It is noted that the model makes no allowance for the psychological effects of losses on the attackers. On the average, D/A(n) shots are fired at each attacker on the nth volley, where D is the number of defenders. The probability that an attacker survives the volley is thus approximately (1 - d_n)^{D/A(n)}, where d_n is the kill probability considering the range at the nth stage. The average number surviving the stage is thus A(n) times this latter probability.
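The volley-by-volley approximation just described can be iterated directly. The short sketch below is only an illustration; the defender count, the number of advances, and the range-dependent kill probabilities d_n are hypothetical values, not those of reference 15-5.

# Expected attackers surviving successive defensive volleys, using the
# approximation A(n+1) = A(n) * (1 - d_n) ** (D / A(n)).
D = 8                                        # number of defenders (illustrative)
A = 12.0                                     # attackers at the start of the advance (illustrative)
kill_prob = [0.05, 0.08, 0.12, 0.18, 0.25]   # d_n at successive (closing) ranges

for n, d in enumerate(kill_prob, start=1):
    survive_volley = (1.0 - d) ** (D / A)    # per-attacker survival of the nth volley
    A *= survive_volley
    print(f"after volley {n}: expected attackers = {A:.2f}")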
(2) Reference 15-6 analyzes the survival of a weapon complex that is composed of missile sites and a command site, where the sites are separated by distances of less than 2 lethal radii, so that a hit may affect more than one site. On the assumption of a circular normal hit distribution around an aiming point, the survival of the complex can be computed for a given ground layout. The aim point can be optimized and strategies of one or more attacks investigated if a Markov chain can be used to model the outcomes.

(3) Reference 15-7 considers a single Blue attack using B independent identical missiles against a Red target complex consisting of R separate individual locations with nonoverlapping vulnerable areas. Of the R, a number R_f are firing sites and a number R_c are control sites. Any surviving control site can operate all surviving firing sites, survival of at least one control site being necessary for operating any surviving firing sites. If a Blue missile is allocated to a Red firing (control) site, the probability that the site survives it is s_f (s_c for a control site). B_{fi} Blue missiles are allocated to the ith Red firing site, and B_{ci} missiles are allocated to the ith Red control site. The expected number of operable surviving Red firing sites is

[Σ_i (s_f)^{B_{fi}}][1 - ∏_i (1 - (s_c)^{B_{ci}})].

The number is minimized if all of the Blue missiles are allocated to one of two extremes, either all to the firing sites or all to the control sites, provided that the missiles can be uniformly distributed over the sites chosen as a group. This is also the optimal solution if the total number of Blue missiles is an integral multiple of both R_f and R_c. In practice, discreteness prevents this solution in every case. Thus the optimal solution is ordinarily a mixed allocation.

15-3.2 Relationship of Survival to Structure
The relationship of survival to structure, in combat, is analogous to the relationship of reliability to structure when no deliberate enemy may be threatening the system involved. Reliability is typically measured by the interval of reliability, or the probability of correct system performance on a single mission. The same is true for a combat system. A structure that is parallel in reliability works if any component works. A combat force that succeeds in its mission if any unit of the force succeeds is a parallel force. For example, a group that is trying to penetrate an enemy position succeeds if any member of the group penetrates. The increase in the amount of firepower available to a single combat unit has tended to accentuate the likelihood of parallelism in important structures. The ancient principle of defense in depth is simply that the attack must penetrate a series structure. Thus a likely form is the attack of a series structure by a parallel structure. A defense in depth succeeds if any stage of it succeeds. It is thus a parallel structure for survival, a series structure for penetration. Thus the probability of survival of a single position is apt to be of the form

(1 - p^D)^A,

where A is a measure of the size of the attacking force and D a measure of the depth of defense. If the defense is nonhomogeneous and the attack is nonhomogeneous, this form becomes

∏_{i=1}^{A} (1 - ∏_{j=1}^{D} p_{ij}).

15-4 Delivery Errors
Delivery errors are composed of errors of aim, effects of ballistics, effects of meteorological conditions, and fuzing errors. These create a dispersion of the impact points of a statistical population around an average center of impact.
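The expression above for the expected number of operable surviving firing sites can be evaluated for any proposed allocation, which makes it easy to compare the two extreme allocations against a mixed one. A minimal sketch follows; the numbers of sites, the per-missile survival probabilities, and the candidate allocations are hypothetical.

def operable_surviving(Bf, Bc, sf=0.4, sc=0.3):
    """Expected operable surviving Red firing sites.

    Bf[i] = Blue missiles aimed at the ith firing site,
    Bc[i] = Blue missiles aimed at the ith control site,
    sf, sc = per-missile survival probabilities of a firing / control site.
    """
    expected_firing = sum(sf ** b for b in Bf)
    all_control_destroyed = 1.0
    for b in Bc:
        all_control_destroyed *= (1.0 - sc ** b)
    prob_some_control = 1.0 - all_control_destroyed
    return expected_firing * prob_some_control

# 12 Blue missiles against 4 firing sites and 2 control sites (illustrative).
print(operable_surviving(Bf=[3, 3, 3, 3], Bc=[0, 0]))   # all on firing sites
print(operable_surviving(Bf=[0, 0, 0, 0], Bc=[6, 6]))   # all on control sites
print(operable_surviving(Bf=[2, 2, 2, 2], Bc=[2, 2]))   # a mixed allocation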
For a cluster or salvo, aiming errors of rounds will be correlated, but meteorological, ballistic, and fuzing errors may even then make the round impact points rather independent statistically.

By a Gaussian or normal round is meant a round whose impact point has the probability density of the multivariate normal distribution

g(x) = [1/((2π)^{N/2} ∏_{i=1}^{N} σ_i)] exp[-(1/2) Σ_{i=1}^{N} (x_i - μ_i)²/σ_i²],

where N = 2 or 3, μ = [μ_1, ...] is the average impact point, and σ = [σ_1, ...] is the vector of dispersions (standard deviations). The round is termed circular if N = 2 and σ_1 = σ_2, and spherical if N = 3 and σ_1 = σ_2 = σ_3.

The probable error (PE) is the distance from the center of impact exceeded by one half the theoretical impacts. The probable error may be measured horizontally in range (along the line of fire), laterally (normal to the line of fire), or vertically. The circular probable error (CEP) is the radius of a circle about the center of impact containing half of the theoretical impacts. Probable errors are derived from normal distributions, with PE = 0.6745σ and CEP = 1.1774σ.

In addition to the mean and standard deviation of horizontal and vertical spread, there are numerous other sample statistics of impact points whose use may aid analysis of fire, namely: extreme horizontal dispersion, extreme vertical dispersion, extreme spread, mean radius, radial standard deviation, radius of the covering circle, and the diagonal of a pattern. Detailed consideration of these is beyond concern here, except to remind us that they may be useful, when measurements are available, in testing hypotheses about the distribution of shots. Many of these statistics are tabulated for normal dispersions. Details are given in reference 15-8.

The topic of coverage refers generally to the extent to which one or more rounds hit a target of given extent. A great deal of statistical literature has been devoted to developing coverage topics in connection with Gaussian rounds. Topics involving targets that have extent typically require special statistical tables. A simple polar-planimeter method for computing the hit probability for an irregularly shaped target is described in reference 15-9.

Simple Gaussian statistics are not adequate for representing homing weapons, which pose the general topic of pursuit. It is possible to make state-representations of homing missiles in flight, in correspondence with the actual control states of the system, and to simulate these models in order to calculate ultimate hit probabilities. The topic then belongs in the category of sequences of trials (para. 15-6).

15-5 Effects of Trials

15-5.1 Weapon Effects
The mechanics by which the effect extends throughout its region of effect vary. In the case of fragments, for a given object the probability that it will be hit decreases with radius from the point of impact. Nuclear blast overpressure, and its effect on structures of given pressure resistance, decrease with increase in distance from the detonation point. Thermal ignition and burning at the moment of detonation will depend upon the target and upon conditions. Evidently the nuclear air burst exposes many targets simultaneously to effect, to some extent conducting independent trials at each point. By contrast, the radial extent of a ground burst is often blocked by the target's protection.
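The numerical factors relating the probable errors to the dispersion can be checked directly; the short sketch below recovers PE ≈ 0.6745σ (one-dimensional) and CEP ≈ 1.1774σ (circular normal) from their defining half-probability conditions. It is only a verification of the constants quoted above.

import math

# CEP: radius containing half the impacts of a circular normal round,
# from P(R <= CEP) = 1 - exp(-CEP**2 / (2*sigma**2)) = 0.5.
cep_factor = math.sqrt(2.0 * math.log(2.0))
print(f"CEP = {cep_factor:.4f} * sigma")          # 1.1774

# PE: half the impacts fall within +/- PE of the mean in one dimension,
# i.e. erf(PE / (sigma*sqrt(2))) = 0.5; solve by bisection.
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if math.erf(mid / math.sqrt(2.0)) < 0.5:
        lo = mid
    else:
        hi = mid
print(f"PE  = {0.5 * (lo + hi):.4f} * sigma")     # 0.6745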
For example, a 5-inch shell may have lethal radius adequate to wipe out a gun emplacement of a 12-foot radius if the point of impact is inside the emplacement, but may have no effect at all if the point of impact is outside a well-bagged emplacement.

15-5.2 Effective Radius
At radius r from the point of impact, the effect may be produced with probability e(r) upon any target of a given type that is located so as to have its effective vulnerable region at least partly inside this radius. For a single target, the effective radius of the unit of fire is then

r̄ = ∫_0^∞ e(r) dr,

where the integration is now over all r instead of just over a target. The radius r̄ is the radius for what has been termed a "cookie-cutter"; i.e., the effect is approximated as if every target inside the circle (sphere) of radius r̄ is affected (e(r) = 1 there) and no target outside the circle is affected. This circle is termed the effective circle. When a number of targets may be found within the radius, then the number may be counted and the total used as a measure of damage. The targets may be of different type or value, and weights may be assigned representing the worth of targets. When the cookie-cutter method is not used, then for each radius r the function e(r) may be used to multiply the target density at radius r, and all such products summed to produce the total expected damage. This total must, of course, include every possible impact point of the unit of fire (round, detonation, etc.).

15-5.3 Discrete Effects, Total Probability per Trial
If a round from a given weapon is directed at a given target (i.e., a trial is made), the round selects an impact point I. When the probability density that the point selected is x is symbolized by i(x), then, for discrete effects, the total effect probability per round is

ē = ∫ i(x) e(x) dx.

This is a natural measure of effect. For example, suppose that i(x) is the circular normal distribution, i.e., i(x,y) = g(x)g(y) (see app. D), centered on the center of a circular target that has a radius equal to R. Suppose moreover that k(I) is 1 on the target. Then

k̄ = 1 - exp(-R²/2σ²)

is the probability of killing the target.

Example: Suppose a gun emplacement is circular with a radius of 11 feet, and for I inside the emplacement let e(I) be supposed to be 1. If fire at the emplacement is circular normal with standard deviation of 50 feet, then k̄ = 0.0239.

15-6 Sequences of Trials in Detection and in Fire

15-6.1 Detecting a Target
Two methods of search are compared: (1) search by discrete glimpses, or looks, and (2) continuous searching.

(1) Discrete Search. In searching for a target, a scanner (e.g., a radar) may "look" repeatedly in the direction of the target. On some, but generally not all, of these scans the power interaction between the scanner and the target is sufficient (e.g., returns enough power to the radar antenna) to exceed a given predetermined threshold of detection. Each scan is thus a trial, and detection (a visible blip on the scope) is accomplished. On a trial involving a target-scanner combination of given specification S, at a given range r between them (vector if necessary), denote the probability that detection occurs by a_r(S). In the discussion which follows we suppose S to be constant and suppress consideration of it, replacing a_r(S) by a_r. The search may proceed with a stationary detection rate, typically because the relative range remains constant.
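The circular-normal coverage formula above can be checked directly. The sketch below reproduces the gun-emplacement example (11-foot radius, 50-foot standard deviation) and evaluates a few other radii for comparison; the additional radii are illustrative only.

import math

def kill_probability(target_radius, sigma):
    """Probability that a circular-normal round lands inside a circular target
    centered on the aim point: 1 - exp(-R**2 / (2*sigma**2))."""
    return 1.0 - math.exp(-target_radius**2 / (2.0 * sigma**2))

print(round(kill_probability(11.0, 50.0), 4))    # the example in the text: 0.0239
for radius in (25.0, 50.0, 100.0):               # additional illustrative radii
    print(radius, round(kill_probability(radius, 50.0), 4))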
For given S, the effect of possible changes in r with successive scans can then be ignored, and a_r is constant at some value a. Search is then a process of stationary discrete Bernoulli trials.

In the case where range between target and scanner is decreasing, let search begin with a look when the range of the target is r_1, and at the nth scan let the range of the target be r_n = r_1 - (Δ_2 + ... + Δ_n), where Δ_i is the reduction in range of the target between scans i - 1 and i. Then on the nth trial the probability of success is a_{r_n}, which can be abbreviated to a_n. The probability f_n that, during the entire search, detection of the target first occurs at range r_n is then

f_n = a_n ∏_{i=1}^{n} [1 - a_{i-1}], with a_0 = 0.

Since some range r_N corresponds to the target reaching the scanner, there is a definite probability that detection fails to occur at all, namely ∏_{n=1}^{N} [1 - a_n]. In practice, the detection probability f_n may be a maximum at some intermediate range because of the design of the defense. Values of the probabilities a_r(S), and details for experimentally determining them, are beyond the scope of this volume. The general method is simply to conduct repeated experiments and fit the data on targets that survive to a given range to the equations.

(2) Continuous Search. Scanning at sufficiently high speed may qualify to be represented as continuous. Scanning may be the more nearly continuous if it is simultaneously performed by, for example, a great many independent human scanners, as in the case of an advancing military echelon (the target range may then require appropriate averaging). The detection probability per trial is now replaced by a time rate of detection, denoted here by a if the detection probability is not affected by changes in range or is for other reasons constant during the search. A nonstationary detection probability on discrete trials is correspondingly replaced by a detection rate a(r), and range is assumed to vary continuously. Obviously the rate a(r) will depend upon the scanning rate and upon the approach velocity of the target, typically varying directly with the scanning rate and inversely with the approach velocity. It may be experimentally estimated by fitting data to the equation below.

Attack detection. If a target is approaching a scanner, then the probability density f(r) that the target is first detected at range r is

f(r) = a(r) exp[-∫_r^∞ a(ρ) dρ].

The probability of surprise attack, i.e., the probability that no detection occurs, is exp[-∫_0^∞ a(ρ) dρ].

15-6.2 Hitting a Passive Attacker
In certain types of attacks, e.g., bombing aircraft or a short-range-weaponed attacker, the attacker must close range at least to some point in order to achieve its objective, and will not fire until then. We confine attention to post-detection defense against such an attacker, ignoring the preliminary task of detection and considering only targets which have been detected. The trials of interest are now successive defensive shots at the target. Where there is a significant interval between successive firings, obviously, defensive fire consists of discrete trials. For a high rate of fire, or especially if many defending guns shoot simultaneously at the attacker, the fire may qualify to be represented as continuous. Unpredictable errors of aim, and of interior and exterior ballistics, make the success of each shot a random outcome.
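The first-detection formula of paragraph 15-6.1 is easy to evaluate for any assumed sequence of per-scan detection probabilities. The following sketch uses purely hypothetical values of a_n; it prints the probability that detection first occurs on each scan of a closing target, and the probability that the target is never detected (surprise attack).

# Per-scan detection probabilities a_n as the target closes (hypothetical values).
a = [0.05, 0.08, 0.12, 0.20, 0.30, 0.45, 0.60]

survive = 1.0                                # probability no detection has yet occurred
for n, an in enumerate(a, start=1):
    first_here = survive * an                # f_n = a_n * prod_{i<n} (1 - a_i)
    print(f"scan {n}: P(first detection) = {first_here:.4f}")
    survive *= (1.0 - an)

print(f"P(no detection at all) = {survive:.4f}")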
For a target-defensive-weapon combination of given specification S, let h_r(S) be the probability that a shot fired when the target is at range r is a hit, if the trials are discrete; let h(r,S) be the hit rate if trials are continuous. If successive trials are statistically independent (this may not exclude bias of aim), then the firing process is precisely a Bernoulli process, discrete or continuous as appropriate, and stationary or nonstationary accordingly as the hit probabilities are independent of, or depend on, the range r. Target survival is a simple Markov process, with range as stated. Note that formally this firing process is quite analogous to the detection process already discussed, with hit substituted for detection. Consequently, for any given S and for a continually closing attacker, the probability (density in the continuous case) that a hit first occurs at range r_k is

f(r_k) = h_k ∏_i [1 - h_i] in the discrete case, the product running over the shots fired at ranges greater than r_k, and

f(r) = h(r) exp[-∫_r^∞ h(R) dR] in the continuous case.

The probability that the attacker succeeds in closing, i.e., the average fraction of successful closings, is consequently

∏_i [1 - h(r_i)] in the discrete case, and

exp[-∫_{r_0}^∞ h(R) dR] in the continuous case,

where r_0 is the range to which the attacker must close. As with detection, the function h(R) and the series h(r_i) may be estimated by fitting experimental data to these formulas. If any k hits are required and sufficient to kill the attacker, the kth hit is then the kill. The kill process is thus a series of k successive Bernoulli processes, and thus the kill effort is an Erlang-k random variable. United States World War II naval operations analysis termed the "splash rate" the probability density d(r) that an attacker would be shot down at range r if it had survived up to that range.

15-6.3 Attack of Area Targets
Suppose a target to be located in a given region of size (area, volume) R, the exact location of the target being unknown and assumed to be equally likely to be in any portion of the region. Suppose that a shot intended for the target, when fired into the region, will damage any such target which is within a region of size a, the shot's lethal neighborhood, of the point of impact of the shot. Then the damage probability per shot is c = a/R, a unit coverage factor.

(1) Stationary discrete fire: silent targets. Successive shots will constitute stationary Bernoulli trials if no information concerning the probable location of the target is gained from a shot. The damage probability c can be used in the earlier formulas above for the hit probability h as a function of number of shots fired.

(2) Continuous fire. Suppose that the size a of the lethal region for each shot is exceedingly small in relation to R. The volume of fire must now be correspondingly large to be likely of producing a hit, and may be executed at a rate that can be approximated as continuous. Then the density of the time t, measured from the beginning of bombardment, at which the target is first hit is given by

f(t) = h e^{-ht},

where h^{-1} is the average length of the time interval elapsing until the first hit. As before, if k hits are required to destroy the target, then the target-destruction time is Erlang. When a number N of individual targets are located in the area, then h may be interpreted as the number of targets hit per unit time.
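For a closing attacker under continuous defensive fire, the number of hits accumulated over the closing run is Poisson with mean equal to the integral of the hit rate, and the attacker closes successfully only if fewer than the k hits required for a kill occur. The sketch below evaluates this for a hypothetical hit-rate profile; the profile, ranges, and required-hit counts are illustrative assumptions, not data from the text.

import math

def hit_rate(r, h0=0.08, r_scale=15.0):
    """Hypothetical hit rate per unit of range closed, growing as range shrinks."""
    return h0 * math.exp(-r / r_scale)

def expected_hits(r_close=2.0, r_open=200.0, steps=20000):
    """Integral of h(R) dR over the closing run from r_open down to r_close."""
    dr = (r_open - r_close) / steps
    return sum(hit_rate(r_close + (i + 0.5) * dr) * dr for i in range(steps))

lam = expected_hits()
for k in (1, 2, 3):
    # The attacker closes only if fewer than k hits occur during the run.
    p_close = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    print(f"k = {k} hits required: P(attacker closes) = {p_close:.4f}")

For k = 1 this reduces to the closing probability exp[-∫ h(R) dR] given above.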
If N is moderate and the targets sufficiently isolated that no single shot can damage more than one, then, for a given time t of continual bombardment, survival to time t is for each target a trial with success probability e^{-ht}. Consequently, the probability that n targets remain undamaged at time t after the bombardment begins is the binomial

C(N, n) (e^{-ht})^n (1 - e^{-ht})^{N-n}.

If the density of targets per unit area is q, so that Rq = N is very large, then the number damaged in the time interval t will be Poisson in distribution.

(3) Nonstationary bombardment. For stationary bombardment, no attempted route of traversing the target region will alter the statistics of survival time of targets. If instead there is some preferable traverse (or necessary traverse, and it is known of each shot whether it hits the target or not), then the process becomes nonstationary. As a theoretical example, suppose that each additional discrete shot could be made to cover a region of size r not overlapping any region covered by any previous shot. Then the total number of shots required is not more than R/r. The probability of hit on the nth trial, if no previous hit has occurred, is r/[R - (n-1)r]. The probability for the total search that the hit occurs for the first time on the nth shot is then constant, equal to r/R, as may be verified. This corresponds to sampling without replacement in statistics. In the case of continuous fire, the total time T required to traverse the area will now be proportional to R. The probability density of a hit for the first time at time t during the traverse, given no previous hit, will increase with t. For the total traverse, the probability density of first hit at time t will be constant and equal to 1/T. Results analogous to those of search and defense against a closing attacker can be obtained in case the instantaneous hitting rate at each point of a traverse is some prescribed function of the length of the traverse path to date.

15-7 Combat as a Reaction Process

15-7.1 Lanchester and Force Relationships
The topic of this paragraph began in its current form with Lanchester (ref. 15-10), although earlier notice of the phenomena is recorded by Fiske (ref. 15-11). Doubtless it has been noticed and utilized through countless unknown generations of wars and their strategists. Suppose that two forces, Blue and Red, are opposed in combat at time t with force sizes B(t) and R(t), respectively. As losses occur, these force sizes will be decreasing (the absence of re-enforcements is assumed for the time being) at rates which, neglecting the discreteness of force size as a quantity, can be approximated as dB(t)/dt and dR(t)/dt, respectively. The question of interest is: what are the primary functional quantitative relationships that will exist in the combat between these forces when other factors are considered? The material covered in the paragraphs which follow emphasizes two groups of factors: (1) the quantitative factors of force size, unit effort, and unit combat effectiveness; (2) the effect of the information and tactics of force contacts. In many cases it is nearly impossible to separate the effects of contact and the effects of information. It is evident that in order to find simple functional relationships, some idealizing of situations and some hypothesizing may be necessary. The relationship need not be a single or homogeneous one; if the battle is a composite of contact between elements of the two forces, the total relationships may be a composite of the local ones.
Lanchester's was the first recorded attempt to postulate mathematical forms for basic relationships and to draw military consequences from the mathematical consequences of the suppositions. Since his work was published, attempts that have been made to apply overly simple "Lanchester-type" relationships to entire battles tend to demonstrate that total relationships are neither simple nor automatic. Lanchester attempted no precise use of the statistics of hit probabilities, although he noted the unreliability of small-scale reaction. Instead, he focused attention on "concentration," i.e., upon the form of the contact between the two forces, while assuming an average combat effectiveness. In developing these ideas today, we have the advantage of being able to analyze with greater readiness and at less computational effort (especially in view of the availability of computers either for direct computation or for Monte Carlo simulation) the effects of randomness. As will be evident, inclusion of these effects shows that the precise Lanchester equations are somewhat biased in their predictions from the given assumptions.

In defending the validity of attempting to establish mathematical functions for the above relationships, Lanchester himself pointed out that force size itself is an exceedingly popular and much used measure to explain why the outcome of battle goes the way it does, and upon which to base expectancy of its future outcome. Force size, he points out, is arithmetic. To commit oneself so far to arithmetic, but to allow no deeper quantitative analysis, is not much justification for placing any confidence in the arithmetic of force size alone. It should be emphasized that the purpose of analysis such as this is to evaluate and to design battle tactics. In these paragraphs, emphasis will necessarily be upon using Lanchester-type approximations for descriptive purposes in order to summarize the effects of tactical choice.

15-7.2 Average Rates of Fighting and of Effectiveness
The rate at which fighting occurs may to a considerable extent be measured in terms of the trials with which the probabilities of hit and kill are associated. In primitive hand-to-hand combat, the unit of measure may be the individual blow. In modern combat it may be the round of ammunition. At time t in a given battle, f_B(t) will denote the average rate at which trials are generated by a single Blue force-unit; f_R(t) will denote the corresponding rate for a Red unit. When no ambiguity results, or when the rate is stationary, merely the symbols f_B and f_R may be used. Attention is mainly confined to trials which produce either a kill, or have no effect at all, nor any cumulative effect. For an individual trial, the kill probabilities will be denoted by k_B for Blue and by k_R for Red. The unit killing rates (in time) will be c_B = k_B f_B, the number of Reds that a Blue unit can kill per unit time, and c_R = k_R f_R for Red. As indicated, nonstationarities may be included in the rates. At times it is convenient to consider the exchange or attrition ratio j = c_R/c_B, the average number of Blue units lost per Red unit killed. Attention is confined mainly to homogeneous forces that on each side are composed of force-units that are identical in kill probability and in fighting rate.

15-7.3 Contact Relationships
The outcome of a battle between two given forces may, of course, depend critically upon how the two forces come into total fighting positions.
For example, the classic military maneuver of flanking can have the effect of decreasing the fraction of the flanking force which is exposed to all of the flanked force, or to any given fraction of it, while increasing the fraction of the flanked force that is exposed to (a given fraction of) the flanking force. If this happens when the two forces are equal in individual fighting capability of each force-unit, and initially equal in total force sizes, the flanking force will now almost surely win (cf. para 15-9). By total contact is meant not just infantry contact, but exposure to artillery, of rear positions to aircraft and missiles, and even civilian production of missiles and conversion to a saboteur force. Even for the local contact of two forces in the field, a complete description of position relationships and interrelationships (much less of information concerning them) can be very complicated. In modern war, the problem becomes more complex with increases in mobility and firepower that bring more points under threat.

No general military term appears to exist which describes or even refers to the totality of contact, or to the modes of such total contact, between composite forces. In terms of physics, something like a two-directional graph or incidence matrix appears needed to represent the possible potential existing between each type of force-unit in each force. However, the fighting advantages that can be gained by combining units of different force-types in attack upon other units suggest that such a matrix is apt not to be the most obvious index of the total relative positional advantage of either force. The design of form and time pattern of contact is, of course, part of the planning of the attack and the defense, in addition to designing the concentrations, maneuvers, timing, specificity of troop type, and other aspects of engagement.

Contact is at the very heart of military science. The history of this science, which is certainly as old as written history itself, is filled with records that reveal the consequences of the tactics and tactical effects of contact that were employed. Consequently, any quantitative theory of combat that is gross or crude in its representation of contact is at once to be recognized to be tentative, requiring the greatest care in employment.

15-8 Population Reaction Processes
A number of types of population processes have a significant formal similarity to combat processes. The basic classical nonmilitary topic is actually that of chemical reactions, reaction rates, and reaction processes. The subject of epidemics has more apparent similarity to combat. Consumer marketing by advertising is a less obviously related topic, formally quite similar (note that some "epidemics" of ideas are termed "fashions"). Propaganda is another form of spread of ideas.

(1) Chemical Reactions. In a chemical or physical reaction between substances, the molecules of one substance are brought into contact with the molecules of another substance, and a reaction (combat) takes place. In the reaction the various molecules are transformed (killed, captured, destroyed, etc.), the substances emerging after the reaction being termed reaction products. The reliability of the reaction will be lower, i.e., the expected reaction products may not emerge, if the reaction rate is slow (low hit or kill probabilities) or the force-sizes involved are quite small (contact fails).
The rate and reliability of the reaction, and sometimes even its most likely direction, can be considerably influenced by "stirring," by controlling the rate and identity of mixing, and the timing of contact. In the case of commercial chemical processes, the importance of managing the dynamics and timing of mixing of the substances, with even micro-precision, is well known. Note that there is no obvious counterpart in a military "combat reaction" of a chemical catalyst. The true nature of catalysis is in fact of some controversy among chemists.

(2) Epidemics. The importance of quarantining, as a strategy for controlling the disease spread in the case of an epidemic, stresses the importance of the precise nature of contact. The virulence of the disease and the susceptibility of potential hosts correspond to damage functions and to target vulnerable areas. The contact may be critical in determining not merely how fast the disease spreads, but whether or not the disease reaches the entire population. Diseases are thus rather unreliable reactions. Contagious disease differs from combat in the degree to which a friendly force-unit can infect another friendly force-unit. But when morale and loyalty are critical aspects of fighting, this effect is also known in combat.

(3) Marketing; Advertising. Marketers compete for customers whom they "capture." There are no killed or wounded in this battle. A battle consisting of individual duels takes place when customers of competing products compare their experiences with the products (live advertisements sometimes literally dramatize these engagements). Advertising can have the effect of increasing the rate of occurrence of individual duels by forcing widespread engagements to take place promptly, e.g., by television transmission of ideas. The spread of a new product through a population is like the spread of a disease in an epidemic. When only the idea of the product needs spreading, advertising is a weapon that can reach great numbers of the opposing force all at once.

(4) Propaganda. This important arena is formally identical with the previous example of advertising, except that the stakes are much higher. The products are ideas; the marketers are political entrepreneurs or nations in cold war. There is a "sowing" of a seed (a germ, a suspicion, a resentment), usually by the communications media that will reach the most people as quickly as possible. The sowing can be timed so that ideas that require germination periods will mature on a schedule that will lever political or para-military action, or revolt, the more powerfully.

(5) Ecology. Ecology refers to the competition of natural species for the materials and other species that are their resources. Ecology includes the "balance of nature," which is a quantitative balance of flow of the elements of natural life between species and regions and forms of life. Quantitatively, this flow is a kind of combined "materials-balance population-process" all at once. As one population becomes too large or too small, the balance is disturbed and reaction rates begin to reflect the disturbance.

Mathematically, population reactions could be dichotomized accordingly as the individual reaction has a random or a deterministic outcome. For reactions between small forces, a random model of the reaction may appear more realistic because of the unreliability of aim and of hits, and because of the occasional critical dependence of the outcome upon accuracy of fire, shooting first, etc.
Conceptually, deterministic reactions may be regarded as approximation models of random reactions, approximations which may be better the larger the sizes of the forces involved. But in large military reactions, the forces that are engaged with each other tend to consist of a composite of heterogeneous local engagements, with different reaction rates and different modes of contact. Hence the large deterministic model will also have to be heterogeneous in composition in order not to lose representation.

15-9 Modes of Engagement

15-9.1 One Against One Combat
In hand-to-hand combat, each participant has but one opponent at a time. This organization of the contact between the two forces may occur when greater accessibility is difficult or particularly inefficient. Each blow may constitute a trial. Any natural tendency for blows to alternate, first by one opponent and then the other, tends to reduce the likelihood of a double kill. If, as in ancient fighting, it is likely that there is a definite field or arena of battle in which a number M of duels or matches is simultaneously in progress, then M will tend to remain constant at the maximum number of duels that can be effectively fought within the capacity of the arena if replacements are promptly made for fallen units. Under these conditions, and where c_R and c_B are the products of the rates of trials and probabilities of kill for Red and Blue respectively, the approximate rates of change in the force sizes will be

dB(t)/dt = -c_R M and dR(t)/dt = -c_B M

until at some time t_M one force can no longer put M units into the battle arena, at which time the mode of contact henceforward changes. Hence for any time t after the start of the battle and prior to the time t_M,

B(t) = B(0) - c_R M t and R(t) = R(0) - c_B M t,

and since j = c_R/c_B,

B(0) - B(t) = j[R(0) - R(t)].

The linear relationship between losses on both sides up to time t has been referred to as Lanchester's linear law, but it does not hold merely for this type of contact. For both forces to be identical in size at time t_M, i.e., for B(t_M) = R(t_M) = M, the initial force sizes must be related as B(0)/R(0) = j.

If again the force-units are of a single type, with duels consisting only of pairs of duelists, consider the case in which the battle is such that all of the units of the smaller force are always fighting. Then the number M(t) of matches that are being simultaneously fought at time t will be M(t) = min[B(t), R(t)]. Consequently, the momentary attrition rates will be approximately

dB(t)/dt = -c_R M(t) and dR(t)/dt = -c_B M(t).

Suppose for example that B(0) is greater than R(0). If a time t_e occurs at which B(t_e) = R(t_e), then prior to the time t_e, M(t) = R(t), and

B(t) = B(0) - jR(0)[1 - e^{-c_B t}] and R(t) = R(0) e^{-c_B t}.

When Blue does not lose, these relations hold for all times t. Since the initial force ratio s_0 = B(0)/R(0) is greater than 1, Blue will lose only when j > 1 and s_0 < j. In that case

u = e^{-c_B t_e} = (j - s_0)/(j - 1).

For times t after t_e,

B(t) = R(0) u^{1-j} e^{-c_R t} and R(t) = [R(0)/j][(j - s_0) + u^{1-j} e^{-c_R t}].

15-9.2 Contact Other than One Against One
There are many modes of contact that are other than one against one and, consequently, involve concentration. The paragraphs below describe some of the simplest representations of the simplest kinds of concentrations. As before, homogeneity of force is assumed, and no cumulative effect of blows. Geometry and information as to position now become important factors.
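The matched-duel approximation dB/dt = -c_R M(t), dR/dt = -c_B M(t) with M(t) = min[B(t), R(t)] is easy to integrate numerically, and doing so verifies the linear loss relation B(0) - B(t) = j[R(0) - R(t)]. The killing rates and initial strengths below are hypothetical.

c_R, c_B = 0.010, 0.012           # per-unit killing rates (hypothetical)
B, R = 900.0, 700.0               # initial force sizes (hypothetical)
B0, R0 = B, R
j = c_R / c_B                     # exchange ratio: Blues lost per Red killed

dt = 0.01
t = 0.0
while B > 1.0 and R > 1.0 and t < 400.0:
    M = min(B, R)                 # matches simultaneously in progress
    B -= c_R * M * dt
    R -= c_B * M * dt
    t += dt

print(f"t = {t:.1f}: B = {B:.1f}, R = {R:.1f}")
print(f"Blue losses       = {B0 - B:.1f}")
print(f"j * (Red losses)  = {j * (R0 - R):.1f}")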
15-9.3 Ganging
In hand-to-hand fighting when the Blue force is superior in numbers, suppose that all Blue units B(t) are deployed into the fighting in such a way that each Red unit must fight an average of c(t) = B(t)/R(t) Blue adversaries; c(t) is thus the Blue concentration. Then R(t) matches will be going on, in each of which the frequency with which a Red unit receives blows will be f_B c(t), while the frequency with which a Blue unit receives blows will be f_R/c(t). Restrictions that will be imposed because of the range of fire or the geometry of units or of the battleground will typically put an upper limit c̄ on c(t); for example, in swordplay, not more than 2 Blues to a Red may be effective. Up to the time t_c̄ (if there is such a time) at which c(t) becomes equal to c̄, the following approximate attrition relationships will hold:

dB(t)/dt = -c_R R(t) and dR(t)/dt = -c_B c(t) R(t) = -c_B B(t),

assuming that the unit effectivenesses per blow, k_B and k_R, are not affected by the concentration (if they are, it would perhaps be mainly through effects that would ordinarily be cumulative). Subsequent to the time t_c̄, the equation for dR(t)/dt becomes

dR(t)/dt = -c_B c̄ R(t),

and the battle would proceed according to the conditions represented in paragraph 15-9.1 above.

It may be possible to utilize reconnaissance and remote communication to obtain complete information on targets, so that concentration of force can be obtained without overlapping of fires or overkill of a target. The resulting "contact" then becomes the equivalent of the type of ganging discussed above, with either no upper limit restricting the value of c(t), or normally a rather weak one. Then the approximate attrition rate for Red will always be of the form dR(t)/dt = -c_B B(t). The attrition rate for Blue may now not be that of remote ganging unless Red possesses the same reconnaissance and coordination capability. But as long as Red is merely saturated with local targets, the relation dB(t)/dt = -c_R R(t) may hold as in local ganging. Of course, an inferior counterinsurgent force may possess even better reconnaissance than a technically superior force.

15-9.4 Area Intelligence Only, Coordinated Fire, Ganged Probing
When at best Blue fire can only be distributed over Red's position, because Blue has only estimates of Red's position (as in area fire, in which precise target locations within the area are unknown), then as Red's force decreases, the undestroyed target area presented to Blue decreases and overkill occurs. If Blue's fire can be coordinated to prevent overlap, then the approximate momentary rate of change for Red is

dR(t)/dt = -k_B(0) f_B [R(t)/R(0)] B(t),

where k_B(0) was the Red-elimination rate per unit of Blue fire at time 0. The contact described here is approximately that when Red is a guerrilla force and the Blue force is of World War II type with area weapons. Whether fire overlaps or not must be carefully determined. For example, light fire that has high ballistic dispersion is not likely to constitute overlapping fire. Reference 15-12 provides tables of numerical values in area artillery duels with firing doctrines that affect overlap of fire.

15-10 Lanchester Square Law and Its Application to Historical Battles
When the contact is such that the relationship in paragraph 15-9.3 holds on both sides, i.e.,

dB(t)/dt = -c_R R(t) and dR(t)/dt = -c_B B(t),

the joint relationship is termed the Lanchester square law, owing to the fact that, since dB(t)/dR(t) = c_R R(t)/[c_B B(t)], then

c_B [B²(0) - B²(t)] = c_R [R²(0) - R²(t)].
In order for the battle to end in a draw, with R(t) = B(t) = 0, the initial force sizes should be related as

B²(0)/R²(0) = c_R/c_B = j, the "exchange rate."

As a numerical illustration, reference 15-1 supposes that B(0) = R(0) = 1,000, but that Red manages first to engage 500 Blues with the entire Red force of 1,000. Red will annihilate the 500 Blues at a loss of only 134 Reds, leaving 866 Reds to meet the remaining 500 Blues. These 500 Blues can then be defeated with a loss of 159 Reds, leaving 707 Red survivors, with Blue wiped out. The solution to the Lanchester square equations is

B(t) = B(0) cosh(√(c_R c_B) t) - √(c_R/c_B) R(0) sinh(√(c_R c_B) t),
R(t) = R(0) cosh(√(c_R c_B) t) - √(c_B/c_R) B(0) sinh(√(c_R c_B) t).

These hyperbolic solutions display the acceleration of the action toward the end, i.e., the last half of the weaker force is annihilated in a shorter time than the first half. This effect results from the greater concentration of fire which the remaining members of the stronger force are able to focus on the remnants of the weaker. Note that these propositions have to be treated as assumptions in supposing that the square law characterizes a combat.

For the purpose of showing how to verify the applicability of the Lanchester equations as a regression model for a combat history, where there is strong a priori reason for believing the equations to be valid, reference 15-13 analyzes the capture of Iwo Jima by United States forces in World War II. The following is taken from the report. During the engagement, enemy troops were neither withdrawn nor reinforced. At the termination of the engagement, all enemy troops had been destroyed. During the first few days of the engagement our forces landed varying numbers of troops. Losses caused by other than enemy activity were negligible. The available battle record made it possible to establish the number of our troops that were effectively engaged in fighting at the beginning of each day of the battle. To simplify computation, it was assumed that fresh troops were put ashore at a constant rate during each day, but at different rates per day according to the records. Prior to the end of the entire engagement, on the 28th day, the island was declared secure. In order to detect any differences in the fighting rates before and after this day, the historical data were divided into the periods before and after. By fitting the data to the square law at the beginning of the battle and on the 28th day, it was found that before the 28th day, 0.0106 enemy casualties occurred per day per effective U.S. unit. By use of this value, R(t) (enemy) was then computed from the equations, and c_R was then estimated from the fit to be 0.0544. Figure 15-1 shows the theoretical and actual numbers of U.S. troops, M(t), by days. Enemy troops were not considered casualties unless killed. Two cases were considered for U.S. troops: (a) excluding all casualties; (b) excluding killed only. In case (b), c_B = 0.0088, c_R = 0.0113, and j = 1.3. In case (a), j = 5.1. This gives about 4 U.S. casualties per U.S. death, compared to the actual record of 4.5. This example of Iwo Jima illustrates the use of the Lanchester equations for a short period of time in a battle. Reference 15-14 discusses the use of Lanchester theory for predicting the outcome of future battles, pointing out that the fighting rates and kill probabilities would have to be predicted.
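The square-law bookkeeping behind the concentration example above reduces to the conserved quantity c_B B² - c_R R²; with equal unit effectivenesses the survivor of an engagement fought to annihilation has strength equal to the square root of the difference of squares. The short sketch below reproduces the 866 and 707 figures quoted from reference 15-1; it assumes equal unit effectiveness, as in that example.

import math

def survivors(winner_strength, loser_strength):
    """Square-law survivors when equal-effectiveness forces fight to annihilation."""
    return math.sqrt(winner_strength**2 - loser_strength**2)

red = 1000.0
for blue_group in (500.0, 500.0):        # Blue engaged piecemeal, 500 at a time
    red = survivors(red, blue_group)
    print(f"Red survivors after engaging {blue_group:.0f} Blues: {red:.0f}")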
Reference 15-14 also reports a regression study of the Lanchester parameters using data on 92 historical land battles. For none of these battles were interim records of force sizes available, as is the case for the Iwo Jima engagement above. However, from various sources the value of j was estimated for each battle. A plot of ln j against ln s_0 indicated a linear relation, which led to a study of the regression

W = b + c ln s_0, where W = ln(s_0 j^{-1/2}).

The data yielded b = 0.115 ± 0.064 and c = -0.367 ± 0.122 as 95% confidence-limit estimates, the standard error of the estimate being reported as 0.297. The resulting positive correlation of j with s_0 is noted as against an expectation that the correlation should be negative. The study concludes that victory in battle is thus not simply a matter of numerical superiority. No studies are known which investigate vector models of the Lanchester type.

Figure 15-1. Theoretical and actual U.S. troop strength M(t), in thousands, by day after D-Day: (a) troops not killed, wounded, or missing; (b) troops not killed.

15-11 Guerrillas vs. Regulars
Recent military operations draw attention to a third major case in addition to one-to-one fighting and to the square law of total mutual contact. This is the situation in which a Red force is completely exposed to a Blue force, but the Blue force is, for example, a guerrilla force whose whereabouts are known to the Red force only in a general way, i.e., as in paragraph 15-9.4. The joint differential equations for this case are

dB(t)/dt = -[k_R(0) f_R/B(0)] B(t) R(t) and dR(t)/dt = -c_B B(t),

where k_R(0) is the rate per unit of fire at which Red can eliminate Blue at time 0. As the size of the Blue force dwindles, dB(t)/dt tends to decline also, because of the lower density of Blues in the same field. The effect is as if the unit Red effectiveness declines. If the potential Red unit effectiveness stays constant at c_R = k_R(0) f_R, then the equation can be written

dB(t)/dt = -c_R B(t) R(t)/B(0),

showing the decline in Red unit effectiveness as compared to the Lanchester square case. The usual integration yields

c_B B(0)[B(0) - B(t)] = (c_R/2)[R²(0) - R²(t)],

showing that the forces are equally matched if B²(0) = (j/2) R²(0). Thus Red's lack of knowledge of Blue's exact whereabouts gives Blue an advantage equivalent to a factor of √2 in initial force sizes.

Let a = R²(0) - (2/j) B²(0) and v(t) = e^{-√a c_R t/B(0)}. The solution divides into two cases:

Case (1), a ≥ 0. Red wins, the Red force size R(t) decreasing asymptotically to √a while the Blue force size B(t) goes to 0. The exact solutions are

R(t) = √a [1 + g v(t)]/[1 - g v(t)], where g = [R(0) - √a]/[R(0) + √a], and
B(t) = 2 a j g v(t)/{B(0)[1 - g v(t)]²}.

Case (2), a < 0. The battle ends at time

t* = 2wB(0)/[c_R √(-a)], where w = tan^{-1}[R(0)/√(-a)],

with Red destroyed and Blue's terminal force size equal to B(t*) = -aj/[2B(0)]. The exact solutions are

R(t) = √(-a) tan(γt + w) and B(t) = -[aj/(2B(0))] sec²(γt + w), where γ = -c_R √(-a)/[2B(0)].
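The guerrilla-versus-regulars equations integrate just as easily by direct time-stepping, which provides a check on the closed-form solutions and on the terminal Blue strength in Case (2). The parameter values below are hypothetical.

import math

c_B = 0.010                        # rate at which one Blue (guerrilla) unit kills Reds
c_R = 0.030                        # potential Red unit effectiveness k_R(0) * f_R
j = c_R / c_B

B0, R0 = 1300.0, 1000.0            # hypothetical initial strengths (B0^2 > (j/2) R0^2)
a = R0**2 - 2.0 * B0**2 / j        # here a < 0, so Red is eventually destroyed

B, R, t, dt = B0, R0, 0.0, 0.01
while R > 1.0:
    dB = -c_R * B * R / B0 * dt    # Red has only area knowledge of Blue's location
    dR = -c_B * B * dt             # Blue fires with full information on Red
    B, R, t = B + dB, R + dR, t + dt

print(f"numerical:  Red destroyed near t = {t:.0f}, Blue survivors = {B:.0f}")
print(f"closed-form terminal Blue strength = {-a * j / (2.0 * B0):.0f}")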
15-12 Stochastic Extensions of Lanchester Methods

When the effects of fire are unreliable, and are thus probabilistic, the above types of Lanchester differential equations for the average progress of combat may be misleading; they are also then usually numerically biased. As an exaggerated case, suppose that a single Blue force-unit (soldier, weapon system, man-weapon system) is pitted against a single Red force unit in a duel in which their fire at each other can be represented as a process of independent trials with no cumulative effect of hits, but only a kill or not as the outcome of each trial.

The assumption that the fire constitutes independent trials (for example, at rates f_B and f_R) is weaker than the assumption made in the Lanchester average analysis that the fire is continuous and continuously effective. The assumption that there is no accumulation of the effects of hits, but only kills or misses, may seem stronger than the Lanchester equations would be if they were applied to the case of two duelists so as to simulate partial effects; it is demonstrated later, however, that this is not actually so.

Let k_B and k_R denote the kill probabilities per trial. Although it is immaterial to the effect of the probabilistic assumptions, assume also that there are limited ammunition supplies, A_B for Blue and A_R for Red, which set run-out time limits T_B = A_B/f_B for Blue and T_R = A_R/f_R for Red if they survive so long. Attention can be limited to the case in which T_R < T_B.

Let s_B(t), s_R(t), and s_BR(t) denote the probabilities that Blue still survives at time t, that Red still survives at time t, and that both still survive at time t, respectively. Then s_B(t) is the average size of the Blue force at time t, and similarly for s_R(t). These probabilities are thus the probabilistic equivalents of the deterministic quantities B(t) and R(t).

The probabilities s_B(t) and s_R(t) are most easily related to the win probabilities. In the case of stationary trials starting at time 0, let w_B(t) and w_R(t) denote the probability densities that Blue defeats Red at time t and that Red defeats Blue at time t, respectively. Then, with c_B = k_B f_B and c_R = k_R f_R the single-unit kill rates and c = c_B + c_R, the following formulas are straightforward descriptions of what happens:

s_BR(t) = e^(−ct),                                                          t < T_R
        = e^(−c_R T_R) e^(−c_B t),                                           T_R ≤ t < T_B
        = e^(−c_R T_R) e^(−c_B T_B) = e^(−k_B A_B) e^(−k_R A_R) = s_BR(T_B),   t ≥ T_B

w_B(t) = c_B s_BR(t) for t < T_B, and 0 for t ≥ T_B
w_R(t) = c_R s_BR(t) for t < T_R, and 0 for t ≥ T_R

s_B(t) = ∫(0 to t) w_B(x) dx + s_BR(t) = (1/c)[c_B + c_R e^(−c min(t, T_R))]

s_R(t) = ∫(0 to t) w_R(x) dx + s_BR(t)
       = (1/c)[c_R + c_B e^(−ct)],                       t < T_R
       = (1/c)[c_R (1 − e^(−c T_R))] + s_BR(t),           T_R ≤ t < T_B
       = (1/c)[c_R (1 − e^(−c T_R))] + s_BR(T_B),          t ≥ T_B.

For all t, s_B(t) + s_R(t) − s_BR(t) = 1; i.e., at least one of the two combatants is, with probability 1, surviving at any time. By comparison with the Lanchester equations,

d s_B(t)/dt = −c_R { s_R(t) − [s_R(t) − s_BR(t)] },   t < T_R
            = 0,                                       t ≥ T_R.

Since s_R(t) is the average Red force size at time t, the term [s_R(t) − s_BR(t)] in the above equation represents a probabilistic correction to the simple Lanchester scheme; the general form of this correction is given below. The term corrects for the chance that Blue has already lost. Even when Blue ought to win because of average superiority, the unreliability of hits can reverse the probable outcome. The ultimate win probabilities are:

                                            limited ammunition                    unlimited ammunition
W_B  = probability that Blue wins           (c_B + c_R e^(−c T_R))/c − W_BR        c_B/c
W_R  = probability that Red wins            (c_R/c)[1 − e^(−c T_R)]                c_R/c
W_BR = probability that both survive        e^(−k_B A_B) e^(−k_R A_R)              0

By use of the methods of Chapter 6, the above type of formulation readily provides a 2-dimensional Markov process for two forces of sizes [B(t), R(t)] in Poisson fire. The transition rates will be the current values of the products c_B B(t) and c_R R(t) in the case of total contact. As a result, the average force-size decline rate for Blue will be

(d/dt) E[B(t)] = −c_R [ E[R(t)] − Σ(r = 0 to R(0)) r Pr{B(t) = 0, R(t) = r} ].

The summation term inside the brackets expresses the average value of r weighted by the probability that Blue has already been defeated. It gives a "Poisson correction term" to the deterministic Lanchester case.
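The closed-form win probabilities above are easy to evaluate and to check by simulation. The sketch below is a hypothetical illustration (Python; the firing rates, kill probabilities, and ammunition loads are invented for the example and do not come from the text). The Monte Carlo routine treats each duelist's kill process as an exponential, or Poisson-fire, waiting time censored at his ammunition run-out time, which is the continuous-fire idealization used in the formulas.

```python
import math
import random

def duel_win_probabilities(f_b, k_b, a_b, f_r, k_r, a_r):
    """Closed-form win probabilities for the continuous (Poisson-fire) duel.

    c_b = k_b*f_b and c_r = k_r*f_r are the kill rates; T_b = a_b/f_b and
    T_r = a_r/f_r are the ammunition run-out times.  Assumes T_r <= T_b."""
    c_b, c_r = k_b * f_b, k_r * f_r
    c = c_b + c_r
    t_b, t_r = a_b / f_b, a_r / f_r
    assert t_r <= t_b
    w_br = math.exp(-k_b * a_b) * math.exp(-k_r * a_r)   # both survive
    w_r = (c_r / c) * (1.0 - math.exp(-c * t_r))         # Red wins
    w_b = (c_b + c_r * math.exp(-c * t_r)) / c - w_br    # Blue wins
    return w_b, w_r, w_br

def duel_monte_carlo(f_b, k_b, a_b, f_r, k_r, a_r, trials=200_000, seed=1):
    """Simulate the duel as a race between two exponential kill times,
    each censored at the firer's ammunition run-out time."""
    rng = random.Random(seed)
    t_b, t_r = a_b / f_b, a_r / f_r
    n_b = n_r = n_both = 0
    for _ in range(trials):
        x = rng.expovariate(k_b * f_b)          # when Blue would kill Red
        y = rng.expovariate(k_r * f_r)          # when Red would kill Blue
        blue_kill = x if x <= t_b else math.inf
        red_kill = y if y <= t_r else math.inf
        if blue_kill < red_kill:
            n_b += 1
        elif red_kill < blue_kill:
            n_r += 1
        else:
            n_both += 1                         # both ran out of ammunition
    return n_b / trials, n_r / trials, n_both / trials

# Invented illustrative numbers: Blue fires 2 rounds per minute with kill
# probability 0.05 and 60 rounds; Red fires 1 round per minute with kill
# probability 0.08 and 25 rounds.
print(duel_win_probabilities(2.0, 0.05, 60, 1.0, 0.08, 25))
print(duel_monte_carlo(2.0, 0.05, 60, 1.0, 0.08, 25))
```

With 200,000 trials the simulated frequencies typically agree with the closed-form values to within a few parts per thousand, and the three probabilities sum to 1 in either case.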
Recent literature has reported a number of extensions of stochastic micro-models of this type, utilizing the Markov models afforded in the case of Poisson fire or renewal fire. In the case of tank duels, aiming doctrines and practices suggest that a Markov representation of the hit probability is appropriate. These kinds of representations evidently afford schemes for rapid computer Monte Carlo simulations.

References

15-1 Methods of Operations Research, by Philip M. Morse and George E. Kimball. MIT Press, 1951.
15-2 "The Distribution of an Urban Population and an Application to a Servicing Problem," by Herbert K. Weiss. Opns. Res. 9 (1961), pp. 860-874.
15-3 "Optimum Weapon Deployment for Nuclear Attack," by F. M. Perkins. Opns. Res. 9 (1961), p. 80.
15-4 AD 133 012. Techniques of Systems Analysis, by H. Kahn and I. Mann. Rand RM-1829-1, June 1957.
15-5 "A Method of Computing Survival Probabilities of Several Targets versus Several Weapons," by Jane Ingersoll Robertson. Opns. Res. 4 (1956), pp. 546-557.
15-6 "A Vulnerability Model for Weapon Sites with Interdependent Elements," by Sidney I. Firstman. Opns. Res. 7 (1959), pp. 217-225.
15-7 "A Missile Allocation Problem," by Harry J. Piccariello. Opns. Res. 10 (1962), pp. 795-798.
15-8 Statistical Measures of Accuracy for Riflemen and Missile Engineers, by Frank E. Grubbs, 1964.
15-9 "A Polar-Planimeter Method for Determining the Probability of Hitting a Target," by G. R. VanBrocklin, Jr., and R. G. Murray. Opns. Res. 4 (1956), pp. 87-91.
15-10 Aircraft in Warfare, by F. W. Lanchester. Constable, London, 1916.
15-11 "The Fiske Model of Warfare," by Herbert K. Weiss. Opns. Res. 10 (1962), pp. 569-571.
15-12 AD 145 142. "Tables for Engagement Probabilities, Part II." National Bureau of Standards, March 18, 1957.
15-13 "Lanchester's Generalized Equation and the Battle of Iwo Jima," by J. H. Engel. Operations Evaluation Group (LO) 624-53.
15-14 "Historical Data and Lanchester's Theory of Combat," by R. L. Helmbold. CORG-SP-128 (Combat Operations Research Group, Ft. Belvoir), July 1961; also in Opns. Res. 12 (1964), pp. 778-781.
15-15 "Lanchester-Type Models of Warfare," by Herbert K. Weiss. Proc. First Internatl. Conf. on Opns. Res. Stonebridge Press, Bristol, U.K., 1957.
15-16 Statistics of Deadly Quarrels, by Lewis F. Richardson. Boxwood Press, Pittsburgh, 1960.
15-17 "Stochastic Models of War Alliances," by William J. Horvath and Caxton C. Foster. J. Conflict Resolution 7 (1963), pp. 110-116.

Appendix A
Table of e^(-x)
The table gives e^(-x) for x from 0.00 to 1.99 in steps of 0.01 and for x from 2.0 to 9.9 in steps of 0.1. [Tabular values not reproduced here.]
Appendix B
Table of Poisson Probability Function
prob{N(T) = n} = (aT)^n e^(-aT) / n!, where a = occurrence rate and N(T) = number of occurrences in a period of length T. Entries are given in floating-decimal notation (e.g., 9.0484 -1 denotes 0.90484) for aT = 0.1 to 0.9, 1.0 to 1.9, 2.0 to 3.8, 4.0 to 10.0, and 11 to 20. [Tabular values not reproduced here.]

Appendix C
Tables of Cumulative Poisson Distribution Function
prob{N(T) ≤ n} = Σ(i = 0 to n) (aT)^i e^(-aT) / i!, where a = occurrence rate and N(T) = number of events in a time period of length T, tabulated for the same values of aT as in Appendix B. [Tabular values not reproduced here.]

Appendix D
Table of the Probability Density Function for the Standard Normal
g(x) = (1/√(2π)) e^(-x^2/2), for x from 0.00 to 4.29 in steps of 0.01. (From National Bureau of Standards Applied Mathematics Series 23, June 1953.) [Tabular values not reproduced here.]

Appendix E
Table of the Symmetric Integral of the Standard Normal Density
(1/√(2π)) ∫(-x to x) e^(-t^2/2) dt, for x from 0.00 to 4.09 in steps of 0.01. (From National Bureau of Standards Applied Mathematics Series 23, June 1953.) [Tabular values not reproduced here.]
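The functions tabulated in Appendixes A through E are all available in standard numerical libraries, so the printed tables can be spot-checked or regenerated directly. The following is a minimal sketch, assuming Python with SciPy installed (neither of which is, of course, part of the original pamphlet); each call evaluates one tabulated function at a sample argument.

```python
import math
from scipy import stats

# Appendix A: e**(-x); the table gives 0.9048 at x = 0.10.
print(math.exp(-0.10))                           # 0.904837...

# Appendix B: Poisson probability prob{N(T) = n} = (aT)**n * exp(-aT) / n!
print(stats.poisson.pmf(2, mu=1.0))              # 0.18394..., printed there as 1.8394 -1

# Appendix C: cumulative Poisson probability prob{N(T) <= n}.
print(stats.poisson.cdf(3, mu=2.0))              # 0.85712...

# Appendix D: standard normal density g(x) = exp(-x**2/2) / sqrt(2*pi).
print(stats.norm.pdf(1.0))                       # 0.2420...

# Appendix E: symmetric integral of the standard normal density from -x to x.
x = 1.0
print(stats.norm.cdf(x) - stats.norm.cdf(-x))    # 0.6827...
```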
INDEX

Absorbing state 6-13. Acceleration: vector representation 2-6; matrix representation 3-13. Activity: allocation of effort 2-14; concepts of 11-1; forms 7-2; differential analysis 7-7; input-output 2-13, 3-12, 3-13; linear combinations 9-2; linearity 9-1; rates 3-13; scheduling 2-14; vector representation 2-11; vs. action 7-1. Advertising 15-11. Allocations 2-14, 3-13. Assignment: definition 2-15; form of activity 3-13; weapon 15-4. Bayes' Theorem 4-5. Bernoulli: process (trials) 5-5; continuous 5-6; discrete 5-5; forms of variables 4-14; representing attack 15-8; representing obsolescence 14-14. Beta distribution 4-14. Binary distribution 4-13. Birth-death queue 12-13. Brownian motion 5-4. Chance 4-3. Chapman-Kolmogorov Equation 6-9, 6-10, 12-11. Chemical processes: as linear combinations 9-3; as reaction 15-10. Circular probable error 15-5. Combat 15-1; as an event-state process 7-7; examples of systems 11-2; one against one 15-12; other than one-to-one 15-12; guerrilla vs. regulars 15-15. Compound-Poisson 5-4, 11-5; demand 14-5, examples 14-6; forms of variables 4-14. Connectedness 3-7. Constrained optimization 8-1, types 8-9.
Constraints: categories 8-6. Contact: as combat 15-10; other cases 15-11. Contagious shift 5-10. Convex functions 8-16. Convex hull 9-6. Convex polyhedron 9-8. Convolution: forms 4-13. "Cookie-cutter" 15-6. Cost effectiveness 8-18. Critical path scheduling (CPS) 3-14. Cycles 11-8, 14-28. Damage: types 15-1, 15-2. Decision process 8-4; sequential 8-20. Defense problems 8-2, 8-16, 8-17, 15-3. Delivery errors 15-4. Demand: as dependent process 6-2; as an event-state process 7-6; compound 11-5, 14-5; decaying 14-27; for inventories 14-3, 14-4; methods of meeting 11-6; nonstationary service 12-10; randomness 4-2, 5-1; recurring 4-11. Design: characteristics 2-16; of attack 8-2; of defense 8-2, 8-16; of inventories 14-4; strategy 2-2. Deterministic behavior 4-1. Dice throwing 4-4. Diet problem 9-3. Dimensionality 2-2; of vectors 2-9, 2-19. Discounting: in ordering 14-27. Distribution: as a vector 2-16; function 4-12; of effects 2-17, 4-2; spatial 2-16; specific types: see Beta, Binary, Erlang, Exponential, Gamma, Gaussian (normal), Lognormal, Pareto, Poisson, Rectangular, Weibull. Distribution system 8-3. Duality: in linear programs 9-9. Duels: "noisy" 6-2, 15-2, 15-12, 15-17. Dynamic programming 8-4, 8-16, 8-19, 13-2. Ecology 15-11. Economic order quantity (EOQ) 14-24. Effectiveness: example of determination 8-17; of weapons 15-5; in combat 15-10; (see Measures of effectiveness). Eigenvector 2-3. Embedded event process 12-6. Epidemics 7-10, 7-11, 15-22. Ergodicity 5-3, 5-4, 6-10. Erlang: phases 12-10; unit of traffic intensity 12-3; loss formula 12-12. Erlang distribution 5-8, 13-8; in arrival 12-5; in service 12-5, 12-10, 12-13; in target destruction 15-8. Events 7-4; states 7-4; binary 7-5; compound 7-6. Exchange ratio 15-10. Exponential distribution 5-8, 13-2, 13-7. Exponential smoothing: of inventory sizes 14-8. Extrema: definition 8-11; notation and forms 8-12; properties 8-12. Feedback 7-9. Flyaway kit problem 8-4, 14-4, 14-30. Games: military games Ch. 10; payoff 3-13; strategy in 2-1, 2-18; 2x2 game solution 10-2; 2xm graphical solution 10-3; World War II examples 10-5. Gamma distribution: form 4-14; in reliability 13-8. Ganging: in combat 15-12. Gaussian (normal) distribution 2-17, 4-14. Gaussian process 5-4, 5-10. Gradient: of a function 8-12; in inequalities 9-8. Graphs 3-2, 3-14. Hazard rate 6-16, 13-6; monotonic 13-8; obsolescence 14-26. Holding effort: in inventories 14-1, 14-13, 14-21. Homogeneous process 5-3. Hypervectors 2-11, 3-1, 3-12. Impulses: in random variables 4-6. Independent processes: Chapter 5; types 5-4; in service 12-9. Inequalities: forms of 8-9; optimization with 8-14. Input: elements of 7-8. Input-output process 3-12, 5-1. Inventories: Ch. 14; as a process 6-1, 6-2, 14-15, 14-18; control 14-19, 14-22; demand for 11-5, 14-22; forecasting 14-8; peacetime vs. wartime 14-22; review methods 14-21; slack assets 14-20; stochastic representation 5-1; types 14-3. Iwo Jima: casualties 15-14. Lagrangian function 8-14. Lagrange multiplier 8-14, 8-18, 9-10. Lagrange simulation 7-5, 12-12. Lanchester theory 2-9, 2-12; examples in history 15-28; linear law 15-12; square law 15-13; stochastic extensions 15-16. Laplace transform 4-13. Law of large numbers 4-4. Leontief: input-output model 3-12. Linear algebra 2-4. Linear independencies 9-5. Linear programming: application 8-15; characteristics of solution 9-9; concept 9-2; example of Simplex 9-14; production planning 14-30; Simplex method 9-10; treatment of nonlinearity 9-4. Linearity: of activity 2-14; piecewise 8-16.
Lognormal distribution 4-14. Machine repair: representation 12-14. Maintenance: optimum time 13-11; policies 11-7, 13-10; preventive 13-9. Marketing 15-11. Markov chains 6-5, in combat model 15-4. Markov-Poisson process (trials) 6-6. Markov process: Chapter 6; compared to semi-Markov 6-7; in attack of targets 15-3, 15-8; in reliability 13-2, 13-12; in tank duels 15-18; simulated by Monte Carlo 6-15. Matrices 2-11, 3-1, 3-12; applications 3-1; input-output 3-12; in routing 3-5; multiplication 3-4, 3-6; notation 3-2; stochastic 3-5; transformation 3-6. Matrix algebra 3-15. Measures of effectiveness 7-11, 8-7, 8-8, 11-9; in combat 15-2; in reliability 13-2; in service system 12-3; in supply systems 14-2, 14-3, 14-16; normalized 7-12; selection of 8-7, 8-8. Measures of effort: and acceleration 14-13; in service system 12-4; in supply system 14-1. Modelling 8-8. Monte Carlo methods 4-7, 5-7; in service systems 12-9, 12-10, 12-11, 15-1. Multi-dimensionality: examples 2-1. Networks: as linear activity 9-3; branch flow 3-13; examples 3-8, 8-21; in service system 12-9; in supply system 14-10; routing in 3-3, 3-9. "Newsboy" problem 14-4, 14-29. Non-Markovian process 6-14. Nonstationary process 5-3. Objective function 8-1. Objectives: in optimization 8-5; of activity 7-8. Obsolescence: inventory 14-13; output 14-25. Operations research: beginning of 1-1; development of 1-1; objectives 1-2. Optimization: constraining action 8-9; with equalities 8-13; with inequalities 8-14; (see Constrained optimization, Extrema). Outcomes: combined 4-4; complement 4-4; conditional 4-4; independent 4-4; of activity 7-9. Output: fractionation in 3-11; (see Outcomes). Pareto distribution: form 4-14. Performance: randomness in 4-2. PERT 3-14, 9-16. Poisson distribution 5-2. Poisson duel 6-8, Chapter 15. Poisson process (trials) 5-6; examples 5-8, 5-9, 6-9; in arrivals 12-5, 12-8. Populations: in dependent process 6-1, 6-2; in independent process 5-1; vector representation 2-9. Probability: Chapter 4; uses of 4-10; density 4-6. Probable error (PE) 15-5. Processes: definition 7-2; event-state types 7-4; independent-dependent 5-2; (see Bernoulli, Poisson, Compound Poisson, Markov, Semi-Markov, Gaussian, Non-Markovian); network representation 3-10; production 2-2; vector representation 2-13. Production: as a network 3-9, 3-10; coordinated with distribution 8-3; for decaying demand 14-27; programming 9-3; randomness in 4-2; vector representations 2-2, 2-13. Programming: general 2-13; (see Linear programming, Dynamic programming). Propaganda 7-10, 7-11, 15-11. Provisioning: of expeditions 8-3; one-time types 14-29; (see also Supply). Pseudorandom numbers 4-9. Quantum mechanics 2-4. Queues 11-6, 13-10; of test facility 4-12; categories 12-5; formulas 12-12; Markov representation 6-8. Rabbit problem 6-8, 6-10, 6-11, 6-13. Random numbers: in Monte Carlo 4-7. Random-time mesh 7-3. Random processes: 4-1; Chapter 5; states of 5-3; (see Bernoulli, Poisson, Gaussian, Markov, Semi-Markov, Compound Poisson, Non-Markovian). Random variables: Bernoulli 4-6; continuous 4-6; defined 4-5; discrete 4-6; distribution 4-7; forms 4-12; notation 4-12; (see also specific types of distributions: Beta, Binary, Erlang, Exponential, Gamma, Gaussian, Lognormal, Pareto, Poisson, Rectangular, Weibull). Randomness: of demand 4-2, 14-41; of performance 4-2; of production 4-2; of traffic 4-3. Rectangular distribution 4-14. Recurrence: in demand 4-11; of activity 7-7. Redundancy 13-2; programming 13-13.
Reliability: Chapter 13; definition 13-1; analysis of 13-4. Renewal process 6-3; arrivals 12-5; average number 6-17; maintenance 13-2, 13-3; rate 6-17; stationary 6-15. Requirements: in optimization 8-5; of activity 7-8. Saddle point: in linear program 9-9; of a function 8-12. Safety stock 14-18. Sample points 4-3, 5-1, 5-2. Sample space 4-3. Scalars 2-5, 2-6. Search: Bayes' approach 4-5; of targets 15-6; optimizing 8-4; vector representation 2-14. Semi-Markov process 6-4; compared to Markov 6-7. Service systems: Chapter 12; as event-state 7-6; as renewal 6-3; costs 12-7; examples 11-3; measures of performance 12-3; scheduling 12-9. Shadow price 8-14, 8-15. Simulation: Monte Carlo 4-7. Slack variables 8-15. Spare parts: optimal amounts 13-13. State 7-4, 7-10, 13-2; in dependent processes 6-1; in independent processes 5-1, 5-3; state transitions 3-10, 11-7, 11-8. Steady-state process: defined 5-3; examples 5-4. Stieltjes integral 13-3. Stochastic: matrix 3-5; processes 4-1; Chapter 5. Strategy: in games 2-2; vector representation 2-15. Supply effort 14-1. Supply systems 8-3, 11-3, Chapter 14; as a network 14-10; effectiveness 14-15; effort 14-12. Support 13-10, 13-11, 13-12; (see also Maintenance). Survival: effect of structure 15-4; representation 15-3. System evaluation concepts 11-1, 11-2. Systems: Chapter 11; and activities 7-12; combat examples 11-2; lifetime 13-6, 13-8; maintenance of 11-7; readiness 13-3; replacement of 11-8; representation of 11-4; service examples 11-3; states of 3-10, 7-10, 11-7; supply examples 11-3. Target analysis 15-4, 15-8. Targets: area 15-8; complex 15-3; composite 15-2; destruction 9-2, 15-4; detection 15-6; vulnerability 15-2. Time measurement 7-3. Traffic: as activity 7-12; randomness 4-3; service aspect 12-3. Transportation: as activity 7-12; as network 3-8; example of optimization 8-3; problem 8-3, 9-16; result of linear combinations 9-3; vector representation 2-13. Trapping set 6-13. Trials: continuous 5-5; (see Markov, Poisson, Bernoulli processes); definition 4-3; discrete 5-5; rounds as 15-5, 15-7. Vectors: acceleration 2-6; activity 2-11; addition 2-8, 2-9; analysis of components 2-9; definition 2-2, 2-4; direction 2-6, 2-10; force 2-6; geometrical 2-3, 2-5; linear 2-3; magnitude 2-6, 2-9; multiplied by matrix 3-4; n-dimensional 2-4, 12-7; notation 2-4; of trials 4-6; population vectors 2-8; product 2-7, 3-4, 3-7; representation of system 6-3; scalar multiples 2-6, 2-8; state 2-3, 12-22; time integrals 2-8; unit 2-7; velocity 2-5, 2-10. Velocity: as a vector 2-5, 2-10; matrix representation 3-13. Vulnerable area 15-2. Waiting time: representation 12-6, 12-10. War gaming: targets 15-3; use of vectors 2-12. Weibull distribution 13-2, 13-7. Weapon systems effectiveness 8-17, 8-18. Wiener process 5-4, 5-10.

By Order of the Secretary of the Army:

HAROLD K. JOHNSON,
General, United States Army,
Chief of Staff.

Official:
KENNETH G. WICKHAM,
Major General, United States Army,
The Adjutant General.
Distribution: Active Army: OSD (5) USACDC Agcy (2) DASA (3) USAMC (10) SA (2) USAMICOM (5) USofA (10) USAECOM (5) ASA (FM) (2) USATECOM (5) ASA (I&L) (2) USAWECOM (5) ASA (R&D) (5) USAMUCOM (5) USASA (2) USAMECOM (5) Lab (5) OSofSA (4) DCSPER (2) Centers (2) ACSI (2) Log Comd (3) MDW (2) DCSOPS (3) Armies (4) DCSLOG (5) Corps (5) ACSRC (2) Div (4) ACSC-E (2) CAR (2) Instl (1) USMA (20) CA (2) USAWC (20) COA (3) USACGSC (20) CINFO (2) Br Svc Sch (5) CNGB (2) Specialist Sch (5) CLL (1) CRD (10) AFSC (10) ICAF (10) CMH (1) TIG (2) NWC (5) AFIP (2) TJAG (1) PMS Sr Div Units (1) TPMG (2) PMS Jr Div Units (1) TAG (2) PMS Mil Sch Div Units (1) CofCh (1) Gen Dep (OS) (4) CofEngr (3) Dir of Trans (2) Army Dep (3) Gen Hosp (1) CofSptS (3) USALMC (10) TSG (3) PG (3) USACDC (10) Arsenals (4) USA Maint Ed (5) DivEngr (1) USCONARC (10) Engr Dist (1) ARADCOM (10) MTMTS (1) ARADCOM Rgn (3) Proc Dist (2) OS Maj Comd (10) USARV (10) NG and USAR: None. For explanation of abbreviations used, see AR 320-50.

U.S. GOVERNMENT PRINTING OFFICE: 1969 0-282-789