UIUCDCS-R-79-966                                        UILU-ENG 79 1712

THE MINIMAL COVERING PROBLEM AND AUTOMATED DESIGN OF TWO-LEVEL AND/OR OPTIMAL NETWORKS

by

MING HUEI YOUNG

March 1979

Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

This work was supported in part by the National Science Foundation under Grant No. MCS77-09744 and was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, March 1979.

ACKNOWLEDGMENT

The author wishes to thank his advisor, Professor S. Muroga, for his invaluable guidance during the preparation of this thesis and during the preceding years of research, and also for his careful reading and constructive criticism of the original manuscript. The author also wishes to thank Professor J. Liebman, who gave careful proofreading and helpful criticism of this thesis, and Mr. R. B.
Cutler, who gave helpful comments and provided friendship during the author's study here. The excellent typing job done by Mrs. Ruby Taylor, Mrs. Zigrida Arbatsky, and Miss Kim Howard is appreciated and acknowledged. The financial support of the Department of Computer Science and the National Science Foundation under Grant No. MCS77-09744 is acknowledged. Special thanks goes to Mrs. Ming-Chiao Lien Young for her patience, understanding, and encouragement during the author's five years here.

TABLE OF CONTENTS

1. INTRODUCTION
2. FORMULATION OF THE LOGIC MINIMIZATION PROBLEM INTO THE MINIMAL COVERING PROBLEM
   2.1 Single-output And Multiple-output Switching Functions
   2.2 Logic Minimization For A Single-output Switching Function
   2.3 Logic Minimization For A Multiple-output Switching Function
3. ZERO-ONE IMPLICIT ENUMERATION ALGORITHM FOR THE MINIMAL COVERING PROBLEM
   3.1 Some Basic Definitions
   3.2 Reduction Operations
   3.3 Basic Implicit Enumeration Algorithm For The Minimal Covering Problem
4. SCHEME FOR THE PROBLEM REDUCTION
   4.1 A Scheme For Detection Of Domination Relations
   4.2 Comparison Of Some Computational Results
5. NEW PROPERTIES OF THE MINIMAL COVERING PROBLEM
   5.1 Reducibility Of A Partial Solution
   5.2 Excluding Relation Between Two Columns
   5.3 Implementation
   5.4 Some Computational Results
6. AN HEURISTIC ALGORITHM FOR THE LARGE SCALE MINIMAL COVERING PROBLEM
   6.1 The Heuristic Algorithm
   6.2 Some Computational Results
7. SYMMETRIC MINIMAL COVERING PROBLEMS
   7.1 Symmetric Permutations
   7.2 Symmetric Permutations Of The Problem Formulated From The Logic Minimization Problem
   7.3 Complete Characterization Of Symmetric Permutations
   7.4 A Necessary And Sufficient Condition For A Permutation To Be Symmetric
   7.5 Preservation Of A Symmetric Permutation During Program Backtracking
   7.6 Preservation Of A Symmetric Permutation During The Three Reduction Operations
   7.7 Preservation Of Symmetric Permutations With Different Generators
   7.8 Some Computational Results
8. PERMUTATIONAL PRECLUDING PROCEDURE
   8.1 Generalized E-sets
   8.2 Precluding Of Subproblems
9. THE MINIMAL COVERING PROBLEM WITH PARTITIONED CONSTRAINT MATRIX
   9.1 Upper Bounds On The Values Of Groups Of Variables
   9.2 Some Computational Results
10. THE GENERAL COST MINIMAL COVERING PROBLEM
   10.1 Generalization Of The Basic Algorithm
   10.2 Precluding Of Subproblems
   10.3 The Symmetric Property Of The General Cost Minimal Covering Problem
   10.4 Heuristic Approach For The Large-scale General Cost Minimal Covering Problem
11. CONCLUSION
REFERENCES
VITA

THE MINIMAL COVERING PROBLEM AND AUTOMATED DESIGN OF TWO-LEVEL AND/OR OPTIMAL NETWORKS

Ming Huei Young
Department of Computer Science
University of Illinois at Urbana-Champaign, 1978

Efficient implicit enumeration algorithms for the minimal covering problem are presented in this thesis. These algorithms are developed mainly for minimizing the logic expressions of switching functions, and they are extensions of the Quine-McCluskey method. "The reducing property" and "the excluding property" of the minimal covering problem are introduced to speed up the enumeration in solving problems. The symmetric property of the minimal covering problem is extensively explored, and procedures for utilizing this property in the implicit enumeration algorithm are developed based on the theory of finite permutation groups.
The concept of an upper bound on the value of a group of variables is also introduced in this thesis. Programs developed based on these algorithms are incorporated into a system for the automated design of two-level AND/OR optimal networks.

1. INTRODUCTION

The logic minimization problem is an important logic design problem. It is the problem of finding a minimal set of terms for a switching function* such that this function can be expressed as a disjunction of these terms. A switching function expressed as a disjunction or disjunctions of terms can be easily implemented with PLAs (Programmable Logic Arrays). Since the size of a PLA for implementing a switching function in a disjunction of terms or disjunctions of terms is proportional to the number of different terms in this disjunction or these disjunctions, minimization of the logic expression for a switching function means minimization of the size of a PLA. When a PLA is implemented as part of an LSI chip, the chip area covered by the PLA and the electric power consumed by the PLA are minimized if the logic expression of the function to be implemented is minimized. When each PLA is used as a separate package, minimization of the logic expression of each switching function usually reduces the number of packages needed to implement these functions, if a large network is to be realized by many packages.

* A switching function may be a single-output or a multiple-output switching function unless it is explicitly specified.

The most well-known method for the logic minimization problem is the Quine-McCluskey method [6]. This method consists of two stages. The first stage is to derive the set of all potential terms for the switching function. The second stage is to find a minimal set of terms from the set derived in the first stage. In this method, the problem of the second stage is formulated as a minimal covering problem and is solved by a "reduction and branching" method.
Three reduction operations are used in this method to reduce a problem into a smaller equivalent problem. When a problem cannot be reduced further, it is decomposed into subproblems by fixing some variable to 0 and to 1, and each subproblem is solved individually by repeating the reduction and branching.

In this thesis, a zero-one implicit enumeration algorithm for the minimal covering problem is introduced. This algorithm is an extension of the Quine-McCluskey method. Some new properties of the minimal covering problem, which can be used to speed up the Quine-McCluskey method, are incorporated in this algorithm. These properties are presented in Chapters 5, 7, 8 and 9. An heuristic algorithm for large-scale minimal covering problems is proposed in Chapter 6.

If the given switching function has some symmetric properties, these properties are reflected in the minimal covering problem formulated for the minimization of the logic expression of this function. These are also discussed in Chapter 7.

Some new properties presented in Chapters 5 and 7 are generalized for the general cost minimal covering problems in Chapter 10.

Although this algorithm is developed mainly for the minimization problem of logic expressions, it can also be applied to minimal covering problems or general cost minimal covering problems formulated for other problems [1, 2, 3, 4, 5]. For example, problems [24], which the late Professor Fulkerson of Cornell concluded were difficult to solve, were solved by the program based on this algorithm. Comparison of computational results shows that this algorithm is one of the best algorithms for the minimal covering problem.

2. FORMULATION OF THE LOGIC MINIMIZATION PROBLEM INTO THE MINIMAL COVERING PROBLEM

A method for formulating the logic minimization problem into the minimal covering problem is described in this chapter.
This is the method described in [6].

2.1 Single-output And Multiple-output Switching Functions

Let B be a set with only two elements, 0 and 1. The three logic operations AND, OR, and NOT are denoted by "·", "∨", and an overbar, respectively. Let B^t be the set of all t-vectors (y1, y2, ..., yt) such that yi = 0 or 1 for i = 1, 2, ..., t. A single-output switching function f(y1, y2, ..., yt) on B^t is a mapping from B^t to B. Each single-output switching function f on B^t can be expressed [6, 30] as a disjunction of terms:

    f(y1, y2, ..., yt) = Z_{i1}·Z_{i2}·...·Z_{iα} ∨ Z_{k1}·Z_{k2}·...·Z_{kβ} ∨ ... ∨ Z_{l1}·Z_{l2}·...·Z_{lγ},   (2.1.1)

where Z_r = y_{s(r)} or ȳ_{s(r)} for some s(r) in {1, 2, ..., t} for each r = i1, i2, ..., iα, k1, k2, ..., kβ, ..., l1, l2, ..., lγ, and each Z_r is called a literal.

A multiple-output switching function is a set of single-output switching functions defined on B^t.

2.2 Logic Minimization For A Single-output Switching Function

Let f(y1, y2, ..., yt) and g(y1, y2, ..., yt) be two single-output switching functions. If every (y1, y2, ..., yt) satisfying f(y1, y2, ..., yt) = 1 also satisfies g(y1, y2, ..., yt) = 1, then f is said to imply g. For example, f(y1, y2, y3) = y1·y2 ∨ ȳ2·y3 implies g(y1, y2, y3) = y1·y2 ∨ ȳ2·y3 ∨ y1·y3. An implicant of a single-output switching function f is a product which implies f. For example, y1·y2 is an implicant of f(y1, y2, y3) = y1·y2 ∨ ȳ2·y3. A product p = Z_{i1}·Z_{i2}·...·Z_{ij(i)} is said to subsume another product q = Z_{k1}·Z_{k2}·...·Z_{kj(k)} if each literal Z_{kr} of q is a literal of p. For example, the product y1·y2·y3 subsumes the product y2·y3. A prime implicant of a single-output switching function f is defined as an implicant of f such that no other product subsumed by it can be an implicant of f.
For example, y1·y2, ȳ2·y3, and y1·y3 are prime implicants of f(y1, y2, y3) = y1·y2 ∨ ȳ2·y3. A vector (y1, y2, ..., yt) is said to be a true vector of a single-output switching function f if f(y1, y2, ..., yt) = 1. For example, (1, 1, 0) is a true vector of the function f(y1, y2, y3) = y1·y2 ∨ ȳ2·y3. A true vector of a single-output switching function f is said to be covered by the prime implicant q_i of f if this true vector is also a true vector of q_i. Each prime implicant of the single-output switching function f covers some true vectors of f. If all the true vectors of the single-output switching function f are covered by prime implicants q_{k1}, ..., q_{kr}, then f = q_{k1} ∨ q_{k2} ∨ ... ∨ q_{kr} holds, and {q_{k1}, q_{k2}, ..., q_{kr}} is called a realization set of f.

To find a minimal set of terms to express a single-output switching function f as a disjunction of these terms, all the prime implicants of f are first found, and then a minimal set of prime implicants is chosen from these prime implicants such that all the true vectors of f are covered by the prime implicants in it. All the prime implicants of a given function f can be found by a method called iterated consensus, or by some other methods [30, 31]. Let q1, q2, ..., qn be all the prime implicants of a single-output switching function f, and y1, y2, ..., ym be all the true vectors of f. Let A = [a_ij] be an m × n matrix, where a_ij is defined as

    a_ij = 1 if y_i is covered by q_j,
    a_ij = 0 if y_i is not covered by q_j,

for all i, j. The matrix A is called the prime implicant table of the single-output switching function f. For each i, let x_i be a zero-one variable such that if x_i = 1, prime implicant q_i is to be chosen in a realization set for f, and if x_i = 0, it is not. Then the logic minimization problem of a single-output switching function can be formulated as the following minimal covering problem:

    minimize: x1 + x2 + ...
+ xn,

    subject to: A · (x1, x2, ..., xn)ᵀ ≥ (1, 1, ..., 1)ᵀ,
                xi = 0 or 1 for i = 1, 2, ..., n,

where A is the prime implicant table of f.*

* The prime implicant table defined here is different from that defined in textbooks of switching theory in that rows and columns are interchanged.

Example 2.2.1 Let us consider the minimization problem of the switching function

    f(y1, y2, y3, y4) = y1·y2·ȳ3 ∨ y2·y3·ȳ4 ∨ ȳ1·y3·y4 ∨ y1·ȳ2·y4.

All the prime implicants found by the iterated consensus method are:

    q1 = y1·y2·ȳ3,  q2 = y2·y3·ȳ4,  q3 = ȳ1·y3·y4,  q4 = y1·y2·ȳ4,
    q5 = y1·ȳ2·y4,  q6 = ȳ1·y2·y3,  q7 = y1·ȳ3·y4,  q8 = ȳ2·y3·y4.

All the true vectors of this function are:

    y1 = (1, 1, 0, 0),  y2 = (0, 1, 1, 0),  y3 = (1, 1, 1, 0),  y4 = (1, 0, 0, 1),
    y5 = (1, 1, 0, 1),  y6 = (0, 0, 1, 1),  y7 = (1, 0, 1, 1),  y8 = (0, 1, 1, 1).

The prime implicant table of this function is as follows:

          q1  q2  q3  q4  q5  q6  q7  q8
    y1     1   0   0   1   0   0   0   0
    y2     0   1   0   0   0   1   0   0
    y3     0   1   0   1   0   0   0   0
    y4     0   0   0   0   1   0   1   0      (2.2.1)
    y5     1   0   0   0   0   0   1   0
    y6     0   0   1   0   0   0   0   1
    y7     0   0   0   0   1   0   0   1
    y8     0   0   1   0   0   1   0   0

So the minimal covering problem formulated for this problem is to minimize x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8, subject to A · (x1, x2, ..., x8)ᵀ ≥ (1, 1, ..., 1)ᵀ and xi = 0 or 1 for i = 1, 2, ..., 8, where A is the matrix of (2.2.1).

2.3 Logic Minimization For A Multiple-output Switching Function

The problem of finding a minimal set of terms for a multiple-output switching function {f1, f2, ..., fμ} such that each single-output switching function fi can be expressed as a disjunction of some terms in this set is more complicated than the problem for the single-output switching function. All possible products of f1, f2, ..., fμ are first formed, such as f1·f2, ..., and f1·f2·...·fμ. Let Ψ1, Ψ2, ..., Ψℓ denote f1, f2, ..., fμ and all their products. Then all prime implicants for each Ψi and all true vectors for each fj are derived. Let q_{i1}, q_{i2}, ..., q_{in(i)}
be all the prime implicants of Ψi for each i = 1, 2, ..., ℓ, and let y_{j1}, y_{j2}, ..., y_{jm(j)} be all the true vectors of fj for each j = 1, 2, ..., μ. Then an m × n matrix which indicates which true vectors are covered by each q_{ik} is constructed, where m = m(1) + ... + m(j) + ... + m(μ) and n = n(1) + ... + n(i) + ... + n(ℓ). In constructing this matrix, each prime implicant q_{ik} of Ψi covers only the true vectors of the single-output functions contained in the product Ψi. Then a minimal covering problem is formulated from this m × n matrix in the same manner as in the single-output switching function case.

Example 2.2.2 Let us consider the logic minimization problem of the multiple-output function

    {f1 = y1·y4 ∨ y2·y4 ∨ ȳ1·ȳ3·ȳ4,  f2 = y1·ȳ3 ∨ ȳ1·y2·y4 ∨ ȳ1·ȳ2·ȳ4}.

The product of f1 and f2 is f1·f2 = ȳ1·y2·y4 ∨ y1·ȳ3·y4 ∨ ȳ1·ȳ2·ȳ3·ȳ4. All prime implicants of f1, f2, and f1·f2 are:

    for f1:     q1 = y2·y4,     q2 = y1·y4,      q3 = ȳ1·ȳ3·ȳ4,     q4 = ȳ1·y2·ȳ3,
    for f2:     q5 = y1·ȳ3,    q6 = ȳ1·y2·y4,   q7 = ȳ1·ȳ2·ȳ4,     q8 = ȳ2·ȳ3·ȳ4,
    for f1·f2:  q9 = ȳ1·y2·y4,  q10 = y1·ȳ3·y4,  q11 = ȳ1·ȳ2·ȳ3·ȳ4,  q12 = y2·ȳ3·y4.

The true vectors of f1 are:

    y1 = (0, 0, 0, 0),  y2 = (0, 1, 0, 0),  y3 = (0, 1, 0, 1),  y4 = (0, 1, 1, 1),
    y5 = (1, 0, 0, 1),  y6 = (1, 0, 1, 1),  y7 = (1, 1, 0, 1),  y8 = (1, 1, 1, 1).

The true vectors of f2 are:

    y9 = (0, 0, 0, 0),   y10 = (0, 0, 1, 0),  y11 = (0, 1, 0, 1),  y12 = (0, 1, 1, 1),
    y13 = (1, 0, 0, 0),  y14 = (1, 0, 0, 1),  y15 = (1, 1, 0, 0),  y16 = (1, 1, 0, 1).

The prime implicant table for this multiple-output function is

                f1              f2             f1·f2
          q1  q2  q3  q4  q5  q6  q7  q8  q9 q10 q11 q12
    y1     0   0   1   0   0   0   0   0   0   0   1   0
    y2     0   0   1   1   0   0   0   0   0   0   0   0
    y3     1   0   0   1   0   0   0   0   1   0   0   1
    y4     1   0   0   0   0   0   0   0   1   0   0   0
    y5     0   1   0   0   0   0   0   0   0   1   0   0
    y6     0   1   0   0   0   0   0   0   0   0   0   0
    y7     1   1   0   0   0   0   0   0   0   1   0   1
    y8     1   1   0   0   0   0   0   0   0   0   0   0
    y9     0   0   0   0   0   0   1   1   0   0   1   0
    y10    0   0   0   0   0   0   1   0   0   0   0   0
    y11    0   0   0   0   0   1   0   0   1   0   0   1
    y12    0   0   0   0   0   1   0   0   1   0   0   0
    y13    0   0   0   0   1   0   0   1   0   0   0   0
    y14    0   0   0   0   1   0   0   0   0   1   0   0
    y15    0   0   0   0   1   0   0   0   0   0   0   0
    y16    0   0   0   0   1   0   0   0   0   1   0   1

Here the two all-zero blocks reflect the rule stated above: a prime implicant of f1 alone covers no true vectors of f2, a prime implicant of f2 alone covers no true vectors of f1, and only the prime implicants of f1·f2 may cover true vectors of both.
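The formulation of this chapter can be exercised with a few lines of code. The sketch below (Python; the data and all names are illustrative, not from the thesis programs) builds a small prime implicant table of the shape appearing in Example 2.2.1, where each true vector is covered by exactly two prime implicants, and finds a minimum cover by exhaustive search. The implicit enumeration algorithm developed in Chapter 3 replaces this brute force in practice.

```python
from itertools import combinations

# A prime implicant table of the kind constructed in Example 2.2.1:
# A[i][j] = 1 iff true vector y_{i+1} is covered by prime implicant q_{j+1}.
# (Illustrative data: 8 true vectors, 8 prime implicants, two 1s per row.)
A = [
    [1, 0, 0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
]

def minimal_cover(table):
    """Return a smallest set of columns covering every row (exhaustive search)."""
    m, n = len(table), len(table[0])
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            if all(any(table[i][j] for j in cols) for i in range(m)):
                return cols
    return None

print([f"q{j + 1}" for j in minimal_cover(A)])  # a minimum realization set
```

Since every column of this table covers exactly two rows, at least four prime implicants are needed, and the search confirms that four suffice.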
The following terminology will be used later. The set of all multiple-output prime implicants (abbreviated as MOPI) of a multiple-output switching function f is defined as the set of all prime implicants of all possible products Ψ1, Ψ2, ..., Ψℓ of the output functions f1, f2, ..., fμ of f. A multiple-output implicant (abbreviated as MOI) of a multiple-output switching function f is defined as an implicant of some possible product Ψ of the output functions f1, f2, ..., fμ of f. A realization set of a multiple-output switching function f is defined as a set of terms such that each output function fi of f can be expressed as a disjunction of some terms in this set.

3. ZERO-ONE IMPLICIT ENUMERATION ALGORITHM FOR THE MINIMAL COVERING PROBLEM

The zero-one implicit enumeration algorithm for a zero-one integer linear programming problem was first introduced by E. Balas [9]. The basic idea of this algorithm consists of the following steps:

B1. Examine whether a subproblem (initially the given problem) can be easily solved or not. If it can be concluded by some means that no solution better than the best solution found so far can be obtained for the current subproblem, go to step B3. If a best solution of the current subproblem is found and this solution is better than the best solution obtained so far, store this new solution as the best solution obtained so far and go to step B3.

B2. Choose an unfixed variable, which is called a branching variable, and generate two subproblems by fixing the chosen variable to 0 and to 1. Store these two subproblems.

B3. Pick one subproblem from the storage where the subproblems are stored and go to B1. If there is no subproblem left in the storage, then the given problem has been implicitly enumerated and the best solution obtained so far is an optimal solution for the given problem.
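As a point of reference, steps B1–B3 can be sketched as follows for the minimal covering problem (Python; a deliberately simple bound and branching rule with illustrative names — this is not the algorithm of Section 3.3, which adds the reduction operations, a sharper bound, and the stacks XSL and XX):

```python
def implicit_enumeration(A):
    """Minimal sketch of steps B1-B3 for:  minimize x1 + ... + xn
    subject to A x >= 1, x in {0,1}^n.  The bound used at B1 is crude."""
    m, n = len(A), len(A[0])
    best, best_cost = None, n + 1
    stack = [{}]                 # stored subproblems, each a partial assignment
    while stack:                 # B3: pick one subproblem from the storage
        fixed = stack.pop()
        ones = [j for j, v in fixed.items() if v == 1]
        uncovered = [i for i in range(m) if not any(A[i][j] for j in ones)]
        # B1: no completion of this subproblem can beat the best found so far
        if len(ones) + (1 if uncovered else 0) >= best_cost:
            continue
        if not uncovered:        # B1: a better feasible solution is found
            best, best_cost = dict(fixed), len(ones)
            continue
        # B2: branch on a free variable that can cover the first uncovered row
        cands = [j for j in range(n) if j not in fixed and A[uncovered[0]][j]]
        if not cands:            # this row can no longer be covered
            continue
        for v in (0, 1):         # generate and store the two subproblems
            stack.append({**fixed, cands[0]: v})
    return best, best_cost
```

For instance, on the 2 × 3 matrix [[1, 0, 1], [0, 1, 1]] this returns a cover of cost 1 (the third column alone covers both rows).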
In using this implicit enumeration procedure, one must have some easy means to detect that no solution better than the best solution obtained so far can be found for each subproblem. Also, one must have a good way to store all subproblems generated at step B2. The criterion used in B2 for choosing a branching variable has a strong influence on the execution time of the above procedure.

To show how this zero-one implicit enumeration algorithm is applied to the minimal covering problem, some basic concepts are introduced in Section 3.1, and the three reduction operations of the minimal covering problem are restated in Section 3.2. Then the basic implicit enumeration algorithm for the minimal covering problem is outlined in Section 3.3.

3.1 Some Basic Definitions

A minimal covering problem is a problem to minimize x1 + x2 + ... + xn, subject to:

    (P)    A · (x1, x2, ..., xn)ᵀ ≥ (1, 1, ..., 1)ᵀ,
           xi = 0 or 1 for i = 1, 2, ..., n,

where A = (a_ij) is an m by n matrix with a_ij = 0 or 1. The i-th row of A is said to be covered by the j-th column of A, or the j-th column of A is said to be covered by the i-th row of A, if a_ij = 1.

A solution of the minimal covering problem is defined as an n-vector (x1, x2, ..., xn) with xi = 0 or 1 for i = 1, 2, ..., n. A solution (x1, x2, ..., xn) is said to be a feasible solution of (P) if it satisfies A · (x1, x2, ..., xn)ᵀ ≥ (1, 1, ..., 1)ᵀ. An optimal solution of the minimal covering problem (P) is a feasible solution which minimizes x1 + x2 + ... + xn.

A variable of the problem (P) is said to be fixed if it has been assigned a fixed value 0 or 1. A variable is said to be free if it is not fixed yet. A subproblem is a problem obtained from another problem by fixing some free variables of that problem to 0 or 1. A partial solution of a subproblem is the set of fixed variables of that subproblem. The given problem is considered as a subproblem with an empty partial solution.
A completion of a partial solution S is defined as a solution that is derived from S by specifying all free variables to 0 or 1. A constraint is said to be satisfied by a partial solution S if it is satisfied by the completion derived from S by specifying all free variables to 0.

Henceforth, a_i and r_j denote the i-th column and the j-th row of the matrix A, respectively. It is assumed that each column a_i of the matrix A contains at least one non-zero element.

3.2 Reduction Operations

The following three reduction operations are discussed in [6] for reducing a prime implicant table or, equivalently, for reducing the constraint matrix of a minimal covering problem.

A column a_j = (a_1j, a_2j, ..., a_mj) (or a row r_j) is said to be dominated by another column a_k = (a_1k, a_2k, ..., a_mk) (or another row r_k) if a_ij ≤ a_ik for i = 1, 2, ..., m (or a_ji ≤ a_ki for i = 1, 2, ..., n).

Operation 1. If row r_i is dominated by row r_j, then r_j is deleted from the constraint matrix.

Operation 2. If column a_j is dominated by column a_i, then column a_j is deleted from the matrix and the variable x_j is fixed to 0.

Operation 3. If row r_i consists of components a_ik = 1 for only one k and a_ij = 0 for all j ≠ k, then the variable x_k is fixed to 1 and column a_k is deleted from the matrix. Also, all rows with the k-th component equal to 1 are deleted from the matrix. Column a_k is said to be essential.

It is shown in [14] that these three operations can be applied in any order to obtain a unique matrix to which none of these operations can be further applied.*

* Rows and columns in this thesis are interchanged unlike those in [6].

3.3 Basic Implicit Enumeration Algorithm For The Minimal Covering Problem

In the following outline of the algorithm, each subproblem is stored with a level number, denoted by LEVEL, indicating from which subproblem this subproblem is obtained. At the beginning, the given problem is assigned level number 1.
The level number of a subproblem obtained from a subproblem of level k is k + 1. A level number k1 is said to be higher than another level number k2 if k1 < k2. The partial solution of each subproblem is stored in a stack XSL with its level number. The algorithm is outlined as follows:

M1 Reduction. Using the operations introduced in 3.2, reduce the constraint matrix as much as possible. Update the current partial solution. If the matrix is reduced to a null matrix, go to M5.

M2 Bounding.
  M2.1 Find a lower bound ZMIN of the problem under the current partial solution.
  M2.2 Test if "ZBAR - ZMIN ≤ 0" is satisfied, where ZBAR is the best value obtained so far. If it is satisfied, go to M6.

M3 Branching.
  M3.1 Choose a row r_i by some criterion.
  M3.2 Based on the row chosen at M3.1, for each non-zero element a_ij in this row, generate a subproblem by fixing the variable x_j to 1.
  M3.3 Store indices (j, -k) for all subproblems just generated in a stack XX, where j is the index of the branching variable and k is the level number of the subproblem. The subproblem corresponding to the column j with the fewest non-zero elements is stored first.

M4 Next Subproblem. Get an index (j, -k) from the top of stack XX and set variable x_j to 1. Then delete all rows with the j-th element equal to 1. Update the current partial solution and go to M1.

M5 Getting A Feasible Solution. The current partial solution is a feasible solution. If this feasible solution is better than the best feasible solution obtained so far, keep this solution as the best feasible solution found so far.

M6 Backtracking.
  M6.1 Find the partial solution, in XSL, one level higher than the current subproblem and consider it as the current partial solution. In XSL, erase all partial solutions with level number greater than the level number of the current partial solution. If no partial solution is left in XSL, the given problem has been implicitly enumerated and the best solution obtained so far is an optimal solution.
  M6.2 Retrieve the lower bound ZMIN (which was calculated at M2.1 or M6.6) of the current subproblem.
If no par- tial solution is left in XSL, the given problem has been implicitly enumerated and the best solution ob- tained so far is an optimal solution. M6. 2 Retrieve the lower bound ZMIN (which was calculated at M2.1 or M6.6) of the current subproblem. 18 M6.3 Test if "ZBAR-ZMIN <_ 0" is satisfied. If it is sat- isfied go to M6.1. M6.4 Compare the level number LEVEL of the current partial solution with the level number LVT of the next sub- problem to be considered in XX. If LVT < LEVEL + 1 go to M6.1. If LVT > LEVEL + 1, delete the next sub- problem in XX and repeat M6.4. M6.5 Set the variable, corresponding to the subproblem which has just been implicitly enumerated, to and delete its corresponding column in the constraint matrix. M6.6 Calculate a lower bound ZMIN of the current subpro- blem and test if "ZBAR-ZMIN < 0" is satisfied. If it is satisfied, go to M6.1. Otherwise, go to M4. Henceforth, "program backtracks" means that the program con- trol goes to M6 (i.e., a subproblem has been implicitly enumerated and program tries to derive another subproblem) . The method used in M2 . 2 or M6 . 6 for finding the lower bound ZMIN of a subproblem with a certain partial solution is the one intro- duced in [11] and is restated as follows. Let l ± be the weight of column i (i.e., the number of non- zero elements in column i). Arrange these numbers in a descending order: JL > j^ > . . . _> £ # Let h be the number of unsatisfied 12 m constraints by the current partial solution and r be the smallest integer such that l ± + t ± + ... + i > h . Then ZMIN . s calculated by ZMIN _ 12 r r + XP, where XP is the number of variables which are fixed to 1 in the < urrent partial solution. 19 The criterion used in M3.1 for choosing a row is described as follows: M3.1.1 For each column i calculate n = (The number of non- zero elements in column i) + (The number "bicolumnar rows" covered by column i) . Here a row is said to be "bicolumnar" if it contains only two non-zero ele- ments. 
  M3.1.2 Find the column i0 with the largest n_{i0}. If there is a tie, the column with the greatest index is chosen.
  M3.1.3 From all the rows covered by column i0, choose the row with the smallest number of non-zero elements. If there is a tie, the row with the smallest index is chosen.

4. SCHEME FOR THE PROBLEM REDUCTION

In using the implicit enumeration algorithm introduced in the last chapter to solve the minimal covering problem, a long computation time will have to be spent on the problem size reduction if the column domination relation or the row domination relation is checked for each pair of columns or each pair of rows every time the algorithm goes through M1 Reduction of the previous chapter. The computational efficiency of this algorithm is greatly improved by the use of the scheme introduced in Section 4.1 for checking the domination relations among rows and among columns.

4.1 A Scheme For Detection Of Domination Relations

In M1 Reduction, the column domination relation and the row domination relation are checked only in the beginning. After a column or a row has been checked not to be dominated by any others, it needs to be checked again only when some of the existing non-zero elements in it are deleted. So in M1 Reduction, two arrays, MM1 and MM2, are used to keep track of which columns and which rows need to be checked again. A column is tested to see if it is dominated by any other column as follows:

  M1.1 Find the first row which is covered by the column to be tested.
  M1.2 All the columns that are covered by the row found at step M1.1 are the candidate columns that may dominate the column to be tested. Check if any of these columns dominates the column to be tested.

A row can be similarly tested.
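Assuming the 0-1 matrix is held as nested lists together with lists of live row and column indices (identifiers here are illustrative, not those of the thesis FORTRAN program), the two-step column test M1.1–M1.2 can be sketched as:

```python
def dominated_by_any(A, col, live_rows, live_cols):
    """Two-step test of Section 4.1: is column `col` dominated by some other
    live column?  Only candidate columns covering the first row covered by
    `col` need to be examined."""
    # M1.1: find the first live row covered by the column under test.
    # (Each column is assumed to contain a non-zero element; see Section 3.1.)
    first = next((i for i in live_rows if A[i][col]), None)
    if first is None:
        return False
    # M1.2: a dominating column must cover every row that `col` covers,
    # in particular the row just found, so only those columns are candidates.
    for k in live_cols:
        if k != col and A[first][k]:
            if all(A[i][col] <= A[i][k] for i in live_rows):
                return True
    return False
```

For A = [[1, 1, 0], [0, 1, 1]], columns 1 and 3 are each dominated by column 2, while column 2 is dominated by neither; the test for column 2 examines only column 1 (the sole other column covering its first row) and rejects it immediately.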
4.2 Comparison Of Some Computational Results

Computational results for two cases, with and without the use of the new scheme introduced in Section 4.1, have been compared on some example problems, as summarized in Table 4.2.1. Programs for these two cases are coded in FORTRAN and compiled by the FORTRAN G compiler. Computational results are obtained by solving the problems on the IBM 360/75J computer.

               PROBLEM      USING CONVENTIONAL          USING PROCEDURE STATED
                SIZE            PROCEDURE                    IN SECTION 4.1
    PROB.                 NO. OF  NO. OF   TIME       NO. OF  NO. OF   TIME
     NO.      m     n      ITER   BKTRK   IN SEC       ITER   BKTRK   IN SEC
     1*      55    44        51      25     8.97         49      24     1.35
     2*      35    15       174      85     5.12        159      77     1.76
     3       40    60         5       2     0.72          5       2     0.15
     4       60    60        47      24    10.91         47      24     1.17
     5       60    80      >250       ?   >72.20        350     191    11.60

    Table 4.2.1 Comparison of two cases: the program with the conventional
    procedure of checking domination relations and the program with the
    procedure stated in Section 4.1.

    * The branching operation is slightly changed due to the different
      checking procedure.

The number in the column under "NO. OF ITER" shows the number of iterations, i.e., the number of times the program went through step M4 in solving a problem. The number in the column under "NO. OF BKTRK" shows the number of backtracks, i.e., the number of times the program went through step M6. The number in the column under "TIME IN SEC" shows the computation time (in seconds) used in solving a problem.

From this table, one can see a great computational improvement due to this new scheme of checking domination relations among rows and columns, in spite of the simplicity of the scheme. For problems of greater size, the improvement will be even greater.

5. NEW PROPERTIES OF THE MINIMAL COVERING PROBLEM

Two new properties of the minimal covering problem that can be used to improve the algorithm's efficiency are presented in this chapter.
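Both properties are applied to subproblems that have already been simplified by the reduction operations of Section 3.2. For reference, one pass of those three operations can be sketched as follows (Python; an illustrative version that rescans column and row pairs rather than using the bookkeeping arrays MM1 and MM2 of Section 4.1, and all names are mine):

```python
def reduce_problem(A, rows, cols):
    """One illustrative pass of the three reduction operations of Section 3.2.
    A is the 0-1 constraint matrix; rows and cols list the live indices.
    Returns (rows, cols, fixed), where fixed maps a variable index to the
    value 0 or 1 it has been fixed to."""
    fixed = {}
    changed = True
    while changed:
        changed = False
        # Operation 1: if some row r_j is dominated by row r_i (r_j <= r_i
        # componentwise), the dominating row r_i is redundant and is deleted.
        for i in list(rows):
            if any(j != i and all(A[j][k] <= A[i][k] for k in cols)
                   for j in rows):
                rows.remove(i)
                changed = True
                break
        # Operation 2: a column dominated by another live column is deleted
        # and its variable fixed to 0 (all costs are 1 in this problem).
        for j in list(cols):
            if any(k != j and all(A[i][j] <= A[i][k] for i in rows)
                   for k in cols):
                cols.remove(j)
                fixed[j] = 0
                changed = True
                break
        # Operation 3: a row covered by exactly one live column makes that
        # column essential: fix its variable to 1, drop the rows it covers.
        for i in list(rows):
            live = [j for j in cols if A[i][j]]
            if len(live) == 1:
                k = live[0]
                fixed[k] = 1
                cols.remove(k)
                rows[:] = [r for r in rows if not A[r][k]]
                changed = True
                break
    return rows, cols, fixed
```

With A = [[1, 1, 0], [1, 1, 1], [0, 1, 0]] and all rows and columns live, the pass deletes the second row (it dominates the third), fixes the first variable to 0 by column domination, and fixes the second variable to 1 as essential, leaving a null matrix.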
5.1 Reducibility Of A Partial Solution

A feasible solution (x1, x2, ..., xn) of the minimal covering problem (P) is said to be reducible if there exists x_i = 1 for some i such that (x1, x2, ..., x_{i-1}, 0, x_{i+1}, ..., xn) is also a feasible solution of (P). Otherwise, a feasible solution is irreducible. It is easy to see the following theorem:

Theorem 5.1.1 Every optimal solution of the minimal covering problem (P) is irreducible.

A partial solution S of a subproblem is reducible if, among all the constraints of (P), those which S can satisfy (along with all free variables assigned to 0) can also be satisfied by another partial solution S′ (along with all free variables assigned to 0), where S′ is obtained from S by setting one non-zero variable in S to 0 and keeping all other fixed variables in S unchanged.

Theorem 5.1.2 Every feasible completion of a reducible partial solution S is reducible.

Proof Let S* be a feasible completion of S. Since S is reducible, there exists another partial solution S′ such that

(1) S′ is obtained from S by setting some fixed variable x_i in S from 1 to 0,
Thus, a constraint which is not satisfied by S but is satisfied by S* is also satisfied by S**. Therefore, S* is reducible. Q.E.D.

From this theorem, it is easy to see the following corollary.

Corollary 5.1.3 Every completion of a reducible partial solution is either infeasible or not optimal.

Thus, the computational efficiency of the zero-one implicit enumeration algorithm in Chapter 3 can be improved by checking the reducibility of a partial solution. If reducibility is detected, there is no optimal completion under the current partial solution, and the program can backtrack. Reducibility can be checked by the following theorem:

Theorem 5.1.4 A partial solution S is reducible if and only if there exist j_1, j_2, ..., j_p for some p > 1 such that

    (1) x_{j_i} is in S for i = 1, 2, ..., p,
    (2) x_{j_i} = 1 for i = 1, 2, ..., p,
    (3) a_{j_1} <= a_{j_2} + a_{j_3} + ... + a_{j_p},

where a_{j_i} is the column of A corresponding to the variable x_{j_i} for i = 1, 2, ..., p.

Proof Suppose S is reducible and x_{j_1}, x_{j_2}, ..., x_{j_p} are the variables fixed to 1 in S.* Then

    (1) x_{j_i} is in S for i = 1, 2, ..., p,
    (2) x_{j_i} = 1 for i = 1, 2, ..., p.

Since S is reducible, there exists another partial solution S', obtained from S by setting a non-zero variable x_{j_1} in S to 0 and keeping all other variables unchanged, such that the constraints satisfied by S are also satisfied by S'. From this property, it can be seen that at least one of a_{kj_2}, a_{kj_3}, ..., a_{kj_p} must be 1 whenever a_{kj_1} = 1. So p must be greater than 1 and a_{j_1} <= a_{j_2} + ... + a_{j_p}.

Conversely, let us assume that there exist j_1, j_2, ..., j_p for some p > 1 such that conditions (1), (2), and (3) hold. Let x_{j_1}, x_{j_2}, ..., x_{j_p}, x_{j_{p+1}}, ..., x_{j_{p+k}} be the variables fixed to 1 in S, and let S' be obtained from S by setting x_{j_1} to 0 and keeping all other variables in S unchanged. Now let us show that every constraint satisfied by S is also satisfied by S'. Let a_i1 x_1 + a_i2 x_2 + ... + a_in x_n >= 1 be a constraint satisfied by S. Then, by definition, a_{ij_1} + a_{ij_2} + ... + a_{ij_{p+k}} >= 1. If a_{ij_1} = 1, then one of a_{ij_2}, a_{ij_3}, ..., a_{ij_p} must be 1, since a_{ij_1} <= a_{ij_2} + ... + a_{ij_p}. If a_{ij_1} = 0, then a_{ij_2} + a_{ij_3} + ... + a_{ij_{p+k}} >= 1. Therefore, the constraint a_i1 x_1 + a_i2 x_2 + ... + a_in x_n >= 1 is satisfied by S'. Q.E.D.

* It does not matter whether S has fixed variables other than x_{j_1}, x_{j_2}, ..., x_{j_p}.

5.2 Excluding Relation Between Two Columns

If column a_j is not dominated by column a_i, an E-set (E means excluding operation) of a_i with respect to a_j is defined to be the set of rows which are covered by a_j but not by a_i, and is denoted by E_ij.

Example 5.2.1 If

    a_i = (1, 0, 0, 1, 1, 0)^T   and   a_j = (1, 0, 1, 1, 0, 1)^T,

then E_ij = {r_3, r_6} and E_ji = {r_5}.

The second new property of the minimal covering problem is stated in the following theorem and will be referred to as "the excluding property".

Theorem 5.2.1 Let E_ij be the E-set of column a_i with respect to column a_j, and let x be a feasible solution of (P) with x_i = 0, x_j = 1, and x_{k_t} = 1 for t = 1, 2, ..., r (other variables are assigned either 0 or 1). If each row in E_ij is covered by some of the columns a_{k_1}, a_{k_2}, ..., a_{k_r}, then x', which is obtained from x by replacing x_i = 0, x_j = 1 by x_i = 1, x_j = 0 and leaving the remaining variables unchanged, is also a feasible solution of (P). The objective values of both solutions are the same.

Proof Let S be the partial solution with x_i = 0, x_j = 1, and x_{k_t} = 1 for t = 1, 2, ..., r, and let S' be the partial solution with x_i = 1, x_j = 0, and x_{k_t} = 1 for t = 1, 2, ..., r. (S and S' have no other fixed variables.) Since each row in E_ij is covered by some of the columns a_{k_1}, a_{k_2}, ..., a_{k_r}, each constraint of (P) satisfied by S is also satisfied by S'. Since x is a feasible completion of S (by the definitions of x and S), the constraints which are not satisfied by S must be satisfied by the partial solution S'', the set of the variables which are not fixed in S and are fixed to 0 or 1 by x. Since each constraint of (P) is satisfied either by S or by S'', and since each constraint satisfied by S is satisfied by S', each constraint of (P) is satisfied either by S' or by S''. Since x' is a solution obtained by assigning each variable according to S' and S'' (by definition), x' is a feasible solution of (P). From the definition of x', it is easy to see that the objective values of x and x' are the same. Q.E.D.

Suppose a_i and a_j are two columns covered by the row chosen at step M3.1 in the outline of the basic algorithm.* (There may be more than two columns covered by this row, but consider only two of them as a_i and a_j.) Let us consider the two subproblems corresponding to these two columns a_i and a_j: one obtained by setting x_i = 1 and the other obtained by setting x_i = 0 and x_j = 1. After the subproblem with x_i = 1 has been implicitly enumerated but before the subproblem with x_i = 0 and x_j = 1 is considered, an E-set E_ij is constructed. Then, each time a free variable is fixed to 1, E_ij is tested to see whether each row in E_ij is covered by some column in {a_{k_1}, a_{k_2}, ..., a_{k_r}}, where x_{k_1}, x_{k_2}, ..., x_{k_r} are the variables fixed to 1 in the partial solution S (S may have other variables fixed to 1, such as x_j) and a_{k_t} differs from a_j for t = 1, 2, ..., r. If each row in E_ij is covered, then for each feasible completion x of the current partial solution S, there is another feasible solution x', which is obtained from x by replacing x_i = 0, x_j = 1 by x_i = 1, x_j = 0 and leaving the remaining variables unchanged, by Theorem 5.2.1.
Since the objective values of x and x' are the same, and x' is a feasible solution examined before in the subproblem with x_i = 1, the objective value of x cannot be smaller than the value of the best solution obtained so far. So no feasible completion better than the best solution obtained so far can be found under the current partial solution, and the program can backtrack. The computation saved by this modification of the algorithm is illustrated by the dotted triangle in Figure 5.2.1.

    Figure 5.2.1 Computation saved by the checking of E_ij. The dotted triangle can be
    skipped when each row in E_ij is covered by some of the columns a_{k_1}, a_{k_2}, ..., a_{k_r}.

* See Section 3.3.

5.3 Implementation

In order to implement the two properties stated in Sections 5.1 and 5.2, an array variable YY is introduced. For each partial solution S, YY is defined by

    YY_i = SUM over x_j in S of a_ij x_j

for i = 1, 2, ..., m, where the a_ij's are the elements of the given matrix. YY_i is called the current value of row i. In the beginning, YY_i is initialized to 0 for each i = 1, 2, ..., m. It is updated by YY_i = YY_i + a_ij for each i when a variable x_j is fixed to 1, and is updated by YY_i = YY_i - a_ij for each i when a variable x_j with value 1 is set free.

Now the reducibility of a partial solution S can be checked as stated in the following theorem.

Theorem 5.3.1 A partial solution S is reducible if and only if there exists an x_q in S satisfying the following condition:

    (1) x_q = 1,                                                (5.3.1)
    (2) YY_k >= 2 for every k such that a_kq = 1,

where YY_k is the current value of row k.

Proof From the definition of YY,

    YY_k = SUM over x_j in S of a_kj x_j   for k = 1, 2, ..., m.

Let x_{j_1} = x_q and let all other non-zero variables in S be x_{j_2}, ..., x_{j_p}. Then YY_k = a_{kj_1} + a_{kj_2} + ... + a_{kj_p} for k = 1, 2, ..., m. From (2) of condition (5.3.1), a_{kj_2} + ... + a_{kj_p} >= 1 whenever a_{kj_1} = 1, i.e., a_{j_1} <= a_{j_2} + ... + a_{j_p} holds. Since at least one element of a_{j_1} is not 0, p must be greater than 1. By Theorem 5.1.4, S is reducible.

Conversely, let us assume S is reducible. By Theorem 5.1.4, there exist x_{j_1}, x_{j_2}, ..., x_{j_p} for some p > 1 such that

    (1) x_{j_i} is in S for i = 1, 2, ..., p,
    (2) x_{j_i} = 1 for i = 1, 2, ..., p,
    (3) a_{j_1} <= a_{j_2} + ... + a_{j_p}.

From a_{j_1} <= a_{j_2} + ... + a_{j_p}, it can be seen that one of a_{kj_2}, ..., a_{kj_p} must be 1 if a_{kj_1} = 1. In other words,

    SUM for i = 1 to p of a_{kj_i} >= 2 for every k such that a_{kj_1} = 1.   (5.3.2)

Let x_q be the x_{j_1}. Then x_q is in S and x_q = 1. From (5.3.2), it can be seen that YY_k = SUM for i = 1 to p of a_{kj_i} >= 2 if a_kq = 1. Q.E.D.

Whether each row in an E-set is covered by some of the columns a_{k_1}, a_{k_2}, ..., a_{k_r} can be tested as stated in the following theorem.

Theorem 5.3.2 Let x_{k_1}, x_{k_2}, ..., x_{k_r} and x_j be the variables which are fixed to 1 in S.* Each row in E_ij, the E-set of column a_i with respect to column a_j, is covered by some of the columns a_{k_1}, a_{k_2}, ..., a_{k_r} if and only if

    YY_q >= 2 whenever row r_q is in E_ij,

where YY_q is the current value of row r_q.

Proof From the definition of YY,

    YY_q = a_qj + SUM for t = 1 to r of a_{qk_t}   for q = 1, 2, ..., m.   (5.3.3)

From the definition of E_ij, it can be seen that a_qj = 1 if r_q is in E_ij. If each row in E_ij is covered by some column in {a_{k_1}, a_{k_2}, ..., a_{k_r}}, then for each r_q in E_ij, at least one of a_{qk_1}, a_{qk_2}, ..., a_{qk_r} must be 1. So, whenever r_q is in E_ij,

    YY_q = a_qj + SUM for t = 1 to r of a_{qk_t} >= 2.

Conversely, let us assume that YY_q >= 2 whenever r_q is in E_ij. From equality (5.3.3),

    a_qj + SUM for t = 1 to r of a_{qk_t} >= 2 whenever r_q is in E_ij.   (5.3.4)

Since r_q in E_ij implies a_qj = 1, it can be seen from (5.3.4) that

    SUM for t = 1 to r of a_{qk_t} >= 1 whenever r_q is in E_ij.   (5.3.5)

From (5.3.5), it is clear that at least one of a_{qk_1}, a_{qk_2}, ..., a_{qk_r} must be 1 whenever r_q is in E_ij. Thus, r_q is covered by at least one of a_{k_1}, a_{k_2}, ..., a_{k_r} for each r_q in E_ij. Q.E.D.

* It does not matter whether S has fixed variables set to 0, but S has no other fixed variables set to 1.

The following theorem shows a relation between the conditions stated in Theorems 5.3.1 and 5.3.2.

Theorem 5.3.3 Let S be a partial solution and YY_k be the current value of row k for each k. If there exists an x_q in S which satisfies

    (1) x_q = 1,                                                (5.3.6)
    (2) YY_k >= 2 for each k such that a_kq = 1,

then each row of any E_iq such that a_q is not dominated by a_i satisfies

    YY_k >= 2 whenever r_k is in E_iq.                          (5.3.7)

Proof Let r_k be a row in E_iq. Then a_kq = 1 by definition. Since a_kq = 1, YY_k >= 2 holds by (2) of (5.3.6). Thus, (5.3.7) is proved. Q.E.D.

Based on this theorem, the two properties introduced in this chapter can be implemented by testing only condition (5.3.7) for an E-set E_iq whenever such an E-set is constructed. For each subproblem, E-sets are constructed only when some of the subproblems which were generated at the same time as this subproblem in step M3.2 have been implicitly enumerated. The number of E-sets constructed for this subproblem is the number of subproblems which were generated at the same time as this subproblem and are already enumerated. When a subproblem is generated by fixing a variable x_j to 1 for some j, the sets of rows (i) or the set of rows (ii) below are generated: (i) the E-sets E_{i_1 j}, E_{i_2 j}, ..., E_{i_r j} for this subproblem, or (ii) the set of rows covered by the j-th column of the matrix A, if no E-set is constructed for this subproblem. These sets, or this set, are generated for testing the new backtrack conditions for this subproblem, based on Corollary 5.1.3 and Theorem 5.2.1.
All these sets for each level's subproblem, which will be referred to as the testing sets in the following modified algorithm, are stored in a stack XQ along with the corresponding level numbers. When the program backtracks, the testing sets with level numbers greater than the current level number are deleted. For each partial solution S, each testing set in the stack XQ is tested to see whether it satisfies YY_k >= 2 for each k such that r_k is in it. If some testing set in XQ satisfies this condition, then the program backtracks. If only the "excluding property" is to be implemented, then only the E-sets for each subproblem are considered as the testing sets for that subproblem in the above discussion.

The whole algorithm, with the two properties stated in Sections 5.1 and 5.2 incorporated, is outlined as follows:

Ml Reduction. Using the reduction operations described in Section 3.2, reduce the constraint matrix as much as possible. Update the current partial solution and the current value YY_i for each row. If the matrix is reduced to a null matrix, go to step M5.

M2 Bounding.
    M2.1 Find a lower bound ZMIN of the subproblem under the current partial solution.
    M2.2 Test whether "ZBAR - ZMIN <= 0" is satisfied, where ZBAR is the best value obtained so far. If it is satisfied, go to M6.

M3 Branching.
    M3.1 Choose a row r_i by some criterion.
    M3.2 Based on the row chosen at M3.1, for each non-zero element a_ij in this row, generate a subproblem by fixing the variable x_j to 1.
    M3.3 Store the indexes (j, k) for all subproblems just generated in a stack XX, where j is the index of the branching variable and k is the level number of the subproblem. The subproblem corresponding to a column with the fewest non-zero elements is stored first.

M4 Next Subproblem.
    M4.1 Get a variable from the top of stack XX and set it to 1. Update the current partial solution.
    M4.2 Update the current value YY_i for each row.
    M4.3 Test whether any of the testing sets in XQ satisfies YY_k >= 2 whenever r_k is in that set. If some testing set satisfies this condition, go to M6.
    M4.4 Generate the testing sets for this subproblem, store them in the stack XQ, and go to Ml.

M5 Derivation Of A Feasible Solution. The current partial solution is a feasible solution. If this feasible solution is better than the best feasible solution obtained so far, keep it as the best feasible solution found so far.

M6 Backtracking.
    M6.1 Find the partial solution in XSL one level higher than the current subproblem and consider it as the current partial solution. In XSL, erase all partial solutions with level numbers greater than the level number of the current partial solution. If no partial solution is left in XSL, the given problem has been implicitly enumerated and the best solution obtained so far is an optimal solution.
    M6.2 Update the current value YY_i for each row i and the testing sets in XQ.
    M6.3 Retrieve the lower bound ZMIN, which was calculated previously at M2.1 or M6.7, for the current subproblem.
    M6.4 Test whether "ZBAR - ZMIN <= 0" is satisfied. If it is satisfied, go to M6.1.
    M6.5 Compare the level number LEVEL of the current partial solution with the level number LVT of the next subproblem to be considered in XX. If LVT is not equal to LEVEL + 1, delete the next subproblem in XX and repeat M6.5.
    M6.6 Set the variable, based on which a subproblem has just been implicitly enumerated, to 0, and delete its corresponding column in the constraint matrix.
    M6.7 Calculate a lower bound ZMIN of the current subproblem and test whether "ZBAR - ZMIN <= 0" is satisfied. If it is satisfied, go to M6.1. Otherwise, go to M4.

5.4 Some Computational Results

The algorithm described at the end of Section 5.3 was coded in such a way that no reducibility of a partial solution is tested in the algorithm, i.e., only the E-sets for each subproblem are generated in step M4.4.
This is because, from our experience with sample problems, the reducibility condition stated in Theorem 5.3.1 was rarely satisfied by partial solutions in the algorithm outlined in the last section.

This algorithm (without checking the reducibility of a partial solution) was coded in FORTRAN. Some problems formulated from the logic minimization problem, obtained from the literature, or randomly generated by the author were tested by this program. Computational results are shown in Table 5.4.1. These results were obtained by running the programs on the IBM 360/75J computer using the FORTRAN H compiler.

The number in the column under "d" shows the fraction of non-zero coefficients in the constraint matrix A of a problem, i.e., d = (No. of 1's in A) / (m x n). The numbers in the columns under "m'" and "n'" are the numbers of rows and columns, respectively, left in the constraint matrix A after the program first went through the reduction procedure in solving a problem.

    PROB.   PROBLEM SIZE              PROBLEM SIZE AFTER      NO. OF    NO. OF    TIME IN
    NO.     m      n      d           THE FIRST REDUCTION     ITER      BKTRK     SEC.
                                      m'       n'
    1       55     44     0.117       45       43             49        24        0.84
    2       112    79     0.085       83       73             616       349       24.71
    3       105    97     0.072       97       91             5079      3375      201.63
    4       114    83     0.094       73       70             2254      1494      97.43
    5       166    156    0.035       87       94             77        37        3.02
    6       203    167    0.041       151      161            81877     57195     >6300*
    7       35     15     0.20        35       15             159       77        1.24
    8       117    27     0.11        117      27             6321      3063      94.14
    9       60     60     0.062       43       50             47        24        0.93
    10      60     80     0.065       52       75             350       191       7.7
    11      30     90     0.07        30       80             62        26        1.14

    Table 5.4.1 Some computational results by the algorithm in Section 5.3.
    * It took 1046 seconds for the program to derive an optimal solution
    on the CDC Cyber 175 computer.

Numbers in the columns under "NO. OF ITER", "NO. OF BKTRK", and "TIME IN SEC." have the same meanings as those in the corresponding columns in Table 4.2.1, respectively. Problems 1 through 6 are problems formulated from the logic minimization problem.
Problem 1 is for minimizing the logic expression of a six-variable switching function; problems 2, 3, and 4 are for minimizing the logic expressions of seven-variable switching functions; problems 5 and 6 are for minimizing the logic expressions of eight-variable switching functions. Problem 7 is the problem IBM 9 reported in [15], which has been used as a problem for comparison in many papers, such as [10, 11, 19, 20, 25]. Comparison of computational results for this problem is shown in Table 5.4.2.

    PROGRAM             COMPUTER USED      COMPUTATION TIME (in seconds)
    Author's            IBM 360/75J        1.24
    ILLIP [11]          IBM 360/75J        1.73
    GEOFFRION'S [10]    IBM 7044           26.4
    SHAPIRO'S [19]      IBM 360/65         20.2
    ILP2 [20]           CDC 3600           75.1
    ENUMER 8 [25]       CDC 6600           4.749
    DSZ1IP [29]         CDC Cyber/175      1.236

    Table 5.4.2 Comparison of computational results of the problem IBM 9.*
    * The comparison of operational speeds of different computers is given
    later in this section.

Problem 8 is the smaller of the two difficult problems reported in [24]. It is stated in [24] that these two problems may be used to measure the computational efficiencies of integer programming packages. Comparison of computational results of this problem is shown in Table 5.4.3.

    PROGRAM             COMPUTER USED      COMPUTATION TIME (in seconds)
    Author's            IBM 360/75J        94.14
    ENUMER 8 [25]       UNIVAC 1108        5960

    Table 5.4.3 Comparison of computational results of problem 8.

Problems 9, 10, and 11 are randomly generated.

To make the comparisons in Tables 5.4.2 and 5.4.3 more meaningful, operational speeds of different computers are shown in Table 5.4.4. All figures in this table except those in the last column are obtained from [26, 27]. The figures in the last column under "ESTIMATED RATIO" were given by the author according to the FIXED ADD/SUB time, the STORAGE CYCLE time, and the number of registers for each computer, which are listed in columns 2, 3, and 4, respectively.
Only a rough comparison can be made for the particular problems in Tables 5.4.2 and 5.4.3, since the estimated ratio may not be accurately applicable to these problems. (The computation time for problem 8 can be further reduced by considering the symmetric property of this problem, which is discussed in Chapter 7.) In order to see the computational improvement due to the checking of E-sets for subproblems, some problems were tested by both algorithms listed in Sections 3.3 and 5.3 (only "the excluding property" is implemented in the latter algorithm). Comparison of computational results is shown in Table 5.4.5.

    COMPUTER         FIXED      STORAGE    NO. OF       ESTIMATED
                     ADD/SUB    CYCLE      REGISTERS    RATIO
    IBM 360/75J      0.68       0.75       16           1
    IBM 360/65       1.4        0.75       16           ~1.5
    IBM 360/50       4          2          16           ~4
    IBM 7044         5          2          1            ~6.5
    IBM 7094         2.8        1.4        1            ~4
    CDC 3600         2.1        1.4        1            ~3.5
    CDC 6600         0.3        1.0        8            ~0.9
    UNIVAC 1108      0.75       0.75       16           ~1
    CDC Cyber/175    -          -          -            ~0.16*

    Table 5.4.4 Comparison of different computers.
    * No sufficient information about the operational speed of the Cyber 175
    is known. This ratio is estimated based on running the same programs on
    the two computers IBM 360/75J and CDC Cyber/175 by the author.

    PROB.+  PROBLEM SIZE    WITHOUT CHECKING E-SETS          WITH CHECKING E-SETS
    NO.     M      N        NO. OF   NO. OF   TIME IN        NO. OF   NO. OF   TIME IN
                            ITER     BKTRK    SEC.           ITER     BKTRK    SEC.
    1**     55     44       49       24       1.32           49       24       1.35
    2**     112    79       841      477      54.18          616      394      39.92
    3       105    97       7020     4107     275.42         5079     3375     201.63
    4       114    83       3256     1879     126.49         2254     1194     97.49
    5**     166    156      77       37       4.66           77       37       4.73
    10**    60     80       378      198      12.22          350      191      11.60

    Table 5.4.5 Comparison of two cases: with and without checking E-sets for
    the given problem.
    + Problem numbers are those assigned in Table 5.4.1.
    ** Results are obtained by using the FORTRAN G compiler.

From Table 5.4.5, it can be seen that the computational improvement due to the implementation of checking condition (5.3.7) for the E-sets in solving problems is roughly 30% for problems that need a long computation time, such as problems 2, 3, and 4.
It can also be seen from the table that checking condition (5.3.7) for the E-sets does not improve the computational efficiency for problems which need only a short computation time, such as problems 1 and 5. Since the amount of computation time spent in checking condition (5.3.7) for the E-sets is very small compared to the time spent in solving problems (this can be seen from the computational results for problems 1 and 5 in Table 5.4.5), checking condition (5.3.7) for the E-sets is a very useful scheme for speeding up the enumeration.

6. AN HEURISTIC ALGORITHM FOR THE LARGE SCALE MINIMAL COVERING PROBLEM

The minimal covering problem (P) formulated for minimizing the logic expression of a complicated switching function with the number of switching variables greater than or equal to 8 usually has a large constraint matrix A. An example is problem number 6 in Table 5.4.1, which is a problem formulated for minimizing the logic expression of an eight-variable switching function and has a 203 by 167 constraint matrix. It is estimated in [8] that the number of prime implicants for a nine-variable switching function with 384 true vectors can be as large as 448. In other words, the size of the constraint matrix A of a minimal covering problem formulated for minimizing the logic expression of a nine-variable switching function can be as large as 384 by 448. To solve such a large minimal covering problem is beyond the capability of any existing computer program.

For handling large minimal covering problems, heuristic algorithms have been developed. R. M. Bowman and E. S. McVey [22] published an algorithm for the fast approximate solution of large prime implicant tables. This algorithm uses a certain criterion to choose prime implicants and ends when a first feasible solution is found. R. Roth [23] published another heuristic method for the minimal covering problem.
In this method, a feasible solution is further checked to see whether any k columns in the solution set* can be replaced by other k - 1 columns not in this solution set.

* The solution set of a feasible solution is the set of columns whose corresponding variables are fixed to 1 in this feasible solution.

This chapter presents another heuristic algorithm for solving problems which require excessive computation time when they are to be solved by the implicit enumeration algorithm. This algorithm is a modification of the algorithm outlined in Section 5.3. It takes a reasonable amount of computation time and uses a reasonable amount of core memory in solving a large scale minimal covering problem. Computational results show a high probability of obtaining optimal solutions for large scale problems by this algorithm.

6.1 The Heuristic Algorithm

The basic idea of the algorithm introduced in Section 5.3 is that when a problem is difficult to solve directly, it is decomposed into smaller subproblems and each subproblem is solved individually. A subproblem is further decomposed into smaller subproblems if it is still difficult to solve. When the number of subproblems decomposed from the given problem is large, a large amount of core memory is required to store all these decomposed subproblems and a large amount of computation time is required to solve them all. One way to reduce the required core memory and computation time is to skip subproblems which have a low probability of yielding any optimal solution.

The decomposition of a problem into subproblems can be represented by a decomposition tree, which shows which subproblem is decomposed from which subproblem. Each subproblem in a decomposition tree is associated with a level number LEVEL which indicates the level of this subproblem in this decomposition tree, counted from the root of the tree.
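The heuristic developed in Section 6.1 truncates this decomposition tree at a user-specified level and finishes each remaining subproblem with a simple greedy selection rule. A sketch of such a greedy covering rule (illustrative Python, not the thesis's FORTRAN program; the weight formula follows step H2 of procedure HEURISTIC below, while the reduction operations of step H1 are omitted here and the sample matrix is an assumption):

```python
def greedy_cover(A):
    """Greedily pick columns of the 0-1 matrix A until every row is covered.
    A column's weight is the sum of 1/v_k over the uncovered rows it covers,
    where v_k is the number of 1's in row k (the criterion of step H2).
    Assumes every row of A contains at least one 1."""
    m, n = len(A), len(A[0])
    uncovered = set(range(m))
    chosen = []
    while uncovered:
        v = {k: sum(A[k]) for k in uncovered}     # row sizes (no reduction step)
        def weight(j):
            return sum(1.0 / v[k] for k in uncovered if A[k][j] == 1)
        # greatest weight wins; ties go to the smallest column index (step H3)
        best = max(range(n), key=lambda j: (weight(j), -j))
        chosen.append(best)
        uncovered -= {k for k in uncovered if A[k][best] == 1}   # step H4
    return chosen

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
```

Rows with few 1's contribute large weights, so columns covering hard-to-cover rows are preferred, which is the intent of the 1/v_k terms.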
Usually the scale of subproblems at upper levels is greater than the scale of those at lower levels. Since an optimal solution of a small scale problem usually can be obtained by a simple selection criterion, such as the one used in [22], an heuristic algorithm is developed which decomposes large scale subproblems into small scale subproblems and finds a feasible solution for each small scale subproblem by a simple selection criterion. This algorithm is a modification of the algorithm in Section 5.3, and is described by listing only the modifications of that algorithm.

MODIFICATION 1 - the modification of step M4. Step M4 of the algorithm in Section 5.3 is modified into the following step M4'.

M4' Next Subproblem.
    M4'.1 Get a variable from the top of stack XX and set it to 1. Suppose the variable is x_j. Then delete all rows with their j-th element equal to 1. Update the current partial solution.
    M4'.2 Update the current value YY_i for each row.
    M4'.3 Test whether any of the testing sets in XQ satisfies YY_k >= 2 whenever r_k is in that set. If some testing set satisfies this condition, go to M6.
    M4'.4 Generate the testing sets for this subproblem, and store them in the stack XQ.
    M4'.5 If the level number of this subproblem is less than the level limit, a positive integer specified by a user, then go to Ml.
    M4'.6 Solve this subproblem by an heuristic procedure HEURISTIC, which will be described later, and then go to M5'.

The heuristic procedure HEURISTIC used in M4'.6 is described as follows:

PROCEDURE HEURISTIC:
    HI Using the three reduction operations in Section 3.2, reduce the matrix as much as possible. If the matrix is null, then the procedure is terminated, obtaining a feasible solution.
    H2 For each remaining column i, calculate

        w_i = 1/v_{i_1} + 1/v_{i_2} + ... + 1/v_{i_r},*

    where i_1, i_2, ..., i_r are the indices of the remaining rows covered by column i, and v_{i_k} is the number of non-zero elements in row i_k for k = 1, 2, ..., r.
    H3 Choose the column i_0 with the greatest w_{i_0} and fix x_{i_0} to 1. If there is a tie, the one with the smallest column index is chosen.
    H4 Delete all rows covered by the column chosen at H3 and go to HI.

* This formula is similar to the one used in [22].

MODIFICATION 2 - the modification of step M5. Step M5 of the algorithm in Section 5.3 is modified into the following step M5'.

M5' Derivation Of A Feasible Solution.
    M5'.1 A feasible solution is obtained either through step Ml or through step M4'.6. If the value Z of this solution is greater than the objective value ZBAR of the best solution obtained so far, then go to M6.
    M5'.2 Apply a transformation procedure TR, which will be outlined later in this section, to this feasible solution to derive a better feasible solution if possible.
    M5'.3 Let Z' be the value of the feasible solution obtained by the procedure TR. If Z' is less than the value ZBAR, then the value of ZBAR is replaced by Z', and the best solution obtained so far is replaced by the solution obtained by the procedure TR.
    M5'.4 Go to M6.

Before the transformation procedure TR is outlined, the concept of the "covering weight" of a feasible solution is introduced. Let x = (x_1, x_2, ..., x_n) be a feasible solution of the problem (P). The covering weight of x, denoted by W(x), is defined as the number of i's such that YY_i >= 2, where YY_i = SUM for j = 1 to n of a_ij x_j.

Transformation procedure TR

Procedure TR for a feasible solution x = (x_1, x_2, ..., x_n) consists of the following steps:

    Tl Calculate YY_i = SUM for j = 1 to n of a_ij x_j for each i = 1, 2, ..., m.
    T2 Check whether there exists any x_j = 1 such that YY_i >= 2 whenever row i is covered by column j. If there exists such an x_j, then x_j is set to 0, and the YY_i's are updated. The solution obtained by fixing x_j from 1 to 0 and keeping all other variables unchanged is still a feasible solution.
    T3 Try to transform the current feasible solution into another feasible solution with greater covering weight, by substituting {x_i = 0, x_j = 1} for {x_i = 1, x_j = 0} in the current feasible solution for each pair of variables x_i and x_j.
    T4 If a new feasible solution with improved covering weight is obtained in step T3, then update YY_k for each k = 1, 2, ..., m, and go to step T2. Otherwise, the procedure terminates.

In step T3, for each index i, there are only a few candidates for j such that the new solution is still feasible after {x_i = 1, x_j = 0} is replaced by {x_i = 0, x_j = 1}. A procedure for performing step T3 for each x_i, based on this observation, is described as follows.

    T3.1 Find the first row r_k covered by column a_i such that YY_k = 1.
    T3.2 The variables corresponding to the columns covered by row r_k are the candidates for the variable x_j in step T3. Check only those candidates to see whether a new feasible solution with improved covering weight can be obtained.

From MODIFICATION 1, if the level limit specified is large enough not to be reached in solving a problem, then the best solution obtained is still guaranteed to be optimal.

6.2 Some Computational Results

Some problems formulated from the logic minimization problem and some medium scale problems constructed by the author were tested by a FORTRAN program of this heuristic algorithm. They were solved on the CDC Cyber 175 computer. Computational results are shown in Table 6.2.1. The number in the column under "VAL" is the best value obtained under the level limit shown in the column "LEVEL LIMIT". All other figures in this table have the same meanings as the corresponding figures in Table 5.4.1, except that these results were obtained under the different level limits shown in their corresponding columns. A "-" in the table shows that no test was made for that case. An infinity symbol in the column under "LEVEL LIMIT" means that no level limit was specified; the best value obtained in this case is the minimal value of the problem.
From this table one can see that the optimal solutions of the 5 test problems can all be obtained in a reasonable amount of computation time by specifying the level limit as 6. From this observation, this heuristic algorithm could be very practical for solving large scale minimal covering problems if level limits are appropriately specified.

    Table 6.2.1 Computational results of the heuristic algorithm under
    different level limits.

7. SYMMETRIC MINIMAL COVERING PROBLEMS

The use of the symmetric property of the given switching function in solving the minimal covering problem formulated for minimizing the logic expression of that switching function was first noted in [6]. In this chapter, the symmetric property of the minimal covering problem is explored in detail, and the utilization of these properties in the enumeration algorithm for this problem is discussed. Procedures for utilizing these properties are developed based on the theory of finite permutation groups. By applying these procedures in solving symmetric minimal covering problems, a computational improvement of more than 10 times was gained for some problems. Furthermore, it was confirmed that utilization of the symmetric properties was crucial in solving computationally difficult problems such as those reported in papers [15, 24] and large-scale problems formulated from the logic minimization problem.

7.1 Symmetric Permutations

Let X = {x_1, x_2, ..., x_n}. A permutation n: X -> X is said to be a symmetric permutation of the minimal covering problem (P) if it has the following property: if (x_1, x_2, ..., x_n) is a feasible solution of the problem (P), then (n(x_1), n(x_2), ..., n(x_n)) is also a feasible solution of the problem (P). Of course, both feasible solutions must yield the same value to (P).
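Checking this definition directly means examining every feasible solution. One cheap sufficient test is that relabeling the columns of A by the permutation maps the set of constraint rows onto itself; a sketch of that idea (illustrative Python; the cyclic example matrix is an assumption, and this is only a sufficient condition, not the full criterion of the theorem promised for Section 7.4):

```python
def is_symmetric_perm(A, sigma):
    """Sufficient test: if relabeling columns j -> sigma[j] carries every row
    of A onto some row of A, then permuting any feasible solution by sigma
    preserves feasibility, i.e., sigma is a symmetric permutation of (P)."""
    rows = {tuple(r) for r in A}
    n = len(A[0])
    for r in A:
        permuted = tuple(r[sigma[j]] for j in range(n))
        if permuted not in rows:
            return False
    return True

# Hypothetical instance whose row set is closed under the shift j -> j+2 (mod 6).
A = [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1]]
shift2 = [(j + 2) % 6 for j in range(6)]   # sigma(j) = j + 2 (mod 6)
shift1 = [(j + 1) % 6 for j in range(6)]   # shift by 1: not symmetric here
```

Since the row set is finite, closure under the permuted relabeling implies closure under its inverse as well, so the test need only be run in one direction.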
, x ) is a feasible solution of the problem (P) , then (n(x ), n(x_), ..., n(x )) is also J- I n a feasible solution of the problem (P) . Of course, both feasible solutions must yield the same value to (P) . 52 Example 7.1.1. Let us consider the problem (P) with a constraint matrix 1/10000 ]> 2 10 10 3 110 A "4 100100 5 110 10 6 110 7 I 1 1/ Let n be a permutation defined on X = {x 1 , x„, x n , x. , x,., xA as 1 I 3 4 5 6 (7.1.1) r -> x n = "^ x 5 > (7.1.2.) X 5 V -7 X X ' E 2 . It is easy to see that (x r x 2 , x 3 , x^ x y x 6 > = (1, 1, 1, 1, 0, 0) is a feasible solution of the problem (P) with A in (7.1.1). Permuting this feasible solution according to the permutation n, (n (x, ) , n (x ) , n(x 3 ), n (x 4 ), n(x 5 ), n(x 6 )) = (x 3 , x^, x y x^ x ± , x 2 ) = (1, 1, 0, 0, 1, 1). Then (x^ x^ Xg, x^ x y x g ) = (1, 1, 0, 0, 1, 1) is also a feasible solution of the problem (P) . In order to prove that n is a symmetric permutation of this problem, we have to examine all feasible solutions of this problem and check if (n(x ), n(x ), ..., (x 6 )) (regarding it as (x^ x^ ..., x& )) for each feasible solution Xj . x^, ..., x ^). This is a cumbersome work. In Section 7.4, a 53 theorem (Theorem 7.4.1) which can be easily used to check whether or not a given permutation is symmetric will be given. ™ The minimal covering problem (P) is said to be symmetric if it has some symmetric permutations. If a given minimal covering problem is symmetric, then the symmetric property of this problem can be utilized in solving this problem by the implicit enumeration method as stated in the following theorem. Theorem 7.1.1 Suppose n is a symmetric permutation of the minimal covering problem (P) and x. = n (x ) . Then in solving (P) by the implicit enumeration method, variable x. can be fixed to without losing l e a better feasible solution (a feasible solution better than the best one obtained so far) in the subproblem with x. fixed to 0, if the subproblem with x. 
fixed to 1 has already been implicitly enumerated. Proof For any feasible solution (x n , x„ , ..., x ) with x = 1, 1 2. n i (n(x ), n(x ), ..., n (x )), where n(x.) = x. = 1, is also a feasible ±2 n j i solution, since n is a symmetric permutation. Both feasible solutions have the same value w. Since the subproblem with x. fixed to 1 has been already implicitly enumerated, the value w cannot be smaller than the value of the best solution obtained so far. So only the case with x fixed to in the subproblem with x. fixed to has to be considered 1 J after the subproblem with x. fixed to 1 has been considered. J Q.E.D. Theorem 7.1.1 is illustrated by the figure shown in Figure 7.1.1. The dotted triangle in Figure 7.1.1 can be skipped in the enumeration by Theorem 7.1.1. 54 Symmetric with respect to n. = Figure 7.1.1. Illustration of Theorem 7.1.1 (The dotted triangle can be skipped) . For any two permutations ri and r\ on X = {x , x , ..., x }, 1 2. 12 n define a permutation r) °r\ on X as n °n (x.) = n (n, (x.)) 2 1 i 2 1 i i for all x. in X. A permutation <1i°n o . . . °rT\ on X is denoted by n ■ Symmetric permutations have the following property. Theorem 7.1.2 If n and n are two symmetric permutations of the minimal covering problem (P), then n 9 °n-, is also a symmetric permutation of the problem (P) . Proof If (x , x , ..., x ) is a feasible equation of the problem (P) , (n, (x ), n,(x ), ..., n (x )) is also a feasible solution of (P) since ill/ in n, is symmetric. If (n (x. ) , n, (x ) , ,.., n,(x )) Is a feasible solution 1 l i 1 z in of (P), (n 2 (n 1 (x )), n 2 (n L (x 2 )), ..., n 2 (n 1 (x )) is also a feasible solution of (P), since ru is symmetric. Thus it can be concluded that if (x.j, x , . . . , x ) is a feasible solution of (P) , (n-Cn, (x )) , n 2 (n ] (x 2 )), ..., n 2 (n 1 (x ))) is also a feasible solution of (P) , i.e., r\ °r\ is synmift ric. Q.E.D. 
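Theorem 7.1.1 can be wired into a small branch-and-bound search. The sketch below is a hypothetical illustration, not the thesis's program: it branches on one column at a time and, at the root only, where the symmetry of the whole problem (P) is known to hold, it also fixes η(x_j) to 0 in the x_j = 0 branch after the x_j = 1 branch has been enumerated. Deeper in the tree a permutation may be destroyed by earlier fixings, a point taken up in Section 7.3.

```python
def min_cover_value(rows, n, sym=None):
    """Implicit enumeration for the minimal covering problem.  `rows` lists
    each row of A as the set of its 1-columns; sym[j] = i encodes a symmetric
    permutation eta with eta(x_j) = x_i (0-based).  Branch on a column j of a
    shortest uncovered row: first x_j = 1, then x_j = 0.  At the root, the
    x_j = 0 branch may also fix eta(x_j) = 0 (Theorem 7.1.1), since every
    solution there with eta(x_j) = 1 is mirrored by an already-enumerated
    solution of equal value with x_j = 1."""
    best = [n]                                   # all columns: always feasible

    def search(free, ones, at_root):
        uncovered = [r for r in rows if not (r & ones)]
        if not uncovered:
            best[0] = min(best[0], len(ones))
            return
        if len(ones) + 1 >= best[0]:
            return                               # at least one more column needed
        cands = min(uncovered, key=len) & free
        if not cands:
            return                               # this row can no longer be covered
        j = min(cands)
        search(free - {j}, ones | {j}, False)    # subproblem x_j = 1
        banned = {j, sym[j]} if (at_root and sym) else {j}
        search(free - banned, ones, False)       # subproblem x_j = 0

    search(set(range(n)), set(), True)
    return best[0]

# A 3-row problem with rows {x1,x2}, {x2,x3}, {x3,x1} is symmetric under the
# cyclic shift x1 -> x2 -> x3 -> x1; pruned and unpruned searches agree.
rows = [{0, 1}, {1, 2}, {2, 0}]
plain = min_cover_value(rows, 3)
pruned = min_cover_value(rows, 3, sym=[1, 2, 0])
```

The pruning never changes the optimal value, only the number of nodes visited, which is the point of Theorem 7.1.1.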
Corollary 7.1.3  If η is a symmetric permutation of the minimal covering problem (P), then η^i is also a symmetric permutation of the problem (P) for any given positive integer i.

Proof  Since η^k = η∘η^(k-1) holds, and η^k is a symmetric permutation of (P) by Theorem 7.1.2 whenever η and η^(k-1) are, the claim follows by repeating this argument for increasingly greater k until k = i. Q.E.D.

From the above corollary, η^2, η^3, ... are all symmetric permutations of (P) if η is a symmetric permutation of (P). Since the number of different permutations of all the variables in X is finite, there exists an integer α such that η^α = I, where I is the identity permutation (i.e., the permutation which maps each variable to itself), as is well known in group theory. Let α be the smallest positive integer such that η^α = I. Then η is called a generator of the symmetric permutations η, η^2, ..., η^(α-1), and η^i is said to be generated from η for each i = 1, 2, ..., α-1.

Example 7.1.2  Let us assume that the permutation η given in (7.1.2) is a symmetric permutation of the problem (P) with the constraint matrix (7.1.1). By Corollary 7.1.3,

η^2: x_1 → x_5,  x_2 → x_6,  x_3 → x_1,  x_4 → x_2,  x_5 → x_3,  x_6 → x_4

is also a symmetric permutation. By the definitions of η and η^2,

η^3 = η^2∘η: x_1 → x_1,  x_2 → x_2,  ...,  x_6 → x_6

is the identity permutation. Thus η is a generator of the symmetric permutations η and η^2. Since η^2∘η^2 = η^4 = η and η^2∘η^2∘η^2 = η^6 = I, η^2 is also a generator of the symmetric permutations η and η^2. ∎

7.2 Symmetric Permutations of the Problem Formulated from the Logic Minimization Problem

Let f(y_1, y_2, ..., y_t) be a switching function of the variables* y_1, y_2, ..., y_t. A permutation λ on Y = {y_1, y_2, ..., y_t} is said to be a symmetric permutation of f(y_1, y_2, ..., y_t) if f(y_1, y_2, ..., y_t) = f(λ(y_1), λ(y_2), ..., λ(y_t)). A switching function f is said to be λ-symmetric (or permutation symmetric) if there exists some symmetric permutation of this function.
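The powers and generators used in Example 7.1.2 and in what follows can be computed mechanically. A small sketch, encoding permutations 0-based as image lists and using the η of (7.1.2):

```python
def compose(p, q):
    """Composition p∘q of permutations given as 0-based image lists:
    (p∘q)[i] = p[q[i]]."""
    return [p[q[i]] for i in range(len(q))]

def generated_powers(eta):
    """Return [eta, eta^2, ..., eta^alpha], where alpha is the smallest
    positive integer with eta^alpha = I; alpha exists because the
    permutations of a finite set form a finite group."""
    identity = list(range(len(eta)))
    powers, current = [], identity
    while True:
        current = compose(eta, current)
        powers.append(current)
        if current == identity:
            return powers

# eta of (7.1.2): x1->x3, x2->x4, x3->x5, x4->x6, x5->x1, x6->x2.
eta = [2, 3, 4, 5, 0, 1]
powers = generated_powers(eta)   # eta, eta^2, and eta^3 = I, so alpha = 3
```

For this η the loop stops at α = 3, matching Example 7.1.2, where η^3 is the identity.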
Notice that even if a switching function is λ-symmetric, the function is not necessarily symmetric (partially or totally).**

* Inequality variables are denoted by x_i's, whereas switching variables are denoted by y_i's.
** In switching theory, a function f(y_1, y_2, ..., y_t) is symmetric in y_1, y_2, ..., y_t if it is unchanged under every permutation of y_1, y_2, ..., y_t.

For a switching function f(y_1, y_2, ..., y_t) and a permutation λ on Y = {y_1, y_2, ..., y_t}, λ(f)(y_1, y_2, ..., y_t) is used to denote f(λ(y_1), λ(y_2), ..., λ(y_t)).

Example 7.2.1  Let us consider the switching function

f(y_1, y_2, y_3, y_4) = y_1·y_2·y̅_3 ∨ y_2·y_3·y̅_4 ∨ y̅_1·y_3·y_4 ∨ y_1·y̅_2·y_4.   (7.2.1)

Let λ_1 be a permutation on {y_1, y_2, y_3, y_4} defined as

λ_1: y_1 → y_2,  y_2 → y_3,  y_3 → y_4,  y_4 → y_1.   (7.2.2)

Then λ_1(f)(y_1, y_2, y_3, y_4) = f(λ_1(y_1), λ_1(y_2), λ_1(y_3), λ_1(y_4)) = f(y_2, y_3, y_4, y_1) = y_2·y_3·y̅_4 ∨ y_3·y_4·y̅_1 ∨ y̅_2·y_4·y_1 ∨ y_2·y̅_3·y_1 = f(y_1, y_2, y_3, y_4). Thus λ_1 is a symmetric permutation of f. ∎

Symmetric permutations of a switching function f have the following property.

Theorem 7.2.1  If λ_1 and λ_2 are two symmetric permutations of a switching function f, then the permutation λ_1∘λ_2 defined by λ_1∘λ_2(y_i) = λ_1(λ_2(y_i)) for i = 1, 2, ..., t is also a symmetric permutation of f.

The proof of the above property is omitted here since it is just the same as that of Theorem 7.1.2. Similarly to Corollary 7.1.3, it can be proved that if λ is a symmetric permutation of f, then λ^i = λ∘λ∘···∘λ (i appearances of λ) is also a symmetric permutation of f for any positive integer i. Since the number of different permutations of the variables y_1, y_2, ..., y_t is finite, there is a smallest positive integer α such that λ^α = I, the identity permutation.

Let us consider Example 7.2.1 again. As can easily be seen,

λ_1^2 = λ_1∘λ_1: y_1 → y_3,  y_2 → y_4,  y_3 → y_1,  y_4 → y_2,
λ_1^3 = λ_1∘λ_1^2: y_1 → y_4,  y_2 → y_1,  y_3 → y_2,  y_4 → y_3

are symmetric permutations of the function f defined in (7.2.1), and

λ_1^4 = λ_1∘λ_1^3: y_1 → y_1,  y_2 → y_2,  y_3 → y_3,  y_4 → y_4

is the identity permutation.

In the following, we shall show that, corresponding to each symmetric permutation λ of a switching function f, there exists a symmetric permutation λ̄ of the minimal covering problem formulated for the logic minimization problem of f. First, λ is extended to complemented literals by defining λ(y̅_i) to be the complement of λ(y_i) for i = 1, 2, ..., t.

Lemma 7.2.2  If λ is a symmetric permutation of f(y_1, y_2, ..., y_t) and q = Z_1·Z_2···Z_k, where Z_i = y_{j_i} or y̅_{j_i} for some j_i ∈ {1, 2, ..., t}, is an implicant* of f, then λ(q) = λ(Z_1)·λ(Z_2)···λ(Z_k) is also an implicant of f.

Proof  Let us first prove this lemma for the case where f is a single-output switching function. Since q = Z_1·Z_2···Z_k is an implicant of f, f can be written as f = f′ ∨ Z_1·Z_2···Z_k with some switching function f′(y_1, y_2, ..., y_t). Since λ is a symmetric permutation of f, f(y_1, y_2, ..., y_t) = λ(f)(y_1, y_2, ..., y_t) = f′(λ(y_1), λ(y_2), ..., λ(y_t)) ∨ λ(Z_1)·λ(Z_2)···λ(Z_k). The above equalities show that λ(Z_1)·λ(Z_2)···λ(Z_k) is also an implicant of f(y_1, y_2, ..., y_t). For the case where f is a multiple-output switching function, this lemma can be similarly proved. Q.E.D.

The symmetric permutations λ of a switching function have the following further property.

Theorem 7.2.3  If λ is a symmetric permutation of f(y_1, y_2, ..., y_t) and q = Z_1·Z_2···Z_k, where Z_i = y_{j_i} or y̅_{j_i} for some j_i ∈ {1, 2, ..., t}, is a prime implicant** of f, then λ(q) = λ(Z_1)·λ(Z_2)···λ(Z_k) is also a prime implicant of f.

* Either a single-output implicant or a multiple-output implicant.
** Either a single-output prime implicant or a multiple-output prime implicant.

Proof  Let us first prove this theorem for the case where f is a single-output switching function. From Lemma 7.2.2, λ(Z_1)·λ(Z_2)···λ(Z_k) is an implicant of f. If λ(Z_1)·λ(Z_2)···λ(Z_k) is not a prime implicant of f, then there exists some term q′ subsumed by λ(Z_1)·λ(Z_2)···λ(Z_k) such that q′ is an implicant of f.
Since q′ is subsumed by λ(Z_1)·λ(Z_2)···λ(Z_k), q′ = λ(Z_{ℓ_1})·λ(Z_{ℓ_2})···λ(Z_{ℓ_p}) must hold with ℓ_1, ℓ_2, ..., ℓ_p ∈ {1, 2, ..., k}. Let α be a positive integer such that λ^α is the identity permutation. Since λ^(α-1) is also a symmetric permutation (as can easily be seen), and q′ = λ(Z_{ℓ_1})·λ(Z_{ℓ_2})···λ(Z_{ℓ_p}) is an implicant of f,

λ^(α-1)(q′) = λ^(α-1)(λ(Z_{ℓ_1}))·λ^(α-1)(λ(Z_{ℓ_2}))···λ^(α-1)(λ(Z_{ℓ_p})) = λ^α(Z_{ℓ_1})·λ^α(Z_{ℓ_2})···λ^α(Z_{ℓ_p}) = Z_{ℓ_1}·Z_{ℓ_2}···Z_{ℓ_p}

is also an implicant of f, by Lemma 7.2.2. Since Z_1·Z_2···Z_k subsumes Z_{ℓ_1}·Z_{ℓ_2}···Z_{ℓ_p}, Z_1·Z_2···Z_k is not a prime implicant of f. This contradicts the assumption that Z_1·Z_2···Z_k is a prime implicant of f. For the case where f is multiple-output, this theorem can be similarly proved. Q.E.D.

A symmetric permutation λ has another property, as stated in the following theorem.

Theorem 7.2.4  If q_i and q_j are two different prime implicants* of a switching function f and λ is a symmetric permutation of f, then λ(q_i) and λ(q_j) are two different prime implicants of f.

* Either a single-output prime implicant or a multiple-output prime implicant.

Proof  Let us first prove the case where f is single-output. From Theorem 7.2.3, λ(q_i) and λ(q_j) are prime implicants of f. If λ(q_i) = λ(q_j), then both terms must have identical literals. Let a literal λ(Z_k) of λ(q_i) be equal to some literal λ(Z_h) of λ(q_j). Since λ is a permutation, Z_k must be equal to Z_h. Thus each literal of q_i is equal to a literal of q_j, and each literal of q_j is equal to a literal of q_i. Consequently, q_i = q_j. This contradicts the assumption that q_i and q_j are two different prime implicants of f. For the case where f is multiple-output, this theorem can be similarly proved, since a multiple-output prime implicant of a multiple-output function is defined as a single-output prime implicant of some product of the single-output functions of f. Q.E.D.
From Theorems 7.2.3 and 7.2.4, each symmetric permutation λ of f defines a permutation on the set Q = {q_1, q_2, ..., q_n} of all the prime implicants (or all the multiple-output prime implicants) of f by λ(q_i) = λ(Z_1)·λ(Z_2)···λ(Z_k) if q_i = Z_1·Z_2···Z_k. Now a permutation λ̄ of the minimal covering problem formulated for minimizing the logic expression of a switching function f (see Section 2.1) is defined by

λ̄(x_j) = x_i if and only if λ(q_i) = q_j.   (7.2.3)

The symmetric property of λ̄ is shown in the following theorem.

Theorem 7.2.5  Let Q = {q_1, q_2, ..., q_n} be the set of all prime implicants of a single-output switching function f (or the set of all multiple-output prime implicants of a multiple-output switching function f), and let λ be a symmetric permutation of f. Then the permutation λ̄ of the minimal covering problem (P) formulated for minimizing the logic expression of f, defined by (7.2.3), is a symmetric permutation.

Proof  Let us first prove the case where f is single-output. Let (x_1, x_2, ..., x_n) be a feasible solution of (P), and let x_{i_1}, x_{i_2}, ..., x_{i_s} be all the variables which are fixed to 1 in this feasible solution. Then f = q_{i_1} ∨ q_{i_2} ∨ ··· ∨ q_{i_s}. Let q_{j_1}, q_{j_2}, ..., q_{j_s} be the prime implicants of f such that

λ(q_{i_k}) = q_{j_k} for k = 1, 2, ..., s.   (7.2.4)

Since λ is a symmetric permutation of f,

f(y_1, y_2, ..., y_t) = f(λ(y_1), λ(y_2), ..., λ(y_t)) = λ(q_{i_1}) ∨ λ(q_{i_2}) ∨ ··· ∨ λ(q_{i_s}) = q_{j_1} ∨ q_{j_2} ∨ ··· ∨ q_{j_s}.

The above equalities show that {q_{j_1}, q_{j_2}, ..., q_{j_s}} is a feasible solution set of f. From (7.2.3) and (7.2.4), (λ̄(x_1), λ̄(x_2), ..., λ̄(x_n)) is a vector with λ̄(x_{j_k}) = x_{i_k} = 1 for k = 1, 2, ..., s. Since {q_{j_1}, q_{j_2}, ..., q_{j_s}} is a feasible solution set and λ̄(x_{j_k}) = 1 for k = 1, 2, ..., s, (λ̄(x_1), λ̄(x_2), ..., λ̄(x_n)) is also a feasible solution of (P). For the case where f is multiple-output, this theorem can be similarly proved. Q.E.D.
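Definition (7.2.3) can be computed mechanically once prime implicants are represented explicitly. The sketch below is an illustration under stated assumptions: a product term is encoded as a frozenset of (variable, polarity) literals, and the function raises an error when λ fails to permute the prime implicants, which by Theorem 7.2.3 happens exactly when λ is not a symmetric permutation.

```python
def apply_perm(term, lam):
    """lambda applied to a product term: the literal on variable v (of either
    polarity) becomes the same-polarity literal on lam[v]."""
    return frozenset((lam[v], pol) for v, pol in term)

def induced_bar_perm(primes, lam):
    """Definition (7.2.3): bar(x_j) = x_i iff lam(q_i) = q_j.  `primes` is the
    list [q_1, ..., q_n]; the result is 0-based, bar[j] = i."""
    index = {q: i for i, q in enumerate(primes)}
    bar = [None] * len(primes)
    for i, q in enumerate(primes):
        image = apply_perm(q, lam)
        if image not in index:
            raise ValueError("lam does not permute the prime implicants")
        bar[index[image]] = i
    return bar

# Toy (hypothetical) function f = y1 v y2, whose prime implicants are
# q1 = y1 and q2 = y2; exchanging y1 and y2 is a symmetric permutation of f.
q1 = frozenset({("y1", True)})
q2 = frozenset({("y2", True)})
bar = induced_bar_perm([q1, q2], {"y1": "y2", "y2": "y1"})
```

Note that, because of the "if and only if" in (7.2.3), λ̄ is the inverse of the action of λ on the list of prime implicants, not the action itself.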
On the minimal covering problem formulated for minimizing the logic expression of a switching function f, the symmetric permutation obtained from a symmetric permutation λ of f is denoted by λ̄.

Example 7.2.2  The permutation λ_1 defined by (7.2.2) is a symmetric permutation of the switching function f defined in (7.2.1). All the prime implicants of f are:

q_1 = y_1·y_2·y̅_4,   q_2 = y_2·y_3·y̅_4,   q_3 = y̅_1·y_3·y_4,   q_4 = y_1·y̅_2·y_4,
q_5 = y_1·y_2·y̅_3,   q_6 = y̅_1·y_2·y_3,   q_7 = y_1·y̅_3·y_4,   q_8 = y̅_2·y_3·y_4.

All the true vectors of this function are:

y_1 = (1, 1, 0, 0),   y_2 = (0, 1, 1, 0),   y_3 = (1, 1, 1, 0),   y_4 = (1, 0, 0, 1),
y_5 = (1, 1, 0, 1),   y_6 = (0, 0, 1, 1),   y_7 = (1, 0, 1, 1),   y_8 = (0, 1, 1, 1).

The prime implicant table of this function is as follows:

          q_1  q_2  q_3  q_4  q_5  q_6  q_7  q_8
    y_1    1    0    0    0    1    0    0    0
    y_2    0    1    0    0    0    1    0    0
    y_3    1    1    0    0    0    0    0    0
A = y_4    0    0    0    1    0    0    1    0      (7.2.5)
    y_5    0    0    0    0    1    0    1    0
    y_6    0    0    1    0    0    0    0    1
    y_7    0    0    0    1    0    0    0    1
    y_8    0    0    1    0    0    1    0    0

The minimal covering problem (P) formulated for the logic minimization problem of f is as follows:

minimize x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_7 + x_8
subject to A·(x_1, x_2, ..., x_8)^T ≥ (1, 1, 1, 1, 1, 1, 1, 1)^T,   (7.2.6)
x_i = 0 or 1, for i = 1, 2, ..., 8.

The symmetric permutation λ̄_1 of this problem corresponding to the symmetric permutation λ_1 of f is

λ̄_1: x_1 → x_7,  x_2 → x_5,  x_3 → x_2,  x_4 → x_3,  x_5 → x_4,  x_6 → x_1,  x_7 → x_8,  x_8 → x_6.   (7.2.7)

Consider another permutation λ_2 defined on {y_1, y_2, y_3, y_4} by

λ_2: y_1 → y_2,  y_2 → y_1,  y_3 → y_4,  y_4 → y_3.   (7.2.8)

Then f(λ_2(y_1), λ_2(y_2), λ_2(y_3), λ_2(y_4)) = f(y_2, y_1, y_4, y_3) = y_2·y_1·y̅_4 ∨ y_1·y_4·y̅_3 ∨ y̅_2·y_4·y_3 ∨ y_2·y̅_1·y_3. Finding all the prime implicants of f(y_2, y_1, y_4, y_3) by the iterated consensus method [30], all the prime implicants of f(y_2, y_1, y_4, y_3) are exactly the same as those of f(y_1, y_2, y_3, y_4). So f(y_1, y_2, y_3, y_4) = f(λ_2(y_1), λ_2(y_2), λ_2(y_3), λ_2(y_4)) holds, and λ_2 is also a symmetric permutation of f.
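The claim that λ̄_1 of (7.2.7) is a symmetric permutation of the problem (7.2.6) can be confirmed by exhaustion over all 2^8 zero-one vectors. The sketch below encodes each row of the prime implicant table of Example 7.2.2 as the set of its 1-columns (0-based); the concrete entries are a reconstruction from the example's prime implicant and true-vector lists and should be treated as assumptions.

```python
from itertools import product

# Rows of the table (7.2.5) as 0-based column sets, and bar-lambda_1 of
# (7.2.7) encoded as BAR1[i] = j, meaning x_{i+1} -> x_{j+1}.
ROWS = [{0, 4}, {1, 5}, {0, 1}, {3, 6}, {4, 6}, {2, 7}, {3, 7}, {2, 5}]
BAR1 = [6, 4, 1, 2, 3, 0, 7, 5]

def feasible(x):
    return all(any(x[j] for j in r) for r in ROWS)

def is_symmetric(perm):
    """The Section 7.1 definition, checked over all 0-1 vectors: whenever x
    is feasible, the permuted vector (x[perm[0]], ..., x[perm[n-1]]) must
    also be feasible."""
    n = len(perm)
    return all(
        feasible(tuple(x[perm[i]] for i in range(n)))
        for x in product((0, 1), repeat=n)
        if feasible(x)
    )

ok = is_symmetric(BAR1)                        # bar-lambda_1 passes
bad = is_symmetric([1, 0, 2, 3, 4, 5, 6, 7])   # merely swapping x_1, x_2 fails
```

The exhaustive test is exactly the "cumbersome work" mentioned in Example 7.1.1; Section 7.4 replaces it with a polynomial-time check.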
Corresponding to this symmetric permutation λ_2 of f, there is another symmetric permutation

λ̄_2: x_1 → x_5,  x_2 → x_7,  x_3 → x_8,  x_4 → x_6,  x_5 → x_1,  x_6 → x_4,  x_7 → x_2,  x_8 → x_3   (7.2.9)

of the problem (7.2.6). ∎

The following example shows that some minimal covering problems may have symmetric permutations that need more than one generator. The permutations λ̄_1 and λ̄_2 defined in (7.2.7) and (7.2.9) are symmetric permutations of the problem (7.2.6). From the definitions of λ̄_1 and λ̄_2, it is easy to see that

(1) λ̄_1^4 = λ̄_2^2 = I, the identity permutation;
(2) λ̄_1 ≠ λ̄_2^i for any positive integer i;
(3) λ̄_2 ≠ λ̄_1^i for any positive integer i.

Thus the minimal covering problem of Example 7.2.2 has symmetric permutations that need more than one generator.

Now let us show a property of the symmetric permutations of the problem obtained from the logic minimization problem of a switching function f. If λ_1 and λ_2 are two symmetric permutations of a switching function f, then λ_1∘λ_2 is also a symmetric permutation of f, by Theorem 7.2.1. Corresponding to λ_1∘λ_2 of f, there is the symmetric permutation (λ_1∘λ_2)¯ of the problem (P) formulated for the logic minimization problem of f. (λ_1∘λ_2)¯ has the following property.

Theorem 7.2.6  If λ_1 and λ_2 are two symmetric permutations of a switching function f, then (λ_1∘λ_2)¯ = λ̄_2∘λ̄_1, where λ̄_1, λ̄_2 and (λ_1∘λ_2)¯ are the symmetric permutations of the minimal covering problem formulated from the logic minimization problem of f corresponding to λ_1, λ_2 and λ_1∘λ_2, respectively.

Proof  Let Q = {q_1, q_2, ..., q_n} be the set of all prime implicants of f. From the definition of (λ_1∘λ_2)¯,

(λ_1∘λ_2)¯(x_j) = x_i if and only if λ_1∘λ_2(q_i) = q_j.   (7.2.10)

From the definitions of λ̄_1 and λ̄_2,

λ̄_1(x_j) = x_ℓ if and only if λ_1(q_ℓ) = q_j,   (7.2.11)

and

λ̄_2(x_ℓ) = x_i if and only if λ_2(q_i) = q_ℓ.   (7.2.12)

By (7.2.10), (7.2.11), and (7.2.12), we have

λ̄_2∘λ̄_1(x_j) = x_i if and only if λ̄_1(x_j) = x_ℓ and λ̄_2(x_ℓ) = x_i for some ℓ,
  if and only if λ_1(q_ℓ) = q_j and λ_2(q_i) = q_ℓ for some ℓ,
  if and only if λ_1∘λ_2(q_i) = λ_1(λ_2(q_i)) = λ_1(q_ℓ) = q_j,
  if and only if (λ_1∘λ_2)¯(x_j) = x_i.   (7.2.13)

From (7.2.13), (λ_1∘λ_2)¯ = λ̄_2∘λ̄_1. Q.E.D.

Example 7.2.3  The permutations λ_1 and λ_2 defined in (7.2.2) and (7.2.8) are symmetric permutations of the switching function f defined in (7.2.1). By the definitions of λ_1 and λ_2,

λ_2∘λ_1: y_1 → y_1,  y_2 → y_4,  y_3 → y_3,  y_4 → y_2.   (7.2.14)

The symmetric permutation (λ_2∘λ_1)¯ on the problem (7.2.6) corresponding to λ_2∘λ_1 is

(λ_2∘λ_1)¯: x_1 → x_4,  x_2 → x_8,  x_3 → x_6,  x_4 → x_1,  x_5 → x_7,  x_6 → x_3,  x_7 → x_5,  x_8 → x_2   (7.2.15)

by definition (7.2.3). From (7.2.7) and (7.2.9),

λ̄_1∘λ̄_2: x_1 → x_4,  x_2 → x_8,  x_3 → x_6,  x_4 → x_1,  x_5 → x_7,  x_6 → x_3,  x_7 → x_5,  x_8 → x_2,

which is exactly the same as (7.2.15). ∎

Suppose the problem (P) is obtained from the logic minimization problem of a switching function f. The question arises whether, for each symmetric permutation of (P), there exists a corresponding symmetric permutation of f. The following example shows that the answer is negative.

Example 7.2.4  Suppose that

f_1(y_1, y_2, y_3, y_4, y_5, y_6) = y_1·y_2·y̅_3·y_5·y̅_6 ∨ y_2·y_3·y̅_4·y_5·y̅_6 ∨ y̅_1·y_3·y_4·y_5·y̅_6 ∨ y_1·y̅_2·y_4·y_5·y̅_6,

f_2(y_1, y_2, y_3, y_4, y_5, y_6) = y_1·y̅_2·y_3·y̅_5·y_6 ∨ y_2·y_3·y̅_4·y̅_5·y_6 ∨ y̅_1·y_2·y_4·y̅_5·y_6 ∨ y_1·y̅_3·y_4·y̅_5·y_6,

and f(y_1, y_2, y_3, y_4, y_5, y_6) = f_1(y_1, ..., y_6) ∨ f_2(y_1, ..., y_6) are given. It is easy to see that

λ_1: y_1 → y_2,  y_2 → y_3,  y_3 → y_4,  y_4 → y_1,  y_5 → y_5,  y_6 → y_6   (7.2.16)

is a symmetric permutation of f_1, and

λ_2: y_1 → y_3,  y_2 → y_4,  y_3 → y_2,  y_4 → y_1,  y_5 → y_5,  y_6 → y_6   (7.2.17)

is a symmetric permutation of f_2. It is also easy to see that λ_1 is not a symmetric permutation of f_2 and λ_2 is not a symmetric permutation of f_1. Thus λ_1 and λ_2 are not symmetric permutations of f. Let us consider the logic minimization problem of the switching function f.
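The composition-reversal rule (λ_1∘λ_2)¯ = λ̄_2∘λ̄_1 proved in this section can be checked numerically on Example 7.2.2. The forward actions of λ_1 and λ_2 on q_1, ..., q_8 used below follow the reconstruction of that example and should be treated as assumptions; λ̄ is then the inverse of the forward action, per definition (7.2.3).

```python
def compose(p, q):
    """(p∘q)[i] = p[q[i]] for permutations given as 0-based image lists."""
    return [p[q[i]] for i in range(len(q))]

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return inv

# Forward actions on the prime implicants of Example 7.2.2:
# fwd[i] = j means lambda(q_{i+1}) = q_{j+1}.
fwd1 = [5, 2, 3, 4, 1, 7, 0, 6]   # lambda_1 of (7.2.2)
fwd2 = [4, 6, 7, 5, 0, 3, 1, 2]   # lambda_2 of (7.2.8)

bar1, bar2 = inverse(fwd1), inverse(fwd2)    # definition (7.2.3)

lhs = inverse(compose(fwd1, fwd2))           # bar of (lambda_1 ∘ lambda_2)
rhs = compose(bar2, bar1)                    # bar-lambda_2 ∘ bar-lambda_1
```

Since inversion reverses composition order for any permutations, the identity holds for any pair of forward actions; the example data just makes it concrete.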
All the prime implicants of f are:

q_1 = y_1·y_2·y̅_3·y_5·y̅_6,    q_2 = y_2·y_3·y̅_4·y_5·y̅_6,
q_3 = y̅_1·y_3·y_4·y_5·y̅_6,    q_4 = y_1·y̅_2·y_4·y_5·y̅_6,
q_5 = y_1·y̅_2·y_3·y̅_5·y_6,    q_6 = y_2·y_3·y̅_4·y̅_5·y_6,
q_7 = y̅_1·y_2·y_4·y̅_5·y_6,    q_8 = y_1·y̅_3·y_4·y̅_5·y_6,
q_9 = y_1·y_2·y̅_4·y_5·y̅_6,    q_10 = y̅_1·y_2·y_3·y_5·y̅_6,
q_11 = y_1·y̅_3·y_4·y_5·y̅_6,   q_12 = y̅_2·y_3·y_4·y_5·y̅_6,
q_13 = y_1·y_3·y̅_4·y̅_5·y_6,   q_14 = y̅_1·y_2·y_3·y̅_5·y_6,
q_15 = y_1·y̅_2·y_4·y̅_5·y_6,   q_16 = y_2·y̅_3·y_4·y̅_5·y_6.

All the true vectors of this function are:

y_1 = (1, 1, 0, 0, 1, 0),   y_2 = (0, 1, 1, 0, 1, 0),   y_3 = (1, 1, 1, 0, 1, 0),   y_4 = (1, 0, 0, 1, 1, 0),
y_5 = (1, 1, 0, 1, 1, 0),   y_6 = (0, 0, 1, 1, 1, 0),   y_7 = (1, 0, 1, 1, 1, 0),   y_8 = (0, 1, 1, 1, 1, 0),
y_9 = (1, 0, 1, 0, 0, 1),   y_10 = (0, 1, 1, 0, 0, 1),  y_11 = (1, 1, 1, 0, 0, 1),  y_12 = (1, 0, 0, 1, 0, 1),
y_13 = (0, 1, 0, 1, 0, 1),  y_14 = (1, 1, 0, 1, 0, 1),  y_15 = (1, 0, 1, 1, 0, 1),  y_16 = (0, 1, 1, 1, 0, 1).

The prime implicant table for f is as follows:

           q1 q2 q3 q4 q5 q6 q7 q8 q9 q10 q11 q12 q13 q14 q15 q16
    y_1     1  0  0  0  0  0  0  0  1   0   0   0   0   0   0   0
    y_2     0  1  0  0  0  0  0  0  0   1   0   0   0   0   0   0
    y_3     0  1  0  0  0  0  0  0  1   0   0   0   0   0   0   0
    y_4     0  0  0  1  0  0  0  0  0   0   1   0   0   0   0   0
    y_5     1  0  0  0  0  0  0  0  0   0   1   0   0   0   0   0
    y_6     0  0  1  0  0  0  0  0  0   0   0   1   0   0   0   0
    y_7     0  0  0  1  0  0  0  0  0   0   0   1   0   0   0   0
A = y_8     0  0  1  0  0  0  0  0  0   1   0   0   0   0   0   0      (7.2.18)
    y_9     0  0  0  0  1  0  0  0  0   0   0   0   1   0   0   0
    y_10    0  0  0  0  0  1  0  0  0   0   0   0   0   1   0   0
    y_11    0  0  0  0  0  1  0  0  0   0   0   0   1   0   0   0
    y_12    0  0  0  0  0  0  0  1  0   0   0   0   0   0   1   0
    y_13    0  0  0  0  0  0  1  0  0   0   0   0   0   0   0   1
    y_14    0  0  0  0  0  0  0  1  0   0   0   0   0   0   0   1
    y_15    0  0  0  0  1  0  0  0  0   0   0   0   0   0   1   0
    y_16    0  0  0  0  0  0  1  0  0   0   0   0   0   1   0   0

It is coincidental that the number of prime implicants and the number of true vectors of f are the same in this example.

The logic minimization problem for f is as follows:

minimize x_1 + x_2 + ··· + x_16
subject to A·(x_1, x_2, ..., x_16)^T ≥ (1, 1, ..., 1)^T,   (7.2.19)
x_i = 0 or 1 for i = 1, 2, ..., 16,

where A is the matrix in (7.2.18). It is easy to see that q_1, q_2, q_3, q_4, q_9, q_10, q_11, q_12 are prime implicants of f_1 and q_5, q_6, q_7, q_8, q_13, q_14, q_15, q_16 are prime implicants of f_2.
Let η be the permutation of the problem (7.2.19) obtained by permuting {x_1, x_2, x_3, x_4, x_9, x_10, x_11, x_12} according to λ̄_1 (the permutation induced by λ_1 on the prime implicants of f_1) and permuting {x_5, x_6, x_7, x_8, x_13, x_14, x_15, x_16} according to λ̄_2 (the permutation induced by λ_2 on the prime implicants of f_2), i.e., η is defined as

η: x_1 → x_4,   x_2 → x_1,   x_3 → x_2,   x_4 → x_3,
   x_5 → x_8,   x_6 → x_5,   x_7 → x_6,   x_8 → x_7,
   x_9 → x_11,  x_10 → x_9,  x_11 → x_12, x_12 → x_10,
   x_13 → x_15, x_14 → x_13, x_15 → x_16, x_16 → x_14.   (7.2.20)

It will be proved in Section 7.4 that η is a symmetric permutation of the problem (7.2.19). But there is no symmetric permutation λ of the switching function f (i.e., a symmetric permutation among the switching variables y_i) corresponding to the η of (7.2.20). ∎

7.3 Complete Characterization of Symmetric Permutations

The symmetric property of a minimal covering problem can be described by many symmetric permutations. If the permutations η_1, η_2, ..., η_h are used to describe the symmetric property, this symmetric minimal covering problem is said to be characterized by η_1, η_2, ..., η_h. A symmetric minimal covering problem is said to be more explicitly characterized by η_1, η_2, ..., η_h than by symmetric permutations η′_1, η′_2, ..., η′_k if each η′_i can be expressed as a concatenation of η_1, η_2, ..., η_h.

Example 7.3.1  Suppose

η_1: x_1 → x_2,  x_2 → x_3,  x_3 → x_1,  x_4 → x_5,  x_5 → x_6,  x_6 → x_4,   (7.3.1)
η_2: x_1 → x_2,  x_2 → x_3,  x_3 → x_1,  x_4 → x_4,  x_5 → x_5,  x_6 → x_6,   (7.3.2)
η_3: x_1 → x_1,  x_2 → x_2,  x_3 → x_3,  x_4 → x_5,  x_5 → x_6,  x_6 → x_4   (7.3.3)

are three symmetric permutations of the problem (P). Since η_1 = η_2∘η_3, the problem (P) is more explicitly characterized by η_2 and η_3 than by η_1. ∎

When a variable is fixed to 1 in a symmetric problem, some symmetric permutations may be destroyed and other symmetric permutations may be preserved. It will be shown later, by Lemma 7.6.9 (Section 7.6), that if η is a symmetric permutation and η(x_{k_0}) = x_{k_0}, then the symmetric permutation η is preserved in the subproblem with x_{k_0} fixed to 1. As an example, let us consider Example 7.3.1 again.
If x_1 is fixed to 1 in the problem (P), then the symmetric permutations η_1 and η_2 are destroyed in the subproblem with x_1 fixed to 1, and the symmetric permutation η_3 is preserved, since η_3(x_1) = x_1. If the symmetric problem (P) is characterized by η_1 only (without η_2 and η_3) in Example 7.3.1, then the symmetric property in the subproblem with x_1 fixed to 1 cannot be detected.

A symmetric permutation η of the problem (P) is said to be completely characterized by symmetric permutations η_1, η_2, ..., η_k of (P) if

(1) η = η_{j_1}∘η_{j_2}∘···∘η_{j_r} for some j_1, j_2, ..., j_r ∈ {1, 2, ..., k}, and
(2) η cannot be expressed as a concatenation of symmetric permutations other than η_1, η_2, ..., η_k and their concatenations.

Example 7.3.2  If η_1, η_2, η_3 of (7.3.1), (7.3.2), (7.3.3) and their concatenations in Example 7.3.1 are the only symmetric permutations of the problem (P), then

(1) η_1 = η_2∘η_3;
(2) η_1 cannot be expressed as a concatenation of symmetric permutations other than η_1, η_2 and η_3, since they are the only symmetric permutations of the problem (P), by assumption.

So η_1 is completely characterized by η_2 and η_3. ∎

Now let us assume that a symmetric permutation η of the problem (P) is completely characterized by some other symmetric permutations η_1, η_2, ..., η_k. When a variable x_{k_0} is fixed to 1, some of η_1, η_2, ..., η_k are preserved and others are destroyed in the subproblem with x_{k_0} fixed to 1. If η can be expressed as η_{j_1}∘η_{j_2}∘···∘η_{j_p} and η_{j_1}, η_{j_2}, ..., η_{j_p} are all preserved in the subproblem with x_{k_0} = 1, then η is also preserved. If some of η_{j_1}, η_{j_2}, ..., η_{j_p} is destroyed in the subproblem with x_{k_0} fixed to 1, then it is not known whether η is still preserved in this subproblem, since η may still be expressible as another concatenation of η_1, η_2, ..., η_k. If some of η_{j_1}, η_{j_2}, ..., η_{j_r} is destroyed in the subproblem for every concatenation η_{j_1}∘η_{j_2}∘···∘η_{j_r} such that η = η_{j_1}∘η_{j_2}∘···∘η_{j_r}, then η is no longer preserved in the subproblem.

From the above discussion, whether the permutation η is preserved in the subproblem with a variable x_{k_0} fixed to 1 depends completely on the particular set of symmetric permutations η_1, η_2, ..., η_k. Therefore, in solving the problem (P) by the implicit enumeration method, if the symmetric problem (P) is already characterized by η_1, η_2, ..., η_k, it is not necessary to add the symmetric permutation η to characterize the problem (P), since η is completely characterized by η_1, η_2, ..., η_k.

If symmetric permutations η_1, η_2, ..., η_h of the problem (P) are completely characterized by symmetric permutations η′_1, η′_2, ..., η′_k, then the problem (P) is more explicitly characterized by η′_1, η′_2, ..., η′_k than by η_1, η_2, ..., η_h. For any given symmetric permutations η_1, η_2, ..., η_h of the problem (P), it is difficult to find symmetric permutations η′_1, η′_2, ..., η′_k that completely characterize η_1, η_2, ..., η_h. But if it is possible to find symmetric permutations η′_1, η′_2, ..., η′_k more explicitly characterizing this problem than η_1, η_2, ..., η_h, then by using η′_1, η′_2, ..., η′_k as the symmetric permutations characterizing this symmetric problem, the symmetric property of this problem will be utilized to a greater extent in solving it by the implicit enumeration method.

The symmetric property of a λ-symmetric switching function can likewise be described by many symmetric permutations. If λ_1, λ_2, ..., λ_h are used to describe the symmetric property, then the λ-symmetric switching function is said to be characterized by λ_1, λ_2, ..., λ_h. A λ-symmetric switching function f is said to be more explicitly characterized by symmetric permutations λ_1, λ_2, ..., λ_h (among switching variables) than by symmetric permutations λ′_1, λ′_2, ..., λ′_k if each λ′_i can be expressed as a concatenation of λ_1, λ_2, ..., λ_h.
Since each transposition on y , y , •••, y of a symmetric switch function f(y,, y 9 , **'» y ) is a permutation, each transposition on y , y , ••*, y of f(y, , y 9 , ••*, y ) is also a symmetric permutation of f. Symmetric permutations of a symmetric switching function have the following property. Theorem 7.3.1 If f(y,, y~. "", y ) is a totally symmetric switching function then each symmetric permutation is completely characterized by transpositions on y , y , **", y • Proof By group theory, each permutation on y , y . •••, y can be expressed as a concatenation of transpositions on y , y„, •••, y . Let X be a symmetric permutation on y., y 9 , •'*•, y . Since each permutation on y , y , ", y can be expressed as a concatenation of transposition on y r y 2 , •••, y t , (1) A can be expressed as a concatenation of transpositions on y x> y 2 , •■•, y fc . (2) A cannot be expressed as a concatenation of symmetric permutations other than transpositions on y , y_, •••, y and their concatenations, for the following reason: If A is expressed as a concatenation of symmetric permutations other than the transpositions of y , y , ' ' " , y , then each symmetric permutation in this expression can further be expressed as a concatenation 77 of transpositions of y , y , y . Thus A is expressed in a concatenation of transpositions on y l' y 2' ' ' "' y t and their concatenations. By definition, A is completely characterized by transpositions on y v •••■ v Q.E.D. Let us show an example of how a permutation, which is not necessarily symmetric, can be expressed as a concatenation of transpositions . Example 7.3.6 r y i -7 y 2' < y 2 7 y x » (i.e., exchange of y and y ) -> ^ '3' '3' -7 Y 2 ' (i-e., exchange of y and y ) /y 2 > T,. 
-7 y 3 , (i.e., exchange of y and y ) ■^ yo» are three transpositions on j^, y^ y^ The permutation, which is not a transposition, -> '2' V 4 : \ -t* y 3' can be expressed as ^ = A.,^ and the transposition A 3 can also be expressed as A = \ »\ »\ 78 Suppose X is a symmetric permutation (among switching variables) of a switching function f and is completely characterized by symmetric permutations A,, A„, •*', A, of f. Now the question arises whether the symmetric permutation A, corresponding to A, of the minimal covering problem (P) for the logic minimization problem of f is guaranteed to be completely characterized by A , A., •••, A, , where X., A~ •••, A, are the symmetric permutations of (P) corresponding to symmetric permutations A , A„> •••, > . Since, for some f, the problem (P) has symmetric _1_ c. K. permutations (among inequality variables) with no corresponding symmetric permutations (among switching variables) on f, the answer is negative as the following counter example shows. Example 7.3.7 The only symmetric permutation of the switching function f given in Example 7.2.4 are r ± 4 ^ ^ > ^ y 6 -> r ■> ■t> y. > 4 y, -> (7.3.4) (7.3.5) ^ y 6 and i heir concatenations. Since A ^ A for any positive integer i, A^ etely specified by itself. The symmetric permutation corresponding to X of the problem (7.2.19) is as follows: x '10 ll~ 12~ '13 14~ '15 16" "> x ( "> x ■> 11 12 -> ^ '10 "> x ( ■> ^ "> "> -> 16 15 14 13' It will be proved later in Section 7.4 that 79 (7.3.6) f 10 ll~ 12 13' 14 X 15" V x i6" -* x, ■> -> > 11 12 10 ^ x, "> x, -? > ■> > ^ > 13 C 14 15 16 (7.3.7) 80 ''\ 10 11" 12" 13" 14" 15" 16" \ \ ~7 V \ 4 14 : 13 15 ; l6 ! 10" 11' 12 (7.3.8) /"v 10 11" 12" 13" 14" 15" W "> x io ^> 15 16 14 13 11 12 * X 8 » x y > *5 (7.3.9) 81 are symmetric permutations of the problem (7.2.19). From (7.3.6), (7.3.7), (7.3.8) and (7.3.9), X l = VW Thus A is derived fr by A itself. 
(7.3.10) om n 1> n 2 andn 3> and X is not completely specified 7.4 A Necessary An d Sufficient Condition For A Permutation To Be Symmetric In this section, a necessary and sufficient condition for a given permutation to be symmetric is given. For a given permutation n: X ■+ X, where X = {x , x , •••, x }, 12 n let n(A) be the matrix obtained from A by permuting the columns of A according to the permutation n, i.e., the columns of n(A) and A have the following relation: (7.4.1) b. - a. if and only if n(x ) = x 1 J i J where b ± is the i-th column of n(A) and a. is the j-th column of A, Example 7.4.1 Let the constraint matrix A of a given problem be column no. 12 3 4 5 6 1 i 1 i I 1 | I I o; i i ' 1 i ' 1 i i 1 ; 1 i : ; i ! ; o I o ! ! 1 i 1 ( 1 1 1 (7.4.2) and a given permutation n on X = {x. , x„, x_, x. , x c , x, } be -L ^ J 4 D 6 82 ( x- "> x 2 , ■> * 3 > ^> 1' (7.4.3) "> x A , "> x 6 , * v Then column no, 3 4 / 1 ' • -i ' ' ' n(A) = i 1 1 1 1 1 1 1 1 ' f 1 ; o 1 i 1 1° ' \ 1 1 (7.4.4) old column no. 2 3 14 6 5 ■ A necessary and sufficient condition for a permutation n: X -> X to be a symmetric permutation of the problem (P) is stated in the following theorem. Theorem 7.4.1 A permutation r\ : X ->- X is a symmetric permutation of the problem (P) if and only if each row of A dominates some row of n(A). Proof First let us prove that if each row of A dominates some row of n(A), then n is a symmetric permutation. Let (x , x„, '**, x ) be a feasible solution of the problem (P). Then v "S (7.4.5) J 83 Rewrite inequality (7.4.5) as n(A) n(x 1 > n(x ) n(x ) 1 (7.4.6) L 1 y Since each row of A dominates A . n(x r n(x 2 ) some row of n(A) 1 1 n(x ) k n I (7.4.7) k X J i.e., (n(x 1 ), n (x 2 ), •••, n(x n >) is a feasible solution of (P). Thus n is symmetric. Next let us prove that if there exists some row of A not dominating any row of n (A), then n is not symmetric. Let the row of A not dominating any row of n(A) be fs a ... \ , tHa; U ±1 , a. 
2 , , a )f where , a ..., a be J l 1J 2 1J r the non-zero elements (i.e., r s) . In the following let us construct fl feasible solution (x^ x^ •••, ^ of the problem (p) such fchat (n(x 1 ), n (x 2 ), •••, n (x n ))is not a feasible solution of (P) . Let us find a feasible solution of the following constraints: 1 ' n(A) . n> (7.4.8) / ; z. = or 1 for i = 1, 2, ..., n 84 Since (a , a , •••, a. ) does not dominate any row of n(A), each row of n(A) still has at least one non-zero element if columns b b j l j 2 •••, b. (note that the ith elements in these columns are not necessarily J r l's) have been deleted from n(A). In other words, even if we set z. = for k = 1, 2, • • • , r in the constraints (7.4.8), constraints (7.4.8) is ~U J- still feasible. Let (z, , z , •••, z ) with z. =0 for k = 1, 2 12 n i, ' ' ' J k be a feasible solution of (7.4.8). So n(A) . z n (7.4.9) -1 * * ;V Let n be the inverse permutation of ' n and (x, , x„, •■* x ) be obtained 12 n c t * * *\ 1 * * * -1 from U, , z , •••, z ) by permuting z , z , •••, z according to n , i ^ n ± z n * * * * * * i.e., (.x^, x 2 , •••, x^j and (z , z , •••, z ) have the following relation (n(x ), n(x ), •••, n(x ) = (z , z' l z n 1 2 , . >. From (7.4.9), A • l ! (7.4.10) Inequality (7. 4. JO) shows that (x_\ i„ ■••, z ) is a feasible solution of 12 n the problem (P) . Since the only non-zero elements in row i of the matrix ire a H ' a i-. ' ""■ a a "d Z. > z. , '••, z* are 0, 1 1 J 2 ' r J 1 J 2 J r 85 * * i.e., (z 1? z 2 , • solution of (P) . * ft * Z , z n ) = (n(x x ), n(x 2 ), (7.4.11) , n(x )) is not a feasible n Q.E.D. 
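The test of Theorem 7.4.1 is easy to mechanize. The following sketch (in Python, which is of course not part of the original report; the function names and the representation of η as a 0-based index array are assumptions made here for illustration) builds η(A) by the column relation (7.4.1) and then checks that each row of A dominates some row of η(A).

```python
def permute_columns(A, perm):
    """Form eta(A) by relation (7.4.1): column i of eta(A) is column j
    of A, where perm[i] = j encodes eta(x_i) = x_j (0-based indices)."""
    return [[row[perm[i]] for i in range(len(row))] for row in A]

def dominates(u, v):
    """Row u dominates row v if u has a 1 in every position where v does."""
    return all(ui >= vi for ui, vi in zip(u, v))

def is_symmetric_permutation(A, perm):
    """Theorem 7.4.1: eta is a symmetric permutation of the covering
    problem iff each row of A dominates some row of eta(A)."""
    PA = permute_columns(A, perm)
    return all(any(dominates(r, s) for s in PA) for r in A)

# A small covering matrix that is invariant under the cyclic
# permutation x_1 -> x_2 -> x_3 -> x_1, encoded as [1, 2, 0]:
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(is_symmetric_permutation(A, [1, 2, 0]))            # True
print(is_symmetric_permutation([[1, 1, 0],
                                [0, 0, 1]], [2, 1, 0]))  # False
```

In the second call the exchange of x_1 and x_3 sends the second constraint row to a row that no row of the original matrix dominates, so by the theorem the exchange is not a symmetric permutation of that problem.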
Example 7.4.2 The constraint matrix of the problem in Example 7.1.1 is the 7-row zero-one matrix A shown in (7.4.12), and the permutation η on this problem is the one shown in (7.4.13). Then, by the definition of η(A), η(A) is the matrix shown in (7.4.14). Comparing the matrices A and η(A), it is easy to see that each row of A dominates some row of η(A), so η is a symmetric permutation of this problem. ■

Now it can be proved that the permutations η, η_1, η_2 and η_3 defined in (7.2.20), (7.3.7), (7.3.8) and (7.3.9) are symmetric permutations of the problem defined in (7.2.19). From the definition, the matrices η(A), η_1(A), η_2(A) and η_3(A) are as shown in (7.4.15), (7.4.16), (7.4.17) and (7.4.18), respectively. Comparing the matrix A of (7.2.18) with η(A) of (7.4.15), η_1(A) of (7.4.16), η_2(A) of (7.4.17) and η_3(A) of (7.4.18), we can see that each row of A dominates one row in each of η(A), η_1(A), η_2(A) and η_3(A). Table 7.4.1 shows which row in η(A), η_1(A), η_2(A) and η_3(A) is dominated by row i of A for each i. For example, the sixth row of η(A) is dominated by the second row of A.
row i of A:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
η(A):        4  6  7  1  5  2  3  8  9 10 11 12 13 14 15 16
η_1(A):      1  4  5  2  3  6  8  7  9 10 11 12 13 14 15 16
η_2(A):      1  2  3  4  5  6  7  8 10  9 11 13 12 14 16 15
η_3(A):      1  2  3  4  5  6  7  8 12 13 14  9 10 11 15 16

Table 7.4.1 Row domination relations among A and η(A), η_1(A), η_2(A), η_3(A)

Other properties of a symmetric permutation of a problem are given in the following theorems.

Theorem 7.4.2 If η is a symmetric permutation of the problem (P), then each row of η(A) dominates some row of A.

Proof Assume η ≠ I, where I is the identity permutation (if η = I, this theorem is trivial). Let q be an integer such that η^q = I. By Corollary 7.1.3, η^{q-1} is a symmetric permutation of (P). Now rewrite the problem (P) as follows:

minimize  z_1 + z_2 + ... + z_n
subject to  A' ⋅ (z_1, z_2, ..., z_n)^T ≥ (1, 1, ..., 1)^T,
            z_i = 0 or 1 for i = 1, 2, ..., n,

where A' = η(A) and z_i = η(x_i) for i = 1, 2, ..., n. Since η^{q-1} is a symmetric permutation of (P), each row of A' = η(A) dominates some row of η^{q-1}(A') = η^{q-1}(η(A)) = η^q(A) = A, by Theorem 7.4.1.  Q.E.D.

One can compare (7.4.12) and (7.4.14) and see that each row of η(A) also dominates some row of A.

Theorem 7.4.3 If there is no row of A dominated by another row of A, then η: X → X is a symmetric permutation of the problem (P) if and only if for each row of A there exists the identical row in η(A).

Proof Only the "only if" case has to be proved, since the case of "if" is obvious from Theorem 7.4.1. Assume γ_i is a row in A with no identical row in η(A). Since η is symmetric, there exists a row γ'_j in η(A) such that

(1) γ_i dominates γ'_j,   (7.4.19)
(2) γ_i ≠ γ'_j.   (7.4.20)

By Theorem 7.4.2, there exists some row γ_k in A dominated by γ'_j. From (7.4.19) and (7.4.20), i ≠ k and γ_i dominates γ_k in A, which contradicts the assumption that there is no row of A dominated by another row of A.  Q.E.D.

From Theorem 7.4.3, it is easy to obtain the following corollary.
Corollary 7.4.4 If there is no row of A dominated by another row of A and if η is a symmetric permutation of the problem (P), then there is a one-to-one correspondence of identity between the rows of A and those of η(A).

Example 7.4.3 Let us reconsider the problem (P) with the matrix (7.4.12). In this matrix, row 5 dominates row 3, and so there is no row in η(A) identical to row 5 of A. If row 5 is deleted from the matrix (7.4.12), then the matrix (7.4.12) becomes the matrix A shown in (7.4.21), where no row is dominated by another. For the permutation η defined in (7.4.13), η(A) is the matrix shown in (7.4.22). Comparing the matrices A in (7.4.21) and η(A) in (7.4.22), we can see that there is a one-to-one correspondence between the rows of A and those of η(A). ■

7.5 Preservation Of A Symmetric Permutation During Program Backtracking

Let η be a permutation on X = {x_1, x_2, ..., x_n}, and let r be the smallest positive integer such that η^r(x_{k_0}) = x_{k_0}. From Corollary 7.1.3, if η is a symmetric permutation of the problem (P), then η, η^2, ..., η^{r-1} are also symmetric permutations of (P). After the subproblem with x_{k_0} fixed to 1 has been enumerated, each of η(x_{k_0}), η^2(x_{k_0}), ..., η^{r-1}(x_{k_0}) can be fixed to 0 in the subproblem with x_{k_0} fixed to 0 without losing a better feasible solution, by Theorem 7.1.1 (regard η^i and x_{k_0} as η and x_k in Theorem 7.1.1, respectively).

Theorem 7.5.1 Let η be a symmetric permutation of the problem (P). Then, after the subproblem with x_{k_0} fixed to 1 has been enumerated, the variables η(x_{k_0}), η^2(x_{k_0}), ..., η^{r-1}(x_{k_0}) can be fixed to 0 in the subproblem with x_{k_0} = 0 without losing a better feasible solution.

Proof Since (1) the subproblem with the variables x_{k_0}, η(x_{k_0}), ..., η^{i-1}(x_{k_0}) fixed to 0 is a subproblem of the subproblem with x_{k_0} fixed to 0, and (2) by Theorem 7.1.1, η^i(x_{k_0}) can be fixed to 0 in the subproblem with x_{k_0} fixed to 0 without losing a better feasible solution, η^i(x_{k_0}) can be further fixed to 0 without losing a better feasible solution in the subproblem with x_{k_0}, η(x_{k_0}), ..., η^{i-1}(x_{k_0}) fixed to 0, for i = 1, 2, ..., r-1, if the subproblem with x_{k_0} fixed to 1 has been enumerated. Thus x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0}) can be fixed to 0 without losing a better feasible solution, by repeatedly fixing η^i(x_{k_0}) to 0 in the subproblem with x_{k_0}, η(x_{k_0}), ..., η^{i-1}(x_{k_0}) fixed to 0 for i = 1, 2, ..., r-1, if the subproblem with x_{k_0} fixed to 1 has been enumerated.  Q.E.D.

Theorem 7.5.2 Let (P') be the subproblem obtained from (P) by fixing the variables x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0}) to 0, and let X' = X - {x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0})}. If x_ℓ is a variable in X', then η(x_ℓ) is also a variable in X', where η is a permutation (not necessarily symmetric) on X = {x_1, x_2, ..., x_n}.

Proof If η(x_ℓ) is not a variable of X', then η(x_ℓ) = η^i(x_{k_0}) for some i > 0. Then x_ℓ = η^{i-1}(x_{k_0}), which shows that x_ℓ is not a variable of X'.  Q.E.D.

From Theorem 7.5.2, a permutation η' on X' can be defined by denoting η(x_ℓ) as η'(x_ℓ), i.e., η'(x_ℓ) = η(x_ℓ) for all x_ℓ in X'. η' is said to be obtained from η by restricting it to (P') (i.e., restricting η from (P) to (P')). The following theorem shows that η' is a symmetric permutation of (P') if η is a symmetric permutation of (P).

Theorem 7.5.3 Let η be a symmetric permutation of the problem (P), and let r be the smallest positive integer such that η^r(x_{k_0}) = x_{k_0}. If (P') is the problem obtained from (P) by fixing the variables x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0}) to 0 and η' is the permutation obtained from η by restricting it to (P'), then η' is a symmetric permutation of (P').
Proof Let A' be the constraint matrix of (P'). Since (P') is obtained from (P) by fixing x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0}) to 0, A' is the matrix obtained from A by deleting the columns a_{k_0}, a_{k_1}, ..., a_{k_{r-1}} of A, where x_{k_i} = η^i(x_{k_0}) for i = 1, 2, ..., r-1. From the definition of η(A), the columns b_{k_0}, b_{k_1}, ..., b_{k_{r-1}} of η(A) are the columns a_{k_1}, a_{k_2}, ..., a_{k_{r-1}}, a_{k_0} of A. From the definitions of η' and A', η'(A') is obtained from η(A) by deleting the columns b_{k_0}, b_{k_1}, ..., b_{k_{r-1}}.

Now let us show that each row of A' dominates some row of η'(A'), and thus that η' is a symmetric permutation of (P') by Theorem 7.4.1. For each row γ'_i of A', its corresponding row γ_i in A (the row with the same index) dominates some row ν_j in η(A), since η is a symmetric permutation. Let ν'_j be the row obtained by deleting the k_0-th, k_1-th, ..., k_{r-1}-th elements from ν_j. Then ν'_j is a row in η'(A'). Since γ'_i is the row obtained by deleting the k_0-th, k_1-th, ..., k_{r-1}-th elements from γ_i, γ'_i dominates ν'_j.  Q.E.D.

The discussion in this section is illustrated by Figure 7.5.1, in which the problem (P), symmetric with respect to η, branches into the subproblem {x_{k_0} = 1} and the subproblem {x_{k_0} = η(x_{k_0}) = ... = η^{r-1}(x_{k_0}) = 0}, the latter symmetric with respect to η'.

Figure 7.5.1 Illustration of a symmetric problem after the subproblem with x_{k_0} fixed to 1 is enumerated.

7.6 Preservation Of A Symmetric Permutation During The Three Reduction Operations

This section shows that a symmetric permutation η can be preserved during the following three reduction operations mentioned in Section 3.2:

(1) Deleting dominating rows in the constraint matrix.
(2) Deleting dominated columns in the constraint matrix and fixing their corresponding variables to 0.
(3) Fixing the variables corresponding to essential columns to 1 and deleting all rows covered by these columns.

Lemma 7.6.1 Let η be a symmetric permutation of the problem (P), and let (P1) be the problem obtained by deleting the dominating rows from the constraint matrix of the problem (P).
Then η is still a symmetric permutation of (P1).

Proof We only have to show that if (x_1, x_2, ..., x_n) is a feasible solution of (P1), then (η(x_1), η(x_2), ..., η(x_n)) is also a feasible solution of (P1). From the definition of (P1), (x_1, x_2, ..., x_n) is a feasible solution of (P1) if and only if (x_1, x_2, ..., x_n) is a feasible solution of (P). Since η is a symmetric permutation of (P), (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution of (P). Thus (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution of (P1).  Q.E.D.

From the above lemma, any given problem with some symmetric permutations can always be reduced to a problem with no dominating row in its constraint matrix and with the same symmetric permutations.

Lemma 7.6.2 Suppose the constraint matrix A does not contain any dominating row. If η(x_{k_0}) = x_{k_1} for a symmetric permutation η, then the numbers of non-zero elements in the columns a_{k_0} and a_{k_1} of A are the same. Furthermore, if a_{k_1} dominates a_{k_0}, then a_{k_1} = a_{k_0}.

Proof By definition, the k_0-th column of η(A) is the k_1-th column of A. Since η is symmetric, the number of non-zero elements in the k_1-th column of A must be the same as that in the k_0-th column of A; otherwise there would not be a one-to-one correspondence of identity between the rows of A and those of η(A), contradicting the fact that η is symmetric (this fact is due to Corollary 7.4.4). Thus the numbers of non-zero elements in a_{k_0} and a_{k_1} are the same. If a_{k_1} also dominates a_{k_0}, then a_{k_1} = a_{k_0}, because the numbers of non-zero elements in these two columns are the same.  Q.E.D.

An example is the constraint matrix in (7.4.21), where the number of non-zero elements in column 1 is the same as that in column 3.

Lemma 7.6.3 Suppose the constraint matrix A contains no dominating row. If (1) the column a_{k_0} is dominated by some other column a_{s_0}, and (2) η(x_{k_0}) = x_{k_1} and η(x_{s_0}) = x_{s_1} for some symmetric permutation η, then the column a_{k_1} is dominated by a_{s_1}.

Proof By definition, the k_0-th column b_{k_0} of η(A) is the k_1-th column a_{k_1} of A, and the s_0-th column b_{s_0} of η(A) is the s_1-th column a_{s_1} of A. If a_{k_1} is not dominated by a_{s_1} in A, then b_{k_0} is not dominated by b_{s_0} in η(A). So there must exist some row ν_i = (b_i1, b_i2, ..., b_in) in η(A) with b_{i k_0} = 1 and b_{i s_0} = 0. Since a_{k_0} is dominated by a_{s_0} in A, there is no row in A identical to the row ν_i in η(A). By Corollary 7.4.4, this contradicts the fact that η is symmetric.  Q.E.D.

Lemma 7.6.4 Suppose the constraint matrix A contains no dominating row, and let r be the smallest positive integer such that η^r(x_{k_0}) = x_{k_0} for a symmetric permutation η. If (1) the column a_{k_0} of A is dominated by some other column, and (2) x_{k_1} = η(x_{k_0}), x_{k_2} = η^2(x_{k_0}), ..., x_{k_{r-1}} = η^{r-1}(x_{k_0}), then a_{k_i} is also dominated by some other column for each i = 1, 2, ..., r-1. Furthermore, if a_{k_0} is dominated by some column other than a_{k_1}, a_{k_2}, ..., a_{k_{r-1}}, then each a_{k_i} is dominated by some column other than a_{k_0}, a_{k_1}, ..., a_{k_{r-1}}.

Proof By Corollary 7.1.3, η, η^2, ..., η^{r-1} are symmetric permutations. Since a_{k_0} is dominated by some other column, a_{k_i} is dominated by some other column for i = 1, 2, ..., r-1, by repeatedly applying Lemma 7.6.3. Furthermore, if a_{k_0} is dominated by a_{s_0}, where s_0 is different from k_0, k_1, ..., k_{r-1}, and x_{s_i} = η^i(x_{s_0}) for i = 1, 2, ..., r-1, then a_{k_i} is dominated by a_{s_i} for i = 1, 2, ..., r-1, by repeatedly applying Lemma 7.6.3. Now let us prove that s_i is different from k_0, k_1, ..., k_{r-1}. Assume s_i = k_j for some j such that 1 ≤ j ≤ r-1. Since η is a one-to-one mapping, we have s_{i-1} = k_{j-1}, s_{i-2} = k_{j-2}, ..., which contradicts the fact that s_0 is different from k_0, k_1, ..., k_{r-1}.  Q.E.D.
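Throughout Sections 7.5 and 7.6 the cycle x_{k_0}, η(x_{k_0}), ..., η^{r-1}(x_{k_0}) is the unit on which variables are fixed: to 0 after backtracking by Theorem 7.5.1, and as a group when dominated or essential columns are removed. The following small sketch (Python; added here for illustration and not part of the original report — the function name and the 0-based index-array representation of η are assumptions) shows how such a cycle might be computed.

```python
def cycle_of(perm, k):
    """Return [k, perm[k], perm[perm[k]], ...] up to, but not
    including, the first repetition of k, i.e. the cycle of x_k
    under eta; its length is the smallest r with eta^r(x_k) = x_k."""
    orbit, j = [k], perm[k]
    while j != k:
        orbit.append(j)
        j = perm[j]
    return orbit

# eta = (x_0 x_1 x_2)(x_3 x_4), written as a 0-based index array:
perm = [1, 2, 0, 4, 3]
print(cycle_of(perm, 0))   # [0, 1, 2]  (r = 3)
print(cycle_of(perm, 3))   # [3, 4]     (r = 2)
```

In a branch-and-bound implementation, once the subproblem with x_{k_0} = 1 has been enumerated, Theorem 7.5.1 allows every variable whose index lies in this cycle to be fixed to 0 in the x_{k_0} = 0 branch.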
Lemma 7.6.5 Suppose the constraint matrix A has no dominating row, and let r be the smallest positive integer such that η^r(x_{k_0}) = x_{k_0} for a symmetric permutation η. If (1) the column a_{k_0} is dominated by a_{k_p} for some p such that 1 ≤ p ≤ r-1, and (2) x_{k_i} = η^i(x_{k_0}) for i = 1, 2, ..., r-1, then for each i = 1, 2, ..., p-1, a_{k_i} = a_{k_{s_i}} for some s_i such that p ≤ s_i < r.

Proof Since η^p is a symmetric permutation of (P), a_{k_p} = a_{k_0} by Lemma 7.6.2. Since η is a symmetric permutation and a_{k_0} is dominated by a_{k_p}, a_{k_i} is dominated by a_{k_{i+p}} for i = 1, 2, ..., p-1, by Lemma 7.6.3. Since η^i(x_{k_p}) = η^{p+i}(x_{k_0}) = x_{k_{p+i}} and a_{k_i} is dominated by a_{k_{i+p}},

a_{k_i} = a_{k_{i+p}} for i = 1, 2, ..., p-1,   (7.6.1)

by Lemma 7.6.2. If 2p ≤ r, then the lemma is proved by setting s_i = p+i. If 2p > r > p, then there exists some i ≤ p-1 such that p+i ≥ r. For those i such that p+i ≥ r, it will be shown in the rest of the proof that there exists some j such that

(1) 0 ≤ j < i,
(2) a_{k_{p+i}} = a_{k_j},
(3) p ≤ j+p < r.

Then the equalities a_{k_i} = a_{k_{p+i}} = a_{k_j} = a_{k_{j+p}} hold (the last equality is obtained by replacing i in (7.6.1) by j), proving this lemma.

Let j_1 = p+i-r. Since x_{k_{p+i}} = η^{p+i}(x_{k_0}) = η^{p+i-r} ∘ η^r(x_{k_0}) = η^{p+i-r}(x_{k_0}) = x_{k_{j_1}}, we have a_{k_{p+i}} = a_{k_{j_1}}. Since p < r and p+i ≥ r, 0 ≤ j_1 < i. If j_1+p < r, then j_1 satisfies

(1) 0 ≤ j_1 < i,   (7.6.2)
(2) a_{k_{p+i}} = a_{k_{j_1}},   (7.6.3)
(3) p ≤ j_1+p < r,   (7.6.4)

and j_1 is the j to be found. If j_1+p ≥ r, then by letting j_2 = p+j_1-r and repeating the same argument applied to j_1, the following two formulas result:

(1) 0 ≤ j_2 < j_1 < i,   (7.6.5)
(2) a_{k_{p+j_1}} = a_{k_{j_2}}.   (7.6.6)

From (7.6.3), (7.6.1) and (7.6.6),

a_{k_{p+i}} = a_{k_{j_1}} = a_{k_{p+j_1}} = a_{k_{j_2}}.   (7.6.7)

If j_2+p < r, then from (7.6.5) and (7.6.7), j_2 is the j to be found.
Since i is a finite number, we must obtain some number j_q satisfying

(1) 0 ≤ j_q < j_{q-1} < ... < j_1 < i,
(2) a_{k_{p+i}} = a_{k_{j_1}} = a_{k_{p+j_1}} = ... = a_{k_{p+j_{q-1}}} = a_{k_{j_q}},
(3) p ≤ j_q+p < r,

if the argument applied to j_1 is repeatedly applied to j_2, j_3, ..., j_q.  Q.E.D.

Lemma 7.6.6 Suppose the constraint matrix A has no dominating row, and let r be the smallest positive integer such that η^r(x_{k_0}) = x_{k_0} for a symmetric permutation η. If

(1) x_{k_i} = η^i(x_{k_0}) for i = 1, 2, ..., r-1,
(2) the column a_{k_0} is dominated by a_{k_p} for some p such that 1 ≤ p ≤ r-1,

then the problem (P2) obtained by deleting the columns a_{k_0}, a_{k_1}, a_{k_2}, ..., a_{k_{p-1}} from A (i.e., fixing the variables x_{k_0}, x_{k_1}, ..., x_{k_{p-1}} to 0) is symmetric under the permutation η' defined by

η': x_ℓ → η(x_ℓ), if ℓ ≠ k_{r-1},
    x_{k_{r-1}} → x_{k_p}.   (7.6.8)

Proof The k_{p-1}-th column b_{k_{p-1}} of η(A) is the k_p-th column a_{k_p} of A, and the k_{r-1}-th column b_{k_{r-1}} of η(A) is the k_0-th column a_{k_0} of A. Since η is symmetric and x_{k_p} = η^p(x_{k_0}), a_{k_p} = a_{k_0} in A by Lemma 7.6.2, and consequently b_{k_{p-1}} = b_{k_{r-1}} in η(A).

Let η̃ be the permutation defined by

η̃: x_ℓ → η(x_ℓ), if ℓ ≠ k_{p-1}, k_{r-1},
    x_{k_{p-1}} → x_{k_0},
    x_{k_{r-1}} → x_{k_p}.   (7.6.9)

Then, from the definitions of η̃ and η(A), η̃(A) is the matrix obtained from η(A) by exchanging the k_{p-1}-th column b_{k_{p-1}} and the k_{r-1}-th column b_{k_{r-1}} in η(A). Since b_{k_{p-1}} = b_{k_{r-1}}, the matrix η̃(A) is identical to η(A). Since η is symmetric, η̃ is also symmetric, by Theorem 7.4.1. Since

η̃(x_{k_0}) = η(x_{k_0}) = x_{k_1},
η̃^2(x_{k_0}) = η̃ ∘ η̃(x_{k_0}) = η̃(x_{k_1}) = η(x_{k_1}) = x_{k_2},
...
η̃^{p-1}(x_{k_0}) = η̃ ∘ η̃^{p-2}(x_{k_0}) = η̃(x_{k_{p-2}}) = η(x_{k_{p-2}}) = x_{k_{p-1}},
η̃^p(x_{k_0}) = η̃ ∘ η̃^{p-1}(x_{k_0}) = η̃(x_{k_{p-1}}) = x_{k_0},

p is the smallest positive integer such that η̃^p(x_{k_0}) = x_{k_0}. It is easy to see that the permutation η' of (7.6.8) is obtained from the symmetric permutation η̃ of (7.6.9) by restricting it to (P2). By Theorem 7.5.3, η' is a symmetric permutation of (P2).  Q.E.D.

It will be proved in Theorem 7.6.7 that the following procedure DCDP (Dominated Columns Deletion Procedure) can be applied to reduce the problem (P).

Procedure DCDP:

D1. Delete the dominating rows from the constraint matrix A.

D2. Find a dominated column a_{k_0}. If there is no column dominated by another, then the procedure terminates.

D3. Let η(x_{k_0}) = x_{k_1}, η^2(x_{k_0}) = x_{k_2}, ..., η^{r-1}(x_{k_0}) = x_{k_{r-1}} and η^r(x_{k_0}) = x_{k_0}.

  D3.1 If a_{k_0} is not dominated by any of a_{k_1}, a_{k_2}, ..., a_{k_{r-1}}, then a_{k_0}, a_{k_1}, ..., a_{k_{r-1}} are deleted.

  D3.2 If a_{k_0} is dominated by a_{k_p} for some p such that 1 ≤ p ≤ r-1, then η is updated to η̃, where η̃ is defined by

    η̃: x_ℓ → η(x_ℓ), if ℓ ≠ k_{p-1}, k_{r-1},
        x_{k_{p-1}} → x_{k_0},
        x_{k_{r-1}} → x_{k_p},

  and then a_{k_0}, a_{k_1}, ..., a_{k_{p-1}} are deleted.

D4. Update η to the permutation η' by restricting η (or η̃ if η was updated in step D3.2) to the problem reduced at step D3. Go to step D1. ■

Theorem 7.6.7 Let (P) be a problem with a symmetric permutation η. Then the procedure DCDP can be applied to reduce the problem (P), and the last updated permutation η in the procedure DCDP is still a symmetric permutation of the reduced problem (P3).

Proof This theorem is proved by showing that

(1) the columns a_{k_0}, a_{k_1}, ..., a_{k_{r-1}} in step D3.1 can be deleted without losing all the optimal solutions of the problem (P),
(2) the columns a_{k_0}, a_{k_1}, ..., a_{k_{p-1}} in step D3.2 can be deleted without losing all the optimal solutions of the problem (P),
(3) a symmetric permutation is preserved during steps D1 and D3 of the procedure.

From Lemma 7.6.1, the symmetric permutation η is preserved when the procedure goes through step D1. In step D3, if a_{k_0} is dominated by some column other than a_{k_1}, a_{k_2}, ..., a_{k_{r-1}}, then a_{k_i} is dominated by some column other than a_{k_0}, a_{k_1}, ..., a_{k_{r-1}}, for i = 1, 2, ..., r-1, by Lemma 7.6.4.
So a , a , ..., a can all be deleted without o 1 r-1 losing all optimal solutions of (P) since they are dominated columns. By Theorem 7.5.3, this reduced problem (the problem obtained by delet- ing a , a , . . . , a, ) is symmetric under the permutation n ' ob- o 1 r-1 tained by restricting n to the reduced problem. If a is dominated o by a for some p such that 1 : p < r-1, then, for each i ■ 1, 2, ..., P p-1, we have a, = a, for some s. such that p < s. < r by Lemma k . k l — l l 8 . l 7.6.5. So a, , a , ..., a can all be deleted without losing all o 2 p 103 optimal solutions of (P) since they are columns dominated by others (i.e. , a = a ). k. k i s . 1 By Lemma 7.6.6, the reduced problem (the problem obtained by deleting a^ , a^ , . . . , a ) is symmetric under the permutation o 1 p-l n' obtained by restricting n to the reduced problem (n 1 is the one de- fined in Lemma 7.6.6). Thus the symmetric permutation n is preserved during step D3. Q.E.D. From the above theorem, if the DCDP procedure is applied to a given problem (P) with a symmetric permutation n, then the reduced problem (P3) will have no dominating row and dominated column in its reduced constraint matrix. Also, (P3) is symmetric under a symmetric permutation n' which is obtained from r\ by the corresponding reduc- tion. Lemma 7.6.8 Let n be a symmetric permutation of problem (P). If (1) a , is an essential column of the constraint matrix A, o (2) \ = n 1 (x ) for i = 1, 2, ..., r-1, where n r (x ) i o k o = X k ' o then columns a k , a k , ..., a are all essential, o 1 r-1 Proof This lemma is proved by showing that if x = p (x ) for some t k o symmetric permutation p, then a is also essential. Then since n n 2 r -! • - n * " i •••> n are symmetric permutations, a , a , a , ..., o k l k 2 a are all essential. r-1 104 Since a, is essential, there exists some r. = (a , a , k 1 11 iz o ..., a ) of A such that a., =1 and a.. = for j ^ k . Since p ' in ik ij o o is a symmetric permutation of (P) , r. 
dominates some row r' = (b , b , ..., b ) in p (A) , by Theorem 7.4.1. Since a = 1 is the only o non-zero element in r . , b„ . =0 for all i ^ k and b„. = 1. Since l £i o Ik o x = p (x. ), the k -th column b, of p (A) is the t-th column a of A, t " k o k t o o by the definition of p (A) . So a„ = b . = 1 and a„ . = for all y £t £k £i j 4- t, i.e., a is also essential. Q.E.D. Lemma 7.6.9 Let n be a symmetric permutation of problem (P) . If i r x =1 (x ) for i = 1, 2, ..., r-1, where x = n (x ), then pro- K. , K K. K. 1 o o o blem (P4) obtained by fixing variables x, , x, , . . . , x, to 1 and k k.. k . o 1 r-1 deleting all rows covered by columns a , a , . . . , a , is sym- K. K.-. K. - - o 1 r-1 metric under the permutation n' obtained by restricting n to the pro- blem (P4). Proof Let A' be the constraint matrix of (P4) . First, we will show that each row r! = (a! , a' ..., a! ,) of A' dominates some row of i ij i2 in n'(A') Let r. = (a , a. , . . . , a. ) be the row in A such that J J 1 J 2 J n — k r 1 is obtained by deleting the k -th, k -th, ..., k -th element 1 o 1 r-1 from r Since r' is a row of A', a , a , ..., a must be J o ■' 1 J r-1 (otherwise, r.' will not be a row of A'). Since n is symmetric, r - f ]r,rr)irM ! ,me row v = (b , , b _ , . . . , b ) of n(A), by Theorem s sl s2 sn 105 7.4.1. Since a , a ..., a are 0, b b b J o J 1 J r-1 o K i r-1 are also 0. Let v' be the row obtained by deleting b , b , .. s sk sk n ' o 1 b from v . Then r! dominates v'. sk r -l si s Now let us show that v' is a row of n'(a'). s Since n' is the permutation obtained from n by restricting it to (P4), n'(A') is obtained from n(A) by deleting the k -th, k-,-th,-.., k ,-th columns and deleting all the rows covered by the k -th, k,-th....j r-j. ■> o 1 k r _i" th columns. Since b , b , ..., b are 0, v 1 is a row of *■ ■*- SK SK. SK. _ S o 1 r-1 n'(A'). Thus, r! of A' dominates v' of n'(A'). By Theorem 7.4.1 (P4) is symmetric under n ' . Q.E.D. 
It will be proved in Theorem 7.6.10 that the following procedure ECFP (Essential Columns Finding Procedure) can be applied to reduce problem (P) . Procedure ECFP El. Find an essential column a, of the constraint matrix. k o If no essential column exists, then the procedure terminates. E2. Let n(x k )=x n 2 (x k )=x n'- 1 ^)-^ , ° 1 o I o r-1 r . S k X k ' Fix X k ' X k ' ' ' ' ' x k t0 1 and de ~ o o o 1 r-1 lete all rows covered by a , a , . . . , a k k n k , o 1 r-1 E_3. Update n to the permutation n ' obtained by restricting n to the problem reduced at step E2, and go to step El. 106 Th eorem 7.6.10 Let (P) be a problem with a symmetric permutation r\. Then the procedure ECFP can be applied to reduce problem (P) , and the last updated permutation n in the procedure ECFP is a symmetric permutation of the reduced problem (P5) . Proof From Lemma 7.6.8, if a is essential, then a^ , a^ , ..., o 12 a, are all essential. So x, , x , . .., x can all be fixed r-1 o 1 r-l to 1 in step E2 without losing any optimal solution of (P) . From Lemma 7.6.9, the reduced problem (problem obtained by fixing x fe , o v , ..., x to 1) is symmetric under the permutation n. ' obtained 1 r-l by restricting n to this reduced problem. Q.E.D. If the ECFP procedure is applied to a given problem (P) with a symmetric permutation n, then the reduced problem (P5) will not have any essential column in its reduced constraint matrix. From Theorems 7.6.7 and 7.6.10, if the procedures DCDP and ECFP are repeatedly applied to the problem (P) with a symmetric per- mutation n, then the reduced problem (P6) , where none of the three reduction operations described in the beginning of Section 7.6 can be applied is still a problem with symmetric permutation n, ' » which is obtained from n by updating it as described in Theorems 7.6.7 and 7.6.10. 7.7 Preservation Of Symmetric Per mutations With Different Generators In this section a problem with symmetric permutations of more than one generator is considered. 
Throughout this section it is 107 assumed that i^, ry ..., n h are different generators of symmetric permutations of the problem (P) . It is also assumed that r is r. i the smallest positive integer such that n. 1 is the identity permu- tation for i = 1, 2, . . . , h. An example of a problem with symmetric permutations of more than one generator has already been shown in Section 7.2. In the following another example is shown. This example is used for the illustration later in this section. Example 7.7.1 Consider a problem with a constraint matrix A - / "1 V I \ i- C ' B I I L / (7.7.1) where / B = -I -t s D, E, F, G are arbit show all 0's. \ C - S (. G I F t \ \ > rary n * n zero-one matrices, and the blank areas Define a permutation n on X = {x , x , . . . , x } as 108 ^ X i ■x..,, if l- x. . if 3 < i < 9n. i-3n n — (7.7.2) Then, from the definition of n (A) , r\ 1 (A) = N (7.7.3) Comparing ri 1 (A) with A, it is easy to see that each row of A domin- ates some row of n (A) . So n is a symmetric permutation of this problem, by Theorem 7.4.1. Define another permutation n_ on X = {x. , x„, ..., x A } z 1 z yn as x. l ->- x. _ if 1 < i < n, 3n < i < 4n, 6n < i < 7n, i+2n, — — — — ■*■ x. , if n < i < 3m , 4n < i < 6n, 7n < i < 9n. l-n — — — (7.7.4) By the definition of n ? (A), 109 E i D | n 2 (A) 1) E \ G I F I 1 F G F --■\-- G | D G f----: ' E (7.7.5) Writing A explicitly in D, E, F, G, we have A = (7.7.6) 110 Comparing A in (7.7.6) with r\ (k) in (7.7.5), it is easy to see that each row of A dominates some row of n „(A) . So r\ is also a symmet- ric permutation of this problem, by Theorem 7.4.1. From the definition of n, a ^d n 2 » H 1 ^ n o for any P osi_ tive integer i and n ^ r? for any positive integer j. Thus, the problem with the constraint matrix in the form (7.7.1) is a problem with symmetric permutations of two different generators. 
For a given positive integer I, define H , ..., (x ) n l n 2 n h o to be the set of variables such that each variable x in it can be v expressed as x =n. on. o...on. (x.) v \ J k-1 J l k o for some k <_ I and some n. , n. , •••> n . £ (n-,, ^2 » •••» n u' * J l J 2 J k (the identity)}. Note that some of n. , n , n in the above J l J 2 J k definition may be the same, and the variable x is a variable in o H £ (x. ), since x. = I (x ) . As a special case, H V2 ■•• \ k o k o k o n l n 2 •'• n i (x, ) is defined to be the set {x }. o o Example 7.7.2 From the above definition, if n, and T) are defined as in (7.7.2) and (7.7.4), then H n x (X 1 } = {X 1' X 6n + 1 } ' H n 2 (x i } = {x r x 2n + i } ' Hn i n 2 ^^ {Xl ' X6n+1 ' X2n+l} ' 2 H n L (X 1 } ' { V X 6n+1' X 3n+1 } ' Ill 2 \ (X 1 } = {X 1' X 2n+1' Vl 1 ' ^ (X 1 } = {X 1' X 6n + 1' X 2n + 1' X 3n + 1' \ + V X 8n+1 } ' H n x (x i } = {x r X 6n + 1' X 3n+1 } ' H n 2 (X 1 } = {X V X 2n+1' X n+1 } ' 3 ^r^ (X 1 } 1' X 6n+1' X 2n+1' X 3n+1' X n+1 ' X 8n+1' X 5n+1, X 7n+1}' ^ ( V "^ (x l>> H* (x ) = Yi 3 (x. ) , n 2 i n 2 1 4 \ n 2 (Xl) X ' X6n+1 ' X2n+1 ' X3n+1 ' Xn+1 ' X8n+1 ' X5n+1 ' X 7n+1' x 4n+l } ' H^ (x ) = H 4 (x_). 12 n l n 2 ■ For a given positive integer £, define D V 2 -% ( V = H V 2 — h ( V- H V 2 -n h ( V- (7.7.7) As a special case, define D° ( x . ) = {x, }. From (7.7.7), 12 h ° o H V 2 ... \\ )m ^'X- \ (x k )+D n,n, ...n h (x k>- -L^ ho 12 ho 12 ho (7.7.8) Theorem 7.7.1 If SL > , then each variable in D £_1 (x ) n n ... n k 12 ho H-l is mapped from some variable in D (x ) by some symmetric Hi in ... n, k. 12 ho 112 permutation n . where 1 _< i j< h. £ Proof Each variable x in D (x, ) can be expressed as t ni n 2 ... n h k o H. o n o ... o n. (x ) for some j , j , ..., j e {l, 2, ...,h}, J £ J £-l J l o L Z * by (7.7.7). Let x = n, o ... H. (x, ). Then we have to show V J £-l J l o £-1 £-1 that x is in D (x, ) . 
If x_v is not in D^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}), then x_v must be in H^{ℓ−2}_{η_1 η_2 ... η_h}(x_{k_0}) by (7.7.7), i.e., x_v can be expressed as η_{j'_k} ∘ η_{j'_{k−1}} ∘ ... ∘ η_{j'_1}(x_{k_0}) for some k ≤ ℓ − 2 and some j'_1, j'_2, ..., j'_k in {1, 2, ..., h}. Then

x_t = η_{j_ℓ} ∘ η_{j_{ℓ−1}} ∘ ... ∘ η_{j_1}(x_{k_0}) = η_{j_ℓ}(x_v) = η_{j_ℓ} ∘ η_{j'_k} ∘ ... ∘ η_{j'_1}(x_{k_0}),        (7.7.9)

where k ≤ ℓ − 2. Equalities (7.7.9) show that x_t ∈ H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}), contradicting the assumption that x_t ∈ D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) − H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}). Q.E.D.

The relation between H^i_{η_1 η_2 ... η_h}(x_{k_0}) and D^i_{η_1 η_2 ... η_h}(x_{k_0}) for each i is shown in Figure 7.7.1.

Figure 7.7.1 The relation between H^i_{η_1 η_2 ... η_h}(x_{k_0}) and D^i_{η_1 η_2 ... η_h}(x_{k_0}): H^0(x_{k_0}) = {x_{k_0}}, and each H^i consists of H^{i−1} together with D^i.

Example 7.7.3 If η_1 and η_2 are defined as in (7.7.2) and (7.7.4), then, by (7.7.7) and Example 7.7.2, D^1_{η_1}(x_1) = {x_{6n+1}}, D^1_{η_2}(x_1) = {x_{2n+1}}, D^1_{η_1 η_2}(x_1) = {x_{6n+1}, x_{2n+1}}, D^2_{η_1 η_2}(x_1) = {x_{3n+1}, x_{n+1}, x_{8n+1}}, D^3_{η_1 η_2}(x_1) = {x_{5n+1}, x_{7n+1}}, and D^4_{η_1 η_2}(x_1) = {x_{4n+1}}.

Let G_{η_1 η_2 ... η_h}(x_{k_0}) denote the set of all variables that can be expressed as η_{j_k} ∘ η_{j_{k−1}} ∘ ... ∘ η_{j_1}(x_{k_0}) for some k ≥ 0 and some η_{j_1}, ..., η_{j_k} ∈ {η_1, η_2, ..., η_h}, i.e., the union of the sets H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) over all ℓ ≥ 0.

Theorem 7.7.2 Let η_1, η_2, ..., η_h be symmetric permutations of the problem (P) and let G_{η_1 η_2 ... η_h}(x_{k_0}) = {x_{k_0}, x_{k_1}, ..., x_{k_q}}. Then, after the subproblem with x_{k_0} fixed to 1 has been enumerated, variables x_{k_1}, x_{k_2}, ..., x_{k_q} can be fixed to 0 without losing a better feasible solution in the subproblem with x_{k_0} = 0.

Proof Since (1) the subproblem with variables x_{k_0}, x_{k_1}, ..., x_{k_{i−1}} all fixed to 0 is a subproblem of the subproblem with x_{k_0} fixed to 0, and (2) x_{k_i} can be fixed to 0 without losing a better feasible solution in the subproblem with x_{k_0} fixed to 0, by Theorem 7.1.1, x_{k_i} can be further fixed to 0 without losing a better feasible solution in the subproblem with x_{k_0}, x_{k_1}, ..., x_{k_{i−1}} fixed to 0, provided the subproblem with x_{k_0} fixed to 1 has been enumerated. The above argument can be applied for i = 1, 2, ..., q.
Thus, after the subproblem with x_{k_0} fixed to 1 has been enumerated, x_{k_1}, x_{k_2}, ..., x_{k_q} can be fixed to 0 without losing a better feasible solution of (P), by repeatedly fixing the variable x_{k_i} to 0 in the subproblem with x_{k_0}, x_{k_1}, ..., x_{k_{i−1}} fixed to 0, for i = 1, 2, ..., q. Q.E.D.

Let (P7) be the subproblem obtained from (P) by fixing the variables x_{k_0}, x_{k_1}, ..., x_{k_q} to 0, and let X' = X − G_{η_1 η_2 ... η_h}(x_{k_0}).

Theorem 7.7.3 If x_ℓ is not a variable in G_{η_1 η_2 ... η_h}(x_{k_0}), then η_i(x_ℓ) is not a variable in G_{η_1 η_2 ... η_h}(x_{k_0}), for i = 1, 2, ..., h.

Proof Let r_i be the smallest positive integer such that η_i^{r_i} is the identity permutation. If η_i(x_ℓ) is a variable in G_{η_1 η_2 ... η_h}(x_{k_0}), then η_i(x_ℓ) = η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1}(x_{k_0}) for some positive integers k, p_1, p_2, ..., p_k and some j_1, j_2, ..., j_k. Obviously η_i^{r_i−1} ∘ η_i(x_ℓ) = η_i^{r_i}(x_ℓ) = x_ℓ holds, and this can be rewritten as

x_ℓ = η_i^{r_i−1} ∘ η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1}(x_{k_0}),

which shows that x_ℓ is a variable in G_{η_1 η_2 ... η_h}(x_{k_0}). This contradicts that x_ℓ is not a variable in G_{η_1 η_2 ... η_h}(x_{k_0}). Q.E.D.

From Theorem 7.7.3, for each i = 1, 2, ..., h, a permutation η_i' on X' = X − G_{η_1 η_2 ... η_h}(x_{k_0}) can be defined as

η_i'(x_ℓ) = η_i(x_ℓ) for all x_ℓ in X'.        (7.7.10)

η_1', η_2', ..., η_h' are said to be obtained from η_1, η_2, ..., η_h by restricting them to (P7). In the following Theorem 7.7.4, η_1', η_2', ..., η_h' are proved to be symmetric permutations of (P7).

Theorem 7.7.4 If (P7) (or (P8)) is the problem obtained from (P) by fixing all variables in G_{η_1 η_2 ... η_h}(x_{k_0}) to 0 (or to 1), then the permutations η_1', η_2', ..., η_h', which are obtained by restricting η_1, η_2, ..., η_h to (P7) (or (P8)), are symmetric permutations of (P7) (or (P8)).

Proof Let η_t be one of η_1, η_2, ..., η_h.
In the following, we will prove that η_t' is a symmetric permutation of (P7) (or (P8)). Then η_1', η_2', ..., η_h' will all be symmetric permutations of (P7) (or (P8)).

Let x_{i_0} be a variable in G_{η_1 η_2 ... η_h}(x_{k_0}). Then x_{i_0} = η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1}(x_{k_0}) for some positive integers k, p_1, p_2, ..., p_k and some j_1, j_2, ..., j_k ∈ {1, 2, ..., h}. Since η_t^ℓ(x_{i_0}) = η_t^ℓ ∘ η_{j_k}^{p_k} ∘ ... ∘ η_{j_1}^{p_1}(x_{k_0}), η_t^ℓ(x_{i_0}) is a variable in G_{η_1 η_2 ... η_h}(x_{k_0}) for ℓ = 1, 2, ..., r_t − 1, where η_t^{r_t} is the identity permutation. If the variables x_{i_0}, η_t(x_{i_0}), ..., η_t^{r_t−1}(x_{i_0}) are fixed to 0 (or to 1), then the reduced problem (P*) is symmetric under the permutation η_t* obtained from η_t by restricting it to (P*), by Theorem 7.5.3 (or by Lemma 7.6.9). If all variables in G_{η_1 η_2 ... η_h}(x_{k_0}) are already fixed to 0 (or to 1), then (P7) = (P*) (or (P8) = (P*)) and (P7) (or (P8)) is symmetric under η_t' = η_t*. If there exist some variables in G_{η_1 η_2 ... η_h}(x_{k_0}) not fixed yet, let x_{i_1} be one of them. By the same argument applied to x_{i_1}, another reduced problem (P**) with more variables in G_{η_1 η_2 ... η_h}(x_{k_0}) fixed to 0 (or to 1) than before is obtained, and this reduced problem (P**) is symmetric under the permutation η_t** obtained by restricting η_t* to (P**). Repeating the above process, the problem (P7) (or (P8)) will be obtained, since the number of variables in G_{η_1 η_2 ... η_h}(x_{k_0}) is finite. Problem (P7) (or problem (P8)) is symmetric under η_t', since the problem obtained after each step discussed above is symmetric under the permutation obtained by restricting η_t to it. Q.E.D.

All the discussions in this section are illustrated in Figure 7.7.2.

[Figure: the problem (P) with symmetric permutations η_1, η_2, ..., η_h; the branch on which all variables in G_{η_1 η_2 ... η_h}(x_{k_0}) are fixed to 0; and the resulting subproblem with symmetric permutations η_1', η_2', ..., η_h', where η_i' is obtained from η_i by restricting it to this subproblem, for each i.]
Figure 7.7.2 Illustration of a problem with more than one symmetric generator when the program backtracks.

The following is an efficient procedure, based on Theorem 7.7.1, to find G_{η_1 η_2 ... η_h}(x_{k_0}) for a given x_{k_0}.

Procedure GF (G_{η_1 η_2 ... η_h}(x_{k_0}) Finding Procedure):
F1. H ← {x_{k_0}}, D ← {x_{k_0}}, D1 ← empty.
F2. For each i = 1, 2, ..., h and each x_v in D, if η_i(x_v) is not a variable in H or D, then store η_i(x_v) in D1.
F3. H ← (union of H and D), D ← D1, D1 ← empty.
F4. If D is empty, then the procedure terminates. Otherwise go to step F2.

By applying the above procedure to a given variable x_{k_0}, G_{η_1 η_2 ... η_h}(x_{k_0}) is obtained as the set H in the above procedure.

In the following we shall show that η_1, η_2, ..., η_h are preserved during the three reduction operations stated in Section 7.6.

Lemma 7.7.5 Suppose there is no dominating row in the constraint matrix A of (P) and η_1, η_2, ..., η_h are symmetric permutations of (P). If column a_{k_0} is dominated by some other column, then column a_{k_i} is dominated by some other column for every x_{k_i} ∈ G_{η_1 η_2 ... η_h}(x_{k_0}) = {x_{k_0}, x_{k_1}, ..., x_{k_q}}.

Proof Since x_{k_i} ∈ G_{η_1 η_2 ... η_h}(x_{k_0}), x_{k_i} = η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1}(x_{k_0}) for some positive integers k, p_1, p_2, ..., p_k and some j_1, j_2, ..., j_k ∈ {1, 2, ..., h}. Since η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1} is symmetric and a_{k_0} is dominated by some other column, say a_{s_0}, column a_{k_i} is dominated by a_{s_i}, where x_{s_i} = η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1}(x_{s_0}), by Lemma 7.6.3. Q.E.D.

Lemma 7.7.6 Suppose there is no dominating row in the constraint matrix A of (P) and η_1, η_2, ..., η_h are symmetric permutations of (P) such that column a_{s'} does not dominate column a_{t'} for every pair of variables x_{s'} and x_{t'} in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}).
If there exists a pair of variables x_s and x_t in H^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) such that a_s dominates a_t, then

(1) a_s = a_t;
(2) one of x_s and x_t, say x_s, must be a variable in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) (x_t may or may not be in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}));
(3) there exist symmetric permutations η_1*, η_2*, ..., η_h* of (P) such that
  (a) each η_i* is obtained from η_i by the following modification: if there exists some variable x_v in D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) such that η_i(x_v) = x_s, then η_i* is defined as

      η_i*: x_d → η_i(x_d) if x_d ≠ x_u, x_v,
            x_u → x_s,
            x_v → x_t,

      where x_u is the variable such that η_i(x_u) = x_t; otherwise η_i* = η_i;
  (b) H^ℓ_{η_1* η_2* ... η_h*}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0});        (7.7.11)
  (c) D^ℓ_{η_1* η_2* ... η_h*}(x_{k_0}) = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0});        (7.7.12)
  (d) x_s is not a variable in H^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}), and D^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}) ⊆ D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).

Proof Since x_s and x_t are in H^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}), x_s = η_{j_k} ∘ η_{j_{k−1}} ∘ ... ∘ η_{j_1}(x_{k_0}) and x_t = η_{j'_{k'}} ∘ η_{j'_{k'−1}} ∘ ... ∘ η_{j'_1}(x_{k_0}) for some j_1, j_2, ..., j_k, j'_1, j'_2, ..., j'_{k'} ∈ {1, 2, ..., h}, where k ≤ ℓ + 1 and k' ≤ ℓ + 1. Since η_{j_k} ∘ η_{j_{k−1}} ∘ ... ∘ η_{j_1} and η_{j'_{k'}} ∘ η_{j'_{k'−1}} ∘ ... ∘ η_{j'_1} are symmetric, the numbers of non-zero elements in each of a_{k_0}, a_s, and a_t are the same, by Lemma 7.6.2. Since a_s dominates a_t and the numbers of non-zero elements in a_s and a_t are the same,

a_s = a_t.        (7.7.15)

Thus, (1) is proved.

Since a_{s'} does not dominate a_{t'} for any pair of variables x_{s'} and x_{t'} in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), at least one of x_s and x_t must be a member of D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}). (This is because D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) = H^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) − H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}).) Thus, (2) is proved.

Let x_s be a member of D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}). Then the following statements are true:

(i) k = ℓ + 1, where k is the subscript of j such that x_s = η_{j_k} ∘ η_{j_{k−1}} ∘ ... ∘ η_{j_1}(x_{k_0});
(ii) x_v = η_{j_{k−1}} ∘ η_{j_{k−2}} ∘ ... ∘ η_{j_1}(x_{k_0}) is a member of D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) (by Theorem 7.7.1);        (7.7.16)
(iii) x_s = η_{j_k} ∘ η_{j_{k−1}} ∘ ... ∘ η_{j_1}(x_{k_0}) = η_{j_k}(x_v).

In the above equation (iii), η_{j_k}
is one of η_1, η_2, ..., η_h. Let it be η_i, and let x_u be the variable such that η_i(x_u) = x_t. Define η_i* as

η_i*: x_d → η_i(x_d) if x_d ≠ x_u, x_v,
      x_u → x_s,
      x_v → x_t.        (7.7.17)

Since a_s = a_t, the matrix η_i*(A) is identical to the matrix η_i(A), by the definitions of η_i and η_i*. Thus,

η_i* is a symmetric permutation of (P),        (7.7.18)

by Theorem 7.4.1. Next, let us show that

(A) x_u is not a member of H^ℓ_{η_1 η_2 ... η_h}(x_{k_0});        (7.7.19)
(B) H^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0})        (7.7.20)
    and D^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0});        (7.7.21)
(C) D^{ℓ+1}_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) ⊆ D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).        (7.7.22)

Proof of (A) Let r_i be an integer such that η_i^{r_i} is the identity permutation. Then η_i^{r_i−1}(x_t) = η_i^{r_i−1}(η_i(x_u)) = η_i^{r_i}(x_u) = x_u and η_i^{r_i−1}(x_s) = η_i^{r_i−1}(η_i(x_v)) = η_i^{r_i}(x_v) = x_v. Since η_i^{r_i−1} is a symmetric permutation and a_s dominates a_t, a_v dominates a_u, by Lemma 7.6.3. By (7.7.16), x_v ∈ D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}). Then x_u is not a member of H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), because, if x_u were a member, we would have x_v and x_u in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) such that a_v dominates a_u, contradicting that a_{s'} does not dominate a_{t'} for any pair of different variables x_{s'} and x_{t'} in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}). Thus, (A) is proved.

Proof of (B) By (A), x_u is not a member of H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), and, by (7.7.16), x_v is a member of D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) and hence not a member of H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}).        (7.7.23)

Therefore,

η_{j*_k} ∘ η_{j*_{k−1}} ∘ ... ∘ η_{j*_1}(x_{k_0}) ≠ x_u, x_v        (7.7.24)

for any positive integer k ≤ ℓ − 1 and any j*_1, j*_2, ..., j*_k in {1, 2, ..., h}.
Since η_i* agrees with η_i on every variable other than x_u and x_v, by (7.7.17), it follows from (7.7.23) and (7.7.24) that

η'_{j*_{k*}} ∘ η'_{j*_{k*−1}} ∘ ... ∘ η'_{j*_1}(x_{k_0}) = η_{j*_{k*}} ∘ η_{j*_{k*−1}} ∘ ... ∘ η_{j*_1}(x_{k_0})

for any k* such that 1 ≤ k* ≤ ℓ and any j*_1, j*_2, ..., j*_{k*} in {1, 2, ..., h}, where η'_j denotes η_j for j ≠ i and η_i* for j = i: every intermediate result η_{j*_p} ∘ ... ∘ η_{j*_1}(x_{k_0}) with p ≤ k* − 1 ≤ ℓ − 1 is different from x_u and x_v by (7.7.24), so each application of η'_{j*_{p+1}} coincides with that of η_{j*_{p+1}}. Therefore,

H^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0})

and

D^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0})

are proved, i.e., (B) is proved.

Proof of (C) Let us show that each variable in D^{ℓ+1}_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) is also a variable in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}). Since D^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) from (7.7.21), and every variable in D^{ℓ+1}_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) is mapped from some variable in D^ℓ_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) by one of the permutations η_1, ..., η_{i−1}, η_i*, η_{i+1}, ..., η_h, by Theorem 7.7.1, we only have to show that if η_i*(x_{d'}) is in D^{ℓ+1}_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) for some x_{d'} in D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), then η_i*(x_{d'}) is also in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).
Since x_u is not a member of H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) (from (A)), x_u is not a member of D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), by (7.7.7). Since x_{d'} is in D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), x_{d'} ≠ x_u. If x_{d'} ≠ x_v, then η_i*(x_{d'}) = η_i(x_{d'}) is in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}), by (7.7.17). If x_{d'} = x_v, then η_i*(x_{d'}) = η_i*(x_v) = x_t, by (7.7.17). In the following, we shall show that x_t is a variable in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).

Since η_i*(x_{d'}) = x_t is in D^{ℓ+1}_{η_1 η_2 ... η_{i−1} η_i* η_{i+1} ... η_h}(x_{k_0}) and D^{ℓ+1}_{η_1 ... η_i* ... η_h}(x_{k_0}) = H^{ℓ+1}_{η_1 ... η_i* ... η_h}(x_{k_0}) − H^ℓ_{η_1 ... η_i* ... η_h}(x_{k_0}), x_t is not in H^ℓ_{η_1 ... η_i* ... η_h}(x_{k_0}). From (B), x_t is also not in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}). Since x_t is in H^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) and x_t is not in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), x_t must be in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}), by (7.7.7). Thus, (C) is proved.

From (B), H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) and D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) are not changed after η_i is modified to η_i*. From (C), D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) will contain no new variable after η_i is modified to η_i*. If all η_i such that η_i(x_v) = x_s for some x_v in D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) are modified to η_i* as defined in (7.7.17), and the modified η_1, η_2, ..., η_h are denoted by η_1*, η_2*, ..., η_h*, then, from (A) and (7.7.17),

x_s is not a variable in H^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}),        (7.7.26)

proving the first part of (d). From (7.7.20), (7.7.21), and the definition of η_1*, η_2*, ..., η_h*,

H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = H^ℓ_{η_1* η_2* ... η_h*}(x_{k_0}) and D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = D^ℓ_{η_1* η_2* ... η_h*}(x_{k_0}),

proving (b) and (c), respectively. From (7.7.22), (7.7.26), and the definition of η_1*, η_2*, ..., η_h*, D^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}) ⊆ D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}), proving the remaining part of (d). Q.E.D.

Lemma 7.7.7 Suppose there is no dominating row in the constraint matrix A of (P) and η_1, η_2, ..., η_h are symmetric permutations of (P) such that column a_{s'} does not dominate column a_{t'} for every pair of variables x_{s'} and x_{t'} in H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}). If there exists some pair of variables x_s and x_t in H^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) such that a_s dominates
a_t, then there exist symmetric permutations η_1+, η_2+, ..., η_h+ such that

(1) η_1+, η_2+, ..., η_h+ are obtained from η_1, η_2, ..., η_h by repeating the modification described in (a) of Lemma 7.7.6 until no pair of variables x_{s'} and x_{t'} such that a_{s'} dominates a_{t'} exists in H^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0});
(2) a_{s'} does not dominate a_{t'} for any pair of variables x_{s'} and x_{t'} in H^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0});
(3) H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = H^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}) and D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = D^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0});
(4) D^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0}) ⊆ D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).

Proof Modifying η_1, η_2, ..., η_h according to the modification described in (a) of Lemma 7.7.6, one obtains symmetric permutations η_1*, η_2*, ..., η_h* of (P) such that x_s is not a variable in H^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}), by Lemma 7.7.6. These η_1*, η_2*, ..., η_h* are symmetric permutations satisfying properties (1), (3), and (4) of this lemma, by Lemma 7.7.6. If there exists no pair of variables x_{s*} and x_{t*} in H^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}) such that a_{s*} dominates a_{t*}, they satisfy property (2) also. Regarding these η_1*, η_2*, ..., η_h* as η_1+, η_2+, ..., η_h+, this lemma is proved. If there exists a pair of variables x_{s_1} and x_{t_1} in H^{ℓ+1}_{η_1* η_2* ... η_h*}(x_{k_0}) such that a_{s_1} dominates a_{t_1}, then one can repeat the same modification of (a) of Lemma 7.7.6, regarding η_1*, η_2*, ..., η_h* as η_1, η_2, ..., η_h. Each time the modification is made, at least one variable is deleted from D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}), by Lemma 7.7.6. Since the number of variables in D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) is finite, this process will lead to symmetric permutations η_1+, η_2+, ..., η_h+ such that a_{s'} does not dominate a_{t'} for any pair of variables x_{s'} and x_{t'} in H^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0}). Since H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) and D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) are not changed at each modification,

H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = H^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}) and D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = D^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}).
Since the new D^{ℓ+1}(x_{k_0}) is contained in the old D^{ℓ+1}(x_{k_0}) at each modification,

D^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0}) ⊆ D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}). Q.E.D.

It will be proved in Lemma 7.7.8 that the following procedure can be used to update the symmetric permutations η_1, η_2, ..., η_h of (P) in such a way that the resulting permutations η̂_1, η̂_2, ..., η̂_h are still symmetric permutations of (P).

Procedure GF(x_{k_0}):
F1. H ← empty, D ← {x_{k_0}}, D1 ← empty.
F2. For each i = 1, 2, ..., h and each x_v in D, do the following:
  F2.1 x_s ← η_i(x_v).
  F2.2 If x_s is not a variable in H or in D, and a_s does not dominate a_t for any x_t in H or in D, then store x_s in D1.
  F2.3 If x_s is not a variable in H or in D, and a_s dominates a_t for some x_t in H or in D, then update η_i to η_i* defined as

      η_i*: x_d → η_i(x_d) if x_d ≠ x_u, x_v,
            x_u → x_s,
            x_v → x_t,

      where x_u is the variable such that η_i(x_u) = x_t.
F3. H ← (union of H and D), D ← D1, D1 ← empty.
F4. If D is empty, then the procedure terminates. Otherwise go to step F2.

Lemma 7.7.8 Suppose there is no dominating row in the constraint matrix A of (P) and η_1, η_2, ..., η_h are symmetric permutations of (P). Then
(1) for any given variable x_{k_0}, the last updated permutations η_1, η_2, ..., η_h produced by the above GF procedure, denoted by η̂_1, η̂_2, ..., η̂_h, are symmetric permutations of (P);
(2) column a_{s'} does not dominate column a_{t'} for any pair of variables x_{s'} and x_{t'} in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}).

Proof From Lemma 7.7.6, the permutation η_i* of step F2.3 is still a symmetric permutation of (P). So η̂_1, η̂_2, ..., η̂_h are symmetric permutations of (P). In the following we shall show that the set H in the procedure GF is the set G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) after the procedure is applied to x_{k_0}. Then, since column a_{s'} does not dominate column a_{t'} for every pair of variables x_{s'} and x_{t'} in H (from the way H is constructed), the lemma is proved.

Suppose H = H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}) and D = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0})
before we go into step F2 of the GF procedure. Two cases may occur in step F2.

Case (1) None of the permutations η_1, η_2, ..., η_h is updated in step F2.3. In this case, the procedure GF goes through F2.1 and F2.2 only in step F2. After the procedure goes through step F2, the set D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}) is the set D1, by Theorem 7.7.1. So, after the procedure goes through step F3,

H = H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}) + D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}),
and
D = D^{ℓ+1}_{η_1 η_2 ... η_h}(x_{k_0}).

Case (2) Some of the permutations η_1, η_2, ..., η_h are updated in step F2.3. In this case, the procedure GF goes through step F2.3 at least once in step F2. The updated symmetric permutations η_1+, η_2+, ..., η_h+ have the following properties:

H^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}) = H^ℓ_{η_1 η_2 ... η_h}(x_{k_0}),        (7.7.27)
and
D^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}) = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}),        (7.7.28)

by Lemma 7.7.7. From (7.7.28) and Theorem 7.7.1, D^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0}) is the set D1. So, after the procedure GF goes through step F3,

H = H^{ℓ−1}_{η_1+ η_2+ ... η_h+}(x_{k_0}) + D^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}) = H^ℓ_{η_1+ η_2+ ... η_h+}(x_{k_0}),
and
D = D^{ℓ+1}_{η_1+ η_2+ ... η_h+}(x_{k_0}).

From cases (1) and (2), if H = H^{ℓ−1}_{η_1 η_2 ... η_h}(x_{k_0}) and D = D^ℓ_{η_1 η_2 ... η_h}(x_{k_0}), then, after the procedure GF goes through steps F2 and F3, H = H^ℓ_{ρ_1 ρ_2 ... ρ_h}(x_{k_0}) and D = D^{ℓ+1}_{ρ_1 ρ_2 ... ρ_h}(x_{k_0}), where ρ_1, ρ_2, ..., ρ_h are η_1, η_2, ..., η_h or η_1+, η_2+, ..., η_h+.

In step F1 of the procedure GF, H is initialized as the empty set and D as {x_{k_0}}. After the procedure goes through steps F2 and F3 for the first time, H = {x_{k_0}} = H^0_{η_1 η_2 ... η_h}(x_{k_0}) and D = D^1_{ρ_1 ρ_2 ... ρ_h}(x_{k_0}). If D is empty, then G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) = {x_{k_0}} = H. If D is not empty, then, by repeating steps F2 and F3, the procedure will arrive at symmetric permutations η̂_1, η̂_2, ..., η̂_h, where D = D^{k'}_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) is empty for some k', such that G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) = H^{k'−1}_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) = H,
since the number of variables in X = {x_1, x_2, ..., x_n} is finite and the arguments in cases (1) and (2) can be repeatedly applied to sets H and D such that H = H^{ℓ−1}_{ρ_1 ρ_2 ... ρ_h}(x_{k_0}) and D = D^ℓ_{ρ_1 ρ_2 ... ρ_h}(x_{k_0}) for some positive integer ℓ and some symmetric permutations ρ_1, ρ_2, ..., ρ_h. Q.E.D.

It will be proved in Theorem 7.7.9 that the following GDCDP (General Dominated Column Deletion Procedure) can be applied to reduce the problem (P).

Procedure GDCDP (General Dominated Column Deletion Procedure):
GD1. Delete dominating rows from the constraint matrix A.
GD2. Find a dominated column a_{k_0}. If no dominated column is found, then the procedure terminates.
GD3. Apply the GF procedure to x_{k_0}, thereby updating η_1, η_2, ..., η_h to η̂_1, η̂_2, ..., η̂_h.
GD4. Delete all columns with their corresponding variables in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) and set all variables in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) to 0.
GD5. Update η̂_1, η̂_2, ..., η̂_h to η̂_1', η̂_2', ..., η̂_h', which are obtained by restricting η̂_1, η̂_2, ..., η̂_h to the problem reduced at step GD4, and then go to step GD2.

Theorem 7.7.9 Let η_1, η_2, ..., η_h be symmetric permutations of the problem (P). Then the above GDCDP can be applied to reduce the problem (P). If (P9) denotes the problem reduced by applying the GDCDP procedure to (P), then the last updated permutations η_1, η_2, ..., η_h are symmetric permutations of (P9).

Proof Symmetric permutations η_1, η_2, ..., η_h are preserved during step GD1, by Lemma 7.6.1. In step GD4, since a_{k_0} is dominated by some other column, each of the columns with their corresponding variables in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) is also dominated by some other column, by Lemma 7.7.5. (Note that some of these columns may dominate each other.) But because, by Lemma 7.7.8, column a_{s'} does not dominate column a_{t'} for any pair of variables x_{s'} and x_{t'} in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}), there is no subset of columns in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) which dominate each other.
Thus all columns with their corresponding variables in G_{η̂_1 η̂_2 ... η̂_h}(x_{k_0}) can be deleted, without the danger of obtaining wrong solutions by deleting columns which dominate one another. By Theorem 7.7.4, the permutations η̂_1', η̂_2', ..., η̂_h' obtained by restricting η̂_1, η̂_2, ..., η̂_h to the problem reduced at step GD4 are symmetric permutations of this reduced problem. Q.E.D.

From the above theorem, if the GDCDP procedure is applied to a given problem (P) with symmetric permutations η_1, η_2, ..., η_h, then the reduced problem (P9) will have no dominating row and no dominated column in its constraint matrix. Also, the reduced permutations η_1', η_2', ..., η_h' obtained from η_1, η_2, ..., η_h by the corresponding modifications are symmetric permutations of (P9).

It will be proved in Theorem 7.7.10 that the following procedure GECFP (General Essential Column Finding Procedure) can be applied to reduce the problem (P).

Procedure GECFP:
GE1. Find an essential column a_{k_0} of the constraint matrix. If no essential column is found, then the procedure terminates.
GE2. Apply the procedure GF to the variable x_{k_0} and obtain G_{η_1 η_2 ... η_h}(x_{k_0}). Fix all variables in G_{η_1 η_2 ... η_h}(x_{k_0}) to 1 and delete all rows covered by the columns with their corresponding variables in G_{η_1 η_2 ... η_h}(x_{k_0}).
GE3. Update η_1, η_2, ..., η_h to η_1', η_2', ..., η_h', obtained by restricting η_1, η_2, ..., η_h to this reduced problem (i.e., the problem obtained by fixing all variables in G_{η_1 η_2 ... η_h}(x_{k_0}) to 1), and go to step GE1.

Theorem 7.7.10 Let (P) be a problem with symmetric permutations η_1, η_2, ..., η_h. Then the above GECFP can be applied to reduce the problem (P). If (P10) is the reduced problem obtained by applying the GECFP to the problem (P), then the last updated permutations η_1, η_2, ..., η_h are symmetric permutations of (P10).

Proof η_{j_k}^{p_k} ∘ η_{j_{k−1}}^{p_{k−1}} ∘ ... ∘ η_{j_1}^{p_1} is a symmetric permutation of the problem (P) for any positive integers k, p_1, p_2, ..., p_k and any η_{j_1}, η_{j_2},
..., η_{j_k} in {η_1, η_2, ..., η_h}. Therefore, if a_{k_0} is an essential column, then all columns with their corresponding variables in G_{η_1 η_2 ... η_h}(x_{k_0}) are essential, by Lemma 7.6.8. Thus, in solving (P), all variables corresponding to the columns in G_{η_1 η_2 ... η_h}(x_{k_0}) can be fixed to 1. By Theorem 7.7.4, the permutations η_1', η_2', ..., η_h' obtained by restricting η_1, η_2, ..., η_h to the reduced problem (the problem obtained by fixing the variables in G_{η_1 η_2 ... η_h}(x_{k_0}) to 1) are symmetric permutations of this reduced problem. Q.E.D.

From Theorems 7.7.9 and 7.7.10, if the procedures GDCDP and GECFP are repeatedly applied to the problem (P) with symmetric permutations η_1, η_2, ..., η_h, then the reduced problem (P11), to which none of the three reduction operations can be applied, is a symmetric problem with symmetric permutations η_1', η_2', ..., η_h', which are obtained from η_1, η_2, ..., η_h by the corresponding modifications described in Theorems 7.7.9 and 7.7.10.

7.8 Some Computational Results

The symmetric properties of the minimal covering problem discussed in this chapter are utilized in the implicit enumeration algorithm of Section 5.3. A detailed description of this is given in [32]. This section gives some computational comparisons of solving problems with and without using the symmetric property of the given problem. Seven symmetric problems are tested. The constraint matrices of problems 1 and 2 are of the form (7.7.1), where the blocks D, E, F, and G are specific small zero-one matrices, with D = E = F = G for problem 1 and four distinct matrices for problem 2. Problem 3 is the testing problem IBM No. 9 in [15].
Its constraint matrix A is a 15-column block matrix composed of repeated small zero-one blocks B, C, and D. It can be proved by Theorem 7.4.1 that the permutation defined by

x_i → x_{i+10} if 1 ≤ i ≤ 5,
x_i → x_{i−5}  if 6 ≤ i ≤ 15,

is a symmetric permutation of this problem.

Problem 4 is the smaller* of the two difficult problems reported in [24]. Its constraint matrix A has 27 columns and is a block matrix built from the blocks K, I, C_k, P_k, and 0, where

K =
0 1 1 1 0 0 0 0 0
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
0 0 0 0 1 1 1 0 0
0 0 0 1 0 1 0 1 0
0 0 0 1 1 0 0 0 1
1 0 0 0 0 0 0 1 1
0 1 0 0 0 0 1 0 1
0 0 1 0 0 0 1 1 0
1 0 0 1 0 0 1 0 0
0 1 0 0 1 0 0 1 0
0 0 1 0 0 1 0 0 1,

I is the 9 × 9 identity matrix, C_k is a 9 × 9 zero-one matrix with the elements in the k-th column equal to 1 and all other elements equal to 0, for k = 1, 2, ..., 9, P_k is a 9 × 9 permutation matrix for k = 1, 2, ..., 9, and 0 is the matrix with all 0 elements. A complete description of this problem is given in [24]. It can be shown by Theorem 7.4.1 that the permutation η defined by

x_i → x_{i+3} if 1 ≤ i ≤ 6, 10 ≤ i ≤ 15, or 19 ≤ i ≤ 24,
x_i → x_{i−6} if 7 ≤ i ≤ 9, 16 ≤ i ≤ 18, or 25 ≤ i ≤ 27,

is a symmetric permutation of this problem.

* See the footnote in Section 9.2, page 162.

Problems 5 and 6 are obtained from minimizing the logic expressions of seven-variable switching functions. Both switching functions are partially symmetric in the variables y_1, y_2, and y_3. There are 3 symmetric permutations (derived from exchanging the pairs of switching variables (y_1, y_2), (y_1, y_3), and (y_2, y_3)) for each of these 2 problems. Problem 7 is obtained from minimizing the logic expression of the totally symmetric six-variable switching function S (a switching function whose value is 1 if and only if exactly 2, 3, or 4 input variables are 1). There are 15 symmetric permutations (derived from exchanging each pair of variables) provided for this problem.
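The symmetric permutation quoted above for problem 3 is easy to write down explicitly, and the procedure GF of Section 7.7 then reduces to a breadth-first closure over the generators. A minimal Python sketch (0-based indices, so the variable x_{i+1} is index i; the function name is mine, and the domination-driven updating of step F2.3 is omitted here):

```python
def gf_orbit(perms, k0):
    # Procedure GF, steps F1-F4: returns G_{eta1...etah}(x_{k0}) as H
    H, D = set(), {k0}                             # F1
    while D:                                       # F4
        H |= D                                     # F3
        D = {p[v] for p in perms for v in D} - H   # F2
    return H

# problem 3's symmetric permutation on x_1, ..., x_15:
# x_i -> x_{i+10} for 1 <= i <= 5, x_i -> x_{i-5} for 6 <= i <= 15
eta = [i + 10 if i < 5 else i - 5 for i in range(15)]
print(sorted(gf_orbit([eta], 0)))   # [0, 5, 10], i.e. {x_1, x_6, x_11}
```

Each variable of problem 3 thus lies in an orbit of three columns, which is exactly the grouping exploited by the GDCDP and GECFP reductions.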
The computational results of solving these seven problems, both with and without using the symmetric property of the given problem, are shown in Table 7.8.1. The computer used for obtaining these results is an IBM 360/75J. The columns under "NO. OF ITER.", "NO. OF BKTRK", and "TIME IN SEC." are explained in Table 5.4.1. A "?" in the table shows that the figure in that field is not known. From this computational comparison, one can see that utilization of the symmetric property of the given problem yields better computational results. The computational improvement through the utilization of the symmetric property is more than ten times for problem 7.

Table 7.8.1 Computational results of solving the seven symmetric problems with and without using the symmetric property of the given problem.

8. PERMUTATIONAL PRECLUDING PROCEDURE

Let η be a general permutation (not necessarily symmetric) on X = {x_1, x_2, ..., x_n}. In solving the problem (P) by the implicit enumeration method, if the subproblem with x_i fixed to 1 has been enumerated and if x_j = η(x_i), then in the subproblem with x_i fixed to 0 and with x_j fixed to 1, a better feasible solution (a feasible solution better than the best solution obtained so far) can only be found as an (x_1, x_2, ..., x_n) such that its corresponding (η(x_1), η(x_2), ..., η(x_n)) is not a feasible solution. In this chapter, properties of this kind of feasible solution are pursued. These properties are then used to preclude some subproblems where no better feasible solution can be found. The discussion in Section 5.2 for precluding subproblems in the enumeration procedure becomes a special case of this chapter.

8.1 Generalized E-sets

Theorem 8.1.1 Let η be a permutation (not necessarily symmetric) on {x_1, x_2, ..., x_n}. If (x_1, x_2, ..., x_n) is a feasible solution of the problem (P) such that (η(x_1), η(x_2), ..., η(x_n)) is not a feasible solution, then there exists some row r_k = (a_{k1}, a_{k2}, ..., a_{kn}) of A such that
(1) r_k does not dominate any row of η(A) (see Section 7.4 for the definition of η(A));
(2) if a_{kℓ} = 1 in r_k, then η(x_ℓ) = 0.
Furthermore, if η(x_i) = x_j = 1, then a_{ki} is 0.

Proof Since (x_1, x_2, ..., x_n) is a feasible solution of (P),

A (x_1, x_2, ..., x_n)^T ≥ (1, 1, ..., 1)^T.        (8.1.1)
fixed to 1, a better feasible solution (a feasible solution better than the best solution obtained so far) can only be found as (x , x , ..., x ) such that its corresponding ( n (x ), n(x ), . . . , n (x ) ) is not a feasible solution. In this J- £ n chapter, properties of this kind of feasible solutions are persued. Then these properties are used to preclude some subproblems where no better feasible solution can be found. The discussion in Section 5.2 for precluding subproblems in the enumeration procedure becomes a special case of this chapter. 8.1 Generalized E-sets Theorem 8.1.1 Let n be a permutation (not necessarily symmetric) on i x , x , . . . , x } . If (x , x , . . . ,x ) is a feasible solution ■*■ *- n i z n of the problem (P) such that (n (x. ) , n(x_), . . . , n (x ) ) is not a 1 z n feasible solution, then there exists some row r = (a, , , a a ) k kl' k2' ' kn of A such that (1) r does not dominate any row of n (A) (see Section 7.4 for the definition of n(A)), (2) if a, =1 in r, , then n (x ) = 0. k£ k n Furthermore if r\ (x.) = x. = 1, then a. . is 0, 1 j ki Proof Since (x n , x„ , ..., x ) is a feasible solution of (P) , 12 n 146 ^x ^ X l x„ X \ n (8.1.1) Rewrite the above inequality as n(A) n( X;L ) n(x 2 ) n(x ) n (8.1.2) Since (n(x.), n(x ), ..., n (x )) is not a feasible solution, 12 n n(x 1 ) n(x 2 ) n(x ) ny \ i 1 1 (8.1.3) i.e., there exists some row r, of A such that k a kl' r|( V + a k2' n(x 2 ) + •'• + a kn' ri(X n ) = °' (8 ' 1 - 4) 147 The above equality shows (1) r does not dominate any row of n(A). (Otherwise from (8.1.2), a .n(x ) + a._.n(x ) + ...+ a, .n(x ) > 1 holds) kl 1 k2 2 kn n (2) if a = 1, then n(x £ ) = 0. Furthermore if n (x . ) = x. = 1, then a, .must be 0, since a =1 i J ki ki implies t\(k.) = by (2). Q.E.D, For a given permutation n and a given index j let E ( j ) be the set of rows of A satisfying condition (1) of Theorem 8.1.1 and having their i-th components equal 0, where i is the index of the variable x. 
such that η(x_i) = x_j. E_η(j) is called the generalized E-set of column a_j with respect to the permutation η.

Example 8.1.1 Let the constraint matrix A of the problem (P) be the six-column 0-1 matrix displayed in (8.1.5), and define two permutations η_1 and η_2 on X = {x_1, x_2, x_3, x_4, x_5, x_6} by (8.1.6) and (8.1.7). From the definitions of η_1(A) and η_2(A), displayed in (8.1.8) and (8.1.9), the generalized E-sets E_η1(3) and E_η2(3) = {r_5, r_6} are read off. [The displays (8.1.5) through (8.1.9) are garbled in this copy and their entries are not reproduced here.]

8.2 Precluding Of Subproblems

Precluding of subproblems using the properties stated in Theorem 8.1.1 is discussed in this section. Let η⁻¹ denote the inverse mapping of η, and let η⁻¹(a_j) denote the column a_i such that η(x_i) = x_j.

Example 8.2.1 Let η_2 be the permutation defined in (8.1.7) and let A be the constraint matrix (8.1.5). Then η_2⁻¹ is given by (8.2.1) [garbled in this copy], and η_2⁻¹(a_2) = a_5.

Theorem 8.2.1 Let E_η(j) be the generalized E-set of a_j with respect to permutation η, and let S be a partial solution with x_j, x_{j_1}, ..., x_{j_r} fixed to 1. If each row in E_η(j) is covered by some of η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}), then no row in E_η(j) satisfies condition (2) stated in Theorem 8.1.1 for any feasible completion of S. Furthermore, if x_j = η(x_i) = 1, then (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution of (P) for every feasible completion (x_1, x_2, ..., x_n) of S.

Proof  Suppose r_k is a row in E_η(j) satisfying condition (2) of Theorem 8.1.1 for some feasible completion (x_1, x_2, ..., x_n) of S. Define η⁻¹(j_t) to be the index of the variable such that η⁻¹(x_{j_t}) = x_{η⁻¹(j_t)} for t = 1, 2, ..., r. Since each row in E_η(j) is covered by some of η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}
), row r_k is covered by some of η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}). So at least one of a_{k,η⁻¹(j_1)}, a_{k,η⁻¹(j_2)}, ..., a_{k,η⁻¹(j_r)}, say a_{k,η⁻¹(j_i)}, must be 1. Since a_{k,η⁻¹(j_i)} = 1, η(x_{η⁻¹(j_i)}) = x_{j_i} must be 0, by condition (2) of Theorem 8.1.1. This contradicts that x_{j_1}, x_{j_2}, ..., x_{j_r} are fixed to 1 in S.

Furthermore, if x_j = η(x_i) = 1, and if there is a feasible completion (x_1, x_2, ..., x_n) of S such that (η(x_1), η(x_2), ..., η(x_n)) is not a feasible solution, then, by Theorem 8.1.1, there exists some row r_k of A satisfying the conditions stated in that theorem. Since x_j = η(x_i) = 1, a_ki = 0 by Theorem 8.1.1. By definition, row r_k must then be a row in E_η(j). But since no row in E_η(j) satisfies condition (2) of Theorem 8.1.1, r_k cannot be a row in E_η(j). This is a contradiction. Q.E.D.

Let η be a permutation on {x_1, x_2, ..., x_n} such that η(x_i) = x_j. After the subproblem with x_i fixed to 1 has been enumerated, in the subproblem with x_i fixed to 0 and x_j fixed to 1, one can test whether each row in the generalized E-set E_η(j) of a_j with respect to η is covered by some of the columns η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}), where j_1, j_2, ..., j_r are the indices of the variables which are fixed to 1 and are not equal to x_j in the current partial solution S. If each row in E_η(j) is covered by some of η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}), then (η(x_1), η(x_2), ..., η(x_n)) is also a feasible solution for every feasible completion (x_1, x_2, ..., x_n) of the current partial solution S, by Theorem 8.2.1. Since the subproblem with x_i fixed to 1 has been enumerated, and (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution with η(x_i) = x_j = 1, every feasible completion of S cannot be better than the best solution obtained so far.
So the current subproblem can be skipped without losing a better feasible solution.

As a special case, let us consider the permutation defined by

    η: x_i -> x_j,  x_j -> x_i,  x_k -> x_k if k ≠ i, j,    (8.2.2)

for some i and j. From the definition of η,

    η⁻¹(a_j) = a_i,  η⁻¹(a_k) = a_k if k ≠ i, j.    (8.2.3)

Theorem 8.2.2 The generalized E-set E_η(j) of the column a_j with respect to the particular η defined by (8.2.2) is a subset of E_ij, the E-set of column a_j with respect to column a_i (see Section 5.2 for the definition of an E-set).

Proof  From the definition of E_η(j), each row of E_η(j) does not dominate any row of η(A), and each row of E_η(j) must have its i-th element equal to 0, where i is the index of the variable x_i such that η(x_i) = x_j. From the definition (8.2.2) of η, the only rows of A that may not dominate any row of η(A) are those with their i-th and j-th elements different (i.e., one is 0 and the other is 1). Since each row of E_η(j) must have its i-th element equal to 0, the only rows that may be in E_η(j) are those with their i-th element equal to 0 and their j-th element equal to 1, i.e., those rows in E_ij. Q.E.D.

Example 8.2.2 Let us consider a problem whose constraint matrix A has seven rows r_1, ..., r_7 over the five variables x_1, ..., x_5. [The displayed entries of A, the permutation η defined on {x_1, x_2, x_3, x_4, x_5}, and the matrix η(A) are garbled in this copy.] From the definition of E_12, E_12 consists of two rows of A, while one of these rows dominates a row of η(A), so that E_η(2) contains only the other row; thus E_η(2) is a proper subset of E_12.

From (8.2.3) and Theorem 8.2.1, we obtain the following corollary.

Corollary 8.2.3 Let η be a permutation defined on {x_1, x_2, ..., x_n} as

    η: x_i -> x_j,  x_j -> x_i,  x_l -> x_l if l ≠ i, j.    (8.2.4)

If S is a partial solution with x_j, x_{j_1}, x_{j_2}, ..., x_{j_r} fixed to 1, and if each row in E_η(j) is covered by some of the columns a_{j_1}, a_{j_2}, ..., a_{j_r}
, then (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution of (P) for every feasible completion (x_1, x_2, ..., x_n) of S.

Proof  From (8.2.3), η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}) in Theorem 8.2.1 can be replaced by a_{j_1}, a_{j_2}, ..., a_{j_r}. This corollary is proved by substituting a_{j_1}, a_{j_2}, ..., a_{j_r} for η⁻¹(a_{j_1}), η⁻¹(a_{j_2}), ..., η⁻¹(a_{j_r}) in Theorem 8.2.1. Q.E.D.

Since E_η(j) is a subset of E_ij for the permutation η of (8.2.4), the two conditions of Corollary 8.2.3 are more easily satisfied than the two conditions for E_ij given in Theorem 5.2.1. Theoretically, the computational efficiency of the algorithm described in Section 5.3 can be further improved if the conditions given in Corollary 8.2.3 are checked instead of the conditions given in Theorem 5.2.1 for each partial solution S (i.e., E_ij is replaced by E_η(j) in the testing set generated in step M4.4). But in an actual programming experiment, it was found that most of the time the generalized E-set E_η(j) with respect to the particular permutation η of (8.2.4) for a subproblem with x_i = 0 and x_j = 1 is not different from the E-set E_ij for that subproblem. Consequently no example of actual computational improvement was found through the implementation of this generalized E-set with respect to this particular permutation. No experiment concerning the checking of generalized E-sets with respect to general permutations has been done. This would need further research.

9. THE MINIMAL COVERING PROBLEM WITH PARTITIONED CONSTRAINT MATRIX

In this chapter, we consider solving the minimal covering problem (P) with a constraint matrix of the following form:

        | A_1                 |
        |      A_2            |
    A = |           ...       |    (9.1)
        |                A_r  |
        | C_1  C_2  ...  C_r  |

where A_i is an m_i by n_i zero-one matrix for i = 1, 2, ..., r, C_i is a c by n_i zero-one matrix for i = 1, 2, ..., r, and all other parts of A are zero. The two problems reported in [24] are problems of this type.
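As a small concrete illustration of the block structure (9.1), the sketch below assembles a full 0-1 constraint matrix from diagonal blocks A_1, ..., A_r and coupling rows [C_1, C_2, ..., C_r]. The function name and data layout are mine, not from the thesis; the thesis programs are in FORTRAN.

```python
# Sketch: assemble a constraint matrix of the partitioned form (9.1).
# A_blocks[i] is an m_i x n_i 0-1 matrix; C_blocks[i] is a c x n_i 0-1
# matrix, all C_i having the same row count c.

def assemble_partitioned(A_blocks, C_blocks):
    n_sizes = [len(B[0]) for B in A_blocks]
    n = sum(n_sizes)                      # total number of variables
    rows = []
    offset = 0
    for B, n_i in zip(A_blocks, n_sizes):
        for r in B:                       # pad each A_i row with zeros
            rows.append([0] * offset + list(r) + [0] * (n - offset - n_i))
        offset += n_i
    for coupled in zip(*C_blocks):        # coupling rows: C_1 | C_2 | ... | C_r
        rows.append([bit for part in coupled for bit in part])
    return rows

A1 = [[1, 1], [0, 1]]
A2 = [[1, 0, 1]]
C1 = [[1, 0]]
C2 = [[0, 1, 0]]
A = assemble_partitioned([A1, A2], [C1, C2])
# A = [[1,1,0,0,0], [0,1,0,0,0], [0,0,1,0,1], [1,0,0,1,0]]
```

The zero blocks are what the decomposition of Section 9.1 exploits: each A_i constrains only the variables of its own group.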
A structure of the following form

        | A_1                         |
        |      A_2                    |
    A = |           ...               |    (9.2)
        |                A_{r-1}      |
        | C_1  C_2  ...  C_{r-1}  C_r |

is considered as a special case of the structure (9.1), in which A_r is a matrix with m_r = 0.

In solving the logic minimization problem for a multiple-output switching function, if the rows (corresponding to the true vectors) of the prime implicant table are rearranged such that true vectors implying more output functions are placed at the bottom of the table, then the constraint matrix of the formulated minimal covering problem will be of the form (9.2).

Example 9.1 Let us consider the problem of Example 2.2.2 again. If the true vectors implying both output functions are moved to the bottom of the prime implicant table, then this table becomes of the form (9.2). [The list of moved true vectors and the rearranged prime implicant table are garbled in this copy.]

The utilization of this special structure of the constraint matrix in solving the problem is discussed in this chapter. Some improvement in computational efficiency through this utilization is shown by examples.

9.1 Upper Bounds On The Values Of Groups Of Variables

A minimal covering problem of this type can be restated as follows:

    minimize x_1 + x_2 + ... + x_{n_1} + x_{n_1+1} + ... + x_{n_1+n_2} + ... + x_n

    subject to A_1 x^1 >= 1,
               A_2 x^2 >= 1,
               ...
               A_r x^r >= 1,
               C_1 x^1 + C_2 x^2 + ... + C_r x^r >= 1,

where x_i = 0 or 1 for i = 1, 2, ..., n, and

    x^1 = (x_1, ..., x_{n_1})^T,
    x^2 = (x_{n_1+1}, ..., x_{n_1+n_2})^T,
    ...
    x^r = (x_{n_1+n_2+...+n_{r-1}+1}, ..., x_n)^T.

To utilize the special structure of the above problem, the variables x_1, x_2, ..., x_n are first grouped into r groups as

    G_1 = {x_1, x_2, ..., x_{n_1}},
    G_2 = {x_{n_1+1}, ..., x_{n_1+n_2}},
    ...
    G_r = {x_{n_1+n_2+...+n_{r-1}+1}, ..., x_n}.

Then an upper bound on the value of each group will be found, and these upper bounds are used to preclude some unnecessary search in enumerating the problem. Let us first see some definitions and a theorem which will be used later.
For k = 1, 2, ..., r, let P_k denote the following problem: minimize the sum of the variables in G_k subject to A_k x^k >= 1, with each variable 0 or 1; and let Z_k denote the optimal value of P_k. For the current upper bound ZBAR on the optimal value of the whole problem, define

    U_i = ZBAR - (Z_1 + Z_2 + ... + Z_{i-1} + Z_{i+1} + ... + Z_r)    (9.3)

for i = 1, 2, ..., r.

Theorem 9.1 If U_i variables of G_i are fixed to 1 in a partial solution S, then no feasible completion of S has objective value smaller than ZBAR.

Proof  No constraint A_k x^k >= 1, for k = 1, 2, ..., i-1, i+1, ..., r, is satisfied by fixing only variables in G_i to 1. In order to satisfy these constraints, at least Z_1 + Z_2 + ... + Z_{i-1} + Z_{i+1} + ... + Z_r variables not in G_i must be fixed to 1. If U_i variables of G_i are already fixed to 1 in S, then any feasible completion of S must have value greater than or equal to U_i + Z_1 + Z_2 + ... + Z_{i-1} + Z_{i+1} + ... + Z_r, which is equal to ZBAR, by (9.3). Q.E.D.

Now the utilization of the special structure of the problem is described as follows. In enumerating the problem, if there exists some i such that U_i - 1 variables of G_i are fixed to 1 in the current partial solution, then all free variables in G_i must be fixed to 0, by Theorem 9.1, in order to get a feasible solution with objective value smaller than ZBAR, an upper bound on the optimal value of the problem. From this, one can see that U_i is an upper bound on the value of the group G_i for each i. The current subproblem may become infeasible when the free variables in G_i are fixed to 0. In this case, the current subproblem cannot have any feasible completion with a value smaller than ZBAR, and the program backtracks.

If there exists some i such that more than U_i - 1 variables of G_i are fixed to 1 in the current partial solution, then the program may backtrack immediately, since no feasible completion with objective value smaller than ZBAR can be found under the current partial solution, by Theorem 9.1.

When an improved upper bound ZBAR on the optimal value of the problem is found in the enumeration procedure, U_i is updated by (9.3) for each group G_i, i = 1, 2, ..., r. The above discussion of the utilization of the special structure of the problem can easily be incorporated into the algorithm of Section 5.3. A detailed description of this incorporation is given in [32].
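The pruning rule based on Theorem 9.1 and (9.3) is mechanical enough to sketch. In the fragment below (names and return values are mine, not the thesis program), Z[k] holds the optimal value of the subproblem P_k, ZBAR is the best objective value found so far, and ones[i] counts the variables of group G_i currently fixed to 1.

```python
# Sketch of the group upper bounds (9.3) and the pruning test of Theorem 9.1.

def group_upper_bounds(Z, ZBAR):
    # U_i = ZBAR - (Z_1 + ... + Z_{i-1} + Z_{i+1} + ... + Z_r)
    total = sum(Z)
    return [ZBAR - (total - z) for z in Z]

def prune_action(ones, U):
    # U_i ones already in G_i: no completion beats ZBAR, so backtrack.
    if any(o >= u for o, u in zip(ones, U)):
        return "backtrack"
    # U_i - 1 ones in G_i: the free variables of G_i must be fixed to 0.
    for i, (o, u) in enumerate(zip(ones, U)):
        if o == u - 1:
            return ("fix_group_to_zero", i)
    return None

U = group_upper_bounds([5, 5, 5], ZBAR=16)   # U = [6, 6, 6]
```

With Z_1 = Z_2 = Z_3 = 5 and ZBAR = 16, each group may contribute at most six 1s to a solution that improves on ZBAR.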
9.2 Some Computational Results

Three problems of the type discussed in Section 9.1 were tested by this incorporated algorithm. Computational results are shown in Table 9.2.1, along with the results for the same three problems obtained by using the algorithm of Section 5.3. The programs are coded in FORTRAN and compiled by the FORTRAN H compiler. The problems were tested on an IBM 360/75J computer. All the columns in this table are explained in Table 5.4.1.

    PROB.   CHECKING UPPER BOUND            WITHOUT CHECKING UPPER BOUND
    NO.     FOR EACH GROUP                  FOR EACH GROUP
            NO. OF   NO. OF   TIME IN      NO. OF   NO. OF   TIME IN
            ITER.    BKTRK    SEC          ITER.    BKTRK    SEC
    1        5647     2963     77.25        6321     3063     94.14
    2        4195     2738    161.83        4818     3204    200.89
    3        8765     5008    246.39       12425     6554    356.37

    Table 9.2.1 Comparison of two cases: with and without checking the upper bound for each group of variables.

The first problem tested is the smaller one, A27, of the two difficult problems reported in [24]*. The constraint matrix of this problem is of the form

    | A_1            |
    |      A_2       |
    |           A_3  |
    | C_1  C_2  C_3  |

where A_1, A_2, A_3 are the same matrix of size 12x9 each, and [C_1, C_2, C_3] is an 81x27 matrix. The values Z_1, Z_2, Z_3 for the three smaller problems P_1, P_2, P_3 are all 5.

The constraint matrix of the second problem is of the form

    | A_1 |
    | C_1 |

It is obtained from a prime implicant table of a six-variable switching function with two outputs f_1 and f_2 by permuting its rows. Here C_1 is the prime implicant table of the switching function f_1·f_2. It is an 84x36 matrix. A_1 is the concatenation of the prime implicant tables of f_1 and f_2. It took only a few centiseconds to get Z_1 = 8 for the problem P_1. In solving this problem, Z_2 must be set to 0.

* The optimal value of the larger one, A45, of the two problems in [24] is proved by this program to be 30 in about 135 minutes, with about 196,616 iterations. If the symmetric property (see Chapter 7) of this problem is taken into consideration, it can be proved in about 90 minutes with about 159,500 iterations.
The third problem tested was constructed by the author; its constraint matrix is of the form

    | A_1       |
    |      A_2  |
    | C_1  C_2  |

where A_1 is a 50x60 matrix, A_2 is a 39x55 matrix, and [C_1, C_2] is an 11x115 matrix. It took about 2.4 seconds to get Z_1 = 15 for the problem P_1 and 0.4 seconds to get Z_2 = 8 for the problem P_2.

From the computational results shown in Table 9.2.1, about 30% of the computation time can be saved in solving the minimal covering problem with a partitioned constraint matrix if the number of variables fixed to 1 in each group is checked against its upper bound in the enumeration procedure.

10. THE GENERAL COST MINIMAL COVERING PROBLEM

This chapter discusses a generalization of the algorithm described in the previous chapters for the general cost minimal covering problem. The general cost minimal covering problem (GP) is defined as follows:

    minimize c_1 x_1 + c_2 x_2 + ... + c_n x_n
    (GP)
    subject to A (x_1, x_2, ..., x_n)^T >= (1, 1, ..., 1)^T,
               x_i = 0 or 1, for i = 1, 2, ..., n,

where A = (a_ij) with a_ij = 0 or 1, and c_i is a positive integer. c_i is called the cost of the variable x_i. The minimal covering problem (P) defined in Chapter 3 is a special case of this problem, where the cost of each variable is 1.

In implementing a switching function using a PLA, the size of the PLA is first minimized by minimizing the number of terms used in expressing this function. Then, depending on the different technologies used for implementation [30], one may want to minimize or maximize the number of contacts required at the intersections of horizontal and vertical lines. (If contacts between horizontal and vertical lines are formed, the MOSFETs or diodes at the intersections become responsive to their input voltages.) The minimization (or maximization) of the number of contacts improves the reliability of the PLA.

Suppose the switching function f = (f_1, f_2, ..., f_u) to be implemented is expressed in the following disjunctive forms:

    f_1 = q_{i_1} v q_{i_2} v ... v q_{i_{s(1)}},
    f_2 = q_{j_1} v q_{j_2} v ... v q_{j_{s(2)}},    (10.1)
    ...
    f_u = q_{k_1} v q_{k_2} v ... v q_{k_{s(u)}}.

Not all the terms in the above expressions are distinct. Let q_1, q_2, ..., q_r be all the distinct terms appearing in the above expressions. The number of contacts required in implementing f according to the expressions (10.1) is

    L + e_1 + e_2 + ... + e_r,

where L is the sum of the numbers of literals in q_1, q_2, ..., q_r, and e_i is the number of times q_i appears in the expressions (10.1) for each i = 1, 2, ..., r.

To minimize (or maximize) the number of contacts required after minimizing the size of the PLA in implementing a switching function, one may formulate the logic minimization problem into the general cost minimal covering problem. Formulation of the logic minimization problem into the general cost minimal covering problem (GP) can be done in a manner similar to that into the minimal covering problem (P), except for the assignment of a cost to each prime implicant. In this formulation, each prime implicant q_i is assigned a cost (e_i + l_i + WW) or (- e_i - l_i + WW), depending on whether the problem is to minimize or to maximize the number of contacts required in implementation, where e_i is the number of output functions implied by q_i, l_i is the number of literals in q_i, and WW is a sufficiently large fixed integer to ensure that the number of terms, i.e., the size of the PLA, in the optimal solution is minimized.

Many other important problems [1, 2, 3, 4, 5, 7, 17, 18, 21] can also be formulated into the general cost minimal covering problem. They can then be solved by the generalized algorithm introduced in this chapter. No comparisons with other existing programs on computational efficiency have been made. Computational results show that the algorithm introduced in this chapter is efficient in solving problems formulated from the logic minimization problem.

10.1 Generalization Of The Basic Algorithm

This section discusses the generalization of the basic algorithm described in Section 3.3. It is already known [18] that, with a slight modification of operation 2, the three operations stated in Section 3.2 can be used to reduce the constraint matrix A for the general cost minimal covering problem. Operation 2 is modified as the following operation 2'.

Operation 2'  If a_j is dominated by column a_i and c_j >= c_i, then column a_j can be deleted from the matrix, and the variable x_j corresponding to column a_j is fixed to 0.

The method used in the algorithm of Section 3.3 for calculating a lower bound ZMIN of a subproblem with partial solution S can also be generalized as follows. For each free variable x_j, let g_j = c_j / l_j, where l_j is the number of non-zero elements in column j. Arrange the g_j in increasing order:

    g_{j_1} <= g_{j_2} <= g_{j_3} <= ...

Let h be the number of constraints unsatisfied by the current partial solution, and let r be the greatest integer such that

    l_{j_1} + l_{j_2} + ... + l_{j_r} <= h.

Then ZMIN is calculated by

    ZMIN = c_{j_1} + c_{j_2} + ... + c_{j_r} + w,

where w is the value of the current partial solution S, i.e., w is the sum of c_k over the variables x_k fixed to 1 in S.

The efficient way described in Section 4.1 for checking domination relations among columns or rows can still be applied to the general cost minimal covering problem.

10.2 Precluding Of Subproblems

It is easy to see that the general cost minimal covering problem also has "the reducing property," as the minimal covering problem does in Chapter 5. "The excluding property" in the case of the general cost minimal covering problem is stated in the following theorem.

Theorem 10.2.1 Let E_ij be the E-set of a_j with respect to a_i, and let x be a feasible solution of the general cost minimal covering problem (GP) with x_i = 0, x_j = 1, and x_{k_t} = 1 for t = 1, 2, ..., r (the other variables are assigned either 0 or 1). If each row in E_ij is covered by some of the columns a_{k_1}, a_{k_2}, ...
, a_{k_r}, then x', which is obtained from x by replacing x_i = 0, x_j = 1 with x_i = 1, x_j = 0 and leaving the remaining variables unchanged, is also a feasible solution of (GP). Furthermore, if c_j >= c_i, then the objective value of x' is less than or equal to the objective value of x.

Proof  The first part of this theorem is proved exactly as Theorem 5.2.1. If c_j >= c_i, then, from the way x' is obtained, the objective value of x' is less than or equal to that of x. Q.E.D.

From the above theorem, it is easy to see that, among all the subproblems generated at the same time in step M3.2 of the basic algorithm, if the one corresponding to the variable with the smallest cost is enumerated first, then the procedure discussed in Section 5.3 for precluding subproblems is still applicable to the general cost minimal covering problem. Based on the above observation, the criterion used in step M3.1 for selecting a row is modified as follows:

3.1.1 For each row in the constraint matrix, find the smallest cost among all the costs of the columns covering this row. The number of non-zero elements covered by the column corresponding to this smallest cost will be referred to as "the choosing weight" for this row.

3.1.2 Select the row with the greatest choosing weight among all remaining rows in the constraint matrix.

This selection criterion aims to find a feasible solution for the problem at the earliest possible iteration under the rule that the subproblem corresponding to the variable with the smallest cost is enumerated first among all the subproblems generated at the same time in step M3.2.

According to the modifications discussed in the last and the present sections, a generalized algorithm for the general cost minimal covering problem was developed [32]. Some problems were tested by this algorithm. These problems were formulated from the logic minimization problem or constructed by the author.
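The lower-bound calculation ZMIN of Section 10.1 can be sketched in a few lines. The function name and list layout below are mine, and the rule itself is reconstructed from a garbled passage: the free columns are ranked by cost per covered row, g_j = c_j / l_j, and the costs of the largest cheap prefix whose coverage counts still fit within the number h of unsatisfied rows are summed.

```python
# Sketch of the generalized lower bound ZMIN of Section 10.1.

def zmin_bound(costs, lengths, h, w):
    """costs[j] = c_j of a free column, lengths[j] = number of 1s in it,
    h = number of constraints unsatisfied by the partial solution S,
    w = objective value of S.  Returns the lower bound ZMIN."""
    # rank free columns by cost per covered row, g_j = c_j / l_j
    order = sorted(range(len(costs)), key=lambda j: costs[j] / lengths[j])
    covered, bound = 0, w
    for j in order:
        if covered + lengths[j] > h:    # greatest prefix with sum of l_j <= h
            break
        covered += lengths[j]
        bound += costs[j]
    return bound
```

For example, with costs (2, 3, 4), column lengths (2, 3, 1), h = 4 unsatisfied rows, and a partial-solution value w = 5, the first column fits (2 <= 4) but adding the second would overshoot, giving ZMIN = 5 + 2 = 7.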
The program for this algorithm was coded in FORTRAN, and the problems were tested on the CDC Cyber 175 computer. Computational results are shown in Table 10.2.1. Computational results of solving the same problems by using the generalized basic algorithm with neither "the reducing property" nor "the excluding property" are also given in this table. The figures in this table are explained in Table 5.4.1. From this table, one can see that the use of "the reducing property" and "the excluding property" in this generalized algorithm does help in speeding up the enumeration in solving problems. The computational improvement is about 30% on average.

[Table 10.2.1 Comparison of some computational results on two cases - with and without using the new properties in solving the general cost minimal covering problem. The individual entries for the six problems are not recoverable from this copy.]

Problems 1 and 2 were randomly generated by the author. The other problems were formulated from the logic minimization problem. In these four problems, the cost assigned to each prime implicant is the number of literals in it. From Table 10.2.1, it is easy to see that about 30% of the computation time is saved through the implementation of the new procedures mentioned in this section for precluding subproblems.

These four logic minimization problems were further formulated as minimal covering problems and were solved by the algorithm outlined in Section 5.3. The solutions obtained and the time spent in these two approaches are compared in Table 10.2.2. The column under "NO. OF VAR."
shows the number of variables of the switching function whose expression is to be minimized. The columns under "m", "n", and "TIME IN SEC" are explained in Table 5.4.1. The column under "NO. OF TERMS" shows the smallest number of products which may express the given switching function. The column under "NO. OF LITR." shows the number of literals in the set of terms found by each algorithm.

    NO. OF  TABLE SIZE   FORMULATED AS (P)            FORMULATED AS (GP)
    VAR.    m     n      TIME IN  NO. OF  NO. OF     TIME IN  NO. OF  NO. OF
                         SEC      TERMS   LITR.      SEC      TERMS   LITR.
    6        55   44       0.2      12      40         0.93     12      38
    7       112   79       6.56     18      67        35.24     18      66
    7       114   83      25.10     15      53       118.87     15      53
    8       166  156       0.58     45     254         3.87     45     254

    Table 10.2.2 Comparison of solutions of the logic minimization problem obtained by two different approaches - formulated as the minimal covering problem or as the general cost minimal covering problem.

From these results, the differences in the running times of the two algorithms are very large, while the differences in the numbers of literals in the two solutions obtained by the two algorithms are very small. Unless the minimization or the maximization of the number of literals after minimizing the number of terms is very important in implementing a switching function, solving the logic minimization problem as the minimal covering problem is preferable in terms of computation time.

10.3 The Symmetric Property Of The General Cost Minimal Covering Problem

This section discusses the symmetric property of the general cost minimal covering problem. A permutation η on {x_1, x_2, ..., x_n} is said to be a symmetric permutation of the general cost minimal covering problem (GP) if the following conditions are satisfied:

(1) (η(x_1), η(x_2), ..., η(x_n)) is a feasible solution of (GP) whenever (x_1, x_2, ..., x_n) is a feasible solution of (GP);

(2) c_i = c_j if x_j = η(x_i).

From the above definition, it is easy to see that the objective values of both feasible solutions (x_1, x_2,
..., x_n) and (η(x_1), η(x_2), ..., η(x_n)) are the same. It is also easy to see that the symmetric permutation defined for the minimal covering problem (P) is a special case of the above definition. In the following, let us see an example of a symmetric permutation of the general cost minimal covering problem.

Let λ be a symmetric permutation of a switching function f. It is known from Section 7.2 that λ defines a permutation on the set of all prime implicants {q_1, q_2, ..., q_n} of f as

    λ(q_i) = λ(z_1)·λ(z_2)· ... ·λ(z_k)  if  q_i = z_1 z_2 ... z_k,

where each z_j is a literal (a variable y_{i_j} or its complement), j = 1, 2, ..., k. From this, we see that the numbers of literals in q_i and λ(q_i) are the same for each prime implicant q_i
n of the program (GP) whenever (x n , x , ..., x ) is a feasible solution 1 z n of (GP) , by Theorem 7.2.4. Thus A defined by (10.3.1) is a symmetric permutation of the general cost minimal covering problem (GP) formulated for the logic minimization problem of f if Us a 174 symmetric permutation of f. Now let us return to the consideration of the general cost minimal covering problem (GP). By the same argument, Theorem 7.1.1 is also true in the case of the general cost minimal covering problem. A necessary and sufficient condition for a permutation to be a symmetric permutation of the general cost minimal covering problem (GP) is modified in the following Theorem 10.3.1. This modification is due to the fact that the condition (2) in this theorem, which a symmetric permutation must satisfy in the case of the general cost minimal covering problem, is always true in the case of the minimal covering problem. The proof of this theorem is exactly the same as the proof of Theorem 7.4.1 and is not repeated here. Theorem 10.3.1 A permutation n on { x^ x^ .,., ^ } is a symmetric permutation of the general cost minimal covering problem (GP) if and only if (1) each row of A dominates some row of A and (2) c - c 1 J for every x., x. such that n(x ) = x . 1 J i J Following the same discussion, all the theorems in Section 7.4 through 7.7 for the case of the minimal covering problem (P) are also true in the case of the general cost minimal covering problem (GP) . So, in solving the general cost minimal covering problem by the implicit enumeration method, the symmetric property of the problem can be utilized in the same manner as it is utilized in the case of the nimal covering problem to speed up the computation if there are some innetric permutations of the problem. procedure is yet implemented for the utilization of '" P r °Perty in the case of the general cost minimal covering 175 problem. 
problem. From the experience with the minimal covering problem, a great improvement in computational efficiency is expected if the procedure for utilizing the symmetric property of the general cost minimal covering problem is implemented.

10.4 Heuristic Approach For The Large-scale General Cost Minimal Covering Problem

The idea in the heuristic algorithm in Chapter 6 for the large-scale minimal covering problem is applied in this section to develop a heuristic algorithm for the general cost minimal covering problem. Similar to the heuristic algorithm for the minimal covering problem, this heuristic algorithm also decomposes large-scale subproblems into small-scale subproblems and heuristically solves the small-scale subproblems by finding a feasible solution for each of them. The decomposition of large-scale subproblems is done in the same way as in the generalized algorithm of Sections 10.1 and 10.2. A small-scale subproblem is solved by the following heuristic procedure.

Procedure HG (Heuristic for the General cost minimal covering problem):

HG1 Choose some column a_{j_0} by the criterion described below.

HG2 Delete all rows covered by the column a_{j_0} from the constraint matrix.

HG3 Reduce the constraint matrix as much as possible, using the three reduction operations stated in Section 10.1. If the constraint matrix is null, then a feasible solution has been found. Otherwise the algorithm's control goes to step HG1.

The criterion used in step HG1 for choosing a column a_{j_0} consists of the following steps:

HG1.1 For each remaining column a_j, calculate the "cost per row" w_j = c_j / l_j, where c_j is the cost of the variable x_j and l_j is the number of non-zero elements covered by column a_j.

HG1.2 Choose the column a_{j_0} such that w_{j_0} is the smallest. If there is a tie, choose the one with the smallest column index.
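Steps HG1 through HG3 together with the selection rule HG1.1-HG1.2 amount to a greedy cost-per-row pass. The sketch below illustrates it; the data layout is mine, and the reduction operations of step HG3 are omitted for brevity.

```python
# Sketch of Procedure HG: greedily pick the column with the smallest cost
# per covered row (ties broken by smaller index) and delete the rows it
# covers, until every row is covered.

def heuristic_cover(rows, costs):
    """rows: list of sets of column indices that cover each row;
    costs[j] = c_j.  Returns the columns chosen, in order."""
    uncovered = [set(r) for r in rows]
    chosen = []
    while uncovered:
        counts = {}                       # l_j restricted to uncovered rows
        for r in uncovered:
            for j in r:
                counts[j] = counts.get(j, 0) + 1
        # HG1.1 / HG1.2: smallest cost per row, ties by smallest index
        j0 = min(counts, key=lambda j: (costs[j] / counts[j], j))
        chosen.append(j0)
        # HG2: delete all rows covered by column j0
        uncovered = [r for r in uncovered if j0 not in r]
    return chosen
```

For instance, with rows {0,1}, {1,2}, {2} and unit costs, column 1 is picked first (cost per row 1/2, smaller index than column 2), and column 2 then covers the remaining row.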
This heuristic algorithm is modified from the generalized algorithm in Sections 10.1 and 10.2 for the general cost minimal covering problem in the same manner as the heuristic algorithm in Chapter 6 for the minimal covering problem is modified from the algorithm in Chapter 5. It has the same characteristic as the heuristic algorithm for the minimal covering problem: if the level limit specified for a problem is sufficiently large not to be reached in solving this problem, then the best solution obtained is still an optimal solution of this problem.

Three general cost minimal covering problems randomly generated by the author were tested by this heuristic algorithm. The program for this heuristic algorithm is coded in FORTRAN, and the results were obtained by solving the problems on the CDC Cyber 175 computer. Computational results are shown in Table 10.4.1. The values of m, n, m', and n' are explained in Table 5.4.1. The columns under "LEVEL LIMIT", "TIME IN SEC", and "VAL" are the same as those in Table 6.2.1. The column under "NO. OF ITER." shows the number of times the algorithm went through step M1 of the algorithm under the specified limit shown in the column under "LEVEL LIMIT". A "-" in the table shows that no test was made in that case. An "∞" in the column under "LEVEL LIMIT" means that no level limit was specified in the test, and the best value obtained in this test was the optimal value of the problem.

From this table one can see that reasonably good solutions can be obtained in a reasonable amount of computation time by specifying a level limit equal to 6 in solving these three general cost minimal covering problems. One can also see that the optimal solutions of these three problems can all be obtained if the level limit specified is 8. From this observation, this heuristic algorithm can be very useful in solving large-scale general cost minimal covering problems if an appropriate level limit is specified.
[Table 10.4.1: computational results of the heuristic algorithm on the three general cost minimal covering problems. The scanned table is not recoverable.]

11. CONCLUSION

Efficient implicit enumeration algorithms for the minimal covering problem are presented in this thesis. These algorithms were developed mainly for minimizing the logic expressions of switching functions. They are extensions of the Quine-McCluskey method described in [6] for solving the prime implicant table. The most powerful procedure of the Quine-McCluskey method in solving a logic minimization problem is the repeated use of problem reduction. An effective procedure for reducing the computation time of the problem reduction is devised in Chapter 4. "The excluding property" of the minimal covering problem is introduced in Chapter 5 to speed up the enumeration in solving the minimal covering problem. Another property, "the reducing property" of the minimal covering problem, is also introduced in Chapter 5, even though the conditions of this property are rarely satisfied by partial solutions in solving actual problems. The computational improvement is about 30% through the implementation of the procedure described in Section 5.3 for "the excluding property" alone.
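The problem reduction referred to above consists of the three classical operations on a prime implicant (covering) table: selecting essential columns, dropping dominated rows, and deleting dominated columns. A hedged sketch of these operations (the representation and names are illustrative, not the thesis's implementation):

```python
def reduce_table(rows, costs):
    """Apply the three classical reductions until none applies.

    rows[i] is the set of column indices covering row i; costs[j] is the
    cost of column j.  Returns (essential columns selected, remaining rows).
    """
    rows = [set(r) for r in rows]
    essential = set()
    changed = True
    while changed and rows:
        changed = False
        # 1. Essential column: a row covered by a single column forces it.
        for r in rows:
            if len(r) == 1:
                j = next(iter(r))
                essential.add(j)
                rows = [s for s in rows if j not in s]
                changed = True
                break
        if changed:
            continue
        # 2. Row domination: a row whose column set contains another row's
        #    set is covered automatically and can be dropped.
        for a in rows:
            for b in rows:
                if a is not b and b <= a:
                    rows.remove(a)
                    changed = True
                    break
            if changed:
                break
        if changed:
            continue
        # 3. Column domination: a column covering a subset of another
        #    column's rows, at no lower cost, can be deleted.
        cols = set().union(*rows)
        for j in sorted(cols):
            for k in sorted(cols):
                if j != k:
                    rj = {i for i, r in enumerate(rows) if j in r}
                    rk = {i for i, r in enumerate(rows) if k in r}
                    if rj <= rk and costs[j] >= costs[k]:
                        rows = [r - {j} for r in rows]
                        changed = True
                        break
            if changed:
                break
    return essential, rows
```

On many tables these reductions alone solve the problem, which is why their repeated use is the most powerful part of the Quine-McCluskey method.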
The heuristic algorithm in Chapter 6 is an extension of the algorithm in Chapter 5. Solutions examined by this heuristic algorithm are evenly distributed in the decomposition tree, which represents the decomposition of the given problem into subproblems when the problem is solved by the implicit enumeration algorithm of Chapter 5. This heuristic algorithm is a practical algorithm for the large-scale minimal covering problem. The symmetric property of the minimal covering problem is introduced in this thesis. This property and its utilization in the implicit enumeration are extensively explored in Chapter 7. The relation between the symmetric property of a switching function and the symmetric property of the minimal covering problem formulated for the logic minimization problem is also discussed in Chapter 7. By utilizing the symmetric property in solving a symmetric minimal covering problem by the implicit enumeration method, the computational improvement is more than tenfold for some problems. The computationally difficult minimal covering problems in [15], [24] are symmetric. Minimal covering problems formulated for minimizing the logic expressions of symmetric or partially symmetric switching functions are also symmetric. In Chapter 8, more properties of the minimal covering problem which may be used to speed up the implicit enumeration are discussed, though further exploration is needed to utilize these properties effectively. The concept of an upper bound on the value of a group of variables is introduced in Chapter 9. If the constraint matrix of the given minimal covering problem has the partition structure shown in (9.1), then the variables of the problem can be grouped and an upper bound on the value of each group can be found.
These upper bounds can be checked in the enumeration procedure to speed up the implicit enumeration in solving problems. The computational improvement is about 30% through the checking of an upper bound for each group of variables in the enumeration procedure. The implicit enumeration algorithm and its extension to the heuristic algorithm discussed in the previous chapters for the minimal covering problem are generalized in Chapter 10 for the general cost minimal covering problem. This generalization is mainly for solving the problem of minimizing the size of the PLA as the first criterion and minimizing or maximizing the number of contacts as the secondary criterion in implementing a switching function by a PLA. General cost minimal covering problems formulated for other problems [1, 2, 3, 4, 5, 7, 17, 18, 21] can also be solved by the generalized algorithm or by the generalized heuristic algorithm in Chapter 10. The basic structure of this generalized algorithm differs from that of the algorithm in [12]: there, the value obtained from the relaxed linear programming problem of each subproblem is used as a lower bound on the value of that subproblem, while in this generalized algorithm a very simple procedure is used to estimate the lower bound of each subproblem. "The reducing property" and "the excluding property" are further incorporated in this generalized algorithm to speed up the enumeration. Since the algorithm in [12] is also an implicit enumeration algorithm, "the generalized excluding property" may also be incorporated into that algorithm to improve its computational efficiency. (The algorithm of [12] contains a kind of "the reducing property".) The utilization of the symmetric property of the given problem discussed in Chapter 10 may also be applied to that algorithm.
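The "very simple procedure" for estimating the lower bound is not specified here. One common cheap bound, shown purely for illustration and not claimed to be the thesis's procedure, sums the minimum column costs over a greedily chosen family of pairwise column-disjoint rows: any cover must pay for each such row independently.

```python
def simple_lower_bound(rows, costs):
    """A cheap lower bound for the general cost covering problem.

    rows[i] is the set of columns covering row i; costs[j] is c_j.
    Greedily selects pairwise column-disjoint rows, most expensive
    cheapest-cover first (a heuristic ordering, assumed here), and sums
    their minimum covering costs.
    """
    bound = 0
    used_cols = set()
    for r in sorted(rows, key=lambda r: -min(costs[j] for j in r)):
        if r & used_cols:  # shares a column with an already-counted row
            continue
        bound += min(costs[j] for j in r)
        used_cols |= r
    return bound
```

Such a bound is much weaker than a linear programming relaxation but costs only one pass over the rows, which is the trade-off described above.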
Computational improvement in solving symmetric problems is expected if the procedure discussed in Section 10.3 for utilizing the symmetric property of the given problem is incorporated in both algorithms. No computational comparison of the two algorithms has been made. Since the algorithm in [12] uses linear programming to find a lower bound on the value of each subproblem, its efficiency depends entirely on that of the linear programming method used in it. That algorithm may have difficulty in solving problems with symmetric properties, such as the two problems reported in [24] or the problem IBM 9 reported in [15], since solving these kinds of problems by the implicit enumeration method usually requires a large number of iterations. The heuristic algorithm described in Section 10.4 is useful in solving large-scale general cost minimal covering problems. In using this heuristic algorithm, if one specifies a value no greater than 10 as the level limit, then one usually can obtain a reasonably good solution for a large-scale general cost minimal covering problem in a reasonable amount of computation time. The programs developed based on the algorithms discussed in Chapters 5, 6, 7, 9 and 10 are available in [32]. These programs are further incorporated into the ILLOD-MINSUM system [33] for the automated design of two-level AND/OR optimal networks.

REFERENCES

1. Arabyre, J. P., J. Fearnley, F. C. Steiger, and W. Teather, "The Crew Scheduling Problem: A Survey," Transp. Sci. 3, 140-163 (1969).

2. Bellmore, M., H. J. Greenberg, and J. J. Jarvis, "Multi-Commodity Disconnecting Sets," Management Science 16, B427-433 (1970).

3. Day, R. H., "On Optimal Extracting From a Multiple File Data Storage System: An Application of Integer Programming," Operations Research 13, 482-494 (1965).

4. Garfinkel, R. S., and G. L. Nemhauser, "Optimal Political Districting by Implicit Enumeration Techniques," Management Science 16, B495-508 (1970).

5. Balinski, M. L., and R.
Quandt, "On an Integer Program for a Delivery Problem," Operations Research 12, 300-304 (1964).

6. McCluskey, E. J., "Introduction to the Theory of Switching Circuits," McGraw-Hill (1965).

7. Ibaraki, T., and S. Muroga, "Synthesis of Networks with a Minimum Number of Negative Gates," IEEE Transactions on Computers, Vol. C-20, No. 1, Jan. 1971, pp. 49-58.

8. Cobham, A., R. Fridshal, and J. H. North, "A Statistical Study of the Minimization of Boolean Functions Using Integer Programming," IBM Research Report RC-756 (1962).

9. Balas, E., "An Additive Algorithm for Solving Linear Programs with 0-1 Variables," Operations Research 13, 517-546 (1965).

10. Geoffrion, A. M., "An Improved Implicit Enumeration Approach to Integer Programming," Operations Research 17, 437-454 (1969).

11. Ibaraki, T., T. K. Liu, C. R. Baugh, and S. Muroga, "An Implicit Enumeration Program for Zero-One Integer Programming," International Journal of Computer and Information Sciences, Vol. 1, No. 1, March 1972.

12. Lemke, C. E., H. M. Salkin, and K. Spielberg, "Set Covering by Single-Branch Enumeration with Linear Programming Subproblems," Operations Research 19, 998-1022 (1971).

13. Bellmore, M., and H. D. Ratliff, "Set Covering and Involutory Bases," Management Science, Vol. 18, No. 3, November 1971, pp. 194-206.

14. Cobham, A., R. Fridshal, and J. H. North, "An Application of Linear Programming to the Minimization of Boolean Functions," IBM Research Report RC-472, 1961.

15. Haldi, J., "25 Integer Programming Test Problems," Working Paper No. 43, Graduate School of Business, Stanford University, December 1964.

16. Kolner, T. N., "Some Highlights of a Scheduling Matrix Generator System," United Airlines, presented at the Sixth AGIFORS Symposium, Sept. 1966.

17. Wagner, W. H., "An Application of Integer Programming to Legislative Redistricting," presented at the 34th National Meeting of ORSA, November 1968.

18. Balinski, M. L.
, "Integer Programming: Methods, Uses, Computation," Management Science, Vol. 12 (1965), pp. 253-313.

19. Shapiro, J. F., "Group Theoretic Algorithms for the Integer Programming Problem II: Extension to a General Algorithm," Operations Research 16, 928-947 (1968).

20. Trauth, C. A., and R. E. Woolsey, "Integer Linear Programming: A Study in Computational Efficiency," Management Science 15, 481-493 (1969).

21. Ibaraki, T., "Gate-Interconnection Minimization of Switching Networks Using Negative Gates," IEEE Transactions on Computers, June 1971, pp. 698-706.

22. Bowman, R. M., and E. S. McVey, "A Method for the Fast Approximate Solutions of Large Prime Implicant Charts," IEEE Transactions on Computers, Feb. 1970, pp. 169-173.

23. Roth, R., "Computer Solution to Minimum-Covering Problems," Operations Research 17, 455-465 (1969).

24. Fulkerson, D. R., G. L. Nemhauser, and L. E. Trotter, Jr., "Two Computationally Difficult Set Covering Problems That Arise in Computing the 1-width of Steiner Triple Systems," Mathematical Programming Study 2 (1974), 72-81, North-Holland Publishing Company.

25. Trotter, L. E., Jr., and C. M. Shetty, "An Algorithm for the Bounded Variable Integer Programming Problem," Journal of the Association for Computing Machinery, Vol. 21, No. 3, July 1974, 505-513.

26. Standard EDP Report, AUERBACH INFO, INC., 1972.

27. Computer Characteristics Quarterly, 1968.

28. Lawler, E. L., "Covering Problems: Duality Relations and a New Method of Solution," SIAM J. Appl. Math. 14, 1115-1132 (1966).

29. Lemke, C., and K. Spielberg, "Direct Search 0-1 and Mixed Integer Programming," Operations Research 15, 892-914 (1967).

30. Muroga, S., "Logic Design and Switching Theory," to be published in 1979 by John Wiley.

31. Cutler, R. B., "MINSUM: A Library of Subroutines for Finding Irredundant Disjunctive Forms or Minimal Sums for Switching Functions - Subroutine Descriptions," to appear.

32. Young, M. H.
, "Program Manual of Programs for Minimal Covering Problems: ILLOD-MINIC-B, ILLOD-MINIC-BP, ILLOD-MINIC-BS, ILLOD-MINIC-BA, ILLOD-MINIC-BG," Report No. UIUCDCS-R-78-924, Dept. of Computer Science, University of Illinois, 1978.

33. Young, M. H., and R. B. Cutler, "Program Manual for the Programs ILLOD-MINSUM-CBS, ILLOD-MINSUM-CBSA, ILLOD-MINSUM-CBG, ILLOD-MINSUM-CBGM, To Derive Minimal Sums or Irredundant Disjunctive Forms for Switching Functions," Report No. UIUCDCS-R-78-926, Department of Computer Science, University of Illinois, 1978.

34. Carmichael, R. D., "Introduction to the Theory of Groups of Finite Order," p. 8, Dover Publications, Inc., 1956.