A BRANCH-AND-BOUND ALGORITHM FOR OPTIMAL NOR NETWORKS*
(The Algorithm Description)

by

Tomoyasu Nakagawa
Hung-Chi Lai

Report No. 438

Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

April 1971

* This work was supported in part by the National Science Foundation under Grant No. NSF GJ-503.


CONTENTS

Introduction
1. Definitions and the Basic Form of the Algorithm
2. Heuristics for the Set of SCUC and IPPC
3. Pruning Non-optimal Solutions by Redundancy Check
4. Description of the Entire Algorithm


Introduction

The branch-and-bound algorithm was applied by E. S. Davidson to the synthesis problem of optimal combinational networks for arbitrary switching functions using NAND gates, by introducing the desirability order and other speed-improvement gimmicks [1]. (In our expository paper [2], we summarized his algorithm, explaining the details of these gimmicks.) We implemented his algorithm for NOR gates, using simpler heuristics than Davidson's and incorporating a new gimmick called the 'redundancy check'. This report presents the description of our version of the branch-and-bound method.


1. Definitions and the Basic Form of the Algorithm

We want to solve the synthesis problem of optimal combinational networks with NOR gates for m Boolean functions of n variables.
The criterion of optimality is the minimization of the cost function C defined by C = A x R + B x I, where R is the number of gates, I is the total number of inputs to gates (i.e., the sum of connections of external variables and interconnections among gates), and A, B are non-negative coefficients (i.e., weights). Different combinations of the weights A and B imply different optimization problems: A > 0 and B = 0 implies the minimization of the number of gates, A >> B > 0 implies the minimization of the number of gate inputs after first minimizing the number of gates, and so on. In this report, however, we will be concerned with the case A >> B > 0 unless we mention otherwise.

In the algorithm, we will represent the given m output functions of n external variables in terms of a truth table. Let x_i, i = 1, ..., n, be the n external variables, and let f_h, h = 1, ..., m, be the given m output functions. The x_i, i = 1, ..., n, and f_h, h = 1, ..., m, are expressed in the following way:

    x_i = (x_i^0, ..., x_i^(2^n - 1)), i = 1, ..., n,
    and f_h = (f_h^0, ..., f_h^(2^n - 1)), h = 1, ..., m.     (1.1)

For example, in the case of m = 1, if the output function f_1 is x_1'x_3 v x_1 x_2'x_3', then

    x_1 = (0 0 0 0 1 1 1 1),
    x_2 = (0 0 1 1 0 0 1 1),
    x_3 = (0 1 0 1 0 1 0 1),     (1.2)
    and f_1 = (0 1 0 1 1 0 0 0).

We will exclude from our discussion the case where some of the output functions among the m functions are identical to other functions or to external variables. Therefore, we always need at least m gates to realize the m output functions f_h. Let us assign one NOR gate labeled h to each f_h, h = 1, ..., m. We call this network (i.e., m isolated gates whose outputs are assigned f_h, h = 1, ..., m) the initial solution. Fig. 1.1 is the initial solution to the problem of an output function f_1 = x_1'x_3 v x_1 x_2'x_3', for the special case of m = 1.

Fig. 1.1  The initial solution to f_1 = x_1'x_3 v x_1 x_2'x_3': a single gate, gate 1, with output (0 1 0 1 1 0 0 0) and no inputs yet.

Next let us consider the condition which the output of each gate satisfies. Take two gates, i and k. Let (P_i^0, ..., P_i^(2^n - 1)) and (P_k^0, ..., P_k^(2^n - 1)) denote the outputs of gate i and gate k, respectively.
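The representation above can be sketched in code (a minimal sketch; the function names and the particular weights A = 1000, B = 1 are our own choices, illustrating the case A >> B > 0):

```python
# Truth-table vectors for n = 3 external variables: each variable and each
# output function is a tuple of 2**n = 8 binary components, indexed by
# minterm number j = 0, ..., 2**n - 1, as in (1.1).

N = 3  # number of external variables

def variable_vector(i, n=N):
    """Truth-table vector of external variable x_i (i = 1, ..., n)."""
    return tuple((j >> (n - i)) & 1 for j in range(2 ** n))

x1 = variable_vector(1)  # (0, 0, 0, 0, 1, 1, 1, 1)
x2 = variable_vector(2)  # (0, 0, 1, 1, 0, 0, 1, 1)
x3 = variable_vector(3)  # (0, 1, 0, 1, 0, 1, 0, 1)

# The example function (1.2): f1 = x1'x3 v x1 x2'x3'
f1 = tuple(((1 - a) & c) | (a & (1 - b) & (1 - c))
           for a, b, c in zip(x1, x2, x3))

def cost(R, I, A=1000, B=1):
    """Cost C = A*R + B*I; A >> B > 0 minimizes gates first, then inputs."""
    return A * R + B * I

print(f1)          # (0, 1, 0, 1, 1, 0, 0, 0), i.e., (01011000)
print(cost(1, 0))  # 1000: the initial solution for m = 1 has one gate, no inputs
```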
If gate i is connected to gate k, then the components P_i^j and P_k^j must satisfy the following condition, no matter whether or not gate i and gate k have other inputs, because the gates perform the NOR operation:

    P_k^j = 0 for all j such that P_i^j = 1,
    and P_i^j = 0 for all j such that P_k^j = 1.     (1.3)

Fig. 1.2  If there are 1-components in the output of a NOR gate, then the corresponding components in its immediately preceding/succeeding gates must be 0.

A similar condition must also hold between the output of gate k and an external variable x_p, when x_p is connected to gate k, regardless of other inputs to gate k:

    P_k^j = 0 for all j such that x_p^j = 1,
    and x_p^j = 0 for all j such that P_k^j = 1.     (1.4)

If the assignment of binary values to the components P^j of the output (P^0, ..., P^(2^n - 1)) of a gate satisfies the above condition with respect to all of its immediately preceding/succeeding gates and all connected external variables, then we call this assignment of (P^0, ..., P^(2^n - 1)) a feasible assignment. (Notice that some of the components P^j could be *, i.e., unspecified.)

Using the concept of feasible assignment, let us define an "intermediate solution".

Definition (Intermediate solution). A network of R gates, R >= m, with external variables x_i is called an intermediate solution if the network satisfies the following set of conditions:

(i) The entire network has no loops. The R gates are numbered 1 through R, as gate 1, ..., gate R.

(ii)-a The first m gates (i.e., gate i, for i = 1, ..., m) are assigned the m output functions. In other words, the output of gate i is (f_i^0, ..., f_i^(2^n - 1)), for i = 1, ..., m. These gates may or may not be connected to other gates yet.

(ii)-b The outputs of the remaining gates (i.e., gate i, i = m + 1, ..., R), if any, are completely or incompletely specified. Each gate i, for i = m + 1, ..., R, is connected to at least one of the other gates in the network.

(iii) The assignment of the output of each gate is feasible.
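The feasibility conditions (1.3)-(1.4), with * components allowed, amount to a simple pairwise check: a connection into gate k can be feasible only if no component position carries a 1 in both outputs, since any * in a conflicting position could still be assigned 0. A sketch, with None standing for * (our own encoding, not the report's):

```python
# Pairwise NOR-feasibility check for conditions (1.3)/(1.4).  Components are
# 0, 1, or None, where None encodes the unspecified value '*'.  A connection
# of output p_i into the gate whose output is p_k can be made feasible iff
# there is no position j where both components are already 1; unspecified
# components can later be assigned 0 to satisfy the condition.

def can_feed(p_i, p_k):
    """True if p_i may be an input of the gate whose output is p_k."""
    return not any(a == 1 and b == 1 for a, b in zip(p_i, p_k))

gate_k = (0, 1, 1, 0, None)
gate_i = (1, 0, 0, None, None)  # 0 or * wherever gate_k is 1 -> feasible
bad_i  = (1, 1, 0, 0, 0)        # 1 where gate_k is also 1 -> infeasible

print(can_feed(gate_i, gate_k))  # True
print(can_feed(bad_i, gate_k))   # False
```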
Notice that the initial solution defined previously is a special case of an intermediate solution. The network corresponding to an intermediate solution may or may not realize the given set of functions f_h, h = 1, ..., m. An intermediate solution whose network realizes the given set of functions is said to be a feasible solution. A feasible solution whose cost is the least among all feasible solutions is an optimal solution.

In order to construct feasible solutions, we introduce the concept of "cover".

Definition (Covered component and uncovered component). A component P_k^(j_0) = 0 of gate k is said to be covered if gate k has at least one input (i.e., the output of gate i or an external variable x_i) whose j_0-th component (P_i^(j_0) or x_i^(j_0)) is 1. P_k^(j_0) = 0 is said to be uncovered if it is not yet covered.

Fig. 1.3 is an example of an intermediate solution, where some components are already covered. (The covered components are shown with underlines.) Clearly, an intermediate solution is a feasible solution if all output components P_k^j = 0 in all gates k are covered.

Fig. 1.3  Example of an intermediate solution for f_1 = x_1'x_3 v x_1 x_2'x_3', where some components are covered and others are not. (The covered components are underlined.)

Let us introduce the concept of possible covers for an uncovered component. As seen from the definition below, the possible covers are the only available external variables and/or gates with which we can cover the uncovered component under consideration. Suppose P_k^(j_0) = 0 in gate k is an uncovered component in a given intermediate solution.

Definition (Possible covers of an uncovered component P_k^(j_0) = 0 of gate k)

(i) An external variable x_i which is not yet connected to gate k is a possible cover of P_k^(j_0) = 0 if x_i satisfies the following condition:

    x_i^(j_0) = 1,
    and x_i^j = 0 for all j such that P_k^j = 1.
(ii) A gate i which is already connected to gate k is a possible cover of P_k^(j_0) = 0 if gate i satisfies the following condition:

    P_i^(j_0) = *.

(iii) A gate i which is not yet connected to gate k is a possible cover of P_k^(j_0) = 0 if gate i satisfies the following condition:

    a connection of gate i to gate k will not form any loops,
    P_i^(j_0) = 1 or *,
    and P_i^j = 0 or * for all j such that P_k^j = 1.

(iv) A gate which is not yet incorporated in the intermediate solution is a possible cover. This gate is called a new gate, and satisfies the following condition:

    The output components are all *.

The gate number of this gate is assigned R + 1, where the highest gate number in the current intermediate solution is R.

With each of the possible covers defined above, we can cover P_k^(j_0) according to the following procedure, called an implementation of a possible cover.

Procedure (Implementation of a possible cover)

Step 1: If the possible cover is not yet connected to gate k, then connect it to gate k; otherwise do nothing.

Step 2: If the j_0-th component of the possible cover is not yet assigned, then assign the value 1 to it; otherwise do nothing.

Step 3: Assign the value 0 to unassigned components so that the assignment of the output of every gate in the network is feasible.

Fig. 1.4 illustrates the above procedure for the case where the possible cover is gate i of definition (iii) above. Suppose we are going to cover P_k^(j_0) in gate k with gate i in figure (a). By step 1, we connect gate i to gate k. By step 2, we set P_i^(j_0) = 1. And by step 3, we assign the value 0 to the components indicated by α, β, γ, and δ. The resulting network is figure (b).

Fig.
1.4  Illustration of the procedure of implementing a possible cover: (a) before covering P_k^(j_0) = 0 with gate i; (b) after covering it.

By applying the above procedure repeatedly to uncovered components in each of the intermediate solutions, we eventually obtain intermediate solutions in which all output components P_k^j = 0 in all gates k are covered. In order to enumerate all such intermediate solutions systematically, we introduce the following set of rules.

Definition (SCUC and IPPC). The selection criterion of uncovered components (SCUC) is the criterion under which an uncovered component P_k^j = 0 is selected from the given intermediate solution. The implementation priority of possible covers (IPPC) is the priority under which the order of implementation among the possible covers for the selected uncovered component is determined.

Using the concepts defined in this section, we present the basic form of Davidson's branch-and-bound algorithm, on which he based several versions of programs. In the algorithm below, we use a parameter C* which we call the cost ceiling, or the incumbent cost; C* is used to preclude all intermediate solutions whose cost exceeds the cost of the current best feasible solution. Initially C* is set to a sufficiently large number.

The basic form of the algorithm (Davidson)

Step 0 (start): k = 1. Let S_1 denote the initial solution. Set C* to a sufficiently large number.

Step 1: Calculate the cost C_k of the current intermediate solution S_k. Compare C_k with C*. If C_k is greater than C*, then go to step 7-1; otherwise go to step 2.

Step 2: Search for an uncovered component in S_k. If there is none, then go to step 8; otherwise go to step 3.

Step 3: Select one uncovered component from S_k, according to the selection criterion of uncovered components (SCUC). Let P denote it.

Step 4: Make a list of all possible covers of P.

Step 5: Store P and the list of possible covers in a working space, and attach the label k to this portion of the working space.
Select one possible cover from the list, according to the implementation priority of possible covers (IPPC).

Step 6: Increment k by 1. Implement the possible cover selected at step 5, generating the augmented intermediate solution S_k. Go to step 1.

Step 7 (backtrack):

Step 7-1: Decrease k by 1. If k becomes 0, then go to step 9; otherwise go to step 7-2.

Step 7-2: Retrieve P and the list of possible covers of the label k from the working space. Search for unimplemented possible covers in this list. If there are no unimplemented possible covers, then go to step 7-1; if there are some, then go to step 7-3.

Step 7-3: Reconstruct S_k. Select one of the unimplemented possible covers from the list, according to the IPPC.

Step 7-4: Increment k by 1. Implement the possible cover taken at step 7-3, generating the augmented intermediate solution S_k. Go to step 1.

Step 8 (solution): Print S_k. Replace the value of C* with the cost C_k of S_k. Go to step 7-1.

Step 9: Stop.

(Footnote to step 1: If we replace the condition 'C_k > C*' by 'C_k >= C*', then the algorithm obtains only one optimal network, instead of all optimal networks.)

In the next two sections we describe two kinds of gimmicks used to obtain our improved version of the algorithm: our modification of the set of SCUC and IPPC, and a redundancy check of feasible solutions which prunes non-optimal solutions. We modify Davidson's versions of the algorithm with these gimmicks. The entire description of the algorithm is presented in section 4.

2. Heuristics for the Set of SCUC and IPPC

The set of SCUC and IPPC is the most important part of the algorithm, because it determines the order of the intermediate solutions which the algorithm enumerates. The investigation of this part of the algorithm was the subject with which Davidson was most concerned; he experimented with eight versions of his program with different combinations of SCUC's and IPPC's. According to the experiments, he chose the best version among the eight.
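The basic form above is a depth-first enumeration with a cost ceiling; it can be sketched in recursive form (our own sketch, not Davidson's program: cost, select_uncovered, possible_covers, and implement are placeholders for the operations of Section 1, and the toy instance at the end is purely illustrative):

```python
import math

def branch_and_bound(initial, cost, select_uncovered, possible_covers, implement):
    """Recursive sketch of the basic algorithm (steps 0-9).

    The recursion stack plays the role of the labeled working space;
    returning from a call is the backtrack of step 7.  Pruning with
    'c > ceiling' keeps all optimal solutions ('c >= ceiling' would
    keep only one, as noted in the footnote to step 1).
    """
    best = {"C": math.inf, "solutions": []}

    def search(s):
        c = cost(s)                            # step 1: bound test
        if c > best["C"]:
            return
        p = select_uncovered(s)                # steps 2-3: SCUC
        if p is None:                          # step 8: feasible solution
            if c < best["C"]:
                best["C"], best["solutions"] = c, [s]
            else:                              # equal cost: another optimum
                best["solutions"].append(s)
            return
        for cover in possible_covers(s, p):    # steps 4-5: IPPC order
            search(implement(s, p, cover))     # step 6: branch

    search(initial)
    return best

# Toy instance (not a NOR network): cover components 0 and 1, where a
# solution is a set of chosen covers and the cost is its size.
covers_for = {0: ["a", "b"], 1: ["c"]}
covered_by = {"a": {0}, "b": {0, 1}, "c": {1}}

result = branch_and_bound(
    initial=frozenset(),
    cost=len,
    select_uncovered=lambda s: next(
        (p for p in (0, 1)
         if not any(p in covered_by[g] for g in s)), None),
    possible_covers=lambda s, p: covers_for[p],
    implement=lambda s, p, g: s | {g},
)
print(result["C"])          # 1
print(result["solutions"])  # [frozenset({'b'})]
```

The cover "b" handles both components at once, so the cheaper single-cover solution supersedes the two-cover solution found first, exactly as the incumbent cost C* is updated in step 8.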
However, the heuristics incorporated into the SCUC and IPPC of this version (and also of most other versions) are fairly complicated. We conceived a set of SCUC and IPPC which is simpler than the simplest version of Davidson's program, and which yet works comparably well to his most elaborate version.

Before presenting our version of the SCUC and IPPC, we define some concepts* necessary for describing the SCUC and IPPC.

Type of covered component. A component P^(j_0) = 0 which is already covered is assigned the type COV. (COV stands for a component which is already COVered.)

Types of possible covers of an uncovered component P_k^(j_0) = 0 of gate k. Possible covers are classified into the following seven types.

G: A gate i which is already connected to gate k, and which has P_i^(j_0) = *. (G stands for a Gate having P_i^(j_0) = *.)

VC: An external variable x_i which is not yet connected to gate k, and which has x_i^(j_0) = 1, and x [...]

* See [2]. The concepts defined here are essentially the same as Davidson's. However, we use different mnemonic names for the types. For the definition of possible cover, see Section 1.

[...]

Assume that the only output g of gate i is connected to gate k. Let gate t denote the output gate of the entire network, and let F_t be its output. Define h_i = F_t ⊕ F_t', and e_i = F_t ⊕ F_t'', where F_t' is the output of gate t when g is disconnected from gate k, and F_t'' is the output of gate t when the constant input 1 is connected to gate i. If we find some external variables and/or gates whose disjunction e satisfies h_i <= e <= e_i, then we eliminate gate i together with its input connections and output connection, after connecting these external variables and/or gates to gate k (Fig. 3.2). Other transformations similar to the above one are explained in [4].

(II) Elimination of Connections

II-(i) We search for connections (of external variables or gate outputs) whose disconnection does not change the output of the entire network.
If we find such connections, then we eliminate them.

Fig. 3.2  If the disjunction e satisfies h_i <= e <= e_i in (a), then we have a better network, as shown in (b).

II-(ii) We search for an external variable or the output of a gate whose new connection to another gate makes some existing connections redundant. The transformation is based on the following property of a NOR network configuration, as shown in Fig. 3.3(a).

Fig. 3.3(a)  A NOR network configuration which consists of a subnetwork σ and gate t, where all output connections of σ go only to gate k, but not to other gates.

Fig. 3.3(b)  A network configuration equivalent to Fig. 3.3(a), with the redundant inputs disconnected.

Property. If an external variable g (or the output g of a gate outside of the σ) satisfies F_t >= g, along with g >= g_i, for i = 1, ..., v, where the g_i are some of the inputs to the σ, then Fig. 3.3(b) is an equivalent network configuration with respect to the output of gate t. Fig. 3.3(b) has (v - 1) fewer inputs than Fig. 3.3(a).

By combining the above transformations, we have the computational procedure for checking redundancies shown in Fig. 3.4.

Fig. 3.4  The computational procedure for checking redundancies (RDY-steps), where S denotes the given feasible network.

[...]

Department of Computer Science, University of Illinois, January 1969.

6. S. Muroga and T. Ibaraki, "Logical design of an optimum network by integer linear programming - Part I," Report No. 264, Department of Computer Science, University of Illinois, 1968.

7. S. Muroga and T. Ibaraki, "Logical design of an optimum network by integer linear programming - Part II," Report No. 289, Department of Computer Science, University of Illinois, December 1968.

8. S. Muroga, "Logical design of optimal digital networks by integer programming," in Advances in Information Systems Science, Vol. 3, edited by J. T. Tou, Plenum Press, 1970, pp. 283-348.

9. T. Nakagawa and S.
Muroga, "Comparison of the implicit enumeration method and the branch-and-bound method for logical design," to be published as a report, Department of Computer Science, University of Illinois.