UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN

Report No. UIUCDCS-R-74-678

COST MINIMIZATION IN COMPUTER SYSTEMS SUBJECT TO MULTIPLE MEMORY CONSTRAINTS

by

Michael Austin Jamerson

October 1974

Department of Computer Science
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801

This work was submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 1974.

ACKNOWLEDGMENTS

I would like to express my sincere appreciation to Western Electric and their Engineering and Science Fellowship Program, whose financial assistance helped make this paper possible. In addition, I would like to thank Dr. Edward K. Bowdon, whose guidance and advice in the preparation of this paper were invaluable. Lastly, deepest thanks to my loving wife, who provided constant support and encouragement throughout the preparation of this paper.

TABLE OF CONTENTS

1. INTRODUCTION
2. INTERACTION EFFECTS IN A MULTIPROGRAMMED SYSTEM
   2.1 Memory Resources
   2.2 Job Interaction
3. SINGLE RESOURCE SYSTEMS
   3.1 Scheduling Contiguous Memory Resources
   3.2 Scheduling Fragmented Memory Resources
4. MULTIPLE MEMORY CONSTRAINTS
5. CONCLUSION
LIST OF REFERENCES

1. INTRODUCTION

Every computer center is faced with the problem of providing service to its customers at a cost and speed which is acceptable to the customer and at the same time satisfies management goals. One important aspect of the cost of a computing center is the amount and types of memory which are available. Generally, the types of memory devices are dictated by the user community.
The amounts of the memories, on the other hand, are limited if not by cost, then by physical limitations such as floor space or hardware design. Whenever a resource is limited to a degree that requires consideration of the size of the resource, then that resource is constrained. It is the goal of this paper to present scheduling techniques which minimize the amounts of idle resources, while providing nearly optimal flowtimes.

The systems which we will consider will be those with a single central processing unit (CPU) having the capability of concurrently processing two or more jobs. We will use the term multiprogramming to apply to these systems. The essential point in a multiprogramming system is that although more than one job may be present in main memory (CORE), only a single job is being processed at any given instant.

The flowtime* or run time of a job is derived from three elements: 1) active time, 2) passive time, and 3) wait time. Active time is the time in which CPU processing takes place. The term passive time will be used to denote the time consumed by events which are external to the CPU, such as input-output processing. Time which is neither active nor passive will be called wait time. Wait time includes, but is not limited to, time spent in queues prior to the start of processing and time spent waiting for the CPU when the CPU is processing another job.

* Flowtime is defined as the amount of time a job spends in the system.

Scheduling techniques which have been previously discussed in the literature are generally concerned with the problem of minimizing some measure such as maximum flowtime, average flowtime, or maximum lateness [1]. Results for systems where a single server processes a single request at a time also have been extensively examined. Browne, Lan, and Baskett [2] discuss the results of various scheduling strategies such as first-come-first-serve (FCFS) and shortest processing time (SPT) in a multiprogrammed environment.
In all of the above strategies, jobs are selected for processing on the basis of their arrival or required processing time. One technique considered by Browne, Lan, and Baskett [2], the smallest memory first (SMF) algorithm, is concerned with memory availabilities, but not with the best use of those resources.

Some scheduling systems currently in use are based on a scheme for evaluating the resources requested by a job. In these systems the resource requests, such as amount of CORE and CPU processing time, are used as the independent variables of a function. The result of evaluating this function is then used to assign a priority to the job. Other systems rely on a user defined priority and operate under a FCFS selection discipline within the individual priority queues. In all of the techniques which have been discussed, the resources are checked to insure that enough of the resource is available to satisfy the requirements of a given job. However, no attempt is made to select jobs which best utilize the available resources.

Denning [3] presents one approach to the problem of efficient utilization of the available resources. He presents the concept of a "working set" and suggests that jobs be run according to their ability to restore system balance as opposed to using some other priority criteria. This approach merits further discussion and will be examined in Chapter 3.

A selection technique which attempts to maximize the utilization of the available resources by the selection of one or more jobs for execution will be called a best fit selection technique (BFST). A BFST may be used in conjunction with some other technique, such as SPT or FCFS, in order to minimize some other measure of the system, but the primary goal of a BFST will always be the minimization of the amount of idle resources. It has been shown that when arrival times are exactly known, inserted idle time may improve the resource utilization of a system [1].
The concept of inserted idle time may be incorporated into a BFST. We will not consider cases in which the arrival times are known, and therefore we will not be concerned with inserted idle time.

In Chapter 2 we will define the various types of resources and develop a methodology for determining the effect of job interaction in a multiprogramming system on a given job's run time. These results are then used in Chapter 3 to develop a BFST for the case of a system with a single constrained memory resource. Chapter 4 will investigate the application of a BFST to a generalized system with multiple constrained memory resources. Finally, we conclude with a review of the results of Chapters 3 and 4 and then attempt to identify the future research which will be required.

2. INTERACTION EFFECTS IN A MULTIPROGRAMMED SYSTEM

2.1 Memory Resources

Every computer center is faced with the problem of meeting its customers' demands for service. These service demands consist of two parts: (1) requests for portions of the available memory resources and (2) a period of time during which those resources will be used. The responsibility of meeting these demands in a time frame which is acceptable to the users rests with the computing center management. Management must, on the other hand, attempt to balance the use of these resources and minimize the amount of idle resource. We now turn our attention to the subject of what those resources are and how multiple users interact in the pursuit of the use of those resources.

We will consider each computer center to consist of a single CPU which is capable of multiprogramming and a number of constrained memory resources. In considering the multiprogramming capability of the CPU, we will assume that neither the hardware nor its controlling operating system will pose a limit on the number of jobs which may run concurrently. The constrained memory resources may be divided into two classes: 1) contiguous and 2) fragmented.
A contiguous memory resource (CMR) is a resource in which a contiguous portion of the resource must be used to satisfy a resource request. The prime example of a contiguous memory resource is the CORE of the CPU. Here the user's request must be filled by a single contiguous portion of CORE. There are cases in which CORE is not treated as a contiguous resource, but those cases will not be considered here.

Fragmented memory resources (FMR) are those resources in which requests for the resources need not be filled by a contiguous assignment but rather by the assignment of individual resource units. An example of an FMR is the magnetic tape drive pool. In this case, a user's request for a number of tape drives is satisfied by selecting as many of the unassigned drives as are required. The question of whether the drives are logically or even physically contiguous is not raised.

The available size of any memory resource may be reduced from the actual size. These reductions may be static or dynamic in nature. We will call those reductions which occur because of various day-to-day requirements of the computing center static reductions. Examples of static reductions include the decrease in available CORE caused by the presence of the operating system and other real time activities, or the reduction in available tape drives resulting from the allocation of one or more drives for the function of logging system activities. Static reductions, then, are predictable and can be taken into consideration. Dynamic reductions, on the other hand, are unpredictable in terms of occurrence and size. Particular examples of dynamic reductions are equipment failures or the dedication of a portion of CORE to insure the completion of some critical job. Thus we see that, at any given instant, the amounts of the memory resources may be less than originally planned. For convenience we will view the static reductions of each resource as though those portions of the resources were never available.
Dynamic reductions will be treated as service requests of an indefinite length which exactly equal the size of the particular dynamic reduction. We will denote the available amount of a resource j, given by the difference of the original size of the resource and the sum of the static reductions for that resource, as R_j.

2.2 Job Interaction

Each user job which enters the computing center and begins processing interacts with other jobs which are in the center. This interaction results in a prolongation of the flowtime for all of the jobs which are involved. In this section we will develop analytical tools which will provide information concerning this prolongation.

The systems with which we will be concerned are single processor systems. A basic element in our concept of a single processor is that it may only process a single task at a time. Thus, even in a multiprogramming environment in which many jobs may be in execution and all are competing for the CPU, only one job is using the CPU at any given instant.

Each job which enters the computing center will be characterized by a set of parameters which indicate the resources required by that job and the processing characteristics of that job. These parameters are:

1) a_i - the amount of CPU processing time required by job i.
2) p_i - the amount of passive time which job i requires.
3) R_ij - the amount of resource j which is required by job i.

When job i is the only job which is in execution, then the execution time, e_i, for job i is the sum of the active and passive times

    e_i = a_i + p_i .    (1)

In dealing with the jobs which are characterized by the n+2 tuple (a_i, p_i, R_i1, R_i2, ..., R_in) we assume that:

1) the passive time, p_i, is evenly distributed over the execution time, or that in any time interval, t, the amount of active time is t*a_i/e_i and the amount of passive time is t*p_i/e_i.

2) the amount, R_ij, of any resource which is requested by job i is less than the available amount, R_j, of resource j.
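The even-distribution assumption in (1) can be made concrete with a short sketch (the helper name is ours, not the author's): in any interval of length t, job i accomplishes t*a_i/e_i units of active time and t*p_i/e_i units of passive time.

```python
def activity_in_interval(t, a, p):
    """Split an interval of length t into the active and passive portions
    of a job with active time a and passive time p (assumption 1)."""
    e = a + p  # execution time when the job runs alone, equation (1)
    return t * a / e, t * p / e

# A job with a = 1 and p = 3 spends one quarter of any interval active:
# activity_in_interval(2, 1, 3) gives (0.5, 1.5).
```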
When two or more jobs are concurrently executing we observe a phenomenon which will be called overlap. Overlap is a result of the use of the CPU to process a job when some other job is in a period of passive time. The effect of overlap is a reduction in the overall execution time. From this we see that overlap is a measure of how well the active and passive periods of jobs mesh. Using these observations we will define the overlap factor, θ, as the ratio of the amount of useful processing accomplished in a given period of time to the length of that period. In dealing with overlap factors we will make the following assumptions:

1) the amount of overlap in a given period is constant, 0 < θ < 1, when two or more jobs are processing, and unity when only one job is processing.

2) processing is equitably rationed to each job which is being processed.

The amount of overlap is dependent on factors which are intrinsic to the computer center and is to be determined experimentally. In real life the amount of overlap is dependent on the total number of jobs. Beginning at some low value, θ increases to some optimum, and then decreases as more jobs compete for the resources of the CPU.

As stated, the execution time, e_i, of a job is dependent on both the job and the other jobs in the system. We now develop the techniques for determining the relationship of the execution time to the state of the system. Let θ(n) be the overlap factor for a given system with n jobs. Let j_i be a job with active time, a_i, and passive time, p_i. The execution time for j_i is e_i >= a_i + p_i, where the equality holds if j_i is the only job which is processed.

We assume that the available processor time is rationed in an equitable fashion to every job which is being concurrently processed. This rationing is designed to prevent the disproportionate assignment of processing time that could occur if a job with a large amount of active time and little or no passive time were allowed to execute until it relinquished control of the CPU.
Thus we can guarantee that each of the n jobs which are processing will be provided with at least θ/n of each available processor minute. Under our assumption that the passive time is evenly distributed over the execution time, we see that no job may use more than a_i/e_i of a processor minute. If a job cannot utilize its entire share of the processor, a_i/e_i < θ/n, the unused time will be distributed evenly to the other jobs which are executing. Thus, each job is given an initial allocation of t_i <= 1 minutes of the available processor time, where t_i = min(θ/n, a_i/e_i). The unused time, U_T, is given by the expression

    U_T = Σ_{i=1 to n} [θ/n - min(θ/n, a_i/e_i)] = θ - Σ_{i=1 to n} t_i .    (2)

The unused time, U_T, may now be allocated to those n' < n jobs having t_i < a_i/e_i. The new allocations are then

    t_i = min(t_i + U_T/n', a_i/e_i) .    (3)

Again the unused time is collected and distributed. This process continues until each job has been allocated t_i = a_i/e_i units of time or U_T = 0.

The above discussion indicates the technique used to determine the amount of processing which is allocated to each concurrently running job. Using this technique, under the assumption that the passive time is evenly distributed with respect to the processing time, we may now calculate the real time requirements for processing a set of jobs. The following example is provided to illustrate these techniques.

Let us start with four jobs which are to be processed by a system with an overlap factor θ = 0.9. To avoid confusion, we will not be concerned with resource constraints or requirements at this time. The parameters of the four jobs are shown in Table 2.1. In this illustration, we assume that jobs 1, 2, and 3 are all started at the same time, T = 0, and that job 4 will begin execution when one of the first three jobs completes.

Table 2.1

    Job    active time, a_i    passive time, p_i    a_i/(a_i+p_i)
     1            1                    0                  1
     2            2                    2                  .5
     3            1                    3                  .25
     4            1                    2                  .33

Using the information in Table 2.1 and the preceding formulas, we will now determine the processor allocations. Table 2.2 shows the results of the calculations and illustrates the effect of multiprogramming. In the Gantt chart of Figure 2.1 we have graphically shown the progress of each job. From the above description we see that the CPU allocation factor, t_i, may change whenever a job begins or finishes processing. It is these points in time which must be determined. Barring any changes in the system, a given job will require a_i/t_i to finish. Computing a_i/t_i for each job, we find a_1/t_1 = 3.077 to be the smallest, and so job 1 is the first to finish. Using this value we may now determine how much processing the other jobs received by calculating t_i(a_1/t_1). We see these values tabulated in the last column of Table 2.2.

Table 2.2

    Interval (sec)     Job    θ/n    a_i/e_i    t_i      active remaining       time processed
    0 - 3.0769          1     .3      1.0       0.325    1.0    ->  0               1.0
                        2     .3      0.5       0.325    2.0    ->  1.0             1.0
                        3     .3      0.25      0.25     1.0    ->  0.2308          0.7692
    3.0769 - 4.0000     2     .3      0.5       0.325    1.0    ->  0.7             0.3
                        3     .3      0.25      0.25     0.2308 ->  0               0.2308
                        4     .3      0.33      0.325    1.0    ->  0.7             0.3
    4.0000 - 5.4000     2     .45     0.5       0.5      0.7    ->  0               0.7
                        4     .45     0.33      0.33     0.7    ->  0.2333          0.4667
    5.4000 - 6.1        4     1.0     0.33      0.33     0.2333 ->  0               0.2333

[Figure 2.1. A Sample Job Stream]

... then we will replace the step (x_{t-1}, d_{t-1}) with the two steps

    (x_{t-1}, s - x_{t-1})    (8)

and

    (s, d_{t-1} - s + x_{t-1}) .    (9)

Since jobs can only be scheduled at the termination of some other job, s must equal some x already in the list. If, in fact, s = x_{t-1}, then equation (8) results in a step of zero duration and may be dropped.
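Equations (8) and (9) — splitting the step that contains the new job's starting time s into the portion before s and the portion from s onward — can be sketched as follows (the list representation and function name are ours); a zero-duration piece, the s = x_{t-1} case, is dropped as described:

```python
def split_step(steps, index, s):
    """Replace the step (x, d) at `index` with (x, s - x) -- equation (8) --
    and (s, d - s + x) -- equation (9) -- dropping zero-duration pieces."""
    x, d = steps[index]
    pieces = [(x, s - x), (s, d - s + x)]
    steps[index:index + 1] = [p for p in pieces if p[1] != 0]
    return steps

# Splitting the step (0, 10) at s = 4 yields (0, 4) and (4, 6);
# splitting at s = 0 leaves just (0, 10).
```

Note that the two new durations sum to the original d, so no eligible time is lost by the split.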
In general, if the duration of any step becomes zero, that step may be removed from the list.

Using the step list which has been described above and a list, initially empty, of jobs which have been scheduled, Codd now proceeds to schedule the next unscheduled job. He locates the first step which is long enough to contain the job. If the current resource usage level is such that the inclusion of the job would exceed the upper limit, B, then the next starting time on this step is found. The search on this step will continue as long as there remains sufficient time to complete the job. If no such position is found, then the next step is examined. When a job is finally placed in the schedule, the step list is updated as described above.

We have already discussed the problems inherent in the use of elapsed time for scheduling purposes. It is important to note that even the use of programmer-supplied active and passive times with the CPU allocation factor may not provide an accurate estimate of elapsed time. Through the enforcement of termination upon exceeding the specified active time, it can usually be guaranteed that the calculated elapsed time will not be exceeded. Unfortunately, there is no guarantee that the program will not use less than the requested amount of active time.

Having presented Codd's algorithm, we will now discuss a few modifications in an attempt to improve the algorithm. Our primary modification will be the use of computed run times based on the programmer or system supplied active and passive times. From our discussion of multiprogramming effects we have noted that a prolongation in run time occurs whenever a job receives less than the maximum CPU allocation factor which it can use. The maximum CPU allocation factor, τ_i,max, is defined as the ratio of active time to execution time, or

    τ_i,max = a_i/(a_i + p_i) .    (10)

Thus, when the number of jobs being executed, n, is such that θ/n >= τ_i,max, then job i will not suffer a prolongation of run time. Since we wish to make extensive use of the previous work, we must take this prolongation effect into account.

We use these results by first ordering our list of unscheduled jobs in decreasing execution time, a_i + p_i, within decreasing τ_i,max. The reason for this ordering is two-fold. First, we wish to keep the longer running jobs to the outside of the resource area. By maintaining the list of unscheduled jobs in decreasing order of execution time, which, by definition, is the minimum time for the job to complete, we guarantee that even if another job is placed above the first job and both are processed at τ_i,max, the first job will not complete before the second job. Second, since jobs with equal execution times are maintained in order of decreasing τ_i,max, we insure that even if they are processed at less than τ_i,max, the first job will suffer greater prolongation of run time than the second job.

To illustrate, let two jobs k and l with equal execution times appear in the order k, l in the list of unscheduled jobs. The execution time, E, is then given by

    E = a_k/τ_k,max = a_l/τ_l,max .    (11)

Since τ_k,max >= τ_l,max, then a_k >= a_l. The method used to determine τ_i(t) insures that τ_k(t) = τ_l(t) or τ_k(t) > τ_l(t) = τ_l,max. If τ_k(t) = τ_l(t), then job l will finish first, since a_k >= a_l. If τ_k(t) > τ_l(t) = τ_l,max, then job l finishes first or at the same time, since a_k/τ_k,max = E is the minimum execution time and τ_k(t) <= τ_k,max, which would require job k to finish either at the same time as or later than job l.

Our strategy is then to move down the unscheduled list inspecting each job. Using the resource usage parameter, r_i, of the 3-tuple which defines the next job, we determine whether the inclusion of this job in the current schedule will exceed the amount of available resource, R.
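The ordering rule above — decreasing minimum execution time a_i/τ_i,max, with ties broken by decreasing τ_i,max — can be sketched as follows (the tuple layout is ours):

```python
def order_unscheduled(jobs):
    """Sort (name, active_time, tau_max) triples by decreasing minimum
    execution time a_i / tau_i,max, ties broken by decreasing tau_i,max."""
    return sorted(jobs, key=lambda j: (-(j[1] / j[2]), -j[2]))

# With the data of Table 3.1, P1 (300 s) precedes P2 (about 152 s), which
# precedes P10 (10 s); of two jobs with equal execution times, the one with
# the larger tau_max comes first.
```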
If this job will fit, resource-wise, then we must check to see if it can be used to build the next layer of a pyramid. Since we are dealing with a dynamic system, we will consider the scheduling interval to be that interval between the present moment (set to zero at the beginning of each interval) and the earliest termination of a job executing in that interval. We have already discussed the treatment of jobs which extend into the next scheduling interval.

At the beginning of each scheduling interval a step list will exist which contains a list of the eligible positions for a job and the length of each position. The elements of the step list are of two types: 1) those steps which are formed from the exposed sections of pyramids whose bases lie on the lower level of the resource map, termed lower level steps; 2) those steps which are formed from the exposed sections of pyramids whose bases lie on the upper limit of the resource map, termed upper level steps. This step list will contain either two elements of the type (0,t,X), where t is the duration of the step and X represents a lower or upper step, or it will contain no elements of the type (0,t,X). From the construction of the pyramids, only those jobs which form the highest (lowest) levels of a lower (upper) pyramid may terminate in a given scheduling interval. As a result, the resource which is freed is contiguous and can be represented by the two step list elements (0,t_U,U) and (0,t_L,L). Because of our definition of the scheduling interval, the pyramids which exist at the beginning of any interval are always left justified.

Initially, the step list contains two entries: (0,∞,L) and (0,∞,U). The first job is then guaranteed to fit on an available step. When this job is placed in the schedule, the step list is updated by replacing the entry (0,∞,L) with the entries (0, a_1/τ_1,max, L) and (a_1/τ_1,max, ∞, L). The resource usage for this job is added to the current resource usage level, R_c.
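The initial update just described — replacing (0, ∞, L) with a step that ends when the first job finishes and one that begins there — can be sketched as follows (the list representation and function name are ours):

```python
INF = float("inf")

def place_first_job(steps, finish, level="L"):
    """Replace the entry (0, INF, level) with (0, finish, level) and
    (finish, INF, level), as when the first job is placed on the step list."""
    i = steps.index((0, INF, level))
    steps[i:i + 1] = [(0, finish, level), (finish, INF, level)]
    return steps

# Starting from [(0, INF, "L"), (0, INF, "U")], placing a job that finishes
# at a_1 / tau_1,max = 60 / 0.2 = 300 seconds leaves
# [(0, 300.0, "L"), (300.0, INF, "L"), (0, INF, "U")].
```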
The next job which satisfies r_i <= R - R_c is then located. Upon placing this job in the schedule, we now recalculate the τ_i(t) for the two jobs and replace the step list entry (0, a_1/τ_1,max, L) with the two entries (0, a_2/τ_2(t), L) and (a_2/τ_2(t), (a_1 - a_2 τ_1(t)/τ_2(t))/τ_1,max, L). The step list entry (a_1/τ_1,max, ∞, L) is replaced by (a_2/τ_2(t) + (a_1 - a_2 τ_1(t)/τ_2(t))/τ_1,max, ∞, L).

In general, the method for updating the step list is identical to that proposed by Codd. Specific differences, which are mainly reductions in the amount of work done, are a result of the restriction that we may only schedule jobs on the two steps which start at 0, namely (0,t_U,U) and (0,t_L,L). Since the sizes of the steps are determined by the jobs which are being run during the scheduling interval, we need not maintain the steps which do not fall in the current scheduling interval. Thus, we need only know the length of the last scheduling interval and the CPU allocation factor, τ_i(t), for each job during that interval. From this information we may then determine the amount of active time remaining for those jobs which are present at the beginning of this new interval.

To illustrate the above techniques, we will use them to schedule a group of jobs. The jobs to be scheduled are shown in Table 3.1. Note that we have provided τ_i,max, the maximum CPU allocation factor, instead of the passive time, p_i. We will schedule these jobs on a contiguous memory resource which is four resource units large. The overlap factor is assumed to be equal to unity. We further assume that no jobs enter the system after the first scheduling interval.

Table 3.1. Scheduling Data

    Job    Active time (secs)    τ_i,max    Resource request    Execution time (secs)
     1            60               .2              1                   300
     2           100               .66             1                   150
     3            80               .6              1                   133
     4            40               .3              2                   133
     5            50               .4              2                   125
     6            40               .4              2                   100
     7            30               .3              3                   100
     8            30               .5              3                    60
     9            20               .9              2                    22
    10            10              1                1                    10

We start by taking the jobs P_1, P_2, P_3, and P_10. The resource usage level is now four units. Upon calculation of the τ_i(t) for each of the jobs, and dividing the active time of each job by its allocation, we find that job P_10 will finish in 37.08 seconds. When job P_10 completes, we check the unscheduled job list and find that no job in the list will fit in the remaining resource. We recalculate the τ_i(t) and find that this new scheduling interval will end at 212.79 seconds. At the beginning of the third interval, we find that the resource usage level is only two units. Examining the unscheduled job list we find that P_4 will fit in the remaining resource. In the fourth interval we find that the next job, P_5, will not fit on the lower step which is above P_4. To accommodate P_5 we schedule it on the upper step. The process continues in the above manner until all of the jobs have been scheduled. The final schedule is graphically displayed in Figure 3.2. A Gantt chart has been provided in Figure 3.3 showing the CPU allocation factors used in each scheduling interval. We have thus demonstrated that there exist methods for deriving best-fit type schedules for the class of constrained contiguous memory resources.

[Figure 3.2. The final schedule]

[Figure 3.3. Gantt chart of the CPU allocation factors used in each scheduling interval]

... if r_i > max(r_k, R - r_j - r_k - r_l), then we will use two areas, one which is r_i - max(r_k, R - r_j - r_k - r_l) units, and the other which is max(r_k, R - r_j - r_k - r_l) units. We may use
any number of groups to arrive at the required number of resource units. We see then that we are not concerned with the location of the free units, but with the total quantity of the free resource units. As a consequence, we may remove the pyramid restriction when dealing with fragmented resources.

Having removed the pyramid restriction, we are now able to treat the scheduling of a job. Our algorithm for scheduling fragmented resources will be as follows:

(1) Determine the length of the current scheduling interval by examining all jobs which are currently processing and determining which one will finish first and when it will finish.

(2) Select a candidate job which has a resource request commensurate with the free resource units.

(3) Calculate the tentative CPU allocation factors for this job and the others which are currently being processed, and using the tentative τ_i(t) determine the length of the next scheduling interval and the amount of free resource at the end of the interval.

(4) Note the tentative amount of free resource at the end of the new interval if it is greater than or equal to the current amount.

(5) After checking every job which will fit the resource availability, select the one which will result in the greatest amount of free resource at the end of the next scheduling interval.

In order to demonstrate the use of the fragmented memory scheduling algorithm, we present the following example. The jobs which are to be scheduled, their active time, τ_i,max, and their resource requirements are shown in Table 3.2. We will assume that all of the jobs are present at the beginning of the first interval. The jobs have been ordered by decreasing resource request and decreasing execution time. As in the example presented for the contiguous memory case, we assume that θ = 1 and that no jobs enter the system after the beginning of the first scheduling interval. The fragmented memory resource has an available resource amount, R_i = 7.
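Steps (1) through (5) can be sketched in Python (the job-tuple layout, names, and tolerances are ours; the allocation helper re-implements the rationing of equations (2) and (3) from Section 2.2). Run on the second decision of the example that follows — P_1 executing and two resource units free — it selects P_6, the choice made in the text:

```python
def allocate(theta, ratios):
    """Processor rationing of equations (2) and (3): equal shares theta/n,
    capped at each job's a_i/e_i, with unused time redistributed."""
    n = len(ratios)
    t = [min(theta / n, r) for r in ratios]
    unused = theta - sum(t)                      # equation (2)
    while unused > 1e-12:
        idle = [i for i in range(n) if t[i] < ratios[i]]
        if not idle:
            break                                # unused time cannot be absorbed
        grant = unused / len(idle)
        for i in idle:
            t[i] = min(t[i] + grant, ratios[i])  # equation (3)
        unused = theta - sum(t)
    return t

def pick_next(free_units, running, candidates, theta=1.0):
    """Steps (2)-(5): among candidates that fit, pick the one leaving the
    most free resource at the end of the next scheduling interval.
    Jobs are (name, remaining_active, tau_max, request) tuples."""
    best, best_free = None, -1.0
    for cand in candidates:
        if cand[3] > free_units:                 # step (2): must fit
            continue
        trial = running + [cand]
        alloc = allocate(theta, [j[2] for j in trial])   # step (3)
        finish = [j[1] / a for j, a in zip(trial, alloc)]
        interval = min(finish)                   # step (1): earliest termination
        released = sum(j[3] for j, f in zip(trial, finish)
                       if f - interval < 1e-9)   # jobs ending the interval
        free_after = free_units - cand[3] + released
        if free_after > best_free:               # steps (4)-(5)
            best, best_free = cand, free_after
    return best
```

With P_1 (80 s active, τ_max = .6, 5 units) running and 2 units free, the candidate P_6 finishes together with P_1 and frees all 7 units, so it wins.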
Since at the start of the first interval the resource usage is zero, we may take any job. We will choose P_1, since its completion will release the most resource. There are now 2 resource units left. Upon examining the jobs which require 2 or fewer resource units, we find that P_6 will finish at the same time as P_1, and 7 units will be available. At 133 seconds, when P_1 and P_6 terminate, we select P_2 with a request of 4 units. To fill the remaining 3 units we will select P_3. Upon termination of P_2 at 153 seconds, there are four free resource units. We select P_4 for execution, since its termination will release 3 units. P_8 is then assigned to the remaining 1 unit of resource. Upon determination of the τ_i(t) we find that P_8 will terminate at 275 seconds. At that time we will schedule P_9, which has a resource request of 1 unit. We proceed in this manner until all of the jobs have been scheduled. The entire schedule is presented in Figure 3.4. The τ_i(t) used in each scheduling interval are shown in the Gantt chart of Figure 3.5.

Table 3.2. Scheduling data for fragmented memory resource example

    Job    Active time (secs)    τ_i,max    Resource request    Execution time (secs)
     1            80               .6              5                   133
     2            10              1                4                    10
     3           100               .66             3                   150
     4            50               .4              3                   125
     5            60               .2              2                   300
     6            40               .3              2                   133
     7            30               .5              2                    60
     8            40               .4              1                   100
     9            30               .3              1                   100
    10            20               .9              1                    22

We have produced two algorithms for use in systems with a single constrained resource. The first is designed to deal with contiguous memory resources, and the second is directed towards the problem of a fragmented memory resource.

[Figure 3.4. Schedule for the fragmented memory resource example]
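The execution times listed in Table 3.2 are simply the minimum run times a_i/τ_i,max; a quick consistency check (data transcribed from the table):

```python
# job: (active_time_secs, tau_max, resource_request) from Table 3.2
jobs = {
    1: (80, 0.6, 5),  2: (10, 1.0, 4),  3: (100, 0.66, 3),
    4: (50, 0.4, 3),  5: (60, 0.2, 2),  6: (40, 0.3, 2),
    7: (30, 0.5, 2),  8: (40, 0.4, 1),  9: (30, 0.3, 1),
    10: (20, 0.9, 1),
}
# minimum execution time e = a / tau_max, e.g. 60 / 0.2 = 300 s for job 5
execution = {j: a / t for j, (a, t, _) in jobs.items()}
```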
[Figure 3.5. Gantt chart of the CPU allocation factors used in each scheduling interval]

4. MULTIPLE MEMORY CONSTRAINTS

... We chose to schedule P_5 instead of P_6, P_7, or P_8, since only this choice would reduce the critical fragmented resource usage more than it would the contiguous resource usage.

Table 4.1. Set of jobs for 2-resource example

    Job    Active time (secs)    τ_i,max    r_C    r_F    Execution time (secs)
     1            60               .2        1      2           300
     2           100               .66       1      3           150
     3            80               .6        1      5           133
     4            40               .3        2      2           133
     5            50               .4        2      3           125
     6            40               .4        2      1           100
     7            30               .3        3      1           100
     8            30               .5        3      2            60
     9            20               .9        2      1            22
    10            10              1          1      4            10