UIUCDCS-R-77-859

SCHEDULING PERIODIC-TIME-CRITICAL JOBS ON SINGLE PROCESSOR AND MULTIPROCESSOR COMPUTING SYSTEMS

by SUDARSHAN KUMAR DHALL

UILU-ENG 77 1712

April, 1977

SCHEDULING PERIODIC-TIME-CRITICAL JOBS ON SINGLE PROCESSOR AND MULTIPROCESSOR COMPUTING SYSTEMS

BY

SUDARSHAN KUMAR DHALL

B.A., Panjab University, 1956
M.A., University of Delhi, 1968
M.S., University of Illinois, 1972

THESIS

Submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 1977

Urbana, Illinois

This work was supported in part by the National Science Foundation under Grant Number NCS 73-03408.

ACKNOWLEDGMENT

I wish to express my sincere thanks and gratitude to my thesis advisor, Professor Chung-Laung Liu, whose supervision, invaluable guidance and advice, and constant encouragement made this work possible. I would also like to thank Professor Paul Handler for supporting me during the course of the study. Lastly, I wish to thank my wife, Pushpa, for her patience, encouragement and devoted love.

This work is dedicated to my parents.

TABLE OF CONTENTS

CHAPTER
1. INTRODUCTION
   1.1 Motivation and Objectives
   1.2 The General Problem
   1.3 A Survey of Previous Work
   1.4 Scheduling to Meet Deadlines
2. SCHEDULING PERIODIC-TIME-CRITICAL JOBS
   2.1 Introduction
   2.2 Previous Work on Time-Critical Jobs
   2.3 Periodic-Time-Critical Jobs
   2.4 Reverse-Rate-Monotonic Priority Assignment Algorithm
3. SCHEDULING PERIODIC-TIME-CRITICAL JOBS ON A MULTIPROCESSOR COMPUTING SYSTEM
   3.1 Introduction
   3.2 Rate-Monotonic and Deadline Driven Scheduling Algorithms for Multiprocessor Computing Systems
   3.3 The Rate-Monotonic-Next-Fit Scheduling Algorithm
   3.4 Conclusion
4. RATE-MONOTONIC-FIRST-FIT SCHEDULING ALGORITHM
   4.1 Introduction
   4.2 Rate-Monotonic-First-Fit Scheduling Algorithm
5. CONCLUSIONS
LIST OF REFERENCES
VITA

CHAPTER 1
INTRODUCTION

1.1 Motivation and Objectives

There is a variety of situations in which a set of resources is available for the performance of a set of jobs. The problem of scheduling is to allocate the resources to the jobs so that they will be performed in accordance with some given constraints. We mention here some examples:

(a) In a diagnostic clinic, a patient is scheduled to undergo a number of tests and to visit a number of specialists. In many cases, the schedule must ensure that a patient has undergone certain tests before his visit to a certain specialist.

(b) A number of input-output devices are connected to a computer through a multiplex channel. It is desired to allow these devices to access the central memory through the multiplex channel.
(c) At an airport with a number of runways, it is necessary to assign aircraft to runways for landing or taking off in a certain order.

(d) The manager of a service station with three mechanics wants to ensure servicing ten vehicles during the day. He has to decide which mechanic should work on which vehicle at what time. In assigning work to mechanics, he may also have to take into consideration the capability of the individual mechanic with regard to the type of work he is being assigned. It may at times be necessary to ask a mechanic to discontinue work on one vehicle to start work on another.

In the above examples, the "specialists", the "central memory of the computer", the "runways" and the "mechanics" are resources, and the "visits of outpatients", "accessing by the input-output devices", "take-off or landing of an aircraft", and "servicing the vehicles" are jobs to be performed. Regardless of the type of resources available and the character of the jobs to be performed, there is a fundamental similarity among all these scheduling problems: given a set of jobs and a set of resources, to determine the allocation of resources to jobs and the order in which the jobs will be executed on the allocated resources so as to meet some desired goal. While determining such an allocation and ordering, one should satisfy the given properties of, and constraints on, the jobs and resources. For example, for a given set of jobs it may not be possible to start execution of a particular job before certain other jobs have been completed, as in an assembly line where a certain number of parts must be ready before a unit can be assembled. It may also be the case that a particular job in the set cannot be executed on a particular resource.

With each scheduling problem, one may associate a cost function. For example, if some machinery is hired for executing a set of jobs, it will be advantageous if all the jobs can be completed as soon as possible. Thus, in many real life problems, poor scheduling decisions might lead to excessively large costs. In certain other cases, it is necessary to complete jobs before some prescribed deadlines, or else some irreparable loss may result. It is, therefore, worth considering those cases where a proper scheduling decision may result in savings, or, in general, minimize a cost function. A special case is where the cost function is a function of the deadline. Considerable effort has been spent in research in this direction during the past years. Many interesting results have been obtained and some intriguing questions are still unanswered. This thesis concerns itself with one aspect of the general scheduling problem, that of scheduling a set of jobs with a view to meeting the deadlines of all jobs.

1.2 The General Problem

The general scheduling problem is to allocate available resources over a period of time to perform a set of jobs so as to achieve some performance measure. We shall use the language of computing, and henceforth a resource will be referred to as a processor. A set of processors will be referred to as a computing system. Thus we may talk of a single-processor computing system or a multiprocessor computing system. The processors in a computing system might not be identical. It may take different amounts of time to execute a job on different processors. It may also be possible that some jobs can only be executed on some of the processors. A computing system with n processors will be denoted by P = (P_1, P_2, ..., P_n).
A set of jobs is formally specified by an ordered triple (J, ≺, μ). J = {J_1, J_2, ..., J_m} denotes a set of m jobs to be executed. ≺ is a partial ordering relation defined on J, which specifies restrictions on the order in which jobs can be performed. That is, J_r ≺ J_s means that the execution of job J_s cannot begin until the execution of job J_r has been completed. J_r is called a predecessor of J_s, and J_s is called a successor of J_r. J_s is said to be an immediate successor of J_r if there is no J_t such that J_r ≺ J_t ≺ J_s. A set of jobs is said to be independent if the partial ordering relation ≺ is empty.

μ is a function from J to the space of n-component vectors. μ(J_i) = (t_1i, t_2i, ..., t_ni), where 0 ≤ t_ri ≤ ∞, and for each i there exists at least one r such that t_ri < ∞. The value of t_ri is the time it takes to execute job J_i on the r-th processor and is called the execution time of job J_i on that processor. That t_ri = ∞ means that job J_i cannot be executed on the r-th processor. If all processors in the computing system are identical, then t_ri is the same for all r. In that case, we will simply write μ(J_i) = t_i.

A set of jobs can be represented by a directed graph such as the one in figure 1.1. Corresponding to each job J_i in the set there is a node J_i in the graph. There is a directed edge from the node J_r to the node J_s if and only if J_s is an immediate successor of J_r. The node corresponding to job J_i is labelled as J_i/μ(J_i).

By scheduling a set of jobs on a multiprocessor system we mean to specify for each job J_i the time interval(s) within which it will be executed and the processor P_r on which execution will take place. Such a specification of the assignment of processors to jobs is called a schedule. An explicit way to describe a schedule is a timing diagram, which is also known as the Gantt chart. As an example, the timing diagram of a schedule for execution of the set of jobs in figure 1.1 on a three-processor computing system is shown in figure 1.2. In a schedule a processor might be left idle either because there is no executable job at that time or because it is an intentional choice. (A job is said to be executable at a certain time instant if the execution of all its predecessors has been completed at that time.)

Figure 1.1 A directed graph representing a set of jobs.

Figure 1.2 A timing diagram representing a schedule for the execution of the set of jobs in figure 1.1 on a three-processor computing system.

Clearly, it is neither necessary nor beneficial in a schedule to leave all processors idle at the same time. For a given schedule, an idle period of a processor is defined to be the time interval during which a processor is not executing any job (while at least one other processor is executing some job). In the Gantt chart, Φ_1, Φ_2, ... are used to denote such idle periods (figure 1.2).

A scheduling algorithm is a procedure that produces a schedule for every given set of jobs. A class of scheduling algorithms may have the restriction that a processor will not be left idle intentionally. That is, a processor is left idle for a certain period of time if and only if there is no executable job within that period. A scheduling algorithm that follows this rule is called a demand scheduling algorithm. Such a scheduling algorithm can be specified by merely giving the rules on how jobs are to be chosen for execution at any instant when one or more processors are free.
(Of course, the choice is only among jobs that are executable at that time.) A subclass of demand scheduling algorithms is the class of priority driven scheduling algorithms. According to a priority driven scheduling algorithm, all jobs in the set are assigned priorities, and jobs with higher priorities are executed instead of jobs with lower priorities when they are competing for processors. Rules are also provided for breaking ties. In this type of algorithm, one just lists the jobs in J in descending order of their priorities from left to right. Such a list is called a priority list. When a processor becomes free, the priority list is scanned from left to right until the first executable job which has not yet been executed is found. This job is then assigned to the free processor. (If two or more processors are available at the same time, a rule is also specified as to which of the processors will be assigned which job. A simple rule is to assign a higher priority job to a processor of lower index.) Such a scheduling algorithm is also sometimes referred to as a list scheduling algorithm.

A scheduling algorithm is said to be non-preemptive if it follows the rule that once the execution of a job has begun, it must continue without interruption until completion. On the other hand, a preemptive scheduling algorithm is one that permits the execution of a job to be interrupted and removed from the processor, subject to the condition that the interrupted job is restarted later from the point at which it was last interrupted. A possible interpretation of the preemptive strategy is: suppose a high speed drum is available in a multiprocessor computing system. It is then quite reasonable to execute a portion of a program, store all intermediate results on the high speed drum to make room (on the processor) for another program, and retrieve the results when execution of this program is resumed later on. The completion time of a schedule is the time at which the execution of all jobs in the set has been completed.

1.3 A Survey of Previous Work

For a computing system containing n_i processors of speed b_i, i = 1, 2, ..., k, with b_1 > b_2 > ... > b_k ≥ 1, let ω denote the completion time when jobs are scheduled according to an arbitrary priority list, and let ω_0 denote the shortest possible completion time. Then

    ω/ω_0 ≤ 1 + b_1/b_k − b_1/(Σ_{i=1}^{k} n_i b_i).

Recently Garey and Graham [5] considered an augmented multiprocessing model. They introduced a set of "auxiliary resources" into the system with the restriction that at no time may the system use more than some predetermined amount of each auxiliary resource. This can be described as follows. Assume that a set of auxiliary resources R = {R_1, R_2, ..., R_s} is given and that these resources have the following properties. The total amount of resource R_i available at any time is (normalized without loss of generality to) 1. For each k, the job J_k requires the use of ρ_ik units of the auxiliary resource R_i at all times during its execution, where 0 ≤ ρ_ik ≤ 1. At any time t, let r_i(t) denote the total amount of the auxiliary resource R_i which is being used at time t. Thus,

    r_i(t) = Σ_{J_k ∈ f(t)} ρ_ik,

where f(t) is the set of jobs which are being executed at time t. (The restriction that there are at most n processors can be expressed by requiring |f(t)| ≤ n for all t.) In this model, the fundamental constraint is simply this: r_i(t) ≤ 1, for all t and i = 1, 2, ..., s. In other words, at no time can one use more of any resource than is currently available.
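The priority-list mechanism described above underlies all of the bounds that follow. As a concrete illustration (a minimal sketch, not taken from the thesis, with illustrative names and data), the following program simulates non-preemptive list scheduling of independent jobs on n identical processors and returns the completion time ω produced by a given priority list.

```python
# An illustrative sketch (not from the thesis) of non-preemptive list scheduling of
# independent jobs on n identical processors: whenever a processor becomes free, the
# next job on the priority list is started on it.  All names and data are assumptions.

import heapq

def list_schedule(execution_times, priority_list, n):
    """Return the completion time (omega) produced by the given priority list
    on n identical processors, for independent jobs."""
    free_at = [0.0] * n                      # next instant each processor is free
    heapq.heapify(free_at)
    finish = 0.0
    for job in priority_list:                # scan the list from left to right
        start = heapq.heappop(free_at)       # earliest-free processor
        end = start + execution_times[job]
        finish = max(finish, end)
        heapq.heappush(free_at, end)
    return finish

if __name__ == "__main__":
    times = [2.0, 2.0, 3.0]                  # hypothetical independent jobs
    print(list_schedule(times, [0, 1, 2], 2))   # omega = 5.0 with this list
    print(list_schedule(times, [2, 0, 1], 2))   # omega = 4.0 with a better list
```

Different priority lists can thus produce different completion times for the same job set, which is precisely the gap that the worst-case ratios ω/ω_0 below quantify.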
As before, let ω denote the completion time for a set of jobs using an arbitrary priority list and ω_0 the optimal completion time for the same set of jobs. Then we have the following results (see Garey and Graham [5]):

Theorem: For a computing system with n identical processors and one kind of auxiliary resource, we have ω/ω_0 ≤ n.

Theorem: For a computing system with n (≥ 2) processors and q auxiliary resources, and for a set of independent jobs, we have

    ω/ω_0 ≤ min{ (n + 1)/2, q + 2 − (2q − 1)/n }.

In the results presented above, no effort has been spent in obtaining a good priority list. A few algorithms which spend some effort in searching for a priority list are presented below.

Theorem (Chen & Liu [1]): When Hu's algorithm [7] is applied to schedule a set of jobs with unit execution times on a computing system with n identical processors, then

    ω/ω_0 ≤ 4/3,              for n = 2,
    ω/ω_0 ≤ 2 − 1/(n − 1),    for n ≥ 3,

where ω is the completion time for a given set of jobs when scheduled according to the above algorithm, and ω_0 is the optimal completion time for the same set of jobs.

Theorem (Lam & Sethi [12]): When Coffman and Graham's algorithm [2] is applied to schedule a set of jobs with unit execution times on a computing system with n identical processors, then ω/ω_0 ≤ 2 − 2/n.

The following two results regarding scheduling a set of independent jobs on a computing system with n identical processors are due to Graham [6].

Theorem: If jobs are scheduled according to a priority list which assigns higher priority to a job with longer execution time, then ω/ω_0 ≤ 4/3 − 1/(3n).

Theorem: If the k jobs with the longest execution times in the set are scheduled in such a way that the total execution time (for the execution of these k jobs) is minimum, and the other jobs in the set are scheduled according to the rule that whenever a processor is free an arbitrarily chosen job will be executed on that processor, then

    ω/ω_0 ≤ 1 + (1 − 1/n)/(1 + ⌊k/n⌋).

One can consider algorithms that produce schedules which are as close to optimal schedules as is desired, at the expense of computation time. An algorithm is said to be an ε-approximation algorithm if, for a given ε, the algorithm yields a schedule such that the ratio (ω − ω_0)/ω_0 is less than or equal to ε.

Labetoulle [11] considered the following algorithm, the "e algorithm", for scheduling a set of time-critical jobs. For a job J_i that has an active request at time t, let C_i(t) denote the remaining computation time of the request and a_i(t) its laxity, that is, the length of time the request can remain unexecuted beyond t without missing its deadline; if the laxity of an active request becomes negative at some time t, the deadline of job J_i will not be respected. A switching point is any instant of time where the schedule may be modified. At a switching point t, a processor is assigned to the active request of J_k if, at time t:

(a) a_k(t) ≤ a_i(t) for all i such that job J_i has not been assigned a processor at time t, and

(b) C_k(t) < C_r(t) for every r such that job J_r has an active request, has not been assigned any processor, and a_k(t) = a_r(t).

If there are any ties they are broken arbitrarily. A request which is assigned a processor at time t continues to be processed until the next switching point, which is the earliest of the following times:

(i) t + C_k(t),
(ii) the time of a new request, and
(iii) t + min{ a_i(t) : J_i is not being executed on any processor at t } − max{ a_r(t) : J_r is being executed at t }.

At any switching point, reassignment of processors takes place according to steps (a) and (b). It has been shown that the "e algorithm" is optimal for a single-processor computing system (see Labetoulle [11]). Nothing has been proved concerning the behavior of the "e algorithm" for a multiprocessor computing system.

The above two results regarding the deadline driven algorithm and the "e algorithm" are also applicable to the problem of scheduling a set of independent jobs with individual initiation times and deadlines on a computing system with a single processor where preemptions are allowed (Labetoulle [11]).
2.3 Periodic-Time-Critical Jobs

An interesting variation of the above model is obtained by stipulating that the deadline of each request of a job coincides with the next request of the same job, and by assuming that the first requests of all jobs occur at the same time. A job J in this model can thus be specified by an ordered pair (C, T), where C is the computation time for each request of the job and T is the time interval at which the requests of the job occur. The reciprocal of the request period T will be called the request rate of J. In what follows we shall be concerned with the scheduling of a set of jobs of the type just described. It will further be assumed that all jobs are independent. A set of m jobs will be denoted as {J_i = (C_i, T_i)}_{i=1}^{m}.

The utilization factor of a job J = (C, T) is defined to be the ratio C/T, and the utilization factor of a set of jobs is the sum of the utilization factors of all jobs in the set. Notice that the utilization factor of a job is the fraction of the processor time taken up by the job. Clearly, if the utilization factor of a set of jobs is greater than n, it will be impossible to find a schedule that will feasibly schedule the set of jobs on an n-processor computing system. Hence a necessary condition for the existence of a feasible schedule for a set of jobs on a computing system with n processors is that the utilization factor of the set of jobs be less than or equal to n. Whether this condition is sufficient or not will depend on the particular scheduling algorithm used.

A set of jobs is said to fully utilize a processor according to a certain scheduling algorithm if the set of jobs can be feasibly scheduled by that algorithm and increasing the computation time of any one of the jobs in the set would cause the algorithm not to produce a feasible schedule for the set. For a given algorithm, the minimum achievable processor utilization is the minimum of the utilization factor over all job sets that fully utilize the processor. This means that any set of jobs whose utilization factor is less than or equal to the minimum achievable utilization can always be feasibly scheduled by the corresponding algorithm. (Sets of jobs with larger utilization factor may or may not be feasibly scheduled by the corresponding algorithm.) Thus a possible measure of the "effectiveness" of a scheduling algorithm is the minimum achievable processor utilization of the algorithm. Other things being equal, an algorithm which has a higher minimum achievable processor utilization is naturally more "effective", since it enlarges the set of feasibly schedulable job sets.

The problem of scheduling a set of periodic-time-critical jobs on a single-processor computing system was first studied by Liu & Layland [13] and Serlin [16]. Algorithms considered by Liu & Layland fall in the class of preemptive priority-driven scheduling algorithms. The deadline driven algorithm, which assigns priorities to the requests according to their deadlines, with the highest priority being assigned to the request whose deadline is the earliest, was also considered by Liu & Layland. They obtained the following result:

Theorem: A necessary and sufficient condition that a set of periodic-time-critical jobs be feasibly scheduled on a single processor by the deadline driven scheduling algorithm is that the utilization factor of the set of jobs is less than or equal to 1.
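The condition in this theorem is easy to check mechanically. The short sketch below (an illustration, not part of the thesis; the function names and the job set are assumptions) computes the utilization factor of a set of periodic-time-critical jobs, each represented as a pair (C, T), and applies the test for the deadline driven algorithm.

```python
# A minimal sketch of the utilization-factor test above; names and data are assumed.

def utilization_factor(jobs):
    """Sum of C/T over all jobs (C, T) in the set."""
    return sum(c / t for c, t in jobs)

def edf_feasible_single_processor(jobs):
    """Necessary and sufficient condition of the theorem above: the deadline driven
    (earliest-deadline-first) algorithm feasibly schedules the set on one processor
    if and only if the utilization factor is at most 1."""
    return utilization_factor(jobs) <= 1.0

if __name__ == "__main__":
    jobs = [(1, 2), (1, 3), (1, 7)]              # hypothetical job set
    print(utilization_factor(jobs))              # 0.5 + 0.333... + 0.142... = 0.976...
    print(edf_feasible_single_processor(jobs))   # True
```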
This theorem, together with the fact that for a set of jobs to be feasibly scheduled on a single processor computing system by any algorithm its utilization factor must be less than or equal to 1, establishes that the deadline driven scheduling algorithm is optimal, which is in conformity with Labetoulle's result stated earlier.

Another algorithm considered by Liu & Layland was called the rate-monotonic priority assignment algorithm. This algorithm was referred to as the intelligent fixed priority algorithm by Serlin [16]. According to this algorithm, priorities are assigned to jobs in decreasing order of their request rates: a job with a higher request rate is assigned higher priority than a job with a lower request rate. This algorithm is an example of a fixed priority scheduling algorithm. Some results about this algorithm are as follows.

Theorem [13]: Among all fixed priority scheduling algorithms for scheduling a set of periodic-time-critical jobs on a single processor, the rate-monotonic scheduling algorithm is a best one, in the sense that if a set of periodic-time-critical jobs can be feasibly scheduled by any fixed assignment of priorities, then this set can also be feasibly scheduled by the rate-monotonic scheduling algorithm.

The following theorem gives a measure of the "effectiveness" of the rate-monotonic scheduling algorithm.

Theorem A [13]: A set of m periodic-time-critical jobs can be feasibly scheduled on a single processor computing system by the rate-monotonic scheduling algorithm if the utilization factor of the set is less than or equal to m(2^{1/m} − 1). Also, this bound is tight in the sense that for each m there exists a set of m jobs with utilization factor m(2^{1/m} − 1) which fully utilizes the processor.

It should be noted that this theorem provides only a sufficient condition for a set of m jobs to be feasibly scheduled by the rate-monotonic scheduling algorithm. Sets of m jobs with utilization factor greater than m(2^{1/m} − 1) may or may not be feasibly scheduled by the rate-monotonic scheduling algorithm. (For example, for m = 3 the bound is 3(2^{1/3} − 1) ≈ 0.780.)

Theorem 1: Let {J_i = (C_i, T_i)}_{i=1}^{m} be a set of m jobs with T_1 ≤ T_2 ≤ ... ≤ T_m, and let x_1 = C_1/T_1 be the utilization factor of the job with the shortest request period. If the utilization factor of the remaining m − 1 jobs is less than or equal to (m − 1){(2/(1 + x_1))^{1/(m−1)} − 1}, then the set can be feasibly scheduled by the rate-monotonic scheduling algorithm.

Proof: It will be sufficient to show that the least upper bound to the utilization factor for the set of m jobs in which the utilization factor of the job with the shortest request period is x_1 is

    x_1 + (m − 1){(2/(1 + x_1))^{1/(m−1)} − 1}.

It was shown in [13] that in determining the least upper bound to the utilization factor according to the rate-monotonic scheduling algorithm, it is sufficient to consider all sets of jobs in which the ratio T_m/T_1 is less than or equal to 2, where T_m is the longest request period and T_1 is the shortest request period.

Let C_1' = C_1 and T_1' = T_1. Let C_2', C_3', ..., C_{m−1}' be such that C_1' + C_2' + ... + C_{m−1}' ≤ T_1'. Let J_i' = (C_i', T_i'), where T_{i+1}' = T_i' + C_i' for i = 1, 2, ..., m−2. Then, if we choose J_m' = (C_m', T_m'), where

    T_m' = T_{m−1}' + C_{m−1}'

and

    C_m' = T_m' − 2(C_1' + C_2' + ... + C_{m−1}'),

the processor is fully utilized. It was shown in [13] that among all job sets which fully utilize the processor, a job set for which C_i = T_{i+1} − T_i, for i = 1, 2, ..., m−1, and C_m = T_m − 2 Σ_{i=1}^{m−1} C_i has the minimum utilization factor.
The utilization factor of the set of jobs {J_i' = (C_i', T_i')}_{i=1}^{m} is

    U = C_1'/T_1' + C_2'/(T_1' + C_1') + ... + C_{m−1}'/(T_1' + C_1' + ... + C_{m−2}')
        + (T_1' − C_1' − ... − C_{m−1}')/(T_1' + C_1' + ... + C_{m−1}').

Writing x_i for C_i'/T_1', we can write

    U = x_1 + x_2/(1 + x_1) + ... + x_{m−1}/(1 + x_1 + ... + x_{m−2})
        + (1 − x_1 − x_2 − ... − x_{m−1})/(1 + x_1 + ... + x_{m−1}).      (2.1)

This expression for U must be minimized over the x_i's, i = 2, 3, ..., m−1, to find the least upper bound to the utilization factor. On setting the first derivatives of U with respect to x_2, x_3, ..., x_{m−1} to zero, we obtain:

    ∂U/∂x_2 = 1/(1 + x_1) − x_3/(1 + x_1 + x_2)^2 − ... − x_{m−1}/(1 + x_1 + ... + x_{m−2})^2
              − 2/(1 + x_1 + ... + x_{m−1})^2 = 0,

    ∂U/∂x_3 = 1/(1 + x_1 + x_2) − x_4/(1 + x_1 + x_2 + x_3)^2 − ... − x_{m−1}/(1 + x_1 + ... + x_{m−2})^2
              − 2/(1 + x_1 + ... + x_{m−1})^2 = 0,
    ...
    ∂U/∂x_{m−1} = 1/(1 + x_1 + ... + x_{m−2}) − 2/(1 + x_1 + ... + x_{m−1})^2 = 0.

Solving these equations for x_2, x_3, ..., x_{m−1} and substituting these values in (2.1), we obtain

    U = x_1 + (m − 1){(2/(1 + x_1))^{1/(m−1)} − 1},

which proves the assertion.

Note that as m approaches infinity, the utilization factor U approaches x_1 + ln(2/(1 + x_1)). Also, if x_1 = 0, the minimum achievable utilization factor for a set of m − 1 jobs is (m − 1)(2^{1/(m−1)} − 1), which is in conformity with the result of Theorem A. Figure 2.1 shows how the minimum achievable utilization factor of a set of jobs varies as a function of the utilization factor of the job with the shortest request period.

Figure 2.1 Minimum achievable utilization factor as a function of the utilization factor of the job with the shortest request period.

Suppose we are given a set of m − 1 jobs which can be feasibly scheduled according to the rate-monotonic scheduling algorithm, but do not fully utilize the processor. It will be useful at times to know whether a job can be added to this set so that the new set can also be feasibly scheduled. The result of Theorem A can be applied to determine the least upper bound to the utilization factor that the new job can have. However, if the request period of the new job is longer than the request period of any other job in the set, then the following theorem gives a stronger result.

Theorem 2: Let {J_i = (C_i, T_i)}_{i=1}^{m−1} be a set of m − 1 jobs, with T_1 ≤ T_2 ≤ ... ≤ T_{m−1} and utilization factor u, which can be feasibly scheduled on a single processor by the rate-monotonic scheduling algorithm. If J_m = (C_m, T_m) is a job with T_m ≥ T_{m−1} and utilization factor less than or equal to 2{1 + u/(m − 1)}^{−(m−1)} − 1, then the set of m jobs {J_1, ..., J_m} can also be feasibly scheduled by the rate-monotonic scheduling algorithm.

Proof: As in the proof of Theorem 1, consider a set of m jobs J_i' = (C_i', T_i') which fully utilizes the processor, with T_{i+1}' = T_i' + C_i' for i = 1, 2, ..., m−1 and C_m' = T_1' − (C_1' + ... + C_{m−1}'), and let

    U = C_1'/T_1' + C_2'/T_2' + ... + C_m'/T_m'.

In order to find the minimum utilization factor over all possible sets of values C_1', T_1', T_2', ..., T_m', we have to minimize the above expression for U. Let x_i = C_i'/T_1', for i = 1, 2, ..., m−1. Then

    U = x_1 + x_2/(1 + x_1) + ... + x_{m−1}/(1 + x_1 + ... + x_{m−2})
        + (1 − x_1 − ... − x_{m−1})/(1 + x_1 + ... + x_{m−1}),

or,

    U = u + (1 − x_1 − ... − x_{m−1})/(1 + x_1 + ... + x_{m−1}),      (2.2)

where

    u = x_1 + x_2/(1 + x_1) + ... + x_{m−1}/(1 + x_1 + ... + x_{m−2}).      (2.3)

Thus we have to minimize the expression for U as given in (2.2) subject to the condition (2.3). This can be done by forming the Lagrangian L = U + λH, where

    H = x_1 + x_2/(1 + x_1) + ... + x_{m−1}/(1 + x_1 + ... + x_{m−2}) − u = 0,

and then minimizing the function L over the x_i's. The minimum of this function is obtained by solving the following set of equations:

    ∂L/∂x_1 = −2/(1 + x_1 + ... + x_{m−1})^2
              + λ{1 − x_2/(1 + x_1)^2 − ... − x_{m−1}/(1 + x_1 + ... + x_{m−2})^2} = 0,

    ∂L/∂x_2 = −2/(1 + x_1 + ... + x_{m−1})^2
              + λ{1/(1 + x_1) − x_3/(1 + x_1 + x_2)^2 − ... − x_{m−1}/(1 + x_1 + ... + x_{m−2})^2} = 0,
    ...
    ∂L/∂x_{m−2} = −2/(1 + x_1 + ... + x_{m−1})^2
                  + λ{1/(1 + x_1 + ... + x_{m−3}) − x_{m−1}/(1 + x_1 + ... + x_{m−2})^2} = 0,

    ∂L/∂x_{m−1} = −2/(1 + x_1 + ... + x_{m−1})^2 + λ/(1 + x_1 + ... + x_{m−2}) = 0,

    ∂L/∂λ = x_1 + x_2/(1 + x_1) + ... + x_{m−1}/(1 + x_1 + ... + x_{m−2}) − u = 0.

Solving these equations for the x_i's and λ, we obtain

    x_i = (u/(m − 1)) (1 + u/(m − 1))^{i−1},   for i = 1, 2, ..., m−1.

Substituting the values of the x_i's in (2.2), we obtain

    U = u + 2(1 + u/(m − 1))^{−(m−1)} − 1,

which means that so long as the utilization factor of the m-th job, with request period T_m ≥ T_{m−1}, is less than or equal to 2{1 + u/(m − 1)}^{−(m−1)} − 1, it is possible to schedule all jobs on one processor according to the rate-monotonic scheduling algorithm. Note that as m approaches infinity, the quantity 2{1 + u/(m − 1)}^{−(m−1)} − 1 approaches 2 exp(−u) − 1 in the limit.

Figure 2.2 compares the minimum utilization factor of the m-th job that can be obtained by applying the result of Theorem A and that of the above theorem.

Figure 2.2 Comparison of the minimum utilization factor of the m-th job obtained from Theorem A and from Theorem 2.

As was pointed out earlier, it has been shown that amongst all fixed priority scheduling algorithms, the rate-monotonic scheduling algorithm is a best one, in the sense that if a given set of jobs can be feasibly scheduled according to any fixed priority scheduling algorithm, that set can also be feasibly scheduled by the rate-monotonic scheduling algorithm. It would, however, be interesting to investigate how some other fixed priority scheduling algorithms perform. We have investigated the performance of another heuristic fixed priority algorithm in the next section.

2.4 Reverse-Rate-Monotonic Priority Assignment Algorithm

In the reverse-rate-monotonic priority assignment algorithm, priorities are assigned to jobs in increasing order of their request rates. That is, a job with a smaller request rate is assigned higher priority than a job with a larger request rate. We note that the assignment of priorities in this case is exactly the reverse of that in the rate-monotonic scheduling algorithm (hence the name), and, not unexpectedly, it turns out to be a worst possible priority assignment; that is, if a set of jobs can be feasibly scheduled according to the reverse-rate-monotonic priority assignment, then it can also be feasibly scheduled by any fixed assignment of priorities. Before we take up the analysis of this algorithm, we prove the following result.

Theorem 3: If the sum of the computation times of a set of jobs is less than or equal to the shortest request period amongst all jobs in the set, then the set of jobs can be feasibly scheduled by any fixed priority scheduling algorithm.

Proof: Let {J_i = (C_i, T_i)}_{i=1}^{m} be a set of m jobs with T_1 ≤ T_2 ≤ ... ≤ T_m, and let C_1 + C_2 + ... + C_m ≤ T_1. Let J_i be the job that is assigned the lowest priority. Since the execution of the request of a higher priority job will not be interrupted by a request of a lower priority job, it will be sufficient to show that each request of the lowest priority job, J_i, is completed before the corresponding deadline.

Suppose the first k requests of job J_i are all completed before the respective deadlines, and that the (k + 1)-th request, at time kT_i, is not completed before its deadline.
We will show that this is not possible. If the request of job J_i is not completed before time (k+1)T_i, then the processor is busy throughout the time interval [kT_i, (k+1)T_i]. Suppose the J_i request at time (k−1)T_i is completed at time t_0 ∈ ((k−1)T_i, kT_i]. Since J_i has the lowest priority, no other job can have an active request at time t_0 (unless, of course, it is initiated at time t_0). There must be some job which has at least two of its requests initiated in the time interval [t_0, (k+1)T_i], because otherwise, since the sum of the computation times of all jobs is less than or equal to T_1 ≤ (k+1)T_i − t_0, the request of job J_i at time kT_i must have been completed on or before the time (k+1)T_i. We consider two cases:

(i) All requests initiated during the time interval [t_0, kT_i] are completed on or before time kT_i.

(ii) Some of the requests initiated in the time interval [t_0, kT_i] are still active at time kT_i.

In case (i), if there is no job which has more than one request initiated in the interval [kT_i, (k+1)T_i], then, since the sum of the computation times of all jobs is less than or equal to the shortest request period, which in turn is less than or equal to T_i, the J_i request initiated at time kT_i must be completed on or before time (k+1)T_i. So, suppose there are jobs which have two or more requests initiated in the time interval [kT_i, (k+1)T_i]. Let J_r be the job whose second request is initiated earlier than the second request of any other job. In the time interval [pT_r, (p+1)T_r], where pT_r is the time when the first of the requests of job J_r is initiated in the time interval [kT_i, (k+1)T_i], no job has more than one request initiated. Moreover, the processor is busy throughout the time interval [kT_i, (p+1)T_r]. The length of this time interval is greater than or equal to T_r, which is greater than or equal to the sum of the computation times of all jobs. Hence the request of job J_i at time kT_i must be completed on or before time (p+1)T_r ≤ (k+1)T_i.

In case (ii), since there are one or more requests active at time kT_i, the processor must be busy at least from the time the first of these active requests was initiated in the time interval [t_0, kT_i) up to the time kT_i. Let t_1 ∈ [t_0, kT_i] be the earliest time such that the processor is busy throughout the time interval [t_1, kT_i]. Amongst all jobs that have two or more requests initiated in the time interval [t_1, (k+1)T_i], let J_r be the one whose second request is initiated the earliest. Suppose this second request is initiated at time p'T_r > kT_i. Then, since each job has at most one request initiated in the time interval [t_1, p'T_r], of length greater than or equal to T_r ≥ T_1, all requests initiated in this time interval, including that of J_i at kT_i, must be executed on or before p'T_r ≤ (k+1)T_i. On the other hand, if p'T_r ≤ kT_i, since there is at most one request initiated for each job in the time interval [t_1, p'T_r], and since the length of this time interval is greater than or equal to T_r, all requests initiated in this interval must be executed on or before p'T_r. We shift our t_1 to p'T_r and repeat the above arguments for the new interval [t_1, (k+1)T_i]. By repetitive application of the above arguments, if necessary, we see that the request of job J_i at time kT_i must be completed on or before time (k+1)T_i.
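Because all first requests coincide and every deadline coincides with the next request of the same job, feasibility under any fixed priority assignment can also be checked by direct simulation over the least common multiple of the request periods. The sketch below (a minimal illustration, not from the thesis) does this for integer computation times and periods; the function names and the example job set are assumptions.

```python
# A minimal simulation sketch of preemptive fixed-priority scheduling of
# periodic-time-critical jobs on one processor.  Integer computation times and
# request periods are assumed; the job set and names below are illustrative.

from math import lcm

def feasible_fixed_priority(jobs, priority):
    """jobs: list of (C, T) pairs; priority: job indices, highest priority first.
    Returns True if every request meets the deadline that coincides with the
    next request of the same job, simulating unit time slices over one hyperperiod."""
    horizon = lcm(*(t for _, t in jobs))
    remaining = [0] * len(jobs)              # unfinished work of the current request
    for now in range(horizon):
        for i, (c, t) in enumerate(jobs):
            if now % t == 0:
                if remaining[i] > 0:         # previous request missed its deadline
                    return False
                remaining[i] = c
        for i in priority:                   # run the highest-priority active request
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

if __name__ == "__main__":
    jobs = [(1, 2), (2, 5)]                                        # hypothetical set
    rate_monotonic = sorted(range(len(jobs)), key=lambda i: jobs[i][1])
    reverse_rate_monotonic = list(reversed(rate_monotonic))
    print(feasible_fixed_priority(jobs, rate_monotonic))           # True
    print(feasible_fixed_priority(jobs, reverse_rate_monotonic))   # False: 1 + 2 > 2
```

The second call fails exactly because the sum of the computation times (3) exceeds the shortest request period (2), in line with the corollary that follows.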
For the reverse-rate-monotonic priority assignment algorithm, we have the following result:

Corollary 1: A necessary and sufficient condition for a set of m jobs to be feasibly scheduled according to the reverse-rate-monotonic priority assignment algorithm is that the sum of the computation times of all the jobs in the set be less than or equal to the shortest request period in the set.

Proof: Let {J_i = (C_i, T_i)}_{i=1}^{m} be a set of m jobs with T_1 ≤ T_2 ≤ ... ≤ T_m.

We show first the necessity. According to the reverse-rate-monotonic priority assignment algorithm, the job with the shortest request period will have the lowest priority. Suppose the first requests of all jobs are initiated simultaneously at time t_0. The first request of job J_1 will, therefore, be executed after the first requests of all the other jobs have been executed. If the first request of job J_1 is to be executed before its deadline, it is necessary that C_1 + C_2 + ... + C_m ≤ T_1.

We now show sufficiency of the condition. Since the sum of the computation times of all the jobs in the set is less than or equal to the shortest request period, according to Theorem 3 any fixed assignment of priorities, and in particular the reverse-rate-monotonic priority assignment, will produce a feasible schedule for the given set of jobs.

Corollary 2: Amongst all fixed priority assignment algorithms, the reverse-rate-monotonic priority assignment is a worst one, in the sense that if a set of jobs can be feasibly scheduled according to the reverse-rate-monotonic priority assignment, it can also be feasibly scheduled by any fixed priority scheduling algorithm.

Proof: If a given set of jobs can be feasibly scheduled by the reverse-rate-monotonic priority assignment, then the sum of the computation times of all jobs in the set must be less than or equal to the shortest request period in the set. Therefore, any fixed assignment of priorities will produce a feasible schedule for the set.

CHAPTER 3
SCHEDULING PERIODIC-TIME-CRITICAL JOBS ON A MULTIPROCESSOR COMPUTING SYSTEM

3.1 Introduction

Recent progress in hardware technology and computer architecture has led to the design and construction of computer systems that contain a large number of processors. One of the most significant advantages attributed to multiprocessor computing systems is the potential decrease in computation time for a large class of problems achievable by parallel programming, that is, the concurrent execution of independent portions of a computational job. It is, therefore, of both practical and theoretical importance to investigate how to make efficient use of multiprocessor systems for the periodic-time-critical jobs described earlier. In this chapter, the problem of scheduling periodic-time-critical jobs on a multiprocessor computing system is considered.

3.2 Rate-Monotonic and Deadline Driven Scheduling Algorithms for Multiprocessor Computing Systems

One would naturally hope that a simple extension of the algorithms developed for single-processor computing systems would give satisfactory results when applied to scheduling jobs on multiprocessor computing systems. Unfortunately, this does not turn out to be the case, as the following example shows.

A priority-driven preemptive scheduling algorithm for a multiprocessor computing system works as follows. Priorities are assigned to the requests of jobs in the set. At any instant when a processor is free, it is assigned to an active request of highest priority.
Also, if a request occurs at an instant when all the processors are assigned, and if this request has priority higher than the priorities of the requests which are being executed at that instant, then this request preempts the request with the lowest priority. Ties are broken arbitrarily. As in the case of a single processor computing system, the rate-monotonic scheduling algorithm assigns priorities to jobs on the basis of their request rates, with higher priority being assigned to a job with a higher request rate. The deadline driven scheduling algorithm assigns priorities to requests on the basis of their deadlines, with the highest priority being assigned to the request whose current deadline is the earliest. Here again ties are broken arbitrarily.

Consider a set of m jobs {J_i = (C_i, T_i)}_{i=1}^{m} to be scheduled on an n-processor computing system (m > n), where

    C_i = 2ε,   T_i = 1,       for i = 1, 2, ..., m−1,
    C_m = 1,    T_m = 1 + ε.

Figure 3.1 shows a schedule of this set of jobs on an n-processor computing system according to the rate-monotonic scheduling algorithm. Since jobs J_1, J_2, ..., J_{m−1}, each with request period 1, have higher priority than job J_m with request period 1 + ε, processors P_1 through P_n are assigned to the jobs J_1, J_2, ..., J_n (n ≤ m − 1) during the time intervals [0, 2ε] and [1, 1 + 2ε].

Figure 3.1 A schedule of the set of jobs on an n-processor computing system according to the rate-monotonic scheduling algorithm.

Thus, the maximum time available for the first request of job J_m, which occurred at time 0 with deadline 1 + ε, on any one of the processors is less than or equal to 1 − 2ε. Hence the first request of job J_m cannot be completely executed before its deadline, and, therefore, this set of jobs cannot be feasibly scheduled on an n-processor computing system according to the rate-monotonic scheduling algorithm. The utilization factor of this set of jobs is U = 2(m−1)ε + 1/(1 + ε). As ε → 0, U → 1. Thus the minimum achievable utilization factor for a set of m jobs on an n-processor computing system (n < m) according to the rate-monotonic scheduling algorithm is less than or equal to 1.

Similarly, when the above set of jobs is scheduled according to the deadline driven scheduling algorithm, the first requests of jobs J_1, J_2, ..., J_{m−1}, with deadline 1, will have higher priority than the first request of job J_m, whose deadline is 1 + ε. Consequently, the n processors will be occupied by jobs J_1, J_2, ..., J_n (n ≤ m−1) during the time interval [0, 2ε], leaving a maximum of 1 − ε units of time for the first request of job J_m before its deadline at 1 + ε (figure 3.2). Hence this set of jobs cannot be feasibly scheduled by the deadline driven scheduling algorithm either. However, if job J_m is assigned the highest priority, then this set of jobs can be feasibly scheduled.

Figure 3.2 A schedule of the set of jobs on an n-processor computing system according to the deadline driven scheduling algorithm.

This example shows that the rate-monotonic scheduling algorithm, when applied to scheduling a set of jobs on a multiprocessor computing system, is not optimal amongst all fixed priority scheduling algorithms. Neither is the deadline driven scheduling algorithm optimal amongst all scheduling algorithms, as was the case on a single processor computing system. It is, therefore, desirable to look for better scheduling strategies that will lead to more efficient use of multiprocessor computing systems.
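The arithmetic of this example is easy to reproduce. The short sketch below (an illustration with assumed parameter values, not part of the thesis) evaluates the utilization factor 2(m−1)ε + 1/(1+ε) and the processor time left for J_m under rate-monotonic priorities, showing that the utilization factor approaches 1 even though the set remains infeasible.

```python
# Illustrative arithmetic for the example above; values and names are assumed.
# m - 1 jobs with C = 2*eps, T = 1 and one job with C = 1, T = 1 + eps.

def example_set(m, eps):
    return [(2 * eps, 1.0)] * (m - 1) + [(1.0, 1.0 + eps)]

def utilization_factor(jobs):
    return sum(c / t for c, t in jobs)

m = 5
for eps in (0.1, 0.01, 0.001):
    jobs = example_set(m, eps)
    u = utilization_factor(jobs)          # 2*(m-1)*eps + 1/(1+eps), tends to 1
    time_left_for_jm = 1.0 - 2 * eps      # time J_m can get before 1 + eps under
                                          # rate-monotonic priorities; J_m needs 1
    print(eps, round(u, 4), time_left_for_jm < 1.0)
```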
The problem of devising optimal algorithms to schedule a set of periodic-time-critical jobs on a fixed number of processors turns out to be a difficult one. An alternative approach is to partition the jobs into groups so that each group of jobs can be feasibly scheduled on a single processor according to some scheduling algorithm. The scheduling algorithm to be applied to the individual groups in the partition will influence the partitioning process, since each individual group of jobs must be feasibly schedulable on a single processor according to the designated algorithm. The problem is then reduced to determining an optimal partitioning of a given set of jobs with respect to the designated algorithm to be used for the groups in the partition. An optimal partitioning of a set of jobs with respect to a scheduling algorithm for a single processor is one that uses a minimum number of processors.

Since for a finite set of jobs there are only a finite number of possible partitions, the problem of determining an optimal partition can be solved by an exhaustive search. However, the number of possible partitions increases exponentially with an increase in the number of jobs. Such an exhaustive search will, therefore, require considerable computation time and will offset the advantage gained by using an optimal partition. Hence, it is interesting to consider some "good" partitioning scheme. By a good partitioning scheme, we mean a scheme under which the number of processors required is reasonably close to the number required by an optimal partitioning.

Recall that if the utilization factor of a set of jobs is less than or equal to 1, then the set can be feasibly scheduled on a single processor computing system according to the deadline driven scheduling algorithm. The problem of partitioning a set of jobs with respect to the deadline driven scheduling algorithm is then the same as the bin-packing problem, where it is desired to pack a set of packages into bins of fixed size so that the sum of the sizes of the packages in a bin does not exceed the size of the bin. This can be seen by imagining a job as a package of size equal to its utilization factor and a processor as a bin of size 1. Thus all results about the bin-packing problem will also be applicable in this case.

However, partitioning a given set of jobs with respect to the rate-monotonic scheduling algorithm is no longer the same as the bin-packing problem. An obvious reason is that a set of jobs may not be feasibly schedulable on a single processor according to the rate-monotonic scheduling algorithm even if the utilization factor of this set of jobs is less than or equal to 1. The total utilization factor of a set of jobs that can be "accommodated" on a single processor (i.e., can be feasibly scheduled on a single processor) depends not only on the number of jobs in the set, but also on their computation times and request periods. There is, however, a more subtle point that should be noted. A seemingly possible approach, using the result in Theorem A of chapter 2, would be to consider the size of the bins to be ln 2, and apply the known bounds obtained for the bin-packing problem. This approach is, however, false, since the utilization factor of a set of jobs that can be feasibly scheduled on a single processor may turn out to be larger than ln 2.

With this background in mind, the problem of finding an algorithm which produces an optimal partitioning for a set of jobs with respect to the rate-monotonic scheduling algorithm appears to be a rather difficult one. In view of this, it is interesting to investigate the performance of some "heuristic" algorithms. In what follows, two such heuristic algorithms are considered, and in both cases bounds on their worst-case performance relative to the optimal partitioning have been determined.

3.3 The Rate-Monotonic-Next-Fit Scheduling Algorithm

The first algorithm considered is named the rate-monotonic-next-fit scheduling algorithm. According to this algorithm, jobs are first arranged in descending order of their request rates. (For convenience, they are renumbered as J_1, J_2, ..., J_m.) The assignment scheme is as follows:

Step 1: Set i = j = 1.

Step 2: Assign job J_i to processor P_j if job J_i, together with the jobs that have been assigned to P_j, can be feasibly scheduled on P_j according to the rate-monotonic scheduling algorithm. Else, assign J_i to P_{j+1}, and set j to j + 1. Go to step 3.

Step 3: Set i = i + 1, and repeat step 2 until all jobs have been assigned.

The number j so obtained is the number of processors that will be required under this partitioning algorithm.
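A minimal program sketch of this scheme is given below. It is not the thesis's implementation: as the per-processor feasibility test it uses the sufficient utilization bound m(2^{1/m} − 1) of Theorem A, a conservative stand-in for an exact rate-monotonic feasibility check, and the function names are illustrative assumptions.

```python
# A minimal sketch of the rate-monotonic-next-fit partitioning scheme described above.
# The feasibility test is the sufficient bound of Theorem A, not an exact check.

def rm_bound_feasible(jobs):
    """Sufficient condition: total utilization <= m(2^(1/m) - 1) for m jobs."""
    m = len(jobs)
    return sum(c / t for c, t in jobs) <= m * (2 ** (1.0 / m) - 1)

def rate_monotonic_next_fit(jobs, feasible=rm_bound_feasible):
    """Assign jobs, taken in descending order of request rate (increasing period),
    to processors, always trying only the current (last opened) processor."""
    ordered = sorted(jobs, key=lambda job: job[1])
    processors = [[]]                        # one job group per processor
    for job in ordered:
        if feasible(processors[-1] + [job]):
            processors[-1].append(job)
        else:
            processors.append([job])         # next fit: open a new processor
    return processors

if __name__ == "__main__":
    jobs = [(1, 2), (0.1, 2.5), (1, 3), (1, 4), (0.1, 4.5), (1, 5),
            (1, 6), (1, 7), (1, 8), (0.1, 8.5), (1, 9)]
    print(len(rate_monotonic_next_fit(jobs)))   # processors used by this sketch
```

Because the admission test here is only sufficient, the sketch may use more processors than an implementation with an exact rate-monotonic feasibility test, such as the one assumed in the example that follows.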
This approach is, however, false since the utilization factor of a set of jobs that can be feasibly scheduled on a single processor may turn out to be larger than In 2. 48 With this background in mind, the problem of finding an algorithm which produces an optimal partitioning for a set of jobs with respect to the rate-monotonic scheduling algorithm, appears to be a rather difficult one. In view of this, it is interesting to investigate into the performance of some 'heuristic' algorithms. In what follows, two such 'heuristic' algorithms have been considered and in both cases, bounds on their worst-case performance relative to the optimal partitioning have been determined. 3 . 3 The Rate-Mono tonic-Next-Fit Scheduling Algorithm The first algorithm considered is named the rate-monotonic next-fit scheduling algorithm. According to this algorithm, jobs are first arranged in descending order of their request rates. (For convenience, they are renumbered as J 1 , J„ , , J .) The assignment scheme is as follows: Step 1 ; Set i = j = 1. Step 2 : Assign job J. to processor P., if job J. together with the jobs that have been assigned to P. can be feasibly scheduled on P. according to the rate- monotonic scheduling algorithm. Else, assign J. to P. , and set i to i + 1. Go to step 3. i j+1 Step 3 : Set i = i + 1, and repeat step 2 until all jobs have been assigned. The number j so obtained is the number of processors that will be required under this partitioning algorithm. As an example of the application of the rate-monotonic-next- fit scheduling algorithm, consider the following set of jobs: 49 (1, 2); (.1, 2.5); (1, 3); (.1,4.5); (1, 5); (1, 6); (1, 7); (1, 8); (.1, 8.5); (1, 9). The application of the above algorithm will produce the following assignment, requiring 4 processors. Jobs (1, 2), (.1, 2.5) will be assigned to processor P ; Jobs (1, 3), (1, 4), and (.1, 4.5) will be assigned to processor P„; Jobs (1, 5), (1, 6), (1, 7), (1, 8), (.1, 8.5) will be assigned to processor P„; and Job (1, 9) will be assigned to processor P.. However, the following partitioning will require only 3 processors, which is the minimum for this set of jobs if the rate- monotonic algorithm is to be used for each group in the partition: Group 1 ; (1, 2), (1, 3) and (1, 6). Group 2: (1, 4), (1, 5), (1, 7), (1, 8) and (1, 9). Group 3: (.1, 2.5), (.1, 4.5) and (.1, 8.5). It is interesting to note that since the utilization factor of all the jobs in group 2 and group 3 above is less than 1, all these jobs can be feasibly scheduled on a single processor using the deadline driven scheduling algorithm. Thus partitioning this set of jobs with respect to deadline driven scheduling algorithm requires only 2 processors. We have the following theorem concerning this algorithm. Theorem 4 : Let N be the number of processors required to feasibly schedule a set of jobs by the rate-monotonic-next-f it 50 scheduling algorithm, and N-. the minimum number of processors required to feasibly schedule the same set of jobs, then as N_ approaches infinity : 2.4 <_ lim N/N £ 2.67 Proof ; In order to establish the lower bound, we show that given e >0, there exists a set of jobs for which N/N„ is greater than 2.4 - e. Let N be chosen to be 12k. Consider the set of jobs: (1, 2); (6, 2); (1, 3); (6, 3); (2, 4); (6, 4); (2, 6); (6, 6); N N N N N N 2 ~ 2 2 2 " 2 ~ 2 ~ (2 Z , 2 Z ); (6, 2 Z ); (2 Z , 3x2 Z ); (6, 3*2 Z ). 
When we apply the rate-monotonic-next-fit scheduling algorithm to this set of jobs, jobs (1, 2) and (δ, 2) are assigned to the first processor, jobs (1, 3) and (δ, 3) are assigned to the second processor, jobs (2, 4) and (δ, 4) are assigned to the third processor, and so on. Jobs (2^{N/2 − 1}, 3·2^{N/2 − 1}) and (δ, 3·2^{N/2 − 1}) are assigned to the N-th processor. Thus the total number of processors required for this set of jobs when the rate-monotonic-next-fit scheduling algorithm is applied is N.

However, this set of jobs can be feasibly scheduled on 5N/12 + 1 processors. This can be seen as follows. Let us divide the jobs into k + 1 subsets. For i = 1, 2, ..., k, let the i-th subset consist of the following jobs:

    (2^{6(i−1)}, 2·2^{6(i−1)}); (2^{6(i−1)}, 3·2^{6(i−1)}); (2^{6(i−1)+1}, 2·2^{6(i−1)+1}); (2^{6(i−1)+1}, 3·2^{6(i−1)+1}); ...;
    (2^{6(i−1)+5}, 2·2^{6(i−1)+5}) and (2^{6(i−1)+5}, 3·2^{6(i−1)+5}).

All jobs with computation time δ form the (k+1)-th subset. Each of the first k of these subsets can be partitioned into five groups as follows:

Group 1: (2^{6(i−1)}, 2·2^{6(i−1)}) and (2^{6(i−1)+1}, 2·2^{6(i−1)+1}).
Group 2: (2^{6(i−1)+2}, 2·2^{6(i−1)+2}) and (2^{6(i−1)+3}, 2·2^{6(i−1)+3}).
Group 3: (2^{6(i−1)+4}, 2·2^{6(i−1)+4}) and (2^{6(i−1)+5}, 2·2^{6(i−1)+5}).
Group 4: (2^{6(i−1)}, 3·2^{6(i−1)}), (2^{6(i−1)+1}, 3·2^{6(i−1)+1}) and (2^{6(i−1)+2}, 3·2^{6(i−1)+2}).
Group 5: (2^{6(i−1)+3}, 3·2^{6(i−1)+3}), (2^{6(i−1)+4}, 3·2^{6(i−1)+4}) and (2^{6(i−1)+5}, 3·2^{6(i−1)+5}).

Each of these groups can be feasibly scheduled on a single processor according to the rate-monotonic scheduling algorithm. Further, if δ is chosen so small that Nδ ≤ 2, then all jobs with computation time δ can be scheduled on one processor. This arrangement will thus take only 5k + 1 processors. Thus N_0 ≤ 5k + 1 = 5N/12 + 1, and so N/N_0 ≥ 12N/(5N + 12). By selecting N to be sufficiently large, we can make this ratio approach the limit 2.4, or, in other words, for a given ε > 0 there is an integer N such that N/N_0 > 2.4 − ε, thus establishing the lower bound.

We now show that the ratio N/N_0 is upper bounded by 2.67. To this end, we define a function f mapping the utilization factors of jobs into the reals. Let u be the utilization factor of a job. We define

    f(u) = 2.67u,   for 0 ≤ u ≤ 0.75,
    f(u) = 2,       for u > 0.75.

Let J_1, J_2, ..., J_m be m jobs with utilization factors u_1, u_2, ..., u_m respectively. Let J_p1, J_p2, ..., J_pk_p be the k_p jobs assigned to the p-th processor according to an optimal assignment. Then

    Σ_{r=1}^{k_p} f(u_pr) ≤ 2.67 Σ_{r=1}^{k_p} u_pr ≤ 2.67,

for p = 1, 2, ..., N_0. Summing up over all processors, we have

    Σ_{i=1}^{m} f(u_i) ≤ 2.67 N_0.      (3.1)

In the case of assignment of processors according to the rate-monotonic-next-fit scheduling algorithm, we will show that if u_p1, u_p2, ..., u_pk_p are the utilization factors of the k_p jobs assigned to the p-th processor, then either

    Σ_{r=1}^{k_p} f(u_pr) ≥ 1

or

    Σ_{r=1}^{k_p} f(u_pr) + f(u_{(p+1)1}) ≥ 2,

where u_{(p+1)1} is the utilization factor of the job which could not be assigned to the p-th processor.

Suppose Σ_{r=1}^{k_p} f(u_pr) < 1 for some p. Since this means that u_pr < 0.75 for r = 1, 2, ..., k_p, we have Σ_{r=1}^{k_p} f(u_pr) = 2.67 Σ_{r=1}^{k_p} u_pr. Thus Σ_{r=1}^{k_p} u_pr < 1/2.67. Therefore, according to Theorem 2,

    u_{(p+1)1} > 2 exp(− Σ_{r=1}^{k_p} u_pr) − 1.

If u_{(p+1)1} ≥ 0.75, then f(u_{(p+1)1}) = 2,
and we have

    Σ_{r=1}^{k_p} f(u_pr) + f(u_{(p+1)1}) ≥ 2.

Otherwise, we have

    Σ_{r=1}^{k_p} f(u_pr) + f(u_{(p+1)1}) ≥ 2.67[ Σ_{r=1}^{k_p} u_pr + 2 exp(− Σ_{r=1}^{k_p} u_pr) − 1 ] ≥ 2,

since for x ≤ 1/2.67 the expression x + 2 exp(−x) − 1 is greater than or equal to 2/2.67.

Let N be the total number of processors used according to the rate-monotonic-next-fit scheduling algorithm. Let processor p, 1 ≤ p ≤ N, be the first processor for which

    Σ_{r=1}^{k_p} f(u_pr) < 1.

Then

    Σ_{r=1}^{k_p} f(u_pr) + Σ_{r=1}^{k_{p+1}} f(u_{(p+1)r}) ≥ 2.

We then repeat the above procedure on processors p+2 to N. All this implies that

    Σ_{p=1}^{N} [ Σ_{r=1}^{k_p} f(u_pr) ] ≥ N − 1.

Thus

    Σ_{i=1}^{m} f(u_i) = Σ_{p=1}^{N} [ Σ_{r=1}^{k_p} f(u_pr) ] ≥ N − 1.      (3.2)

Combining the results of inequalities (3.1) and (3.2), we have

    N − 1 ≤ Σ_{i=1}^{m} f(u_i) ≤ 2.67 N_0,

or

    lim N/N_0 ≤ 2.67   as N_0 → ∞.

3.4 Conclusion

In the rate-monotonic-next-fit scheduling algorithm, while assigning the next job to a processor, only the last processor used is checked to see whether or not this job can be assigned to that processor, even though this job may possibly be feasibly assigned to one of the processors used earlier. If the processors used earlier are also considered for assignment of the next job, it may be possible to reduce the number of processors used. This approach is considered in the next chapter.

CHAPTER 4
RATE-MONOTONIC-FIRST-FIT SCHEDULING ALGORITHM

4.1 Introduction

According to the rate-monotonic-next-fit scheduling algorithm considered in chapter 3, we only attempt to assign a job to the processor currently under examination, ignoring the possibility of scheduling the job on a processor examined earlier. Intuitively, before assigning a job to a new processor, one might wish to consider whether the job can be feasibly scheduled on any one of the processors examined earlier. The rate-monotonic-first-fit scheduling algorithm, introduced in this chapter, is an attempt in this direction. According to this algorithm, a job is not assigned to the j-th processor until it is determined that it cannot be feasibly scheduled on any one of the (j−1) processors examined earlier. Intuitively, therefore, one would expect that the rate-monotonic-first-fit scheduling algorithm will perform better than the rate-monotonic-next-fit scheduling algorithm.

4.2 Rate-Monotonic-First-Fit Scheduling Algorithm

According to the rate-monotonic-first-fit scheduling algorithm, the jobs are first arranged in descending order of their request rates. (For convenience, the jobs are renumbered as J_1, J_2, ..., J_m.) The assignment procedure is as follows:

Step 1: Set i = 1.

Step 2: (a) Set j = 1.
        (b) If job J_i, together with the jobs already assigned to processor P_j, can be feasibly scheduled on processor P_j according to the rate-monotonic scheduling algorithm, assign J_i to P_j and go to step 3.
        (c) Else set j = j + 1 and go to step 2(b).

Step 3: If all jobs have been assigned, then stop; else set i = i + 1 and go to step 2.

The largest index j used in the above algorithm is the number of processors required to schedule the given set of jobs according to this algorithm.
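A minimal program sketch of the first-fit variant is given below. As with the next-fit sketch in chapter 3, it is not the thesis's implementation: the per-processor feasibility test is the sufficient utilization bound of Theorem A rather than an exact rate-monotonic check, and the names are illustrative assumptions.

```python
# A minimal sketch of the rate-monotonic-first-fit scheme above.  Unlike next fit,
# every processor opened so far is tried, in order, before a new one is opened.
# The feasibility test is the sufficient bound of Theorem A, not an exact check.

def rm_bound_feasible(jobs):
    m = len(jobs)
    return sum(c / t for c, t in jobs) <= m * (2 ** (1.0 / m) - 1)

def rate_monotonic_first_fit(jobs, feasible=rm_bound_feasible):
    ordered = sorted(jobs, key=lambda job: job[1])   # descending request rates
    processors = []                                  # one job group per processor
    for job in ordered:
        for group in processors:                     # first fit: scan P_1, P_2, ...
            if feasible(group + [job]):
                group.append(job)
                break
        else:
            processors.append([job])                 # no earlier processor fits
    return processors

if __name__ == "__main__":
    jobs = [(1, 2), (1, 3), (1, 4), (1.9, 5), (2, 6), (2.5, 7), (3, 8), (3, 9),
            (3.7, 10), (1, 11), (4, 12), (2, 13), (2, 14), (6, 18), (5, 20), (8, 24)]
    print(len(rate_monotonic_first_fit(jobs)))       # processors used by this sketch
```

With a conservative admission test the count may exceed that obtained with an exact feasibility test, such as the one assumed in the worked example that follows.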
We illustrate how this algorithm works by an example.

Example: Let (1, 2), (1, 3), (1, 4), (1.9, 5), (2, 6), (2.5, 7), (3, 8), (3, 9), (3.7, 10), (1, 11), (4, 12), (2, 13), (2, 14), (6, 18), (5, 20) and (8, 24) be a set of jobs. According to the rate-monotonic-first-fit algorithm, jobs (1, 2), (1, 3) and (1, 11) will be assigned to processor P_1; jobs (1, 4), (1.9, 5) and (2, 13) will be assigned to processor P_2; jobs (2, 6), (2.5, 7) and (2, 14) to processor P_3; jobs (3, 8) and (3, 9) to processor P_4; jobs (3.7, 10) and (4, 12) to processor P_5; jobs (6, 18) and (5, 20) to processor P_6; and job (8, 24) will be assigned to processor P_7. Thus a total of 7 processors will be used. The following partitioning will, however, require only 5 processors:

Group 1: Jobs (1, 2), (3, 8) and (1, 11).
Group 2: Jobs (1, 3), (3, 9) and (6, 18).
Group 3: Jobs (1, 4), (2.5, 7), (2, 13) and (2, 14).
Group 4: Jobs (1.9, 5), (3.7, 10) and (5, 20).
Group 5: Jobs (2, 6), (4, 12) and (8, 24).

Since the utilization factor of this set of jobs is 4.97, this is the minimum number of processors required.

We now proceed to analyze the behavior of the rate-monotonic-first-fit scheduling algorithm. We establish first the following result.

Theorem 5: If a set of three jobs cannot be feasibly scheduled on two processors according to the rate-monotonic-first-fit scheduling algorithm, then the utilization factor of the set of jobs is greater than 3/(1 + 2^{1/3}).

Proof: Let {J_i = (C_i, T_i)}_{i=1}^{3} be a set of three jobs with T_1 ≤ T_2 ≤ T_3. Let the utilization factor of this set of jobs be less than or equal to 3/(1 + 2^{1/3}). Assume further that this set cannot be feasibly scheduled on two processors according to the rate-monotonic-first-fit scheduling algorithm. This implies that no two of these three jobs can be feasibly scheduled on a single processor according to the rate-monotonic scheduling algorithm. This in turn implies that the utilization factor of any set of two of the jobs is greater than 2(2^{1/2} − 1) (Theorem A, chapter 2). Since 2(2^{1/2} − 1) + 1/2 > 3/(1 + 2^{1/3}), it follows that the utilization factor of each of the three jobs is less than 1/2. Therefore,

    2C_i < T_i,   i = 1, 2, 3.      (4.1)

Further, it is claimed that T_2 < 3T_1. According to Theorem B (chapter 2), a set of two jobs (C_1, T_1) and (C_2, T_2), with ⌊T_2/T_1⌋ = m, can be feasibly scheduled on a single processor by the rate-monotonic scheduling algorithm if

    C_1/T_1 + C_2/T_2 ≤ 2(√(m(m+1)) − m).

Therefore, if T_2 ≥ 3T_1 (implying ⌊T_2/T_1⌋ ≥ 3 and ⌊T_3/T_1⌋ ≥ 3), we must have:

    C_1/T_1 + C_2/T_2 > 2(√12 − 3),
    C_1/T_1 + C_3/T_3 > 2(√12 − 3),
    C_2/T_2 + C_3/T_3 > 2(√2 − 1).

Adding these three inequalities and dividing by 2 would give C_1/T_1 + C_2/T_2 + C_3/T_3 > 2(√12 − 3) + (√2 − 1) > 3/(1 + 2^{1/3}), contradicting the assumption that the utilization factor of the given set of jobs is less than or equal to 3/(1 + 2^{1/3}). This establishes our claim that

    T_2 < 3T_1.      (4.2)

With J_1 assigned to processor P_1 and J_2 assigned to processor P_2, let S denote the set of all those jobs J = (C, T) such that neither J_1 and J can be feasibly scheduled on processor P_1 nor J_2 and J can be feasibly scheduled on processor P_2 according to the rate-monotonic scheduling algorithm. Let us find a job J = (C, T) belonging to the set S whose utilization factor is minimum among all jobs in S. Let ⌊T/T_1⌋ = k_1 and ⌊T/T_2⌋ = k_2. For a given T, let C' and C'' be such that the set of jobs (C_1, T_1) and (C', T) fully utilizes the processor P_1, and the jobs (C_2, T_2)
4) T - (k + 1)C 2 , if k 2 T 2 + C 2 4 T < (k 2 + 1)T 2 If C > max (C, C"), then the job (C, T) belongs to the set S. Let f,(T) ■ C'/T and f 2 (T) = C"/T, where C and C M are given by (A. 3) and (A. 4) respectively. Let f(T) = max ( f^T), f 2 (T) ). For a given T, f(T) represents the maximum utilization factor that a job can have so that it can be feasibly scheduled either on P. or on ?2' The absolute minimum of f(T) is what we want to determine. We observe that the function f,(T) decreases monotonically in the interval [ kiT^, k^T^ + C-, ) and increases monotonically in the interval [ k.T, + Cp (kj + DTj ] . Similarly the function f (T) decreases monotonically in the interval k 2 T2» k 2 T2 + C 2 | and increases monotonically in the interval [ k„T 2 + C 2 , (k 2 + 1)T 2 ] . In figure A.l, curve AA' is the graph of the function fi(T) and curve BB' is that of f 2 (T). The heavily marked portions of the two curves represent the function f(T). It is claimed that the decreasing segments of the two curves f . (T) and f 2 (T) are either identical or have no point in common. This can be seen as follows: Suppose in an interval [ a, b ] both fi(T) and f 2 (T) are decreasing. Let a/T A = k 1 and U/T 2 |= k" . For all x e [ a, b] 60 Tl 00 c _i. (li Hh cr^ O M to n cr o n to 3 rt X) C_4 00 O 3" • £ ►i H- O ft) rt i-h -a :r i-< 03 (t i_i. 3! o (_i. ft cr o 3 CT rt C_i H- H^ ft 3 * t 00 to rt rt 3 :r 0- i-ti C m TJ I— 1 3 i-( h- ' -~ c l < X n H- (T) c 3 :/) rt C in H- 3 Q ^ I-! H< C N rt T) fD H- ho (/) h- 1 (13 TJ N h- 1 f{ to O o rt 3 n H- 30 n> o Cfi 3 s v> p- o rt -t o to O o 0> o 00 to H + >» Cd 3* 61 f (x) - f (x) = [k'(T - C ) - k"(T 2 - C )]/x is less than, equal to or greater than zero, according as k' (T, - C ) - k"(T - C ) is, or according as f-^(a) - f (a) is. Since both f , (T) and f„(T) are continuous positive-valued functions in the interval [a, b] and f -i (T) - f 2 (T) has the same sign throughout the interval [a, b], they do not cross each other at any point in the interval. We can similarly show that the increasing segments of the two curves f (T) and f 9 (T) are either identical or have no point in common. Thus at a point of intersection of the two curves f , (T) and f (T) , one of the functions is increasing and the other is decreasing. Also note that the points of minima of the function f(T) are among the points of minima of the functions f -. (T) and f 2 (T) and the points of intersection of the two curves. We consider the following cases: Case 1 : T 4 T + C (4.5) Since J and J cannot be feasibly scheduled on P , we have C 2 > T : - C 1 (4.6) In this case, we must have T 3 < 3T , because if T_ >_ 3T, , then T-, >. 2T„, and, therefore, according to Theorem B (chapter 2), C l /T l + C 2 /T 2 > 2(/2 " 1) C 2 /T 2 + C 3 /T 3 > 2(/6 - 2) C 1 /T 1 + C 3 /T 3 > 2(/l2 - 3) 1/3 which gives C 1 /T 1 + C 2 /T 2 + C-^/T^ > 3/(1 + 2 ), contradicting the assumption that the utilization factor of the given set of jobs is 1/3 less than or equal to 3/(1 +2 ). 62 We now show that in this case the two curves fi(T) and f~(T) do intersect in the interval [ T. + C , 2T.]. When T = T 1 + C , f^T) = (Tj - C^/T f 2 (T) = (T 2 - C 2 )/T Since T 2 > 2C 2 and C 2 > T - C,, it follows that f 2 (T) > f 1 (T) at T = T l + C l (4.7) When T = 2T X , and f^T) = 2(T l - C 1 )/2T 1 (T - C ? )/2T 1 , if 2T L < T ? + C ? f 2 (T) 2(T X - C 2 )/2T ly if T 2 + C 2 4 2T X Since C < T, - C, < C 2 and T 2 < Tj^ 4- C±, we get f 2 (T) < fjCT) at T - 2Tj (A. 
8) Since fi(T) and f 2 (T) are single-valued continuous functions, it follows from (4.7) and (4.8), that the two curves must intersect at a point somewhere in the interval [ T, + C,, 2T,] . Let X. be this point of intersection (figure 4.1). At Xi, T is given by the equation: T - 2Cj = T 2 - C 2 i.e., T = T 2 - C 2 + 2Cj (4.9) and C = T 2 - C 2 (4.10) From (4.1), (4.5) and (4.6), it follows that (T - C^ < C 2 < ^ T l + c l)/ 2 - 0nce c i» T l and T ? are fixed as a bove, C can have any value in the interval ( (Tj - C^), ^(T. + Cj) ). If we choose C 2 , C^ and T, such that C 1 /T 1 + C 2 /T 2 + C3/T3 is minimum over 63 all possible choices of C-, C-, and T~, and C, and T- are such that once C is chosen, the point (C-, T 3 ) is the minimum of the curve f(T), then a set of jobs having utilization factor less than or equal to C 1 /T 1 + C 2 /T 2 + C 3 /T 3 and satisfying the conditions (4.1), (4.2), (4.5) and (4.6) can be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm. We claim that these requirements will be satisfied, if we choose C 2 = T x - Cj Co = To — Co = To — Ti + Ci T 3 = T 2 " C 2 + 2C 1 = T 2 - T l + 3C 1 and then C l T l " C l T 2 " T l + C l T 2 T 2 - T X + 3C L Let C 2 ' = Tj - Cj + A , with A > 0. To find the corresponding pair (CA, To'), we have to find the corresponding point where the function f(T) attains its minimum. Since this point is attained at one of the points of intersection of the two curves fj(T) or f 2 (T), or one of the points of minima of these two curves, we examine all such possible points. At the point of intersection X. in the interval [Tj + Cp 2T 1 ], T 3 ' = T 2 - C 2 ' + 2C X = T 2 - T 1 + 3C X - A = T 3 - A and, C 3 ' = T 2 - C 2 * = T« "- Tj + C, - A - C 3 - A 64 Let Then U' = Cj/Tj + C 2 '/T 2 + C 3 '/T 3 U* - U = A /T 2 + (C 3 -A)/(T 3 -A) - C 3 /T 3 = A/T 2 - A (T 3 - C 3 )/T 3 (T 3 - A) = A [T 3 (T 3 -A) - T 2 (T 3 - C 3 )]/T 2 T 3 (T 3 -A). Since C 3 = T 2 - C 2 > J$T 2 > C 2 ' = C 2 + A > A and T 3 > Tg, we have, U f - U > 0. The next point of intersection is either in the interval [ 2Tp 2T, + Cj ] or there is no other point of intersection upto 3T. . Suppose there is a point X in the interval [2T., 2T, + C^] where the two curves f . (T) and f 2 (T) intersect. At X 2 , C 3 ' = 2(Tj - C' x ) = 2C 2 and T 3 ' = 2(T X - C l + C 2 ' ) = 2(2C 2 + A) U' = C l /T l + (C 2 +A)/T 2 + 2C 2 /2(2C 2 + A) U' - U = C 1 /T 1 + C 2 /(2C 2 + A) - (T 2 - C 2 )/(T 2 - C 2 + 2^) A C^UCj + C 2 - T 2 ) - A (T 2 - C 2 ) ~T^ + (2C 2 -A )(T 2 - C 2 + 2C L ) A{ (2C 2 + A )(T 2 - C 2 + 2C X ) - T 2 (T 2 - C 2 ) ) T 2 (2C 2 + A ) (T 2 - C 2 + 2C L ) C^lCy + C 2 - T 2 ) + (2C 2 + A )(T 2 - C 2 + 2C L ) 65 A {T (3C 9 - T ) + AT ? + (2C, - C,) (2C + A ) } U' - U = - - - - = T 2 (2C 2 + A ) (T 2 - C 2 + 2Cj) C^C, + C 2 - T ) + (A. 11) (2C 9 + A ) (T 2 - C 9 + 2Ci) Since T ? 4 T. + C, and C 2 = T, - C^, we have 2C 1 + C - T 2 > 2C! + Tj_ - Cj - Tj_ - Ci = (4.12) Also 3Tj_ - T 2 > 31 1 - (T L + C x ) = 2T - C >, 4C - C 1 = 31^ Therefore, 3(T, - C.) £ T 2 or 3C 2 > T 2 (4.13) Further, since T 1 - C 1 = C 2 < h(?i + Cj) T l < 3C 1 Hence, C 2 = T. - Cj < 3C X - Ci 2C X (4.14) From (4.11), (4.12), (4.13) and (4.14), we see that U' - U > 0. The other possibility is that there is no point of intersection of f (T) and f 2 (T) in the interval [ 2^, 2^ + Cj ] (See figure 4.2). Since at T = 2T^ fAT) 4 f,(T), ( from 4.8), in this case, we should also have f 2 (T) 4 f^T), when T = 2T, + C.. 
Therefore, (T - 2C 2 ')/T 4 2(T X - C 1 )/T, where T = 2T + Cj_ Hence 2T + C - 2C 2 ' 4 2Tj - 2C or, 3C L 4 2C 2 » (4.15) In this case, the minimum of f(T) can occur at the minimum of the curve fj(T) in the interval [2T , 3Tj] which is at point T = 2Tj + C. 66 •n TO C H rt) -> K> TJ T3 hi O M H. ai M O O n 03 n n rt T3 rt> fli o £T en en i en 7) l-l o o o rt) h. ii Hi "O T3 0) rt> NJ h- 1 V) l_i. rt) 03 03 o 3 t— ' M cr rt O O H- a ~ n 3 TO DQ 3* 03 TO t c (T rt H- H- cr rt rt Hi ft) ^j* "ZT C H 3 ; — .. (_i. m 03 O O VJ X CT cr H> C 3 (-. r_< rt c to )—■ H« 3 • <# »— ' H- c (D N rt 3 rt) H- CL W >—• H- N 0) H H- O 3 o K3 O 4> O o oo H -M v 67 Then, In that case, C ' = 2(Tj - C^ = 2C and T 3 ' = 2Tj_ + C U' = C /Tj + (C 2 + A)/T 2 + 2C2/C2TJ + C x ) U» - U = A /T 2 + 2C 2 /(2T 1 + C x ) - (T 2 - C 2 )/(T 2 - C 2 + 2C X ) A , 2 < T 1 - C l> T 2 - T l + C l T 2 2Tj + Cj T 2 - Tj + 3C 1 A 7CiT! - 3C!T 2 - 7Ci 2 + " T 2 (2Tj_ + C 1 )(T 2 - Tj + 3C X ) (4.16) From (4.15), 2C 2 ' > 3Cj or, 2(T X - Cj + A ) > 3^ or, 2T. - 5C 1 > -2 A (4.17) Also, since T 4 T + C 1 we have, 7(^1 - 30^2 - 7C X 2 > 7C 1 T 1 + 3Ci(-Tj - C^ - 7C X 2 = 4C 1 T 1 - lOCj 2 - 2C 1 (2T 1 - 5^) Using (4. 17) , we get 7C 1 T 1 - 30^2 - 7C X 2 > -4-ACj (4.18) Therefore, from (4.16) and (4.18), U 1 - U > A/T 2 - 4AC 1 /(2T 1 + C 1 )(T 2 - Tj + 3Cj) A {T 2 (2T X - 3C L ) + (5C 1 T 1 + 3C X 2 - 2T L 2 )} T 2 (2T X + CJXT2 - T 1 + 3C X ) A {T 1 (2T, - 3C L ) + 50^ * 3C X 2 - 2Tx 2 } > : T 2 (2Tj + C 1 )(T 2 - T x + 3C X ) 68 U' - U >^ A { 2C 1 T 1 + 3C X 2 }/T 2 (2T 1 + CjXT - Tj + 3^) > 0. Another point that is a potential minimum of f(T) is the point of intersection of the two curves f . (T) and f„(T) in the interval [2T, + C. , 3T, ] provided it exists. This point is determined by the equation: (T 3 ' - 3C 1 )/T 3 ' = 2(T 2 - C 2 ')/T 3 ' Thus in this case, T 3 ' = 2(T 2 - C 3 ') + 3C lf and then, C ' = 2(T. - C ? ') 3 2 l C, (Co + A ) 2(T 9 - Co - A) U' = ., v.^2 T u > ^vJ-i ~ ^9 + + Tj_ T 2 2(T 2 - C 2 - A ) + 3Cj The two curves f (T) and fo(T) can intersect in the interval [ 2T, + C. , 3T,] only if there is also a point of intersection of the two curves in the interval [ 2T,, 2T^ + C, ] (see figure 4.3). At that point of intersection, T" - T 2 - C 2 ' + 2C 1 and Co = T 2 - Co, and the corresponding utilization factor is, U" = Cj/Tj + (C 2 + A)/T 2 + (T 2 - C 2 - A ) / (T 2 - C 2 - A + 2C t ) It has already been shown that U" > U. Now, U' = C 1 /T 1 + (C 2 +A)/T 2 + 2(T 2 - C 2 -A)/{ 2(T 2 - C 2 - A ) + 3^} = C 1 /T 1 + (C 2 +A)/T 2 + (T 2 - C 2 -A)/{T 2 - C 2 - A + 3Cj/2} > U" . h9 •n C TJ i-h O O l-o O o 0"> H»— ' to M rt n 3 3" to rt T3 M O 3" <-"• O H. O 3 H cr oo o ro r-tl TJ C K ►1 ro h- cu fD rt en 3- C_i. rt) O 3 i_i. cr rt o H- cr rt 3 3* tw C-. to h- ' rt rr -• 3" Ml rc to c 3 (-■ 3 CL (-> to v; X T3 H- 1 c 3 O rt c n h- 3 fD h- 1 C/) p. C W N rt O 0) H> 1-1 W h- 1 H- M3 T) N N> H. 3 rt to n H- h-' fD O o en 3 3 en 00 O H r-o O oo 70 Hence, U 1 > U. Thus for fixed C. , T and T , with T < T, + C , the 1 1 2 2 = 1 1' utilization factor U = C 1 /T 1 + (Tj - C 1 )/T 2 + (T 2 - T ± + C 1 )/(T - T + 3C ) is such that for any choice of C in the range ((T-, - C, ) , (T. + C- L )/2)) and any C~, T such that C /T + C /T + C /T is less than or equal to U, the set of jobs (C 1^) , (C , T ), and (C , T ) can be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm. Since T > 2C = 2(T 1 - C ), we let T = 2(T - C^ + k, where k is greater than or equal to zero. 
Then, U = C 1 /T 1 + (T - C 1 )/{2(T - C ] _) + k} + (T ± - C ± + k)/(T ] _ + C ± + k) Writing x for C /T , and y for k/T , we can write U = x + l ~ x + 1 - x ± y (4.19) 2(l-x)+y 1+x+y To determine the minimum value of U over all possible sets of three jobs, satisfying the conditions of this case, the expression in (4.19) is to be minimized. Setting partial derivatives of U with respect to x and y to zero, we have: 3U y 2(1 + y) 3x {2(1 - x) + y} 2 " (1+x+y) 2 " ° ^U (1 - x) 2x m 3 y {2(1 - x) + y} 2 (1 + x + y) 2 Solving these two equations for x and y, we get: x = 1/(1 + 2 1/3 ) 2/3 1/3 y - (2 - 2 )/2, 1/3 which gives U = 3/(1 + 2 ). 71 Thus, we conclude In this case that if three jobs cannot be feasibly scheduled on two processors according to the rate-monotonic- first-fit scheduling algorithm, their utilization factor must be 1/3 greater than 3/(1+2 ). Case 2 ; T + C < T < 2T (4.20) Since J., and J„ cannot be feasibly scheduled on a single processor by the rate-monotonic scheduling algorithm, we must have: C 2 > T 2 - 2C X > T - C ] _ (4.21) Also, we claim that in this case T~ <_ 4T-. . Because otherwise by Theorem B (chapter 2), C 7T + C 2 /T 2 > 2(/2 - 1) C l /T l + C 3 /T 3 > 2(v/2 ° " A) C 2 /T 2 + C 3 /T 3 > 2(/6 " 2) which would give 1/3 C 1^ T 1 + C V T 2 + C 3^ T 3 > 3 ^ 1 + 2 ^' contrar y to our assumption. The possible configurations for the graphs of f . (T) and f (T) , and consequently that of f(T), are shown in figures 4.4, 4.5, 4.6, 4.7, 4.8 and 4.9. As shown in case 1, we can prove that in this case also the two curves f , (T) and f 2 (T) must intersect in the interval [T, + C-, , 2T, ] . However, there might or might not be other points of intersection in the interval [2T , 4T, ] . It will be shown below that for fixed C , T, , C , and T ? , the absolute minimum of f(T) occurs - 72 -rl O l-O o o o oo C n (t> -t> *» S TJ »-t> C3 H4— ' CU i-1 rr n CU 3" &) rr T3 i— • O 3" t_j. O r-1 o p i-t cr oq O ro r-h -o <-i s: ri hO H- 03 fD rr [/) zr <_i. (T> o 3 (_i. rr rr o H« cr rr 3 3" (T) c_ CU M rr rr <• zr r-n fD 0i c 3 t— 1 3 CL r— 0) *< X -a H> M c 3 O rr c n M- 3 rc h- ' (/) H- C en N rr o n> H" n en I—" T) X, N NJ r-t O O rr &) n H- t— (D O o W 3 3 en TO o 7 '3 -n o r-o O CR C ■-( ID -P- Ln l_i. CD t-h O O M CD i-( cr o n £d 3 rf T3 C_ TO o rr to i-t • s f-i H- O ft) rt Ml T) 3- 1 03 ro l_j. V) O t_i. m cr O 3 cr rt C-H H- M rt 3 •< zr oq CD 0) rt rt a J CL t-h fD C T3 M 3 f-( M CD O v; X o H- fD c a CO rt C Cfl H- 3 O 1— ■ f-l H- C N rr T3 fD H- tsj U) M H- CD T3 N t- 1 ff CD O O rt 3 O H- era fD O Cfi 3 <, CO H- o rr l-< 3" "0 H H 4> H + O C* 03 O CO i>3 74 -ii OQ c M ro j> a- _i. fn Ml o O h-' 03 M cr o n &3 3 pt TJ j o i-i :r • <] i-i H- o ro rr Ml -a rr M 03 ro i_j. 0) o l_l. ro cr o 3 cr rr c_ H- h-' rt 3 - rr oq 0) (D rr rr 3 rr ex. Mi c ro T3 H^ 3 M H 03 O •^ X n H- (D c 3 U) rt c 01 H> 3 o i— • 1-1 H- c N rr -d rt> H- NJ 03 r-> CJ T) N M i-l 03 O O rr 3 n H- QQ ro o 0) 3 <, tn H- o rr M o ro O O a* o CO f> CO > -a 75 ►n 00 c ►1 fD 4>- ^J t— 1. (13 i-+i O O \-> 0) i-t cr o n in P rt X) C-, 00 O 3* nj H • £ n I-" o a> rt l-ti T) D* M m fD i_j. en o (_i. rti cr O 3 CT rt c_, H- H rt 3 ■• 3" 00 03 EU rt rt =r & i-n ro c 13 h- 3 >-( |_J (U v: x n H- ro 3 3 W rt C 0) H- 3 o H 1-1 H« C N rt <-a fD H- to 0) h- ■ H- to •3 N h- ' M CU O O rt 3 n H« 09 ro o 0) 3 £ CO H- o rt M o o O o to H H _L Tl 76 30 C ii 0) .o 00 _l. CD i-h n O I- 1 Co M a- o n Co 3 rt T) C 00 o r-t J . £ l-l H- o re ff hh -a zr i-i Co re t_i. 03 o (_i. 
re cr o 3 zr rr <-H M- H rr 3 " Co 00 (S3 rr rt 3 3" a- c re T3 1— 1 3 i-t (— ' Co o ^ X n H- re c 3 oi I-t C w H- 3 o M ^ H- C N rr TJ re H- ro o> 1— ' Co X) N t— ' l-< Co o o rr 3 n H- 00 K O Cfi 3 ?: 01 H" o rr M o o o H Hi H +"» H 77 Tl o ro 00 c H ro *s VO oj Ml o H [13 f-t O n 0) 3 rr -o 00 O ►1 cr € ►i H« O ro rr I-*, "O zr i 09 ro L-l. U9 o U fD cr o D cr rr C_i H- _i rr 3 » D* OQ 03 t3 r-f rt 3 :r c. r-h C ro X) M 3 n H DJ o ^ X n H- m C 3 c/i rt c 0) r— 3 o M •1 H- c N rt TJ (B H- o CD w "O N f— " i-( pj o o rr 3 n H- 00 re O CO 3 S3 en H> o rr ►i .c- o o oo io Hi LO H H > 00 ►n 78 either (i) at the point of intersection of the two curves f (T) and f (T) in the interval [T + C , 2T ] , or (ii) at the point of intersection of the two curves in the interval [2T , 2T + G ], if there is one, or if there is no point of intersection in this interval, at the point 2T + C , which is one of the points of minima of f (T) . If there is no point of intersection in the interval [2T,, 2T + C ], then there is also no point of intersection in the interval [2T X + C , 3T ] , and f (T) £ f (T) for T e [2^, 3T-] . Thus, in the interval [2T,, 3T ] , f(T) = max {^(T), f^T)} = f (T) . Since f (T) is minimum at T = 2T + C, in the interval [2T , °°) , and since f(T) > f,(T) in the interval [2T , °°) , the minimum of f(T) in the interval [2T , «>) occurs at T = 2T + C . Thus in this case, the points where f(T) can have its minimum is either the point of intersection of the two curves in the interval [T, + C. , 2T 1 ] , or the point where T « 2T, + C,. If the two curves do intersect at a point, X , say, in the interval [2T , 2T + C ], then we claim that at other points of minima of f(T) in the interval [2T + C , 4T ] the value of the function f(T) will either be greater than that at X or greater than that at X . This can be seen as follows. If the two curves f (T) and f (T) intersect in the 1 2 interval [2T + C, , 3T ] , that point of intersection is given by the equation: 79 (T - 3C )/T = 2(T 2 - C 2 )/T so that T = 2(T - C ) + 3C . The value of f(T) at this point is, therefore, equal to 2(T - C 2 )/{2(T - C ) + 3C } which is greater than (T - C )/{(T 2 - C ) + 2C } which is the value of the function f(T) at X , the point of intersection of f,(T) and f (T) in the interval [T 2 , 2TJ. Next suppose there is a point of intersection in the time interval [3T,, 3T + C ]. This point is given by the equation: 3(T X - C 1 )/T - (T - 3C 2 )/T which gives T = 3(T-, - C^ + 3C 2 , and then C = 3(T — C ). Hence, at this point of intersection, f(T) = 3(T 1 - C 1 )/{3(T - C x ) + 3C 2 > = (Tj - C 1 )/(T 1 - C + C ) = f(T) at X 2 . Suppose further that there is a point of intersection of the two curves in the interval [3T + C,, AT ] . There are two situations under which the two curves can intersect in this interval! (i) There is a point of intersection in each of the two intervals [2T + C , 3T ] and [3T , 3T + C ]; or (ii) there is no point of intersection in either one of the above two intervals. In situation (i), the value of the function f(T) at this point of intersection is 3(T - C )/{3(T 2 - C ) + 4C } which is greater than (T - C )/(T 2 - C 2 + 2C ) , the value of f(T) at X . In situation (ii) , the value of the function f(T) at 80 at the point of intersection in the interval [3T^ + C, , AT ] is 2(T 2 - C 2 )/{2(T 2 - C 2 ) + 4^} = (T 2 - C 2 )/(T 2 - C 2 + 2C.) 
= f(T) at X r Another point that is a potential minimum for f(T) is the point where T = 3T + C , provided there is a point of intersection in the interval [2T + C , 3T ] , but there is no point of intersection in the interval [31^, 4T ] , as shown in figure 4.8. This situation can arise in one of the following two cases: Either (i) 3(T - C ± ) > 2(T 2 - C 2 ) (4.22) or (ii) 3(T X - C ± ) > 3T 1 + C - 3 C 2 i.e. 3C 2 > 4C-L (4.23) In case (i) , we claim that f(T) at T = 3T, + C, is greater than f(T) at X . This will be so, if, 3(T 1 - C 1 )/(3T 1 + C 1 ) > (T 2 - C 2 )/(T 2 - C 2 + 2C X ) or if, (1 - C 1 /T 1 )/(l + C i /3T i ) > 1/{1 + 2C 1 /(T 2 - C 2 ) } or if, (T - C 1 )/(T 2 - C 2 ) > 2/3 which is true because of (4.22). In case (ii) , we claim that f(T) at T = 3T + C. is greater than f(T) at X 2 . This will be so, if, 3(T - C 1 )/(3T l + C x ) > 2(T - C 1 )/2(T 1 - C ± + C 2 ) or if, 3C > 4C,, which is the same condition as (4.23). There is one more possibility to consider. That is when the two curves intersect in the interval [2T , 2T + C ], there is no other 1 1 1 point of intersection in the interval [2T + C. , 4T-, ] , and f_(T) is greater than f (T) in the interval [X , 4T ] (figure 4.9). In this case we claim that the value of f(T) at T = 2T„ + C„ is greater than that of f(T) either at X, or at X ? . Here again, there are two cases: Either (i) 2(T 2 - C 2 ) > 3(T - C ] ) (4.24) or (ii) 2(T 2 - C 2 ) > 2T 2 + C 2 --4^ i.e. 4C > 3C (4.25) In cnse (i), we claim that the value of the function f(T) at T = 2T 2 + C is greater than that of f(T) at X . This will be so, if, 2(T 2 - C 2 )/(2T 2 + C 2 ) > 2(T 1 - C 1 )/2(T - C ± + C 2 ) or if, 2(T 2 - C 2 )/f2(T 2 - C 2 ) + 3C 2 } > 1/(1 + ^/(T^ - C )} or if, (T 2 - C 2 )/(T - C ) > 3/2, which is true in view of (4.24). In case (ii), we claim that the value of f(T) at T = 2T + C is greater than that of f(T) at X, . This will be so, if, 2(T 2 - C 2 )/(2T 2 + C 2 ) > (T 2 - C )/(T 2 ~ ^ 2 + 2C ± ) or if, 4C n > 3C , 1 2 which is true because of (4.25). Thus, the minimum of f(T) occurs either at X or at one of the following two points: (a) The point of intersection of f (T) and f (T) in the interval [2T , 2T. + C ], if it exists, or (b) The point where T = 2T + C . As before, for fixed C , T , T , we can choose any value 112 y for C 2 in the range (T - 2C , T ) , and then choose C , T such that the value of f(T) is minimum at T~. If we choose C„, C , T such that C, /T 1 + C„/T + C_/T is minimum over all possible choices of C„, C and Tt, then a set of jobs having utilization factor less than or equal to C /T + C /T + G_/T can be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm. We claim that C /T. + C /T + C /T is minimum, when 82 C 2 = T 2 - 2C. C 3 = 2(Tj - Cj) and T 3 = 2(Tj - (^ + T 2 ) with C, T 2 - 2C 2(T - C,) 1 2 2(T L - C l + T 2 ) Since Tj + C 4 T 2 4 2T 1§ we can write T 2 = T l + C l + A where 4 A 4 T - C This gives C l T, - C, + A T. - C, U - — = — + — = r^— : + T l T 2 + C 2 + A 2(Tj - C,) +A subject to the condition that 4 A < Ti ■ C. . 
Suppose C 2 ' = T 2 - 2C 1 + k Then at the point of intersection X, , C- = To - Co = To — Co — k and T 3 ' = T 2 - C 2 ' + 2Cj = T 2 - C 2 - k + 2Cj U' = C l /T l + (C 2 + k)/T 2 + (T 2 - C 2 - k)/(T 2 - C 2 - k + 2C X ) = C 1 /T 1 + (C 2 + k)/T 2 + 2C X / (4Cj_ - k) U' - U = k/T 2 + 2C 1 /(4C 1 + k) - (Tj - C 1 )/{2(T ] _ - C^ + A } k A/2 + k/2 Tj + Cj + A 2(Tj - C^ +A 4C X - k k{ 7Cj - (T x + A + 2k)} A /2 (Tj + C x + A)(8C 1 - 2k) 2(T - C ) +A Since 2C 2 ' 4 T 2> 2(T 1 - Cj + A + k) < T L + Cj + A 83 or T L + A + 2k < 3C J < 7 C Hence, U' - U > 0. The other point where U' can be minimum is either the point of intersection of the two curves f , (T) and f~(T) in the interval [ 2Tp 2T + Cj ] if it: exists, or else when T-' = 2Tj + Cj. If there is no point of intersection in the interval [ 2Tp 2T 1 + C. ] then, f 2 (2Tj + C L ) <_ f 1 (2T 1 + Cj) i.e., 2Tj + Cj - 2C 2 ' < 2(T l - C±) or, 3Cj - 2 A - 2k 4 2(T 1 - C^ - 2k 4 2T, - 5C 1 + 2 A (4.26) At T 3 ' = 2T 1 + C,, C 3 " = 2(T 1 - Cj) Hence, U' = C 1 /T 1 + (C 2 + k)/T 2 + 2(T 1 - C 1 )/(2T 1 + C x ) U' - U = k/T 2 + 2(Tj - C 1 )/(2T 1 + C x ) - (T 1 - C 1 )/{2(T 1 - Cj) +A} (T - C ) (2T - 5C + 2A ) k 1111 T 1 + C 1 + A T (2T, + Cj) (2T x - 2C x + A ) Using (4.26), we obtain U' - U > k/(T + C + A) - 2k(T - C )/(2T + C^) (2Tj -2C 1 +A ) = k{2T 1 (T 1 - C ) + 3C )}/(T + C +A)(2T + C ) (2T -2^+ A ) > 0. 84 In case the two curves f (T) and f (T) intersect in the interval [2T 1 , 2Tj + Ci]i say at X 2 , then, f 2 (2Tj + Cj) > f 1 (2T 1 + Cj) i.e., 2Tj + Cj - 2C 2 ' > 2(T, - Cj) or, 5C - 2T > 2(k + A ) ^ (A. 27) At X 2 , C 3 ' - 2(Tj - C x ) and T ' = 2(T - C, + C 2 + k) U' = C 1 /T 1 + (C 2 + k)/T 2 + (T 1 - C 1 )/(T 1 - (^ + C £ + k) U' - U = k/T 2 + (Tj - C 1 )/(T 1 - C + C + k) -(T - C )/{2(T - C ) + A } k {3T 1 (Tj - 2Cj) + C 1 (5C 1 - 2T X )} = ( T i + c i+ A)(2T 1 - 2C 1 + A+ k)(2T 1 - 2Cj_ +A) Using (4.1) and (4.27), we obtain U' - U ^ 0. Hence, for given C , T. , T_ if we choose C^, C~ and T„ such that the utilization factor of the three jobs (C]_, Tj) , (C 2 , T 2 ) and (C 3 , 7~) is less than or equal to U = C 1 /T 1 + (T 2 - 2C 1 )/T 2 + 2(Tj - C 1 )/2(T 1 - (^ + T 2 ) , then the three jobs can be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm. Since 27 ^ > T ^ T + C , we can write T = T + C + k subject to <. k <, T, - C, , and then we can write U = C 1 /T 1 + (T 1 -C 1 + k)/(Tj + C 1 + k) + 2(Tj - C 1 )/(4T 1 -4C 1 +2k) subject to the condition that 4k 4 T. - C,. 85 Also since the minimum of f (T) occurs for T <. 2T. + C. , we have the condition that 2(2(T - CjO + k ) 4 2Tj + : 'C or, 2T - 5C L + 2 k 4 0. Writing x for C,/T\ and y for k/T, , we can write U = x + (1 - x + y)/(l + x + y) + (1 - x)/(2 - 2x + y) (4.28) subject to: (i) 0 3/(1 + 2 1/3 ) Thus in this case also, a set of three jobs with utilization 1/3 factor less than or equal to 3( 1 + 2 ) can be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm. Case 3: 2T < T„ < 3T, 1 = 2 = 1 In this case T~ must be less than 3T , because otherwise, C 1 /T 1 + C 2 /T 2 > 2(/6 - 2) C 1 /T 1 + C 3 /T 3 > 2(/l2 - 3) C 2 /T 2 + C 3 /T 3 > 2(/2 - 1) which gives U = C 1 /T 1 + C 2 /T 2 + C 3 /T 3 > 3(1 + 2 1 / 3 ), contrary to our initial condition. Thus we have to consider only the case T^ < 3T. . In figure 4.10, the curve AA' represents the function fi(T) and the curve BB ? represents the function f 2 (T). It can be easily shown that the two functions f . 
(T) and f~(T) must have a common point somewhere in the interval [ 2T. + C, , 3T. ] . Let X. be this point of intersection. Then the function f(T) is represented by the curve BX^A 1 . Since the function f,(T) is increasing in the interval [ 2T^ + Cj, 3Tj and f 2 (T) is decreasing in this interval, the minimum of f(T) in this interval will be at the point of intersection of the two functions f , (T) and f«(T) at X.. At the point X , T 2 - C 2 = T - 3C L so that, T = T 2 - C 2 + 3C 1 and C = T 2 - C 2 87 oo c o o _3. (1) Hi O O h-" a >-f cr o n CD 3 rt T) C-i 00 ) O i-( rr • <*^ M M> O n> rt >-Tl -a zr M 0) n> Lj. 05 o i-.. fD cr o 3 zr rt c_ H- K' rt 3 " 00 u rt rt 3 3* O- hti C (D TJ I— • 3 M h-" &j O '< v. n H- rc> c 3 en rt c 0) H- 3 o h- 1 1-1 H- c N rt -a ro H" ro 01 £U -X3 N M 1 (U o O rt 3 n H- 00 n> o tfi 3 «-„ en H- c rr M o H 88 For fixed values of C, , T. and T~, we can choose C~ in the range ( 2(1^ - C,), HT 2 ) , and tne " choose C 3 , T-, such that f(T) is minimum at T 3 . It is claimed that U = C^/Tj + C 2 /T 2 + C /T will be minimum when C 2 is minimum possible, and C 3 and T 3 are T« - C 2 and T~ - C 2 + 3Ci respectively. In that case, U = Cj/T! + C 2 /T 2 + (T 2 - C 2 )/(T 2 - C 2 + 30^ Suppose C 2 ' > C 2 . To be specific, let C 2 ' = C 2 + k, k > Then, C 3 ' = T 2 - C 2 - k = C 3 - k and T 3 ' = T 2 - C 2 - k + 3Cx = T 3 - k U f = C 1 /T 1 + (C 2 + k)/T 2 + (T 2 - C 2 - k)/(T 2 - C 2 - k + 3C X ) = Cj/Tj + (C 2 + k)/T 2 + (C 3 - k)/(T 3 - k) U 1 - U = k/T 2 + (C 3 - k)/(T 3 - k) - C 3 /T 3 - k {T 3 (T 3 - k) - T 2 (T 3 - C 3 ) }/T 2 T 3 (T 3 - k) Since T 3 ^ T 2 and T 3 - k > T 3 - C 3 U' - U > 0. Thus U is minimum when C 2 is minimum. There are two cases to be considered: (i) 2T X 4 T 2 4 2Tj_ + Cj and (ii) 2T + Cj 4 T 2 4 3Tj_ In case (i) , C 2 > 2(T t - C^ Thus, in this case, minimum U is given by C, 2(T, - C.) T - 2(T - C ) U = + + Tj T 2 T 2 - 2(Tj - Cj) + 3C X Since T 2 > 2C 2 , we can write 89 T = 2C 2 + k, where k > = 4(Tj - C x ) + k This gives C L 2(T 1 - Cj) U = Ty + 4(T X - C x ) + k + 2Tj + C x + k subject to the condition that C L 2(T 1 - Cj) 2(T 1 - C x ) + k 2T X < 4(T L - Cj) + k < 2T + C. or ° I 2T i _ 4c i + k * c i* Wr iting x for Cj/Tj and y for k/T, - x + 2(1 - x) 2(1 - x) + y 4(1 - x) + y 2 + x + y subject to the condition that <^ 2 - 4x + y <_ x In order to minimize U over all x and y, we set 9U = 1 _ 2% _ 3(2 + y) 3x { 4(1 - x) + y} 2 (2 + x + y) 2 3U 2(1 - x) 3x 9y (4(1 - x) + y} 2 ( 2 + x + y) 2 Solving these two equations for x and y, we obtain x = ^18 - h i + 2 y = (x 2 + 4x - 2)/(l - x) and the corresponding value of = = 3 / , ,, . 1/3, U = 3x = _JL{ h8 - /12 + 2} > 3/(1 + 2) In case (ii) , Let T 2 = 2T, + C x + k 4 3T,, k > 90 Then C = 2(1^ - C^ + k Therefore, U = C /T 4- C 2 /T 2 + (T 2 - C 2 )/(T 2 - C 2 + 3^) = C 1 /T 1 + (2Tj - 2C 1 + k)/(2Tj + C Y + k) + h subject to the condition that T 2 > 2C 2 or 5C 1 > 2T l + k Again, writing x for C./T, and y for k/T., U = x + (2 - 2x + y)/(2 + x + y) + h subject to the condition that 5x > 2 + y (4.29) In order to minimize U over all possible pairs x and y, we set ^U_ = i . 3 < 2 + y) - = (4.30) 3X (2 + x + y) 2 3U 3x (4.31) ^ ( 2 + x + y ) 2 (4.31) gives x = 0, violating condition (4.29). Hence x + 0; but then 3U/3y + 0. Hence y itself must be 0. Then, 3U/3x = 1 - 6/(2 + x) 2 = 0, giving x = /6 - 2. and U = 2(/6 - 2) + h > 3/(1 + 2 1/3 ). 
Hence, if a set of three jobs cannot be feasibly scheduled on two processors according to the rate-monotonic-f irst-f it scheduling algorithm, the sum of their utilization factors must be greater than 3/(1 + 2 1 / 3 ). 91 Theorem 6 : If a set of m jobs cannot be feasibly scheduled on (m - 1) processors according to the rate-monotonic-f irst-f it scheduling algorithm, then the utilization factor of the set of jobs 1/3 must be greater than m/(l +2 ). Proof: Since no set of 3 jobs can be feasibly scheduled on two processors, the utilization factor of any subset of three jobs 1/3 is greater than 3/(1 + 2 ). Hence, 1/3 u x + u 2 + u 3 > 3/(1 + 2 ) u + u + u. > 3/(1 + 2 1/3 ) 1^4 1/3 u 1 + u 2 + u m > 3/(1+2 ) 1/3 u 2 + u + u 4 > 3/(1 + 2 ) 1/3 u_ + u + u > 3/(1 + 2 ) 2 J m 1/3 u + u + u > 3/(1 + 2 ) m-2 m-1 m where u. is the utilization factor of job J., 1*1, 2, .... , m. Summing up all these inequalities, we obtain: n - l\ 2 m 1/3 or i u . > m/(l + 2 X/J ), ± = 1 X as claimed, 92 Theorem 7 : Let N be the number of processors required to feasibly schedule a set of jobs by the rate-monotonic-f irst-f it scheduling algorithm, and Nq be the minimum number of processors required to feasibly schedule the same set of jobs. Then, as N n approaches infinity: I/ 3 1/3 2 < lim N/N 4 4x2 /(l + 2 1/J ) N "°° Before we give the proof of this theorem, we establish a series of lemmas. We define first a function f mapping the utilization factors of jobs into the real interval [0, 1]. if u is the utilization factor of a job, let f 2u 0< u< 1/2 f(u) = I 1 1/2 4 u 4 1 Lemma 1 : If jobs are assigned to the processors according to rate-monotonic-f irst-f it scheduling algorithm, amongst all processors to each of which two jobs are assigned, there is at most one processor for which the utilization factors of the set of the two jobs is less than 1/2. Proof : Suppose the contrary is true. Let J r ^ and J „ denote the two jobs assigned to processor P , and J , and J IT o J- S £. denote the two jobs assigned to processor P (r < s) , such that u . + u . < 1/2 (4.32) rl r2 and u . + u _ < 1/2 (4.33) si s2 We have the following three cases: 93 Case 1: Jobs J , and J „ were assigned to processor P — — si s2 s after job J o na d been assigned to processor P . Since a set of 1/3 three jobs with utilization factor less than or equal to 3(2 - 1) can be feasibly scheduled on a single processor according to the rate-monotonic scheduling algorithm (Theorem A, chapter 2) , we must have: 1/3 u + u + u > 3(2 - 1) rl r2 si 1/3 and u . + u n + u > 3(2 - 1) rl r2 S 2 1/3 Hence, u . + u > 6(2 - 1) - 2(u + u ) si s2 rl r2 > 6(2 1/3 - 1) - 1 or, u + u > 1/2, si s2 which is a contradiction to (4.33) above. Case 2: Jobs J , and J were assigned to processor P si s2 s after job J ■, had been assigned to processor P , but prior to assignment of job J ~. We have, in this case, 1/2 u + u > 2(2 ±/z - 1) rl si u + u > 2(2 1/2 - 1) rl s2 1/2 Hence, u . + u „ > 4(2 - 1) - 2u . si s2 rl 1/2 > 4(2 - 1) - 1 > 1/2, which is again a contradiction to (4.33) above. the 94 Case 3 : Job J , was assigned to processor P_ after s job J , had been assigned to processor P , and job J ~ was assigned to processor P after job J „ had been assigned to processor P . r s J r 2 or r We have: 1/2 u . + u > 2(2 - 1) rl si and Ul +u +u o > 3(2 1/3 - 1) rl r2 s2 Once again, we have 1/2 1/3 u . + u _ > 2(2 - 1) + 3(2 - 1) - 1/2 - 1/2 si sZ > 1/2, which is in contradiction to (4.33) above. 
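Each of the three cases in the proof of Lemma 1 reduces to a numerical inequality in the two-job and three-job rate-monotonic bounds of chapter 2, 2(2^{1/2} - 1) and 3(2^{1/3} - 1), together with the assumption that the pair on processor P_r (respectively the single job J_r1) has utilization factor below 1/2. The short check below, written with the mapping f defined above, makes that arithmetic explicit; it is only a sketch with names of our choosing.

```python
def f(u):
    """The mapping used above: f(u) = 2u for 0 <= u < 1/2, and 1 for 1/2 <= u <= 1."""
    return 2 * u if u < 0.5 else 1.0

b2 = 2 * (2 ** (1 / 2) - 1)       # two-job rate-monotonic bound, ~0.8284
b3 = 3 * (2 ** (1 / 3) - 1)       # three-job rate-monotonic bound, ~0.7798

# Case 1: u_s1 + u_s2 > 6(2**(1/3) - 1) - 2(u_r1 + u_r2) > 2*b3 - 1
# Case 2: u_s1 + u_s2 > 4(2**(1/2) - 1) - 2*u_r1         > 2*b2 - 1
# Case 3: u_s1 + u_s2 > b2 + b3 - 1/2 - 1/2              =  b2 + b3 - 1
for label, value in [("case 1", 2 * b3 - 1),
                     ("case 2", 2 * b2 - 1),
                     ("case 3", b2 + b3 - 1)]:
    print(label, round(value, 4), value > 0.5)   # each lower bound exceeds 1/2
```

In each case the computed lower bound (about 0.56, 0.66 and 0.61 respectively) exceeds 1/2, which is the contradiction used in the proof.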
Lemma 2: Let N be the minimum number of processors required to schedule the set of jobs J, , J2, , J , with utilization factors u, , u , , u , respectively. Then, 1 ^ m m Z f(u ) 4 2N i = 1 Proof: Let J , , J , J be the set of Jobs ri rz rk assigned to processor P . Since k r E u 4 1 i = 1 ri we have k k r r E f(u .) < 2 E u . £ 2 . ri ri — i=l i = l 95 Hence, f(u.) 1 i - 1 N k r E £ f(u ) < 2N , r - 1 1 - 1 r i ° as claimed, We introduce now some definitions, Let J r i> ^ r o' , J be k jobs assigned to rk r r processor P , and let I u U . Tbe deficiency 6 of r i = 1 ri r r processor P is defined as: r 1/k if U > k [2 r - 1] r = r -k S 2[1 + U /k ] r - 1, otherwise i r r J ' Th e coraseness a of processor P is defined to be: r v r max 1 < j < r-1 for r = 1 (6 ), for r > 1, Lemma 3 : Suppose jobs are assigned to processors according to the rate-monotonic-f irst-f it scheduling algorithm. If a processor with coarseness a greater than or equal to 1/6 is assigned three or more jobs, then Zf(u.) >_ 1, where u. runs over all jobs assigned to the processor. Proof : First we observe that if the coarseness of a processor is a, then the utilization factor of every job on this processor is larger than a. This follows directly from the definition of the coarseness and Theorem 2. Thus the utilization 96 factor of each of the jobs assigned to this processor is larger than 1/6. If any one of the jobs assigned to the processor has utilization factor larger than or equal to 1/2, the result is immediate. Otherwise, Sf(u ) > 2 x 3 x 1/6 > 1, Lemma 4 : Suppose jobs J -. , J _, , J . , k > 2, r i r 2. r k r r are assigned to processor P , whose coarseness a is less than 1/6. If k x Z u _> In 2 - a. i = 1 ri then, k r I f(u ) > 1. 1=1 ri " Proof : If any one of the jobs has utilization factor larger than or equal to 1/2, the result is immediate. We, therefore, assume that all jobs have utilization factors less than 1/2. We then have: k k r r I f(u ) = 2 E u . > 2(ln 2 - a ) - ri . . ri = r i=l i = l > 2(ln 2 - 1/6) > 1. Lemma 5: Let processor P with coarseness a be assigned r r "jobs J ,J , ,J , with utilization factors u ,, u „, .... rl' r2 rk rl r2 r ...., u , respectively, and let L K. r 97 k r E f(u ) = 1-3, where 3 > 0, I - 1 then: either (i) k = 1 and u , is less than 1/2, r rl or (ii) k = 2 and u .. + u _ is less than 1/2, r rl r2 k or (iii) Z r u < In 2 - a - 3/2. i = i ri ~ r Proof: (1) If k =1 and u is greater than or r rl equal to 1/2, then f(u n ) = 1, which contradicts the fact that rl 3 > 0. (ii) If k = 2, and u , + u > 1/2, then r rl r2 — again we have f(u ) + f(u ) _> 1 , which is in contradiction to the rl r 2 fact that 3 > 0. (iii) If neither (i) nor (ii) holds, then k >_ 3. By Lemma 4, we have: k r Z u < In 2 - a i = 1 ri k r Let E u. = In 2 - a -A, where A > 0. i = 1 ri Let us replace jobs J = (C , T ), i = 1, 2, 3, by jobs ri ri ri r*. = (C . T ), such that C'. > C^. n ri ri ri — ri , and 3 3 E C* /T . = l C /T + A i = 1 ri rl i = 1 ri ri 98 and C /T < 1/2 for i = 1, 2, 3. ri ri Since the utilization factor of the set of -jobs J' , J' , and J' , and rl' r2' r3 J ,, , J . is In 2, this set can be feasibly scheduled on r4 rk J r a single processor (Theorem A, chapter 2). By Lemma 4, i k 3 r Z f(u'.) + Z f(u .) > 1 , n . ri = i = l i = 4 k r Z f(u .) > 1 - f(X) - 1 - 2X n = i = l 1-6 > 1 - 2A k r Zu < In 2 - a - 6/2. I - 1 ri r Proof of Theorem : Suppose that for a given set of jobs N processors were used in the rate-monotonic-f irst-f it scheduling algorithm. 
Let P ,P , ,P denote the processors for r r r 1 2 s each of which Zf(u), where the summation runs over all jobs assigned to the processor, is strictly less than 1. For convenience, let us relabel these processors as Q., Q„, , Q . To be specific, 12 s for processor Q., let k J Z f(u ) = 1 - 6 , where 6 > 0. r = i jr j j for j = 1, 2, , s. 99 Let us divide all these processors into 3 sets: (1) Processors to each of which only one job is assigned. Suppose there are p of them. (2) Processors to each of which two jobs are assigned. According to Lemma 1, there is at most one such processor. Let us denote this number by q, where q = or 1. (3) Processors to each of which more than two jobs are assigned. Suppose that there are r of them. Clearly p + q + r = s. Note that coarseness of each processor in set (3) is less than 1/6 (Lemma 3) . Let a be the coarseness of processor Q . For the r J J processors in set (3), we have: k J U = E u <_ In 2 r a. - &J2 j i . i Ji - J J Also a j+l i 6 j i In 2 - U., for j = 1, 2 , r - 1. Thus, a. + B /2 < In 2 - U. < a. , J j J - J+l for j = 1, 2, , r - 1 Hence, r-1 — E 3. ^a-a < 1/6. 2 i m 1 x r 1 100 Thus, for the first r-1 processors in set (3), r-1 f(u) - (r - 1) - Z 0. i=l > (r-1) - 1/3 = r - 4/3. For the processors in set (1), since the p tasks do not fit on p-1 processors, by Theorem 6, P 1/3 E u., > p/(l + 2 1/J ) (A. 34) i=l Also, since f(u) for each of these processors is less than 1, each of these jobs has utilization factor less than 1/2. Hence, I f(u n ) > 2p/(l + 2 1/3 ) i=l Also, no job in this set has utilization factor less than 1/3, because a job with utilization factor less than 1/3 can be feasibly scheduled on a single processor together with a job of utilization factor less than 1/2. Hence in any optimal partition of the set of jobs, no more than two of these jobs can be scheduled on a single processor. Therefore, N > p/2. n 1/3 E f(u ) > (N - s) + (r - 4/3) + 2p/(l + 2 ) i=l 1 = N - (p + q + r) + (r - 4/3) + 2p/(l + 2 1/3 ) = N - p {1 - 2/(1 + 2 1/3 )} - 4/3 - q Also, from Lemma 2, we have n I f(u ) < 2N i=l 101 p (2 1 / 3 - 1) Therefore, N < 2N n + — ; +4/3 + q ~ ° (1 ■+ 2 1 / 3 ) p(2 1/3 - 1) ( 2 l/3 + 1) 4/3 + " 2 + + N N N o 2 1 / 3 - 1 < 2 + 7/3N + 2 2 1/3 + 1 When N„ Is sufficiently large, we have N/N Q < 4.2 1/3 /(l + 2 1/3 ) + E. To eastblish the lower bound, we show that for a given £ > 0, there exists an arbitrarily large set of jobs for which N/N^ > 2 - £ For a given N, let us choose a set of N jobs as follows: 1/N \ - (1, 1 + 2 ) ^2= (2 1/N + 6, 2 1/N (1 + 2 1/N )) x 3 = (2 2/N + 6, 2 2/N (l + 2 1/N )) (N-D/N (N-l)/N 1/N T N - (2 + 6, 2 V (1+2 )) where <$ is such that no job has utilization factor greater than 1/2. This set of jobs when scheduled according to the rate-monotonic first-fit scheduling algorithm, will require N processors, because no two of these jobs can be feasibly scheduled on a single processor 102 according to the rate-monotonic scheduling algorithm. However, since the utilization factor of each job in this set is less than 1/2, any pair of these jobs can be feasibly scheduled on a single processor according to the deadline driven scheduling algorithm. Thus all these jobs can be feasibly scheduled on J - r - 1 Thus, N 4 P?-] and so, N/N > N/["-^-| Taking N sufficiently large, we can make the ratio N/N Q > 2 - e. 103 CHAPTER 5 CONCLUSIONS The problem of scheduling periodic-time-critical jobs on single and multiple processor computing systems was considered in the previous chapters. 
In view of the difficulty in devising optimal algorithms for scheduling periodic-time-critical jobs on multi- processor computing systems, we directed our attention to two heuristic algorithms. These heuristic algorithms do not use more than a fixed percentage of the minimum number of processors that are needed. We have obtained bounds on the worst-case behavior of these algorithms. In the case of the rate-monotonic-next-f it scheduling algorithm, we showed that in the worst case, the ratio N/N^, where N is the number of processors needed according to the algorithm, and Nq is the minimum number of processors, is lower bounded by the constant 2.4 and is upper bounded by the constant 2.67. We suspect that the upper bound can be improved to 2, A, although we have not been able to prove it. In the case of the rate-monotonic-f irst-f it scheduling algorithm, we showed that in the worst case, the ratio N/N~ is lower bounded by the constant 2 and is upper bounded by the constant 4*2 /(l + 2 ). Again, we suspect that the upper bound can be improved to 2 . In Theorem 6, we showed that if a set of m jobs, m >. 3, cannot be feasibly scheduled on m - 1 processors according to the rate- monotonic-f irst-f it scheduling algorithm then the utilization factor 1/3 of the set of jobs must be greater than m/(l + 2 ). If for a given 104 set of jobs, the ratio of the longest request period to the shortest period is less than or equal to 2, we are able to show that this value can be increased to m/(l + 2 ). It is our conjecture that this result is true for any arbitrary set of m jobs. If this can be proved then substituting p/(l +2 ) for p/(l + 2 ) in equation (4.34), we see that the upper bound for the rate-monotonic-f irst-f it scheduling algorithm can be reduced to 2. We have: , m Theorem 8 : If a set of m jobs {J = (C , T )} , with T < T < < T < 2T n , cannot be feasibly scheduled on m-1 1 = 2 = = m = 1 processors according to the rate-monotonic-f irst-f it scheduling algorithm, then the utilization factor of this set of jobs must be 1/m greater than m/(l +2 ). Proof : We shall prove the theorem by induction on m. We know that the result is true for m = 2 and m = 3. Let us suppose, it is true for all integers less than m. We first observe that the utilization factor of each of the jobs in the set is less than 1/2. This follows from the fact that no subset with m-1 jobs can be feasibly scheduled on m-2 procesaore, or else the given set could be feasibly scheduled on m-1 processors. Hence, the utilization factor of any m-1 jobs in the set is greater l/(n»-l) than (m - 1)/(1 + 2 ). Thus if any one of the jobs has utilization factor greater than or equal to 1/2, then the utilization factor of the set will be greater than (m - 1)/(1 + 2 1 ^ m-1 ^) + 1/2 > m/(l+2 1/m ). Since no two of the jobs in the set can be feasibly scheduled on one processor by the rate-monotonic scheduling algorithm, while assigning jobs to processors according to the rate-monotonic-f irst-f it 105 scheduling algorithm, jobs J n , J 9 , , J , will be assigned to 1 z m-1 processors P , P , , P _- respectively. As T varies, let f (T) , f 2 (T), f _, (T) represent the utilization factor of a job that along with J, , J~, , J fully utilizes processors P n , P , ...., i ^ m-1 1 ^ P respectively. Then, for a given T , the maximum utilization factor m-i m of the job that can be assigned to at least one of the P _, processors according to the rate-monotonic-f irst-f it scheduling algorithm is max{f,(T ), f (T ),...., f , (T ) } . 
Thus, once we fix C , T , C , 1 m' 2 m m-1 m 1 1 2 T~ , , C i, T -, , then if the utilization factor of the m tn job, 2 m- 1 ' m- 1 ' J (C , T m ) , is less than or equal to min{max{ f , (T) , f„(T), f m _ 1 (T)}}, the m jobs {J. 5 (C . , Tj)}. ,, can be feasibly scheduled on m-1 processors. If we restrict the value of T to within the range T and 2T , then this minimum occurs at the point where the curves f , (T) and f , (T) intersect. This follows from the fact that since 1 m-l 2C. < T,, i = 1, 2, , m-1, and C. _> T. - C i , for i = 1, 2, ... , m-1, we have Cj.i > C., i = 1, 2, ...., m - 1. Therefore, ff . (T), T . <. T < T' (f,(T) , f ,(T)} = m_1 m_1 ~ " Vl i T 4 2T l I fiCT), T'< T < 2T X where T' is the point where the two curves f , (T) and f , (T) 1 m-1 intersect . At this point of intersection, we have T - 2C, T , - C i 1 m-1 m-1 T T or T = T , - C , + 2C- m-1 m-1 Thus, if the utilization factor of the m job is less than or equal to (T , - C ,)/(T , - C , + 2C. ) > then the m jobs can be m-1 m-1 m-1 m-1 1 J 106 feasibly scheduled on m-1 processors. Hence the utilization factor of the given set of jobs must be greater than [C,/T.. + C 2 /T ? + + C ,/T , + (T - - C ,)/(T , - C , + 2C,)]. We wish to find the m-1 m-1 m-1 m-1" v m-1 m-1 1 minimum value of this expression over all possible combinations of C , T, , C 2 , T~ , , C _, , T __!• Since J., and J_ cannot be scheduled on processor P, , C~ must be greater than T - C, . Similarly, since J-, and j_ cannot be scheduled on P , and J« and J„ cannot be scheduled on P2, we see that C > max (T - C , T~ - C 2 ) . In general, C ± > max ( T 1 - C 1S T 2 - C 2 , , T - C ±-1 > , i = 2, 3, , m. It can be shown that the expression [C-./T, + C 9 /T + + C , /T -, + r 1 1 *■ L m-1 m-l (T m _ 1 - c m -l)/ T m -l " C m -i + 2c i^ wil1 nave minimum value when C . * T . , - C . - , for i = 2, 3, , m-1. With these values of C/s, 1 l-l l-l 1 the utilization factor of the given set of m tasks must be greater than [C 1 /1 1 + (T 1 -C 1 )/T 2 + (T 2 ' \ + C 1 ) /T 3 + + ( T m-2 " T m-3 + T m-4 - V'Vl + (Vl - T m-2 + T m-3 " C l> /( Vl " C m-1 + 2C l^' Since T. > 2C , for i = 2, 3, m-1, we can write T 2 = 2(T X - C x ) + k 2 T 3 = 2(T 2 - T 1 + C x ) + k 3 = 2(T X ~ C ± + k 2 ) + k 3 T = 2 (T , - T + T . - C n ) + k . m-1 m-2 m-3 m-A 1 m-1 = 2( Tl - C, + k , + k, + + k.) + k m-2' m-1 where k.'s are greater than 0. Thus we can write the utilization factor as 107 U - C 1 /T + (T - C^/UO^ - c x ) + k 2 ] + [T x - C x + k 2 + ... + k m . 2 ]/[2(T 1 - Cj + k 2 + .. + k m . 2 ) + k^] + [ Tl - C j ♦ k 2 + .. + k^l/Iij ♦ c x + k 2 + + Vl ], Writing x for C /T and x. for k /T , i = 2, 3, , m-1, we have : U = x ± + (1 - x 1 )/[2(l - Xl ) + x ] + + (1 - X;L + x 2 + .... + x m _ 2 )/[2(l - Xl + x 2 + .. + x m _ 2 ) + x m _ x ] +(1 - x, + x + ... + x m ,)/(l + X, + + x m .) 1 Z m— i J- m— 1 (5.1) In order to minimize U over x, , x_, x , > we solve the 1 z m-i following set of equations for x^'s: 3U/Dx = 1 - x 2 /[2(l - x x ) + x 2 ] 2 - X /[2(1 - X x + x 2 ) + x 3 ] 2 - .... - 2(1 + x 2 + + x m _ 1 )/[l + x x + ... + x m _ 1 ] 2 = 3U/3x 2 = -(1 - x 1 )/[2(l - X;L ) + x 2 ] 2 + x /[2(1 - x x + x^ + x 3 ] 2 + .. + 2 Xl /[l + x 1 + ... + x^] 2 - 9U/3x = - x 9 /[2(l - x, + x 2 + .. + x m 9 ) + x ] m-i m-z i ^ m-z m-1 + 2x 1 /[l + Xl + + x ,] 2 = 1 1 m-l J On substituting the values of x.'s obtained by solving this set of equations, in (5.1), we obtain: 1/m U = m/(l + 2 ). This proves our assertion. 
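The closed-form minima above can also be checked numerically. The sketch below, with names of our choosing, evaluates the case-1 objective (4.19) of Theorem 5 at its stationary point and over a grid of the admissible region, and compares both with 3/(1 + 2^{1/3}), which is the value m/(1 + 2^{1/m}) of Theorem 8 for m = 3. The expression printed in the text for y at the stationary point is garbled; the value used here, y = 2^{1/3} - 2(1 - x) with x = 1/(1 + 2^{1/3}), is a reconstruction that satisfies both stationarity equations and reproduces the stated minimum.

```python
def U(x, y):
    """Case-1 objective (4.19): U = x + (1-x)/(2(1-x)+y) + (1-x+y)/(1+x+y),
    with x = C1/T1 and y = k/T1."""
    return x + (1 - x) / (2 * (1 - x) + y) + (1 - x + y) / (1 + x + y)

bound = 3 / (1 + 2 ** (1 / 3))            # also m/(1 + 2**(1/m)) for m = 3

x_star = 1 / (1 + 2 ** (1 / 3))
y_star = 2 ** (1 / 3) - 2 * (1 - x_star)  # reconstructed stationary value of y
print(U(x_star, y_star), bound)           # both ~1.327480

# Grid over the case-1 region: 1/3 <= x < 1/2 (from 2*C1 < T1 and T2 <= T1 + C1)
# and 0 <= y <= 3x - 1 (from T2 = 2(T1 - C1) + k with 0 <= k <= 3*C1 - T1).
best = min(U(1 / 3 + i / 2400, (3 * (1 / 3 + i / 2400) - 1) * j / 400)
           for i in range(401) for j in range(401))
print(best >= bound - 1e-9, abs(best - bound) < 1e-4)   # True True
```

The grid minimum agrees with 3/(1 + 2^{1/3}) to within the grid resolution, consistent with the analysis of case 1 and with Theorem 8 at m = 3.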
108 There are still many questions in connection with the scheduling of periodic-time-critical jobs. For example, it would be interesting to investigate other heuristic algorithms, such as, the rate-monotonic-best-fit algorithm. It would also be interesting to study the scheduling problem when the processors are not identical Furthermore, we limited our study to preemptive scheduling algorithms only. The problem of non-preemptive scheduling still remains mostly unexplored. It may also be interesting to study algorithms that minimize the number of preemptions in a schedule. 109 LIST OF REFERENCES [1] Chen, N. F. and C. L. Liu, "Bounds on the Critical Path Scheduling Algorithm for Multiprocessors Computing Systems", (to appear) . [2] Coffman, Jr., E.G. and R. L. Graham, "Optimal Scheduling for Two Processor Systems", Acta Informatica 1,3 (1972), pp. 200- 213. [3] Cook, S. A., "The Complexity of Theorem Proving Procedures", Proceedings of the 3 r ^ ACM Symposium on Theory of Computing , 1970, pp. 151-158. [4] Fujii, M. , T. Kasami, and K. Ninomiya, "Optimal Sequencing of Two Equivalent Processors", SIAM J. Appl. Math. 17, 4 (July, 1969), pp. 784-789; Erratum 20, 1 (January 1971), p. 141. [5] Garey, M. R. , and R. L. Graham, "Bounds for Multiprocessor Scheduling with Resource Constraints", SIAM J. on Computing , 4 (1975) , pp. 187-200. [6] Graham, R. L., "Bounds on Multiprocessing Timing Anomalies", SIAM J. Appl. Math . 17,2 (March 1969), pp. 416-429. [7] Hu, T. C, "Parallel Scheduling and Assembly Line Problems", Oper. Res. 9,6 (November 1961), pp. 841-848. [8] Johnson, D. S., A. Demers, J. D. Ullman, M. R. Garey, and R. L. Graham, "Worst-Case Performance Bounds for Simple One Dimensional Packing Algorithms", SIAM J. on Computing, 3 (1974), pp. 299-325. [9] Karp, R. M. , "Reducibility Among Combinatorial Problems", Complexity of Computer Computations , R. E. Miller and J. W. Thatcher (eds.) Plenum Press, New York, N.Y. (1972), pp. 85- 104. [10] Labetoulle, J., "Ordonnancement des processus temps reel sur une ressource preemptive", Thess de 3 eme cycle, Universite Paris VI (1974) . [11] Labetoulle, J., "Real Time Scheduling in a Multiprocessor Environment", (to appear). [12] Lam, S., and R. Sethi, "Worst Case Analysis of Two Scheduling Algorithms", to appear in SIAM J. on Computing . 110 [13] Liu, C. L. , and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment", J. ACM , 20 (1973), pp. 46-61. [14] Liu, J. W. S., and C. L. Liu, "Bounds on Scheduling Algorithms for Heterogeneous Computer Systems", Proceedings of the IFTPS 1974 Congress , North-Holland Publishing Co., August 1974, pp. 349-353. [15] Sahni, S., "Algorithms for Scheduling Independent Tasks", J. ACM , 23 (1976), pp. 116-127. [16] Serlin, 0., "Scheduling of Time Critical Processes", Proc. of the Spring Joint Computers Conference (1972), pp. 925-^32. [17] Ullman, J. D., "Polynomial Complete Scheduling Problems", 4th Symposium on Operating Systems Principles , York town Heights, New York, (October 1973), pp. 96-101; to appear JCSS Ill VITA Sudarshan Kumar Dhall was born in Maghiana, Punjab (now in Pakistan), on September 6, 1937. He received his B.A. degree from the Panjab University, India, in 1956 and M.A. in Mathematics from the University of Delhi, India, in 1968. In 1972, he received M.S. in Mathematics from the University of Illinois at Urbana- Champaign. He served in the Government of India from October 1955 to January 1970. 
He was a Research Assistant at the University of Illinois at Urbana-Champaign for six years. He is a member of the American Mathematical Society.

BIBLIOGRAPHIC DATA SHEET

Report No.: UIUCDCS-R-77-859
Title and Subtitle: Scheduling Periodic-Time-Critical Jobs on Single Processor and Multiprocessor Computing Systems
Author(s): Sudarshan Kumar Dhall
Performing Organization Name and Address: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801
Sponsoring Organization Name and Address: National Science Foundation, Washington, DC
Contract/Grant No.: MCS 73-03408
Abstract: The problem of preemptive scheduling of periodic-time-critical jobs on single processor and multiprocessor computing systems was studied. For the case of multiprocessor computing systems, suboptimal algorithms for assigning jobs to processors were designed and analyzed.