4 Profile Scheduling by List Algorithms

Zhen LIU, INRIA, Centre Sophia Antipolis
Eric SANLAVILLE, Université Pierre et Marie Curie

Abstract: The notion of profile scheduling was first introduced by Ullman in 1975 in the complexity analysis of deterministic scheduling algorithms. In such a model, the number of processors available to a set of tasks may vary in time. Over the last decade, this model has been used to deal with systems subject to processor failures, multiprogrammed systems, or dynamically reconfigured systems. The aim of this paper is to survey optimal polynomial solutions for scheduling a set of partially ordered tasks in these systems. Particular attention is given to a class of algorithms referred to as list scheduling algorithms. The objective of the scheduling problem is to minimize either the maximum lateness or the makespan. Results on preemptive and nonpreemptive deterministic scheduling, and on preemptive stochastic scheduling, are presented.

Keywords: Deterministic Scheduling, Stochastic Scheduling, Profile Scheduling, List Schedule, Priority Schedule, Precedence Constraints, Lateness, Makespan.

4.1 Introduction

Consider the problem of scheduling a set of partially ordered tasks represented by a directed acyclic graph, referred to as the task graph, where vertices represent tasks and arcs represent precedence relations. Tasks are executed, subject to precedence constraints, on a set of parallel identical processors. The number of available processors, referred to as the profile, may vary in time. Task and processor assignments must be nonredundant,

Scheduling Theory and Its Applications. P. Chretienne, E. G. Coffman, J. K. Lenstra, Z. Liu (Eds.) © 1994 John Wiley & Sons Ltd


i.e., at any time, a task can be assigned to at most one processor, and a processor can execute at most one task. Each task has a due date. The objective is to minimize the maximum lateness or, when due dates are not taken into consideration, the makespan.

The notion of profile scheduling was first introduced by Ullman [31] and later used by Garey et al. [15] in the complexity analysis of deterministic scheduling algorithms. In this paper we use the notion of profile to deal with systems subject to processor failures, multiprogrammed systems, or dynamically reconfigured systems. In such cases, the number of processors available to a set of tasks may vary in time.

The problem of minimizing the maximum lateness and the makespan is in general NP-hard. We are interested in simple on-line or nearly on-line algorithms which yield optimal solutions under specific assumptions. Results on three problems will be presented here, with each result accounting for different task characteristics. The reader is referred to the survey paper by Lawler et al. [20] for results on optimal scheduling under a constant profile, i.e., when the number of available processors is constant. The first results on optimal polynomial solutions for profile scheduling problems are due to Dolev and Warmuth [9, 10, 11].

The paper is organized as follows. Section 2 is devoted to notation. Section 3 deals with scheduling of nonpreemptive Unit Execution Time (UET) tasks; optimality results concerning list schedules on a variable profile are surveyed. Section 4 is concerned with scheduling of preemptive Real Execution Time (RET) tasks. We exhibit a tight relation between optimal list schedules and optimal priority schedules, which are the counterparts of list schedules for preemptive scheduling. Section 5 focuses on tasks whose running times are independent and identically distributed random variables with a common exponential distribution. We prove the optimality of some list policies which stochastically minimize the makespan in several subproblems.

4.2 Notation

A task graph G = (V, E) is a directed acyclic graph, where V = {1, 2, ..., |V|} is the set of vertices representing the tasks, and E ⊆ V × V is the set of arcs representing the precedence constraints: (i, j) ∈ E if and only if task i must be completed before task j can start. Denote by p_i and d_i the respective processing time and due date of task i ∈ V. Let p(i) and s(i) be the respective sets of immediate predecessors and successors of i ∈ V, i.e.,

    p(i) = {j : (j, i) ∈ E},    s(i) = {j : (i, j) ∈ E}.

Let S(i) be the set of (not necessarily immediate) successors of i ∈ V, i.e.,

    ∀i : if s(i) = ∅ then S(i) = ∅, else S(i) = s(i) ∪ ( ⋃_{j ∈ s(i)} S(j) ).

A task without a predecessor (successor) is called an initial (final) task. Denote by l_i the level of i, i.e., the number of arcs in a longest path from task i to some final task. Denote by h_i the height of i, i.e., the sum of the processing times of the tasks in a longest path from task i to some final task, with the final vertex included.

19/2/1994 17:41|PAGE PROOFS for John Wiley & Sons Ltd (using jwcbmc01, Vers 01.02 OCT 1993)|bonas
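The levels and heights just defined can be computed in a single traversal of the task graph. The following sketch (Python; the function name and the dictionary-of-successor-lists representation are ours, not the paper's) reads the height definition literally, counting the task's own processing time:

```python
def levels_and_heights(succ, p):
    """Compute each task's level (number of arcs on a longest path to a
    final task) and height (sum of processing times along such a path,
    the task itself included).  succ maps every task to its list of
    immediate successors; p maps every task to its processing time."""
    level, height = {}, {}

    def visit(i):
        if i in level:
            return
        if not succ[i]:                      # final task
            level[i], height[i] = 0, p[i]
        else:
            for j in succ[i]:
                visit(j)
            level[i] = 1 + max(level[j] for j in succ[i])
            height[i] = p[i] + max(height[j] for j in succ[i])

    for i in succ:
        visit(i)
    return level, height
```

On the chain 1 → 2 → 3 with unit processing times, task 1 has level 2 and height 3, task 3 has level 0 and height 1.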


In the sequel, we will consider different classes of precedence graphs. The three most interesting classes are the following:

interval order G ∈ G_io: each vertex i corresponds to an interval b_i of the real line such that (i, j) ∈ E if and only if x ∈ b_i and y ∈ b_j imply x < y. These graphs have the following characteristic property:

    ∀ i, j ∈ V : either S(i) ⊆ S(j), or S(j) ⊆ S(i).

in-forest G ∈ G_if: each vertex has at most one immediate successor: |s(i)| ≤ 1, i ∈ V. A vertex i ∈ V is called a leaf of in-forest G if p(i) = ∅, and a root of in-forest G if s(i) = ∅.

out-forest G ∈ G_of: each vertex has at most one immediate predecessor: |p(i)| ≤ 1, i ∈ V. A vertex i ∈ V is called a leaf of out-forest G if s(i) = ∅, and a root of out-forest G if p(i) = ∅.

There are K ≥ 1 parallel and identical processors. The set of processors available to tasks varies in time, due to, e.g., failures of the processors or the execution of higher-priority tasks. The availability of the processors is referred to as the profile, and is specified by the sequence M = {a_n, m_n}_{n=1}^∞, where 0 = a_1 < a_2 < ... < a_n < ... are the time epochs when the profile changes, and m_n, n ≥ 1, is the number of processors available during the time interval [a_n, a_{n+1}). Without loss of generality, we assume that m_n ≥ 1 for all n ≥ 1. We will assume that the profile does not change infinitely often during any finite time interval: for all x ∈ IR+, there is some finite n ≥ 1 such that a_n > x. The following three classes of profiles will often be referred to in the paper.

zigzag profiles M ∈ M_z: the number of available processors is either K or K − 1.

increasing zigzag profiles M ∈ M_iz: the number of available processors can decrease by at most one at any time; between successive decrements, there must be at least one increase. That is, ∀j and ∀n ≥ j, m_n ≥ m_j − 1.

decreasing zigzag profiles M ∈ M_dz: a symmetrical definition applies; that is, ∀j and ∀n ≥ j, m_n ≤ m_j + 1. Such a profile is illustrated in Figure 4.1.

Figure 4.1: A decreasing zigzag profile (K = 5).
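The three profile classes can be tested directly from their defining inequalities. A small illustrative sketch (Python; the function names and the plain list-of-counts representation of m_1, m_2, ... are our own). Note that the increasing zigzag condition ∀n ≥ j: m_n ≥ m_j − 1 is equivalent to comparing each value against the running maximum seen so far, and symmetrically for the decreasing case:

```python
def is_zigzag(m, K):
    """Zigzag: every processor count is K or K - 1."""
    return all(x in (K, K - 1) for x in m)

def is_increasing_zigzag(m):
    """m_n >= m_j - 1 for all n >= j, i.e. never drop more than one
    below the running maximum."""
    best = 0
    for x in m:
        if x < best - 1:
            return False
        best = max(best, x)
    return True

def is_decreasing_zigzag(m):
    """m_n <= m_j + 1 for all n >= j, i.e. never rise more than one
    above the running minimum."""
    worst = float('inf')
    for x in m:
        if x > worst + 1:
            return False
        worst = min(worst, x)
    return True
```

For instance, the profile of Figure 4.1 alternates downward (5, 4, 5, 4, 3, ...) and is decreasing zigzag, while a jump from 1 to 3 processors is not.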



A scheduling algorithm (or policy) decides when an enabled task, i.e., an unassigned, unfinished task all of whose predecessors have finished, should be assigned to one of the available processors. A schedule is feasible if assignments are nonredundant and the constraints relative to the precedence relation and to the variable profile are respected. Scheduling can be either preemptive, i.e., the execution of a task can be stopped and later resumed on any processor without penalty, or nonpreemptive: once begun, the execution of a task continues on the same processor until its completion.

Let S be an arbitrary feasible schedule of task graph G under profile M. Let C_i(S) be the completion time of task i under S. The lateness of task i is defined as L_i(S) = C_i(S) − d_i. The maximum lateness of schedule S is denoted by L_S(G, M) = max_{i∈V} L_i(S). When all due dates are set to zero, the maximum lateness becomes the makespan. Denote by C_S(G, M) = max_{i∈V} C_i(S) the makespan of (G, M) obtained by schedule S.

We extend the classical notation scheme of Graham et al. [17] for scheduling problems to the case of variable profiles. We use P(t) (resp. Q(t), R(t)) to denote a machine environment with identical (resp. uniform, unrelated) parallel processors whose number varies in time. For example, P(t) | p_i = 1, prec | C_max denotes the nonpreemptive scheduling, for makespan minimization, of UET tasks subject to precedence constraints on identical parallel processors with a variable profile.

4.3 Nonpreemptive Profile Scheduling of UET Tasks

In the framework of nonpreemptive profile scheduling, we will consider UET tasks and integer profiles (those in which the profile changes only at integer time epochs). The problems under consideration can be denoted by P(t) | p_i = 1, prec | C_max or P(t) | p_i = 1, prec | L_max. List algorithms are often used in nonpreemptive scheduling. These algorithms put the enabled tasks in an ordered list. Each time a processor becomes available, the task at the head of the list is assigned to that processor. The list can be dynamically updated. The Highest Level First (HLF) and the Earliest Due Date (EDD) algorithms are well-known examples. The reader is referred to [6] for properties of list algorithms.
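A generic nonpreemptive list scheduler for UET tasks under an integer profile can be sketched as follows (Python; the interface is our own: succ maps every task to its immediate successors, priorities are sortable keys, e.g. minus the level for HLF or the due date for EDD, and the profile is a function from integer epochs to processor counts, assumed ≥ 1):

```python
def list_schedule_uet(succ, priority, profile):
    """Nonpreemptive list scheduling of UET tasks under an integer profile.
    succ: dict task -> list of immediate successors (every task is a key);
    priority: dict task -> sortable key, smaller scheduled first;
    profile: function t -> number of processors available in [t, t+1).
    Returns dict task -> start time (completion time is start + 1)."""
    pred_count = {i: 0 for i in succ}
    for i in succ:
        for j in succ[i]:
            pred_count[j] += 1
    start = {}
    enabled = [i for i in succ if pred_count[i] == 0]
    t = 0
    while enabled:
        enabled.sort(key=lambda i: priority[i])
        batch = enabled[:profile(t)]         # fill the available processors
        for i in batch:
            start[i] = t
            enabled.remove(i)
            for j in succ[i]:                # successors become enabled at t+1
                pred_count[j] -= 1
                if pred_count[j] == 0:
                    enabled.append(j)
        t += 1
    return start
```

With priorities set to minus the levels, this is an HLF schedule; with (modified) due dates as priorities it is EDD.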

4.3.1 Complexity Issues

In [31], Ullman exhibited a polynomial reduction from 3-SAT to P(t) | p_i = 1, prec | C_max, and then showed that P | p_i = 1, prec | C_max has the same complexity (it suffices to "fill" the unavailable processors with dummy tasks). This is immediately generalized to the lateness minimization L_max. Garey et al. [15] also used profile scheduling for the study of opposing forests (unions of an in-forest and an out-forest). They proved NP-completeness for P(t) decreasing | p_i = 1, intree | C_max (the decreasing profile refers to the case where the number of available processors decreases in time), and consequently for P | p_i = 1, opposing forest | C_max. For such precedence graphs, however, polynomial algorithms have been found for bounded profiles, i.e., when K is fixed (see the discussion below).

4.3.2 Minimization of the Makespan by List Scheduling

Consider first the minimization of the makespan. We focus on the Highest Level First (HLF) algorithm: tasks are ordered by decreasing level. Note that HLF schedules differ only in the way ties are broken.

Forests. When the task graph is a forest, Hu [18] proved that HLF yields an optimal schedule when the task graph is an in-forest and the profile is constant. Bruno [2] extended this result to out-forests. For variable profiles, Dolev and Warmuth [9, 10, 11] showed that HLF remains optimal for some special classes of profiles. For that, they first proved the so-called elite theorem. Define the height of a connected component of G to be the highest level of its vertices. Let the components be ordered by decreasing height, and let h(k) be the height of the k-th component. Suppose profile M is bounded by K. The median of (G, M) is defined as μ = h(K) + 1. Consider:

H(G): the set of components of G whose heights are strictly greater than μ (High Part);
E(G): the set of initial tasks of H(G) whose levels are strictly greater than μ (Elite);
L(G): the graph obtained from G by removing the components of H(G) (Low Part).

When the graph contains fewer than K components, the median has value 0 and G = H(G). These definitions are illustrated in Figure 4.2, where K = 3.

Figure 4.2: The elite of a task graph (K = 3).
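The median, high part and elite can be computed directly from these definitions. A sketch (Python; the function name, the adjacency-dict representation, and the use of μ for the median are our own choices):

```python
from collections import defaultdict

def elite_decomposition(succ, level, K):
    """Median mu, high part H(G) (as a vertex set) and elite E(G) of (G, M),
    for a profile bounded by K.  succ: dict task -> immediate successors
    (every task is a key); level: dict task -> level."""
    # connected components of the underlying undirected graph
    adj = defaultdict(set)
    for i in succ:
        adj[i]                                # keep isolated vertices
        for j in succ[i]:
            adj[i].add(j)
            adj[j].add(i)
    seen, comps = set(), []
    for v in list(adj):
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                seen.add(u)
                stack.extend(adj[u] - comp)
            comps.append(comp)
    comps.sort(key=lambda c: -max(level[v] for v in c))
    if len(comps) < K:                        # fewer than K components:
        mu = 0                                # median 0 and G = H(G)
        high = set().union(*comps) if comps else set()
    else:
        mu = max(level[v] for v in comps[K - 1]) + 1
        high = set().union(set(),
                           *(c for c in comps
                             if max(level[v] for v in c) > mu))
    npred = defaultdict(int)
    for i in succ:
        for j in succ[i]:
            npred[j] += 1
    elite = {i for i in high if npred[i] == 0 and level[i] > mu}
    return mu, high, elite
```

For a chain of three tasks plus one isolated task and K = 2, the median is 1, the high part is the chain, and the elite is its initial task.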

We now state the elite theorem of [10]:


Theorem 4.3.1 Consider a task graph G and an integer profile M.
1. If |E(G)| > m_1, then there is an optimal schedule such that m_1 tasks of E(G) are executed during the first time unit.
2. If |E(G)| ≤ m_1, then for each set E of m_1 tasks of maximum levels, there exists an optimal schedule executing the tasks of E during the first time unit.
3. If E(G) = ∅, then any HLF schedule is optimal.

As a consequence, we obtain

Corollary 4.3.1 HLF minimizes the makespan when the task graph is an out-forest and the profile is decreasing zigzag, or when the task graph is an in-forest and the profile is increasing zigzag.

In the first case, the elite contains fewer than K tasks, and m_1 ≥ K − 1. Part 2 of the elite theorem states that there exists an optimal schedule that coincides with any HLF schedule during the first time unit. The same reasoning is used during each time unit until the graph is empty. To prove the second case, it suffices to reverse the graph and the profile.

For bounded profiles, Dolev and Warmuth [11] provided a backward dynamic programming method whose time complexity is O(n^{K−1} log n) for scheduling an in-tree of size n on an arbitrary profile bounded by K. This was based on two observations. First, if an optimal schedule is known for H(G), then according to the merge theorem of [9], an optimal schedule for G may be obtained in time linear in |L(G)|. Second, if we consider all subgraphs of some in-forest with at most K − 1 components which could be obtained during the execution under any schedule, these subgraphs may be partitioned into at most n^{K−1} equivalence classes. Here, two graphs G = (V, E) and G′ = (V′, E′) are equivalent if there is a bijection φ between V and V′ such that the set of successors of vertex φ(v) is {φ(v_1), ..., φ(v_l)}, where {v_1, ..., v_l} is the set of successors of v. The dynamic programming method decides whether a schedule of duration D exists, building schedules backward from the D-th time unit. The log n factor comes from a bisection search to find the right value of D. This method is then applied to out-forests under bounded profiles by reversing the directions of the precedence constraints and the profile. Further, such a dynamic programming algorithm is used to solve the scheduling problem of opposing forests, as mentioned in Section 4.3.1.

In the special case of chains, i.e., G_ch =def G_if ∩ G_of, Liu and Sanlaville [21] showed

Theorem 4.3.2 If G is a union of chains and M an arbitrary integer profile, then any HLF schedule minimizes the makespan.

This was proved by an interchange argument, i.e., from an optimal non-HLF schedule, one can interchange the assignment decisions for two subchains so that the number of non-HLF decisions is decreased by at least one. Iterating this procedure at most n^2 times yields an optimal HLF schedule. Note that all HLF schedules have the same makespan in that case.

Interval order graphs. Papadimitriou and Yannakakis [27] have shown that task graphs with interval order structure have the following property: for any two vertices in the graph, their sets of (all) successors are comparable by the inclusion relation:



one set is included in the other. This property is essential in establishing the optimality of Most Successors First (MSF) schedules (forming a subset of HLF schedules), which respect the order of inclusion of the sets of successors. This result was obtained by Papadimitriou and Yannakakis [27] for a constant profile and extended to variable profiles by Sanlaville [29].

Theorem 4.3.3 If G is a task graph with an interval order structure and M an arbitrary integer profile, then any MSF schedule minimizes the makespan.

This result can again be proved using an interchange argument. Consider an optimal non-MSF schedule σ for graph G. Let τ be the first time when σ schedules some vertex v instead of the vertex u that the MSF rule would select. Note that, by definition, all the predecessors of u and v have finished execution by time τ. Let σ′ be the schedule obtained from σ by interchanging the execution times of u and v. Since G has an interval order structure, S(v) ⊆ S(u), and it is easy to see that σ′ satisfies the precedence constraints of G. Moreover, the number of non-MSF decisions in σ′ is decreased by one. Iterating this procedure at most n^2 times yields an optimal MSF schedule.
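The characteristic inclusion property gives a direct test of whether a DAG has interval order structure, and the inclusion order on successor sets itself yields the MSF priorities. A sketch (Python; successor-dict representation and function names are our own):

```python
def all_successors(succ):
    """Transitive successor sets S(i) of a DAG.  succ: dict task -> list
    of immediate successors (every task is a key)."""
    memo = {}

    def S(i):
        if i not in memo:
            memo[i] = (set(succ[i]).union(*(S(j) for j in succ[i]))
                       if succ[i] else set())
        return memo[i]

    return {i: S(i) for i in succ}

def is_interval_order(succ):
    """A DAG has interval order structure iff the sets S(i) are totally
    ordered by inclusion."""
    S = all_successors(succ)
    tasks = list(succ)
    return all(S[a] <= S[b] or S[b] <= S[a]
               for a in tasks for b in tasks)
```

The classical "2 + 2" poset (two disjoint arcs a → b and c → d) fails the test, since S(a) = {b} and S(c) = {d} are incomparable.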

Arbitrary graphs with profiles bounded by 2. Coffman and Graham [7] proved that a subset of HLF schedules, referred to as Lexicographic Order Schedules (LOS) in this paper, minimizes the makespan when the profile is constant and equal to two. LOS is based on a static list of tasks defined by the lexicographic order as follows. Let there be f final tasks. Assign labels 1, ..., f to these final tasks in an arbitrary way. Suppose now that k ≥ f tasks have already been labeled 1, 2, ..., k. Consider all the tasks whose successors are all labeled. Assign label k + 1 to the task such that the decreasing sequence of the labels of its immediate successors is lexicographically minimal (ties are broken arbitrarily). Coffman and Graham [7] showed that LOS schedules minimize the makespan of an arbitrary graph in a 2-processor system. The main idea of their proof is to cut the Gantt chart of the LOS schedule into several segments such that each segment is composed of two sets of tasks E_i and F_i, where F_i is a singleton, and all tasks of E_i are predecessors of the tasks of E_{i+1}. The optimality of LOS is extended by Sanlaville [29] to variable profiles with at most 2 processors by a minor modification of the proof of [7], which consists in assigning fictitious tasks to unavailable processors.
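The lexicographic labeling just described can be sketched as follows (Python; quadratic and unoptimized, with our own function name). The LOS static list then processes tasks in decreasing label order:

```python
def coffman_graham_labels(succ):
    """Lexicographic labels: final tasks get labels 1..f arbitrarily; then,
    among tasks all of whose immediate successors are labeled, the next
    label goes to the task whose decreasing sequence of successor labels
    is lexicographically minimal.  succ: dict task -> immediate successors.
    Higher label = higher LOS priority."""
    label = {}
    unlabeled = set(succ)
    while unlabeled:
        ready = [i for i in unlabeled
                 if all(j in label for j in succ[i])]
        best = min(ready,
                   key=lambda i: sorted((label[j] for j in succ[i]),
                                        reverse=True))
        label[best] = len(label) + 1
        unlabeled.remove(best)
    return label
```

Final tasks have the empty label sequence, which is lexicographically minimal, so they are labeled first, in arbitrary order.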

Theorem 4.3.4 Any LOS schedule is optimal for makespan minimization of any task graph G under any profile M bounded by 2.

Summary and discussion on makespan minimization. These results are summarized in Figure 4.3, where "constant" and "arbitrary" stand for constant and arbitrary profiles. A gray area means that the particular subproblem is NP-hard for arbitrary K. Since constant profiles are special cases of (increasing and decreasing) zigzag profiles, which are in turn special cases of arbitrary profiles, the implication relation for subproblems is clear. Note that when the number of processors K is fixed, the complexity remains an open problem, except for the case K = 2, for which the LOS algorithm of Coffman and Graham [7] provides optimal schedules. For any fixed K ≥ 3, the scheduling problem is still of unknown complexity despite extensive research. Bartush et al. [1] presented an interesting work on this problem, viz. scheduling


Figure 4.3: Results on nonpreemptive profile scheduling of UET tasks (C_max). For each graph class (chains, out-trees, in-trees, interval orders, arbitrary graphs) and each profile class (K ≤ 2, constant, zigzag, arbitrary), the figure indicates which list algorithm is optimal: HLF for chains under arbitrary profiles, for out-trees under decreasing zigzag profiles and for in-trees under increasing zigzag profiles; MSF for interval orders under arbitrary profiles; LOS for arbitrary graphs under profiles bounded by 2. Gray areas mark subproblems that are NP-hard for arbitrary K.

with a fixed number of processors. The authors provided a general 2-phase solution scheme. In the first phase, they try to generalize the consistency notion of [4, 14]. They compute a so-called "test choice" for the problem instance under consideration. This is derived from bounds on the execution periods of some special tasks. In the second phase, they construct an optimal schedule in O(n^K) time. This is based on sophisticated dominance criteria and on the study of special partially ordered sets. The first phase is simple in the 2-processor case, and runs in polynomial time for all known polynomial cases (trees, interval-order graphs, etc.). However, its complexity remains unknown in the general case. It seems that no further research results concerning this approach have been reported since 1989.

4.3.3 Minimization of the Maximum Lateness by List Scheduling

Consider now the minimization of the maximum lateness. We analyze the Earliest Due Date (EDD) algorithm. The optimality of the EDD schedules relies on a two-step method:

1. modify the due dates so that they satisfy some consistency relation;
2. apply EDD to the modified due dates.

Roughly speaking, the consistency between due dates requires that whenever the due date of a task is met in some schedule, the due dates of its successors can be met too, provided there are enough processors. Such a consistency relation is not verified if, for instance, the due date of a task is smaller than the due date of some of its predecessors.


Initially, for any constant profile, Brucker, Garey and Johnson [4] proved that if G is an in-forest, EDD yields optimal schedules provided the modified due dates d′_i are defined as follows:

    d′_i = d_i,                       if s(i) = ∅,
    d′_i = min(d_i, d′_{s(i)} − 1),   if s(i) ≠ ∅,        (4.1)

where, by a harmless abuse of notation, s(i) denotes the unique successor of task i. Note that such a modification has the purpose of obtaining consistent due dates. It is shown by Liu and Sanlaville [21] that the optimality of EDD still holds for increasing zigzag profiles:

Theorem 4.3.5 For any in-forest G ∈ G_if and any increasing zigzag profile M ∈ M_iz, the EDD schedule defined on the modified due dates minimizes the maximum lateness.

The proof proceeds by first establishing that EDD defined on the modified due dates meets all the original due dates if and only if such a feasible schedule exists, and then by using a standard argument to show the optimality of EDD schedules.

In a slightly more complicated way, Garey and Johnson [13, 14] showed the existence of modification schemes for the due dates such that EDD applied to these due dates yields optimal schedules on two processors, even when release dates are associated with the tasks. In the latter case, the algorithm is not on-line. These results can be extended to variable profiles (see [29]). However, even without release dates, the algorithms are then off-line, because the computation of the modified due dates depends not only on the release dates but also on the profile.

For profiles of width K ≥ 3 and general task graphs, there is no simple way to design a modification scheme so that EDD applied to the modified due dates becomes optimal. The general problem is NP-hard. In fact, even for out-forests with constant profiles, the minimization of the maximum lateness is NP-hard, as was shown in [4]. Unlike makespan minimization, the scheduling of out-forests cannot be handled by analyzing the "reverse" problem. Indeed, reversing problem P(t) dec. zigzag | p_i = 1, out-tree | L_max yields problem P(t) inc. zigzag | p_i = 1, r_i, in-tree | C_max, where release dates r_i are added to the tasks.
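The due-date modification (4.1) for an in-forest is a single backward pass over the forest. A sketch (Python; each task's successor set is given as a tuple that is empty or a singleton, per the in-forest assumption; the function name is ours):

```python
def modified_due_dates(succ, d):
    """Due-date modification (4.1) for an in-forest:
    d'_i = d_i if i is final, else min(d_i, d'_{s(i)} - 1).
    succ: dict task -> () or (unique_successor,); d: dict task -> due date."""
    dp = {}

    def dprime(i):
        if i not in dp:
            if not succ[i]:
                dp[i] = d[i]
            else:
                (j,) = succ[i]          # unique successor in an in-forest
                dp[i] = min(d[i], dprime(j) - 1)
        return dp[i]

    return {i: dprime(i) for i in succ}
```

On the chain 1 → 2 → 3 with due dates (5, 2, 4), the modified due dates are (1, 2, 4): task 1's due date is tightened through its successor's modified due date.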

4.4 Preemptive Profile Scheduling

We now consider preemptive scheduling. The task processing times and profile change epochs are arbitrary real numbers. We are interested in optimal priority algorithms. In what follows, we first describe such algorithms, and then present a tight relation between optimal nonpreemptive list algorithms and optimal preemptive priority algorithms. Finally, we present simple optimal priority algorithms for specific problems.

4.4.1 Priority Scheduling Algorithms

Parallel to list algorithms, (dynamic) priority algorithms are used in preemptive scheduling. At any time, enabled tasks are assigned to available processors according to a priority list which can change in time and can depend on the partial schedule already constructed. A general description is given below (see Muntz and Coffman [25] and Lawler [19]):

At any time t, enabled tasks are ordered according to their priorities, thus forming subsets V_1, ..., V_k, where all tasks of V_j have the same priority and greater priority than the tasks in V_{j+1}. Suppose that the tasks in V_1, ..., V_{r−1}, r ≤ k, are assigned. Let m̃_r(t) be the number of remaining free processors. If m̃_r(t) ≥ |V_r|, then one processor is assigned to each of the tasks in V_r, and the algorithm deals with the next subset. Otherwise, the m̃_r(t) processors are shared by the tasks of V_r so that each task in V_r is executed at speed v_r = m̃_r(t)/|V_r|.

This assignment remains unchanged until one of the following events occurs:
1. a task completes (or a new task is enabled, when release dates are considered);
2. the priority order of the tasks changes;
3. the profile changes.

At such moments the processor assignment is recomputed. In the above scheme, the processor sharing can be achieved by McNaughton's wrap-around algorithm (cf. [23]), which is linear in the number of tasks scheduled in each time interval. An example is illustrated in Figure 4.4, where three tasks are executed at speed 2/3 on two processors during a unit-length interval.

Figure 4.4: An example of processor sharing with McNaughton's algorithm: three tasks executed at speed 2/3 on two processors over a unit interval.
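McNaughton's wrap-around rule can be sketched for a single interval as follows (Python; the function name and the piece representation are our own). Given per-task work amounts, each at most T and totaling at most m·T, it fills processors left to right and wraps to the next processor at the end of the interval:

```python
def mcnaughton(amounts, m, T):
    """McNaughton's wrap-around rule: schedule work amounts (each <= T,
    total <= m*T) on m processors within the interval [0, T).
    Returns a list of (task_index, processor, start, end) pieces."""
    pieces, proc, t = [], 0, 0.0
    for i, a in enumerate(amounts):
        while a > 1e-12:
            chunk = min(a, T - t)
            pieces.append((i, proc, t, t + chunk))
            a -= chunk
            t += chunk
            if t >= T - 1e-12:        # end of interval: wrap around
                proc, t = proc + 1, 0.0
    return pieces
```

Replaying Figure 4.4 with amounts of 2/3 each, two processors and a unit interval yields four pieces, with the second task split across the wrap between the two processors.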

Note that other processor sharing schemes may be used. However, they generate the same task latenesses provided the corresponding task processing speeds are the same. Therefore, we will not distinguish between them.

Denote by p_i^S(t) the remaining processing requirement of task i at time t in schedule S. Define the laxity of task i at time t in this schedule as b_i^S(t) = d_i − p_i^S(t). We will consider SLF (Smallest Laxity First) algorithms. Figure 4.5 shows an SLF schedule for a set of independent tasks whose characteristics are indicated in Table 4.1.

Table 4.1: A set of independent tasks

    i   : 1  2  3  4
    r_i : 0  0  2  2
    p_i : 3  2  2  1
    d_i : 5  2  5  3

This schedule happens to be optimal; in general, however, SLF does not minimize the maximum lateness.


Figure 4.5: An example of an SLF schedule (L_max = 0 = L*).

Priority schedules are also used for the minimization of the makespan. Define the length of the remaining longest path of task i at time t in preemptive schedule S as r_i^S(t) = h_i + p_i^S(t). In an LRP (Longest Remaining Path first) schedule, tasks are ordered by decreasing length of the remaining longest path. This corresponds to the HLF rule when tasks have unit execution times.

4.4.2 Relation between Optimal Nonpreemptive List Algorithms and Preemptive Priority Algorithms

It was shown by Liu and Sanlaville [21] that there is a tight relation between the conditions under which nonpreemptive list algorithms (EDD, HLF) are optimal and those under which preemptive priority algorithms (SLF, LRP) are optimal. In order to state these results, we need the following notions of closure.

A class G of graphs is said to be closed under expansion if the following property holds for any graph G = (V, E) ∈ G: for any vertex i ∈ V, if G′ is the graph obtained from G by replacing vertex i with a chain of two vertices i_1 and i_2 such that

    p(i_1) = p(i), s(i_1) = {i_2}, and p(i_2) = {i_1}, s(i_2) = s(i),

then G′ still belongs to the class G.

A class M of profiles is said to be closed under translation if for any profile M = {a_r, m_r}_{r=1}^∞ in M, every profile M′ = {a′_r, m_r}_{r=1}^∞ belongs to M, provided {a′_r}_{r=1}^∞ is an increasing sequence of real numbers.
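The expansion operation, together with the processing-time and due-date splitting rule used later in the proof of Theorem 4.4.1, can be sketched as follows (Python; the function name and dict representation are our own):

```python
def expand(succ, p, d, i, new):
    """Expansion: replace vertex i by the chain i -> new, with
    p_{i1} = p_{i2} = p_i / 2, d_{i2} = d_i, d_{i1} = d_i - p_{i2}.
    Returns modified copies of (succ, p, d)."""
    succ2 = {k: list(v) for k, v in succ.items()}
    succ2[new] = succ2[i]        # i2 inherits the successors of i
    succ2[i] = [new]             # i1's unique successor is i2
    p2, d2 = dict(p), dict(d)
    p2[i] = p2[new] = p[i] / 2
    d2[new] = d[i]
    d2[i] = d[i] - p2[new]
    return succ2, p2, d2
```

Repeated expansion halves the task grain; this is what lets preemptive schedules be approximated by nonpreemptive ones in the proofs below.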

Theorem 4.4.1 Let M be a class of profiles which is closed under translation and G a class of graphs which is closed under expansion. If for any integer profile M ∈ M and for any G ∈ G with UET tasks and integer due dates, there exists an EDD schedule minimizing the maximum lateness of G within the class of nonpreemptive policies, then for any M ∈ M and any G ∈ G, the SLF schedule minimizes the maximum lateness of G within the class of preemptive schedules.

The proof proceeds in two steps. In the first step, we prove the result for the case where the pair (G, M) has commensurable timing, i.e., the task processing times and


due dates, and the profile change times are mutually commensurable. Real numbers x_1, ..., x_r ∈ IR are said to be mutually commensurable if there exist w ∈ IR and r integers k_1, ..., k_r such that x_i = k_i · w for all i = 1, ..., r. In the second step, we extend the result to the general case with arbitrary real timing.

The scheme of the proof of the first step is similar to that of Muntz and Coffman [25]. Roughly speaking, we show that when graph G is sufficiently expanded, (i) an optimal preemptive solution for a pair (G, M) may be approached arbitrarily closely by considering optimal nonpreemptive schedules, and (ii) nonpreemptive EDD schedules coincide with the preemptive SLF schedule for (G, M). Putting these two points together yields the desired result. Note that in an expansion, if vertex i is split into two vertices i_1 and i_2, then their processing times and due dates are defined as follows:

    p_{i_1} = p_{i_2} = p_i / 2, and d_{i_2} = d_i, d_{i_1} = d_i − p_{i_2}.

In the second step, we show that when (G, M) has real timings, the absolute difference between the maximum lateness of the SLF schedule and the optimal one is bounded by an arbitrarily small constant. This implies that the SLF schedule does yield an optimal solution.

If the complements of the task heights are taken as due dates, i.e., d_i = −h_i for all i ∈ V, then the EDD (resp. SLF) rule coincides with the HLF (resp. LRP) rule. It can also be shown that in such a case the maximum lateness coincides with the makespan [21]. Therefore,

Theorem 4.4.2 Let M be a class of profiles which is closed under translation and G a class of graphs which is closed under expansion. If for any integer profile M ∈ M and for any G ∈ G with UET tasks, there exists an HLF schedule minimizing the makespan of G within the class of nonpreemptive policies, then for any M ∈ M and any G ∈ G, the LRP schedule minimizes the makespan of G within the class of preemptive schedules.

Note that the above results actually hold in the more general case where the task executions are subject to release dates. In the remainder of this section, we apply these results together with the results of the previous section concerning optimal nonpreemptive scheduling in order to obtain optimal preemptive schedules.

4.4.3 Applications

We first consider the maximum lateness of in-forests. For a given in-forest G ∈ G_if with processing times p_1, ..., p_n and due dates d_1, ..., d_n, we define an in-forest G′ ∈ G_if such that G′ has the same set of tasks, the same precedence constraints and the same processing times. The due dates in G′ are modified as in (4.1). It can be shown [21] that such a modification of the due dates does not change the maximum lateness of any feasible schedule. It then follows from Theorems 4.3.5 and 4.4.1 that

Corollary 4.4.1 If G ∈ G_if is an in-forest and M ∈ M_iz is an increasing zigzag profile, then the SLF schedule defined on the modified due dates minimizes the maximum lateness within the class of preemptive schedules.


Note that this result extends Theorem 7.3 of Lawler [19] to the case of increasing zigzag variable profiles. It is possible to apply Theorem 4.4.1 to the case of an arbitrary task graph and a constant profile with two processors. In such a case, a new proof of Theorem 8.3 of Lawler [19] may be obtained for the case of identical processors (see [29]).

Consider now the makespan minimization problem. For the simplest class of task graphs, the chains, it follows from Theorems 4.3.2 and 4.4.2 that

Corollary 4.4.2 For any graph consisting of chains and for any profile, the LRP schedule is an optimal preemptive schedule for makespan minimization.

Note that in the preemptive case, the scheduling problems for a union of disjoint chains and for a set of independent tasks are equivalent. In the case of forests, as a consequence of Corollary 4.3.1 and Theorem 4.4.2, we obtain

Corollary 4.4.3 If G is an in-forest and M is an increasing zigzag profile, or if G is an out-forest and M is a decreasing zigzag profile, then the LRP schedule minimizes the makespan within the class of preemptive schedules.

Observe that this result extends a result of Muntz and Coffman [25] to the zigzag variable profiles. For an arbitrary task graph, since the LOS schedule belongs to the class of HLF schedules, Theorems 4.3.4 and 4.4.2 allow us to conclude that

Corollary 4.4.4 LRP is an optimal preemptive schedule for makespan minimization of any task graph G under any profile M bounded by 2.

This last result extends a result of Muntz and Coffman [24] to variable profiles. Note that in order to apply Theorems 4.4.1 and 4.4.2, the class of task graphs under consideration must be closed under expansion. Thus, we cannot apply Theorems 4.3.3 and 4.4.2 to obtain the optimality of LRP for graphs with interval order structure, as this class of graphs does not fulfill the condition.

4.5 Stochastic Profile Scheduling

4.5.1 Problem Description

In this section, we consider the problem of stochastic scheduling under a variable profile. We assume that the task processing times are independent and identically distributed random variables having a common exponential distribution. These processing times are independent of the profile M = {(a_n, m_n)}_{n=1}^∞, which is a sequence of random vectors. We assume that the scheduler has no information on the samples of the (remaining) processing times of the tasks. At any time t, a_n ≤ t < a_{n+1}, the scheduler may not have any information on the truncated sequence {(a_l, m_l)}_{l=n+1}^∞. In other words, the scheduler may know neither the future time epochs at which the profile changes nor the number of processors available at any future time. Within such a framework, dynamic preemptive scheduling is necessary. We are interested in the stochastic minimization of the makespan. A policy π is said
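A variable profile can be pictured as a piecewise-constant machine count. The following sketch (an assumed encoding, not the chapter's) stores M as a sorted list of pairs (a_n, m_n) and looks up the number of processors available at a given time.

```python
# Hypothetical encoding of a profile M = {(a_n, m_n)}: a_n is the n-th
# change epoch and m_n the number of processors available on [a_n, a_{n+1}).
import bisect

def machines_at(profile, t):
    """Number of processors available at time t (profile sorted by epoch)."""
    epochs = [a for a, _ in profile]
    i = bisect.bisect_right(epochs, t) - 1
    return profile[i][1] if i >= 0 else 0

M = [(0.0, 2), (3.0, 1), (5.0, 3)]   # illustrative profile
print(machines_at(M, 4.0))  # -> 1
```

An on-line scheduler, of course, may only query this function at the current time; the point of the model in this section is that the future pairs (a_n, m_n) are unknown to it.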
to stochastically minimize the makespan of (G, M) within the above-described class of policies if the makespan it achieves is stochastically smaller than that achieved by any other policy in the class, where a random variable X ∈ ℝ is said to be stochastically smaller than a random variable Y ∈ ℝ if P[X ≤ x] ≥ P[Y ≤ x] for all x ∈ ℝ.

4.5.2 Optimal Algorithms for Constant Profiles

When the task graph is an in-forest, and the profile is a constant 2, Chandy and Reynolds [5] proved that the HLF policy minimizes the expected makespan. Bruno [3] subsequently showed that HLF stochastically minimizes the makespan. Pinedo and Weiss [28] extended this last result to the case where tasks at different levels may have different expected running times. Frostig [12] further generalized the result of Pinedo and Weiss to include increasing likelihood ratio distributions for the task running times. These results do not hold for systems with three processors; see the counterexamples in [5]. However, Papadimitriou and Tsitsiklis [26] proved that for any arbitrarily fixed number of processors, HLF is asymptotically optimal as the number of tasks tends to infinity, provided the task processing times have a common exponential distribution.

Coffman and Liu [8] investigated the stochastic scheduling of out-forests on identical parallel processors with a constant profile. For uniform out-forests, where all the subtrees are ordered by an embedding relation (see the definition below), they showed that an intuitive priority scheduling policy induced by the embedding relation, referred to as the Largest Tree First (LTF) policy in this paper, stochastically minimizes the makespan when there are two processors. If, in addition, the out-forests satisfy a uniform root-embedding constraint, then the greedy policy stochastically minimizes the makespan for an arbitrary number of processors.

4.5.3 Optimal Algorithms for Variable Profiles

Stochastic profile scheduling was first investigated by Liu and Sanlaville [22], who considered three kinds of task graphs: interval-order graphs, in-forests and out-forests. The results presented in the remainder of this section are due to [22] and were actually obtained in a more general framework: uniform processors, where the processors may have different speeds.

Interval-order graphs. As in the deterministic UET case, MSF (Most Successors First) is optimal when the task graph has an interval-order structure.

Theorem 4.5.1 For any interval-order graph G ∈ Gio and any profile M, MSF stochastically minimizes the makespan of G.
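The MSF priority can be sketched as follows (an illustrative encoding, not the paper's; in an interval-order graph the successor sets are totally ordered by inclusion, so comparing their cardinalities suffices to order the tasks).

```python
# Sketch: MSF ranks tasks by the size of their (transitive) successor set.
# succ maps each task to its immediate successors; the graph is a DAG.

def msf_order(succ):
    """Tasks sorted by decreasing number of (all) successors."""
    memo = {}

    def allsucc(v):
        if v not in memo:
            s = set(succ[v])
            for w in succ[v]:
                s |= allsucc(w)
            memo[v] = s
        return memo[v]

    return sorted(succ, key=lambda v: -len(allsucc(v)))

succ = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(msf_order(succ)[0])  # -> 1
```

At each decision epoch, MSF assigns enabled tasks in this order to the processors currently available under the profile.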

The proof uses a uniformization technique: we consider a coupled processing model in which all processors 1, …, K, whenever they are available, are continually executing tasks. When a completion occurs and no task was assigned to some processor, it corresponds to the completion of a fictitious task on that processor. When a task is assigned to a processor, it is assigned a running time equal to the remainder of the running time already underway at that processor. Owing to the memoryless property
of exponential distributions, one can see that this coupled model is equivalent in law to the initial one.

Let G = (V, E) be an interval-order graph, and let T(G) = {T1, T2, …, Tg} be the partition of V obtained from the equivalence relation on the sets of successors: for all 1 ≤ i ≤ g, u, v ∈ Ti if and only if S(u) = S(v). The sets T1, …, Tg are labeled in such a way that for all 1 ≤ i < j ≤ g, u ∈ Ti and v ∈ Tj imply S(u) ⊇ S(v). We define a majorization relation as follows. Let G1 = (V1, E1) and G2 = (V2, E2) be two subgraphs of G obtained by successively deleting vertices of G having no predecessor in G or in the previously obtained subgraphs. It is easy to see that G1 and G2 are in Gio. Let Ti^j = Ti ∩ Vj, j = 1, 2, i = 1, …, g. Graph G1 is said to be majorized by G2, written G1 ⪯s G2, if and only if

    for all i, 1 ≤ i ≤ g:  Σ_{k=1}^{i} |Tk^1| ≤ Σ_{k=1}^{i} |Tk^2|.

Using the uniformization technique and the above notion of majorization, one proves the following properties:

Let G1 = (V1, E1) and G2 = (V2, E2) be two subgraphs of G ∈ Gio obtained by successively deleting vertices of G having no predecessor in G or in the previously obtained subgraphs. If G1 ⪯s G2, then under the MSF policy, the makespan of G1 is stochastically smaller than that of G2.

Let G ∈ Gio be a task graph, and let π be a policy which follows the MSF rule at all times except at the first decision epoch. Then the makespan of G under MSF is stochastically smaller than that under π.

This last property, together with a backward induction, allows us to conclude Theorem 4.5.1.
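The majorization test amounts to comparing prefix sums of the class sizes |Tk|. A small sketch (the class-size data are illustrative):

```python
# Sketch: G1 is majorized by G2 iff every prefix sum of the class sizes
# |T_k^1| is at most the corresponding prefix sum for G2.
from itertools import accumulate

def majorized(sizes1, sizes2):
    """Prefix-sum dominance of two equal-length lists of class sizes."""
    return all(s1 <= s2 for s1, s2 in zip(accumulate(sizes1),
                                          accumulate(sizes2)))

print(majorized([1, 2, 1], [1, 2, 2]))  # -> True
```

Taking i = g in the definition shows in particular that a majorized graph has no more vertices in total, which is consistent with its makespan being stochastically smaller.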

In-forests. When the task graph is an in-forest, we have

Theorem 4.5.2 For any in-forest G ∈ Gif and any profile M bounded by 2, HLF stochastically minimizes the makespan of G.

The scheme of the proof is similar. However, we have to use another majorization relation, referred to as "flatter than" in [5]. Let G1 = (V1, E1) and G2 = (V2, E2) be two in-forests. Forest G1 is said to be flatter than G2, denoted by G1 ⪯f G2, if and only if

    for all i, i ≥ 0:  Σ_{k≥i} Nk(G1) ≤ Σ_{k≥i} Nk(G2),

where Nk(G) denotes the number of vertices at level k of graph G.
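A sketch of the "flatter than" test, assuming the level counts Nk(G) are given as lists indexed by level (the data here are illustrative):

```python
# Sketch: G1 is flatter than G2 iff every tail sum of level counts
# N_k(G1) is at most the corresponding tail sum for G2.

def flatter(N1, N2):
    """Tail-sum dominance of two lists of per-level vertex counts."""
    K = max(len(N1), len(N2))
    tail = lambda N, i: sum(N[i:])
    return all(tail(N1, i) <= tail(N2, i) for i in range(K))

print(flatter([3, 2, 1], [3, 2, 2]))  # -> True
```

Note the analogy with the interval-order case: there the urgent classes are those of low index and prefix sums are compared; here the urgent tasks are those of high level, hence tail sums.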

Out-forests. Suppose now that the task graph is an out-forest. Even for a profile bounded by 2, examples may easily be found for which the HLF policy is not optimal (even in terms of expected makespan); see [8]. Instead of HLF, the greedy policy LTF introduced in [8] turns out to be optimal in a subclass of out-forests.

Let G = (V, E) ∈ Gof be an out-forest. Vertex v ∈ V and all its successors form a
subtree of G, denoted by TG(v), or simply T(v) when there is no ambiguity. We denote by |T(v)| the size of T(v), i.e., its number of vertices. The Largest Tree First (LTF) policy is defined as follows: at any decision epoch, LTF assigns to an available processor the enabled task v whose subtree T(v) is the largest among the subtrees of all enabled tasks.

In general, the LTF policy is not optimal within the class of out-forests Gof; counterexamples were provided in [8]. However, within the classes of uniform and r-uniform out-forests (introduced in [8]), a policy is optimal if and only if it is LTF.

Let T1, T2 ∈ Gof be two out-trees. Out-tree T1 is said to embed out-tree T2, or T2 is embedded in T1, denoted by T1 ⪰e T2 or T2 ⪯e T1, if T2 is isomorphic to a subgraph of T1. Formally, T1 embeds T2 if there exists an injective function f from T2 into T1 such that for all u, v ∈ T2, v ∈ s(u) implies f(v) ∈ s(f(u)). Function f is called an embedding function. Let r1 and r2 be the roots of the out-trees T1 and T2, respectively. If T1 ⪰e T2 and there is an embedding function f such that f(r2) = r1, then f is a root-embedding function, and we write T1 ⪰r T2 or T2 ⪯r T1.

An out-forest G ∈ Gof is said to be uniform (respectively r-uniform) if all its subtrees {T(v), v ∈ G} can be ordered by the embedding (respectively root-embedding) relation. The class of uniform (respectively r-uniform) out-forests is denoted by Guof (respectively Grof). It is clear that Grof ⊂ Guof ⊂ Gof. The graph illustrated in Figure 4.6 is a uniform out-forest; however, it is not r-uniform. An example of r-uniform out-forest is given in Figure 4.7.
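The LTF priorities are just the subtree sizes |T(v)|, which can be sketched as follows (a child-list encoding of the out-forest is assumed for illustration):

```python
# Sketch: compute |T(v)| for every vertex v of an out-forest, where
# children maps each vertex to its list of children.

def subtree_sizes(children):
    """Size of the subtree rooted at each vertex."""
    memo = {}

    def size(v):
        if v not in memo:
            memo[v] = 1 + sum(size(w) for w in children[v])
        return memo[v]

    return {v: size(v) for v in children}

# Two out-trees: one rooted at 1 (children 2, 3), one rooted at 4 (child 5).
children = {1: [2, 3], 2: [], 3: [], 4: [5], 5: []}
print(subtree_sizes(children))  # -> {1: 3, 2: 1, 3: 1, 4: 2, 5: 1}
```

At each decision epoch LTF assigns the enabled vertex of largest subtree size; ties can be broken arbitrarily.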

Figure 4.6: An example of uniform out-forest

Figure 4.7: An example of r-uniform out-forest

Theorem 4.5.3 LTF stochastically minimizes the makespan of an out-forest G if G ∈ Guof is uniform and M is bounded by 2, or if G ∈ Grof is r-uniform and M is arbitrary.

The scheme of the proof is again similar to that of Theorem 4.5.1, with the majorization relation now being the embedding relation between uniform out-forests. Let G1 = (V1, E1) and G2 = (V2, E2) be two uniform out-forests. Assume that the vertices of G1 and G2 are indexed in such a way that

    TG1(1) ⪯e TG1(2) ⪯e … ⪯e TG1(|V1|),
    TG2(1) ⪯e TG2(2) ⪯e … ⪯e TG2(|V2|).

Out-forest G1 is embedded in G2, written G1 ⪯e G2, if and only if

    |V1| ≤ |V2|, and for all i, 1 ≤ i ≤ |V1|: TG1(i) ⪯e TG2(i).

Similarly, G1 ⪯r G2 if and only if

    TG1(1) ⪯r TG1(2) ⪯r … ⪯r TG1(|V1|),
    TG2(1) ⪯r TG2(2) ⪯r … ⪯r TG2(|V2|),
    |V1| ≤ |V2|, and for all i, 1 ≤ i ≤ |V1|: TG1(i) ⪯r TG2(i).
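Given a tree-embedding test, the forest relation therefore reduces to a componentwise comparison of the indexed subtree lists. A sketch with a hypothetical `embeds` predicate (here out-trees are chains, so they are represented by their lengths and compared by length; both choices are assumptions for the example):

```python
# Sketch: forest embedding G1 <=_e G2 via componentwise comparison of the
# subtree lists, once each list is sorted by the embedding order.
# `embeds(t1, t2)` is a hypothetical tree-embedding test supplied by the caller.

def forest_embedded(trees1, trees2, embeds):
    """Componentwise embedding of two indexed lists of subtrees."""
    if len(trees1) > len(trees2):
        return False
    return all(embeds(t1, t2) for t1, t2 in zip(trees1, trees2))

# Chains embed one another exactly when their lengths are ordered.
chains1, chains2 = [1, 2], [1, 2, 3]
print(forest_embedded(chains1, chains2, lambda a, b: a <= b))  # -> True
```

For general out-trees the predicate `embeds` itself must test subgraph isomorphism of rooted trees, which is the expensive part; the reduction above only organizes the comparison.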

REFERENCES

[1] M. Bartusch, R. H. Möhring and F. J. Radermacher, "M-machine unit time scheduling: a report on ongoing research", Lecture Notes in Economics and Mathematical Systems, 304 (1988), pp. 165-212, Springer, Berlin.
[2] J. L. Bruno, "Deterministic and stochastic problems with tree-like precedence constraints", NATO Conference, Durham, England, July 1981.
[3] J. L. Bruno, "On scheduling tasks with exponential service times and in-tree precedence constraints", Acta Informatica, 22 (1985), pp. 139-148.
[4] P. Brucker, M. R. Garey and D. S. Johnson, "Scheduling equal-length tasks under treelike precedence constraints to minimize maximum lateness", Math. of Oper. Res., 2 (1977), pp. 275-284.
[5] K. M. Chandy and P. F. Reynolds, "Scheduling partially ordered tasks with probabilistic execution times", Operating Systems Review, 9 (1975), pp. 169-177.
[6] E. G. Coffman, Jr. (ed.), Computer and Job-Shop Scheduling Theory, Wiley, New York, 1976.
[7] E. G. Coffman, Jr. and R. L. Graham, "Optimal scheduling for two-processor systems", Acta Informatica, 1 (1972), pp. 200-213.
[8] E. G. Coffman, Jr. and Z. Liu, "On the optimal stochastic scheduling of out-forests", Opns Res., 40 (1992), pp. S67-S75.
[9] D. Dolev and M. K. Warmuth, "Scheduling precedence graphs of bounded height", J. of Algorithms, 5 (1984), pp. 48-59.
[10] D. Dolev and M. K. Warmuth, "Scheduling flat graphs", SIAM J. on Comput., 14 (1985), pp. 638-657.
[11] D. Dolev and M. K. Warmuth, "Profile scheduling of opposing forests and level orders", SIAM J. Alg. Disc. Meth., 6 (1985), pp. 665-687.
[12] E. Frostig, "A stochastic scheduling problem with intree precedence constraints", Opns Res., 36 (1988), pp. 937-943.
[13] M. R. Garey and D. S. Johnson, "Scheduling tasks with nonuniform deadlines on two processors", J. of the ACM, 23 (1976), pp. 461-467.
[14] M. R. Garey and D. S. Johnson, "Two-processor scheduling with start-times and deadlines", SIAM J. on Comput., 6 (1977), pp. 416-426.
[15] M. R. Garey, D. S. Johnson, R. E. Tarjan and M. Yannakakis, "Scheduling opposing forests", SIAM J. Alg. Disc. Meth., 4 (1983), pp. 72-93.
[16] T. F. Gonzalez and D. B. Johnson, "A new algorithm for preemptive scheduling of trees", J. of the ACM, 27 (1980), pp. 287-312.
[17] R. L. Graham, E. L. Lawler, J. K. Lenstra and A. H. G. Rinnooy Kan, "Optimization and approximation in deterministic sequencing and scheduling: a survey", Ann. Discr. Math., 5 (1979), pp. 287-326.
[18] T. C. Hu, "Parallel sequencing and assembly line problems", Opns Res., 9 (1961), pp. 841-848.
[19] E. L. Lawler, "Preemptive scheduling of precedence-constrained jobs on parallel machines", in Deterministic and Stochastic Scheduling, Dempster et al. (eds), Reidel, 1982, pp. 101-123.
[20] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy Kan and D. B. Shmoys, "Sequencing and scheduling: algorithms and complexity", Report BS-R8909, CWI, Amsterdam, The Netherlands, 1989.
[21] Z. Liu and E. Sanlaville, "Preemptive scheduling with variable profile, precedence constraints and due dates", Rapport de Recherche MASI No. 92.5, Univ. P. et M. Curie, Paris, 1992; to appear in D.A.M.
[22] Z. Liu and E. Sanlaville, "Stochastic scheduling with variable profile and precedence constraints", Rapport de Recherche INRIA No. 1525, 1991; submitted for publication.
[23] R. McNaughton, "Scheduling with deadlines and loss functions", Man. Sci., 6 (1959), pp. 1-12.
[24] R. R. Muntz and E. G. Coffman, Jr., "Optimal preemptive scheduling on two-processor systems", IEEE Trans. on Comp., C-18 (1969), pp. 1014-1020.
[25] R. R. Muntz and E. G. Coffman, Jr., "Preemptive scheduling of real-time tasks on multiprocessor systems", J. of the ACM, 17 (1970), pp. 325-338.
[26] C. H. Papadimitriou and J. N. Tsitsiklis, "On stochastic scheduling with in-tree precedence constraints", SIAM J. Comput., 16 (1987), pp. 1-6.
[27] C. H. Papadimitriou and M. Yannakakis, "Scheduling interval-ordered tasks", Report 11.78, Center for Research in Computing Technology, Harvard University, Cambridge, MA, 1978.
[28] M. Pinedo and G. Weiss, "Scheduling jobs with exponentially distributed processing times and intree precedence constraints on two parallel machines", Opns Res., 33 (1985), pp. 1381-1388.
[29] E. Sanlaville, Conception et analyse d'algorithmes de liste en ordonnancement préemptif, Thèse de l'Université P. et M. Curie, Paris, 1992.
[30] G. Schmidt, "Scheduling independent tasks with deadlines on semi-identical processors", J. Opnl Res. Soc., 39 (1988), pp. 271-277.
[31] J. D. Ullman, "NP-complete scheduling problems", J. Comp. Sys. Sci., 10 (1975), pp. 384-393.

Zhen LIU: INRIA, Centre Sophia Antipolis, 2004 Route des Lucioles, B.P. 93, 06902 Sophia Antipolis, FRANCE

Eric SANLAVILLE: Laboratoire LITP, Université Pierre et Marie Curie, 4, place Jussieu, 75252 Paris Cedex 05, FRANCE. The work of this author was partially supported by INRIA while visiting the GERAD laboratory, Montreal, Canada.