TRANSCRIPT
Minimizing Makespan and Preemption Costs on a System of Uniform Machines
Hadas Shachnai, Bell Labs and The Technion IIT
Tami Tamir, Univ. of Washington
Gerhard J. Woeginger, Univ. of Twente
2
Generalized Multiprocessor Scheduling
We need to schedule n jobs on m uniform machines.
• Jj can be preempted at most aj times, aj ≥ 0.
• The machine Mi has rate ui ≥ 1.
• The job Jj has processing time tj.
Non-preemptive: aj = 0.
Preemptive: aj = ∞.
The goal: minimum makespan.
Generalizes the classic preemptive and non-preemptive scheduling problems.
3
What is Known:
The non-preemptive scheduling problem (∀j, aj = 0) is strongly NP-hard (admits a PTAS [HS88, ES99]).
The preemptive scheduling problem (∀j, aj = ∞) can be solved optimally, using at most 2(m-1) preemptions [GS78] (at most m-1 preemptions on identical machines).
Note: similar solvability/approximability results hold in the identical and uniform machine environments.
Question: How many preemptions (in total, or per job) suffice in order to guarantee an optimal polynomial-time algorithm?
4
We Investigate this Hardness Gap:
1. The GMS problem (generalized multiprocessor scheduling): minimize the makespan, subject to a job-wise or total bound on the number of preemptions throughout a feasible schedule.
2. The MPS problem (minimum preemptions scheduling): the only feasible schedules are preemptive schedules with the smallest possible makespan. The goal is to find a feasible schedule that minimizes the overall number of preemptions.
5
Our Results
• Hardness of GMS for ∀j, aj = 1, and for any total preemption bound smaller than 2(m-1).
• For MPS, matching upper and lower bounds on the number of preemptions required by any optimal schedule. Here Jj can be processed simultaneously by ρj machines (i.e., ∀j, ρj ≥ 1).
• A PTAS for GMS instances with a fixed number of machines. The scheme has linear running time, and can be applied to instances with release dates, unrelated machines, and arbitrary preemption costs.
• PTASs for an arbitrary number of machines and a bound on the total number of preemptions.
(A distinction between the uniform and identical machine environments.)
6
The Power of Unlimited Preemption
For a given instance I, with aj ≥ 0 and m uniform machines, let w denote the minimum makespan of a schedule with unrestricted preemptions.
Theorem 1: It is easy to compute w (a function of the tj's and the ui's).
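The slide does not spell the formula out; the classic expression for the optimal preemptive makespan on uniform machines (due to Horvath, Lam and Sethi, and Gonzalez and Sahni, both cited later in the deck) can be sketched as follows. The function name is illustrative, not from the paper.

```python
# Classic lower-bound formula for the optimal preemptive makespan w on
# uniform machines: w is the maximum over the bounds induced by the k
# longest jobs on the k fastest machines (k = 1..m-1), and the bound
# given by the total work over the total rate.

def preemptive_makespan(t, u):
    """t: job processing times, u: machine rates (any order)."""
    t = sorted(t, reverse=True)   # longest jobs first
    u = sorted(u, reverse=True)   # fastest machines first
    m = len(u)
    # The k longest jobs cannot all finish before their total size
    # divided by the total rate of the k fastest machines.
    bounds = [sum(t[:k]) / sum(u[:k]) for k in range(1, m) if k <= len(t)]
    # All jobs together need at least total work / total rate.
    bounds.append(sum(t) / sum(u))
    return max(bounds)

# The example used later in the deck: rates 2, 3, 5 and lengths 4, 3, 3.
print(preemptive_makespan([4, 3, 3], [2, 3, 5]))  # -> 1.0
```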
Consider the LPT algorithm, which assigns the jobs of I to the machines in order of non-increasing processing times; this yields a feasible non-preemptive schedule of I.
Theorem 2: Any LPT schedule yields a 2-approximation of w. This holds also for parallelizable jobs (i.e., ∀j, ρj ≥ 1).
7
The Power of Unlimited Preemption
This factor of 2 is tight already for identical jobs and identical machines. Consider an instance with m machines and m+1 jobs, where ∀j, aj ≥ 0, tj = t, and ∀i, ui = 1.
Preemptive: makespan = t(1 + 1/m).
Non-preemptive: makespan = 2t.
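A minimal numerical check of this tight instance; `lpt_makespan` is an illustrative helper (greedy list scheduling on identical machines), not code from the paper.

```python
# Tight instance: m identical machines (rate 1) and m+1 jobs of length t.
# Any non-preemptive schedule puts two jobs on some machine (makespan 2t),
# while the preemptive optimum is t(1 + 1/m); the ratio tends to 2 as m grows.

def lpt_makespan(t_list, m):
    """LPT on m identical machines: longest job first, to least-loaded machine."""
    loads = [0.0] * m
    for t in sorted(t_list, reverse=True):
        i = loads.index(min(loads))   # least-loaded machine
        loads[i] += t
    return max(loads)

m, t = 4, 1.0
jobs = [t] * (m + 1)
print(lpt_makespan(jobs, m))  # -> 2.0, i.e. 2t
print(t * (1 + 1 / m))        # -> 1.25, the preemptive optimum t(1 + 1/m)
```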
8
A PTAS for GMS: Overview
'Guess' the minimum makespan, Topt, and the maximum load on any machine, P, to within a factor of (1+ε). Partition the jobs into the sets 'big', 'small' and 'tiny' (as in [SW-98]).
• Find a feasible preemptive schedule of the big jobs in the interval [0, Topt(1+ε)].
• The small jobs have non-negligible processing times; however, the overall processing time of the set is small relative to an optimal solution. Schedule the small jobs non-preemptively.
• Add the tiny jobs greedily, with no preemptions.
9
A PTAS for GMS (Cont.)
[Figure: an example schedule on four machines with rates u1, ..., u4 over [0, Topt]: the big jobs are placed first (machine completion times C1, ..., C4 after scheduling B), then the small and tiny jobs are appended; sample preemption bounds a1 = 3, a2 = 6.]
10
Analysis of the Scheme
Let α = α(ε, m) ∈ (0,1] be a parameter.
The big jobs can be scheduled optimally with preemptions in [0, Topt(1+ε)] in polynomial time, by taking B = {Jj | tj > α·P} and using a fixed number of scheduling points and a fixed number of segment sizes.
Lemma: There exists α = α(ε, m) ∈ (0,1] such that, taking S = {Jj | ε·α·P < tj ≤ α·P}, we get Σ_{Jj∈S} tj ≤ ε·P.
We may add all the small jobs at the end of the schedule on the fastest machine.
11
Analysis of the Scheme (Cont.)
The tiny jobs contribute at most ε·P processing units to the maximally loaded machine, since the number of tiny jobs that extend the schedule is bounded by the number of 'holes' on any machine. We take T = {Jj | tj ≤ ε·α·P/m}.
Theorem: The above scheme yields a (1+ε)-approximation to the minimum makespan; the running time is linear in n, with a constant that depends on α and ε.
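The big/small/tiny partition can be sketched as below. The concrete threshold values `alpha` and `eps` are illustrative placeholders (the scheme actually chooses alpha = alpha(eps, m) so that the small jobs have total size at most eps·P); the function name is hypothetical.

```python
# Illustrative partition of jobs relative to the guessed maximum load P.
# Thresholds alpha*P (big vs small) and eps*alpha*P/m (small vs tiny) are
# placeholder choices in the spirit of the slides, not the paper's exact values.

def partition_jobs(t_list, P, m, alpha=0.25, eps=0.1):
    big   = [t for t in t_list if t > alpha * P]
    tiny  = [t for t in t_list if t <= eps * alpha * P / m]
    small = [t for t in t_list if eps * alpha * P / m < t <= alpha * P]
    return big, small, tiny

big, small, tiny = partition_jobs([4.0, 3.0, 3.0, 0.2, 0.001], P=10.0, m=3)
print(big, small, tiny)  # -> [4.0, 3.0, 3.0] [0.2] [0.001]
```

Big jobs are scheduled exactly (preemptively), small jobs non-preemptively, and tiny jobs greedily, as in the overview slide.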
12
Minimizing the Number of Preemptions
We count the number of segments, Ns(I), generated for an instance I.
Ns(I) = #preemptions(I) + n.
1. Lower bound: ∀m, b, there exists an instance I in which ∀j, ρj ≤ b, and in any optimal schedule of I, Ns(I) ≥ m + n + ⌈m/b⌉ - 2.
2. Upper bound: We present an algorithm that produces, for any m, b, and any instance I in which ∀j, ρj ≤ b, an optimal schedule with Ns(I) ≤ m + n + ⌈m/b⌉ - 2.
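The matching bound can be evaluated directly; reading m/b as a ceiling (an assumption, since the transcript drops the brackets), the b = 1 case recovers the classic Gonzalez-Sahni count.

```python
from math import ceil

# Matching segment bound from the slide: Ns(I) = n + m + ceil(m/b) - 2,
# where b bounds the parallelism ρj of each job.

def segment_bound(n, m, b):
    return n + m + ceil(m / b) - 2

# For b = 1 this is n + 2(m-1): the n mandatory segments (one per job)
# plus the classic 2(m-1) preemptions of [GS-78].
print(segment_bound(n=10, m=4, b=1))  # -> 16  (= 10 + 2*(4-1))
```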
13
Upper Bound Proof (Algorithm)
Assume that ∀j, ρj = 1.
Step 1: calculate w (the optimal makespan).
Step 2: schedule the jobs one after the other; each job is scheduled on one DPS.
A DPS (Disjoint Processor System): a union of disjoint idle-segments of r machines with non-decreasing rates, such that the union of the idle segments is the interval [0,w].
[Figure: over the interval [0, w], a single idle machine forms a DPS (r = 1); two machines whose idle segments together tile [0, w] form a DPS (r = 2).]
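The DPS invariant (disjoint idle segments whose union is exactly [0, w]) can be checked mechanically. The representation and helper name below are illustrative, not taken from the paper.

```python
# A minimal sketch of a DPS (Disjoint Processor System): one idle segment
# per participating machine, and the segments must tile [0, w] exactly,
# with no gaps and no overlaps.

def is_dps(segments, w, tol=1e-9):
    """segments: list of (rate, start, end) idle segments, one per machine."""
    segs = sorted(segments, key=lambda s: s[1])
    if abs(segs[0][1]) > tol or abs(segs[-1][2] - w) > tol:
        return False
    # consecutive segments must meet exactly: no gap, no overlap
    return all(abs(a[2] - b[1]) <= tol for a, b in zip(segs, segs[1:]))

# A single idle machine is a DPS (r = 1):
print(is_dps([(2.0, 0.0, 1.0)], w=1.0))                    # -> True
# Two machines whose idle parts tile [0, w] form a DPS (r = 2):
print(is_dps([(3.0, 0.0, 0.5), (5.0, 0.5, 1.0)], w=1.0))   # -> True
```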
14
Upper Bound Proof (Algorithm)
An example: 3 machines with rates 2, 3, 5, and 3 jobs with lengths 4, 3, 3. In this case, w = Σj tj / Σi ui = 1.
Initially, each machine forms a DPS.
The longest job (t = 4) is scheduled for 1/2 time unit on each of the machines M2, M3.
The remainders of M2 and M3 form a new DPS, M2,3.
[Figure: the available DPSs before and after this step: first M1, M2, M3 separately; then M1 and the combined M2,3.]
15
Upper Bound Proof (Algorithm)
The next job (t = 3) is scheduled on a DPS consisting of M2,3 and M1. We need to solve one equation in order to find the time 2/3 (3 = 3·1/2 + 5·1/6 + 2·1/3).
We are left with one DPS, M1,3, consisting of the remaining idle segments of M1 and M3. The last job is scheduled on this DPS (3 = 2·2/3 + 5·1/3).
[Figure: the available DPSs at each step, ending with the combined M1,3.]
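The accounting in this example is easy to verify: each job's length equals the sum of rate x duration over the machine segments it receives. The helper below is a hypothetical check, not the paper's algorithm.

```python
# Verify the example's arithmetic (rates 2, 3, 5; lengths 4, 3, 3; w = 1):
# a job's processed amount is the sum of rate * duration over its segments.

def work(pieces):
    """pieces: list of (rate, duration) segments assigned to one job."""
    return sum(r * d for r, d in pieces)

print(work([(3, 1/2), (5, 1/2)]))            # job 1: 4 = 3*1/2 + 5*1/2
print(work([(3, 1/2), (5, 1/6), (2, 1/3)]))  # job 2: 3 = 3*1/2 + 5*1/6 + 2*1/3
print(work([(2, 2/3), (5, 1/3)]))            # job 3: 3 = 2*2/3 + 5*1/3
```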
16
Algorithm Analysis
• Using amortized analysis, we show that the total number of segments allocated by the algorithm is at most n + 2(m-1).
• Since each job is scheduled on a single DPS, the requirement ∀j, ρj = 1 is preserved.
• For arbitrary values of the ρj's, the algorithm schedules each job on at most ρj consecutive DPSs, and by similar arguments we get Ns(I) ≤ n + m + ⌈m/b⌉ - 2.
• For the special case of ∀j, ρj = 1, our algorithm and its analysis are simpler than the known algorithm [GS-78].
17
Related Work
• Preemptive scheduling (∀j, aj = ∞): Horvath, Lam and Sethi, 1977.
• Non-preemptive scheduling (∀j, aj = 0):
LPT is 2-optimal (Gonzalez, Ibarra and Sahni, 1977)
PTASs (Hochbaum and Shmoys, 1987; Hochbaum and Shmoys, 1988; Epstein and Sgall, 1999)
• MPS for non-parallelizable jobs
(Gonzalez and Sahni, 1978)
• A wide literature on scheduling parallelizable jobs
18
What is Known:
The non-preemptive scheduling problem (∀j, aj = 0) is strongly NP-hard (admits a PTAS [Hochbaum and Shmoys, 1987, 1988; Epstein and Sgall, 1999]).
The preemptive scheduling problem (∀j, aj = ∞) can be solved optimally [Horvath, Lam and Sethi, 1977], using at most 2(m-1) preemptions [Gonzalez and Sahni, 1978] (at most m-1 preemptions on identical machines [McNaughton, 1959]).
Note: similar solvability/approximability results hold in the identical and uniform machine environments.
Question: How many preemptions (in total, or per job) suffice in order to guarantee an optimal polynomial-time algorithm?
19
The Power of Unlimited Preemption
This factor of 2 is tight already for identical jobs and identical machines. Consider an instance with m machines and m+1 jobs, where ∀j, aj ≥ 0, tj = t, and ∀i, ui = 1.
Preemptive: Cmax = t(1 + 1/m).
Non-preemptive: Cmax = 2t.
• This result extends the result known for non-preemptive scheduling [Gonzalez, Ibarra and Sahni, 1977].