CS 550 Operating Systems, Spring 2018 (huilu/slides/7-os-cpu-scheduling-2.pdf): Multilevel Feedback Queue
TRANSCRIPT
Priority Scheduling
● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
● Again, two types:
● Preemptive
● Non-preemptive
● SJF is a priority scheduling algorithm where the priority is the predicted next CPU burst time
● Problem ≡ Starvation ➔ low-priority processes may never execute
● Solution ≡ Aging ➔ as time progresses, increase the priority of a lower-priority process that is not receiving CPU time.
Example with four priority classes
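Aging can be sketched in a few lines of code. This is a minimal illustration, not from the slides: the process names, burst lengths, and the aging step of 1 per scheduling round are made-up values, and the scheduler is non-preemptive for simplicity.

```python
def pick_next(ready):
    """Return the ready process with the highest priority (smallest number)."""
    return min(ready, key=lambda p: p["priority"])

def age(ready, step=1, floor=0):
    """Aging: each scheduling round, raise the priority of every waiting job."""
    for p in ready:
        p["priority"] = max(floor, p["priority"] - step)

# Hypothetical workload: without aging, B (priority 7) could starve
# behind a stream of high-priority arrivals.
ready = [
    {"name": "A", "priority": 1, "burst": 5},
    {"name": "B", "priority": 7, "burst": 2},
]

order = []
while ready:
    job = pick_next(ready)
    order.append(job["name"])
    ready.remove(job)   # non-preemptive: run to completion
    age(ready)          # everyone still waiting gets "older"

print(order)  # ['A', 'B']
```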
Multilevel Feedback Queue (MLFQ)
• Basic setup
– The MLFQ has a number of distinct queues, each assigned a different priority level.
– At any given time, a job that is ready to run is on a single queue.
– MLFQ uses priorities to decide which job should run at a given time: a job with higher priority (i.e., a job on a higher queue) is chosen to run.
– For jobs in the same queue (same priority), RR is used.
• First two basic rules for MLFQ
– Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
– Rule 2: If Priority(A) = Priority(B), A & B run in RR
• The key to MLFQ scheduling is to properly set priorities.
• MLFQ varies the priority of a job based on its job type
– For a job that uses the CPU intensively for long periods of time, what should MLFQ do with its priority?
• Should reduce its priority
– For a job that repeatedly relinquishes the CPU and waits for I/O, what should MLFQ do with its priority?
• Should keep its priority high
– MLFQ uses history of a job to predict its future behavior
Multilevel Feedback Queue (MLFQ)
Changing priority
• Rules on changing job priority
– Rule 3: When a job enters the system, it is set with the highest priority (and placed in the topmost queue)
– Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue)
– Rule 4b: If a job gives up CPU before the time slice is up, it stays at the same priority level
Example
• Two jobs: A (CPU-intensive), and B (interactive)
• B arrives at T=100 ms and will run for 20 ms.
• MLFQ approximates SJF for job B in this case:
– After A has run for some time, it drops to the lowest-priority queue.
– When B arrives, the scheduler doesn’t know whether it is a short- or long-running job, so it first assumes it is a short job and places it in the highest-priority queue.
– If it actually is a short job, it will run quickly and complete; if it is not, it will slowly move down the queues, and thus soon prove itself to be CPU-intensive.
[Figure: scheduling of jobs A and B across the queues; time slice length = 10 ms]
Interactive Jobs
• Rule 4b: If a job gives up CPU before the time slice is up, it stays at the same priority level.
– If an interactive job, for example, is doing a lot of I/O, it will relinquish the CPU before its time slice is complete; in such case, we don’t wish to penalize the job and thus simply keep it at the same level.
Multilevel Feedback Queue (MLFQ)
• MLFQ rules thus far
– Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
– Rule 2: If Priority(A) = Priority(B), A & B run in RR.
– Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
– Rule 4: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue); If a job gives up CPU before the time slice is up, it stays at the same priority level.
Any problems with the current version of MLFQ?
Multilevel Feedback Queue (MLFQ)
• Problems with the current version of MLFQ
– Starvation
• If there are too many interactive jobs, long-running jobs will never receive any CPU time
– Gaming the scheduler
• Doing something sneaky to trick the scheduler into giving a job more than its fair share of the resources
– What if a CPU-bound job turns I/O-bound?
• There is no mechanism to move the job up to queues with higher priorities!
Boosting priority
• Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
• What problems does this rule solve?
– Starvation
– When a CPU-bound job turns I/O-bound
How do we deal with those gaming processes?
CPU time accounting
• The scheduler keeps track of how much of a time slice a job used at a given level; once a job has used its time allotment, it is demoted to the next priority queue.
Old Rule 4: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue); if a job gives up the CPU before the time slice is up, it stays at the same priority level.
Revised Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
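The accounting change can be sketched as follows. This is an illustration, not the slides' implementation: the allotment of 30 ticks and the 9-tick bursts are made-up numbers chosen to show a job that yields early yet is still demoted.

```python
ALLOTMENT = 30  # illustrative: total CPU time a job may use per level

class Job:
    def __init__(self, name):
        self.name = name
        self.level = 0   # 0 = topmost (highest-priority) queue
        self.used = 0    # CPU time consumed at the current level

    def charge(self, ticks):
        """Charge the job for CPU time; demote once the allotment is spent."""
        self.used += ticks
        if self.used >= ALLOTMENT:
            self.level += 1   # move down one queue
            self.used = 0

# A sneaky job runs 9 ticks of a 10-tick slice, then yields. Under the
# old Rule 4b it would stay on top forever; with accounting it is
# demoted once its cumulative usage crosses the allotment.
gamer = Job("gamer")
for _ in range(4):
    gamer.charge(9)   # 9, 18, 27, 36 -> demoted on the 4th burst

print(gamer.level)  # 1
```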
Tuning MLFQ
• How to parameterize MLFQ?
– How many queues?
– How big should time slice be per queue?
– How often should priority be boosted?
• No easy answers; it takes experience with workloads
– Varying time-slice length across different queues (e.g., high-priority queues are usually given short time slices, and low-priority queues long time slices)
– Some schedulers reserve the highest priority levels for operating system work.
– Some systems also allow user advice to help set priorities (e.g., by using the command-line utility nice you can increase or decrease the priority of a job (somewhat) and thus increase or decrease its chances of running at any given time).
MLFQ summary
• Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t)
• Rule 2: If Priority(A) = Priority(B), A & B run in RR
• Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue)
• Rule 4: Demotion: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue)
• Rule 5: Promotion: After some time period S, move all the jobs in the system to the topmost queue
• Instead of demanding a priori knowledge of a job, MLFQ observes the execution of a job and prioritizes it accordingly
• MLFQ manages to achieve the best of both worlds: it can deliver excellent overall performance (similar to SJF) for interactive jobs, and is relatively fair and makes progress for long-running CPU-intensive workloads
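The five rules above can be combined into a small discrete-time simulation. This is a sketch, not a definitive implementation: the queue count, per-level time slices and allotments, and the boost period S are arbitrary illustrative parameters, as are the two jobs.

```python
from collections import deque

NUM_QUEUES = 3
SLICE      = [10, 20, 40]   # time slice per level (top queue shortest)
ALLOTMENT  = [20, 40, 80]   # Rule 4: total CPU time allowed per level
BOOST_S    = 100            # Rule 5: boost period S

class Job:
    def __init__(self, name, work):
        self.name, self.work = name, work
        self.level, self.used = 0, 0

def mlfq(jobs, total_time):
    queues = [deque() for _ in range(NUM_QUEUES)]
    for j in jobs:                       # Rule 3: new jobs enter on top
        queues[0].append(j)
    trace, t, next_boost = [], 0, BOOST_S
    while t < total_time and any(queues):
        if t >= next_boost:              # Rule 5: move everyone up
            for q in queues[1:]:
                while q:
                    j = q.popleft()
                    j.level = j.used = 0
                    queues[0].append(j)
            next_boost += BOOST_S
        # Rules 1-2: front job of the highest non-empty queue (RR within it)
        lvl = next(i for i, q in enumerate(queues) if q)
        job = queues[lvl].popleft()
        run = min(SLICE[lvl], job.work)
        trace.append((t, job.name, lvl))
        t += run
        job.work -= run
        job.used += run
        if job.work > 0:
            if job.used >= ALLOTMENT[lvl]:              # Rule 4: demote
                job.level = min(lvl + 1, NUM_QUEUES - 1)
                job.used = 0
            queues[job.level].append(job)
    return trace

# A is CPU-intensive (200 units), B is short (15 units)
trace = mlfq([Job("A", 200), Job("B", 15)], 120)
print(trace[0])  # (0, 'A', 0)
```

In the resulting trace, B finishes quickly near the top while A sinks through the queues, matching the SJF-like behavior described above.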
Fair Scheduling
● The notion of “fairness” does not necessarily mean equal CPU share for all processes.
● Say you have N processes
● Each process Pi is assigned a weight wi
● Then the CPU time will be divided among the processes in proportion to their weights.
● Let's say some process does not use its assigned CPU time.
● Then the “spare” CPU time is divided among the remaining ready processes according to the ratio of their weights.
● Examples: lottery scheduling, stride scheduling, etc.
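As one concrete example, lottery scheduling realizes the weights probabilistically: each process holds tickets in proportion to its weight, and each time slice goes to the holder of a randomly drawn ticket. A minimal sketch (the 75:25 ticket split and trial count are made-up values):

```python
import random

def lottery_pick(tickets):
    """tickets: dict of process -> ticket count. Return the draw's winner."""
    total = sum(tickets.values())
    winner = random.randrange(total)     # draw a ticket number
    for proc, count in tickets.items():
        if winner < count:
            return proc
        winner -= count

random.seed(0)                           # deterministic for the demo
tickets = {"A": 75, "B": 25}             # A should get ~3x the CPU of B
wins = {"A": 0, "B": 0}
for _ in range(10_000):                  # simulate 10,000 time slices
    wins[lottery_pick(tickets)] += 1

print(wins["A"] > wins["B"])  # True: shares track the 75:25 weights
```

Over many slices the observed shares converge to the ticket ratio, which is why lottery scheduling is called proportional-share.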
Work-conserving versus non-work-conserving
● Work-conserving scheduler
● CPU will not remain idle if there are processes in the ready queue
● Non-work-conserving scheduler
● Under some conditions, scheduler may decide to “waste” CPU time even though there may be processes sitting in the ready queue
● E.g., sometimes a real-time process cannot be started before a given time (its release time).
Real-Time Scheduling
● When each task needs to be completed before a given deadline
● Hard real-time systems
● Required to complete a critical task before its deadline
● For example, in a flight control system, or nuclear reactor
● Soft real-time systems
● Meeting deadlines desirable, but not essential
● For example, video or audio
● Schedulability criteria
● Given m periodic events, where event i occurs with period P_i and requires C_i computation time each period
● Then the load can be handled only if
    sum_i (C_i / P_i) <= 1
● The above condition is necessary but NOT sufficient.
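The utilization test can be written directly from the formula. The task sets below are made-up examples; remember the check is necessary but not sufficient, so a True result does not guarantee schedulability.

```python
def may_be_schedulable(tasks):
    """tasks: list of (C_i, P_i) pairs -> True iff sum(C_i / P_i) <= 1."""
    return sum(c / p for c, p in tasks) <= 1

# Hypothetical task sets (computation time, period):
print(may_be_schedulable([(1, 4), (2, 5), (1, 10)]))  # 0.25 + 0.40 + 0.10 = 0.75 -> True
print(may_be_schedulable([(3, 4), (2, 5)]))           # 0.75 + 0.40 = 1.15 -> False
```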
So far, schedulers
● FCFS
● Shortest-Job First
● Round-robin
● Priority-based
● MLFQ
● Fair Scheduling
● Real-Time Scheduling
Multiprocessor scheduling
• So far what we discussed focused on single-processor scheduling.
• How can we extend those ideas to work on multiple CPUs?
• What new problems must we overcome?
Single-Queue Multiprocessor Scheduling (SQMS)
• Put all jobs that need to be scheduled into a single, global ready queue.
• Advantages and disadvantages?
– Advantage: simplicity
• Easy to adapt the existing single-processor policies to work on more than one CPU.
– Disadvantages: locking mechanisms need to be applied to the scheduler code
• Performance degradation.
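The locking cost can be illustrated with a toy SQMS: every simulated CPU must acquire one global lock to pull the next job, so dequeues are serialized no matter how many CPUs there are. The job and thread counts here are arbitrary.

```python
import threading
from collections import deque

ready_queue = deque(range(1000))   # one global ready queue of job ids
queue_lock = threading.Lock()      # the lock every CPU contends on
done = []
done_lock = threading.Lock()

def cpu_worker():
    while True:
        with queue_lock:           # serialization point: one CPU at a time
            if not ready_queue:
                return
            job = ready_queue.popleft()
        # "run" the job outside the lock
        with done_lock:
            done.append(job)

cpus = [threading.Thread(target=cpu_worker) for _ in range(4)]
for c in cpus:
    c.start()
for c in cpus:
    c.join()

print(len(done))  # 1000: every job ran exactly once, but dequeues were serialized
```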
Performance degradation can also be caused by the loss of cache affinity: with a single queue, a job may bounce from CPU to CPU, losing the cached state it built up.
Multi-Queue Multiprocessor Scheduling (MQMS)
• Multiple queues, one per CPU.
• Each queue follows a particular scheduling policy, e.g., round robin.
• When a job enters the system, it is placed on exactly one scheduling queue.
• Then it is scheduled essentially independently on its particular CPU.
Multi-Queue Multiprocessor Scheduling (MQMS)
• How to solve the load imbalance problem?
– Process migration
– Basic approach: work stealing
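Work stealing can be sketched as follows: an idle CPU finds the most loaded peer queue and migrates a job from its tail. The queue contents and the "leave the victim one job" policy are illustrative choices, not a real kernel's algorithm.

```python
from collections import deque

# Hypothetical state: CPU0's queue is empty, CPU1's has four jobs.
queues = [deque(), deque(["J1", "J2", "J3", "J4"])]

def steal(queues, idle_cpu):
    """If idle_cpu has no work, migrate one job from the longest peer queue."""
    if queues[idle_cpu]:
        return None                       # not idle: nothing to do
    victim = max(
        (i for i in range(len(queues)) if i != idle_cpu),
        key=lambda i: len(queues[i]),
    )
    if len(queues[victim]) > 1:           # leave the victim something to run
        job = queues[victim].pop()        # steal from the tail
        queues[idle_cpu].append(job)
        return job
    return None

print(steal(queues, 0))  # 'J4' migrates to CPU0's queue
```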
xv6 scheduling
• One global queue across all CPUs
• Local scheduling algorithm: RR
• scheduler() in proc.c
Linux scheduling overview
• O(1) scheduler
– Multiple queues
– Priority-based (similar to MLFQ)
• Completely Fair Scheduler (CFS)
– Multiple queues
– Deterministic proportional-share approach
• BF Scheduler (BFS)
– Single queue
– Proportional-share, based on a more complicated scheme known as Earliest Eligible Virtual Deadline First (EEVDF)
Linux scheduler implementations
• Linux 2.4: global queue, O(N)
– Simple
– Poor performance on multiprocessors/multicores
– Poor performance when N is large
• Linux 2.5 and early versions of Linux 2.6: O(1) scheduler, per-CPU run queue
– Solves performance problems in the old scheduler
– Complex, error-prone logic to boost interactivity
– No guarantee of fairness
• Linux 2.6: Completely Fair Scheduler (CFS)
– Fair
– Naturally boosts interactivity