Operating Systems, Spring 2004
Processes
What is a process?
- An instance of an application execution
- The process is the most basic abstraction provided by the OS
- An isolated computation context for each application
- Computation context = CPU state + address space + environment
CPU state
- Register contents
- Process Status Word (PSW): execution mode, last operation outcome, interrupt level
- Instruction Register (IR): the current instruction being executed
- Program Counter (PC)
- Stack Pointer (SP)
- General-purpose registers
Address space
- Text: program code
- Data: predefined data (known at compile time)
- Heap: dynamically allocated data
- Stack: supports function calls
Environment
- External entities: terminal, open files
- Communication channels: local, and with other machines
Process control block (PCB)
The kernel keeps a PCB for each process, recording:
- state, priority, user, accounting information
- memory, open files, storage
- saved CPU registers: PSW, IR, PC, SP, general-purpose registers
[Figure: the PCB resides in the kernel and points at the process's text, data, heap, and stack in user space, plus the saved CPU state.]
Process States
- States: created, ready, running, blocked, terminated
- Transitions: schedule (ready -> running), preempt (running -> ready), wait for event (running -> blocked), event done (blocked -> ready); a running process may also terminate
UNIX Process States
- States: created, ready (user), ready (kernel), running (user), running (kernel), blocked, zombie, terminated
- Transitions: schedule, preempt, system call or interrupt (running user -> running kernel), return (running kernel -> running user), wait for event (-> blocked), event done (-> ready)
Multiprocessing mechanisms
- Context switch
- Create a process and dispatch it (in Unix: fork()/exec())
- End a process (in Unix: exit() and wait())
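The fork()/exec()/wait() pattern can be sketched with Python's POSIX bindings. This is a Unix-only illustration; the helper name spawn is ours, not a standard API:

```python
import os

# Minimal sketch of the Unix process lifecycle: the parent fork()s a
# child, the child exec()s a new program image, and the parent wait()s
# for the child to terminate.
def spawn(program, args):
    pid = os.fork()                         # child sees 0, parent sees child's pid
    if pid == 0:
        os.execvp(program, [program] + args)  # replace the child's image
        os._exit(127)                       # reached only if execvp fails
    _, status = os.waitpid(pid, 0)          # parent blocks until the child exits
    return os.WEXITSTATUS(status)           # child's exit status

print(spawn("true", []))                    # 'true' exits with status 0
```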
Threads
- Thread: an execution within a process
- A multithreaded process consists of many co-existing executions
- Separate per thread: CPU state, stack
- Shared: everything else (text, data, heap, environment)
Thread Support
- Operating-system (kernel) threads
  - Advantage: thread scheduling is done by the OS, giving better CPU utilization
  - Disadvantage: overhead if there are many threads
- User-level threads
  - Advantage: low overhead
  - Disadvantage: not known to the OS; e.g., a thread blocked on I/O blocks all the other threads within the same process
Multiprogramming
- Multiprogramming: having multiple jobs (processes) in the system
  - Interleaved (time-sliced) on a single CPU
  - Concurrently executed on multiple CPUs
  - Or both of the above
- Why multiprogramming? Responsiveness, utilization, concurrency
- Why not? Overhead, complexity
Responsiveness
[Figure: two timelines for Jobs 1-3. Run-to-completion: Job 1 runs from its arrival until it terminates, then Job 2, then Job 3. Interleaved: the three jobs share the CPU, so Jobs 2 and 3 terminate much earlier while Job 1 finishes only slightly later.]
Workload matters!
- Would CPU sharing improve responsiveness if all jobs took the same time?
  - No, it makes things worse!
- For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes:
  - CV = standard deviation / mean
  - CV < 1 => CPU sharing does not help
  - CV > 1 => CPU sharing does help
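A quick numeric check of the CV rule, with made-up runtimes:

```python
import statistics

def cv(runtimes):
    # Coefficient of variation: (population) standard deviation / mean
    return statistics.pstdev(runtimes) / statistics.mean(runtimes)

# Identical jobs: CV = 0 < 1, so time slicing only adds overhead.
print(cv([10, 10, 10, 10]))      # 0.0
# One very long job among short ones: CV > 1, so time slicing helps.
print(cv([1, 1, 1, 1, 96]))      # 1.9
```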
Real workloads
- Exponential distribution: CV = 1; heavy-tailed distributions: CV > 1
- The distribution of job runtimes in real systems is heavy-tailed
  - CV ranges from 3 to 70
- Conclusion: CPU sharing does improve responsiveness
- CPU sharing is approximated by time slicing: interleaved execution
Utilization
[Figure: CPU and disk timelines. Running Job 1 alone, the CPU sits idle during each of Job 1's three I/O operations, and the disk sits idle during Job 1's CPU bursts. Running Job 2 alongside lets the CPU execute Job 2 while Job 1 waits for I/O, filling most of the idle slots on both devices.]
Workload matters!
- Does it really matter? Yes, of course:
  - If all jobs are CPU-bound (or all are I/O-bound), multiprogramming does not help to improve utilization
- A suitable job mix is created by long-term scheduling
  - Jobs are classified on-line as CPU-bound or I/O-bound according to each job's history
Concurrency
- Concurrent programming: several processes interact to work on the same problem
  - e.g., ls -l | more
- Simultaneous execution of related applications
  - Word + Excel + PowerPoint
- Background execution
  - Polling/receiving e-mail while working on something else
The cost of multiprogramming
- Switching overhead
  - Saving/restoring context wastes CPU cycles
- Degraded performance due to resource contention
  - e.g., cache misses
- Complexity: synchronization, concurrency control, deadlock avoidance/prevention
Short-Term Scheduling
[Figure: the process state diagram (created, ready, running, blocked, terminated) again; short-term scheduling governs the schedule and preempt transitions between ready and running.]
Short-Term Scheduling
- A process's execution pattern consists of alternating CPU cycles and I/O waits
  - CPU burst - I/O burst - CPU burst - I/O burst ...
- Processes ready for execution are held in a ready (run) queue
- The STS schedules a process from the ready queue once the CPU becomes idle
Metrics: Response time
[Figure: a job arrives/becomes ready to run, waits in the queue for Twait, runs for Trun, then terminates or blocks waiting for I/O.]
- Tresp = Twait + Trun
- Response time (turnaround time) is the average of the jobs' Tresp
Other Metrics
- Wait time: average of Twait
  - This parameter is under the system's control
- Response ratio, or slowdown: slowdown = Tresp / Trun
- Throughput and utilization depend on the user-imposed workload => less useful
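The metrics above can be computed directly from (Twait, Trun) pairs; the numbers below are made up for illustration:

```python
def response_metrics(samples):
    """samples: (Twait, Trun) pairs for completed jobs."""
    n = len(samples)
    mean_resp = sum(w + r for w, r in samples) / n            # Tresp = Twait + Trun
    mean_wait = sum(w for w, _ in samples) / n
    mean_slowdown = sum((w + r) / r for w, r in samples) / n  # Tresp / Trun
    return mean_resp, mean_wait, mean_slowdown

# A short job stuck behind a long one has a large slowdown:
print(response_metrics([(0, 10), (10, 1)]))   # (10.5, 5.0, 6.0)
```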
Note about running time (Trun)
- Trun is the length of the CPU burst
  - When a process requests I/O it is still "running" in the system, but it is no longer part of the STS workload
- STS view: I/O-bound processes are short processes
  - Although a text-editor session may last for hours!
Off-line vs. On-line scheduling
- Off-line algorithms
  - Take all the information about all the jobs to schedule as their input
  - Output the scheduling sequence
  - Preemption is never needed
- On-line algorithms
  - Jobs arrive at unpredictable times
  - Very little information is available in advance
  - Preemption compensates for the lack of knowledge
First-Come-First-Served (FCFS)
- Schedules the jobs in the order in which they arrive
  - Off-line FCFS schedules in the order the jobs appear in the input
- Runs each job to completion
- Both on-line and off-line
- Simple; a base case for analysis
- Poor response time
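A minimal FCFS sketch, with a hypothetical workload, showing why its response time is poor when a long job arrives first:

```python
def fcfs(jobs):
    """FCFS: jobs = (arrival, runtime) pairs in arrival order.
    Each job runs to completion; returns the completion times."""
    t = 0.0
    completions = []
    for arrival, runtime in jobs:
        t = max(t, arrival) + runtime   # start when the CPU is free and the job is present
        completions.append(t)
    return completions

# A long first job delays every later short job:
print(fcfs([(0, 100), (1, 1), (2, 1)]))   # [100.0, 101.0, 102.0]
```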
Shortest Job First (SJF)
- Best response time: running the short job before the long one shortens the short job's wait far more than it lengthens the long job's
[Figure: two orderings of a short and a long job; putting the short job first reduces the average response time.]
- Inherently off-line
  - All the jobs and their run-times must be available in advance
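A sketch of off-line SJF on a made-up two-job workload, compared with running the jobs in the given order:

```python
def sjf_mean_response(runtimes):
    """Off-line SJF: all jobs available at t = 0; run shortest-first."""
    t = 0.0
    total = 0.0
    for runtime in sorted(runtimes):   # shortest job first
        t += runtime
        total += t                     # response time = completion time here
    return total / len(runtimes)

def given_order_mean_response(runtimes):   # for comparison
    t = 0.0
    total = 0.0
    for runtime in runtimes:
        t += runtime
        total += t
    return total / len(runtimes)

print(sjf_mean_response([10, 1]))          # 6.0: short before long
print(given_order_mean_response([10, 1]))  # 10.5: long before short
```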
Preemption
- Preemption is the action of stopping a running job and scheduling another in its place
- Context switch: switching from one job to another
Using preemption
- Enables on-line short-term scheduling algorithms
  - Adapts to changing conditions, e.g., new jobs arriving
  - Compensates for lack of knowledge, e.g., of job run-times
- Periodic preemption keeps the system in control
- Improves fairness
  - Gives I/O-bound processes a chance to run
Shortest Remaining Time first (SRT)
- Job run-times are known; job arrival times are not
- When a new job arrives:
  - If its run-time is shorter than the remaining time of the currently executing job, preempt the current job and schedule the newly arrived one
  - Otherwise, continue the current job and insert the new job into a queue sorted by remaining time
- When a job terminates, select the job at the head of the queue for execution
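The rule above can be sketched as an event-driven simulation over a made-up workload; re-checking the shortest remaining time at every arrival is equivalent to the preempt-if-shorter test:

```python
import heapq

def srt(jobs):
    """SRT: jobs = (arrival, runtime) pairs. Returns completion times
    by job index. A newcomer preempts the running job whenever its
    runtime is shorter than the running job's remaining time."""
    order = sorted(enumerate(jobs), key=lambda x: x[1][0])  # by arrival
    ready = []                        # min-heap of (remaining, index)
    t, i, done = 0.0, 0, {}
    while ready or i < len(order):
        if not ready:                 # CPU idle: jump to the next arrival
            t = max(t, order[i][1][0])
        while i < len(order) and order[i][1][0] <= t:
            idx, (_, runtime) = order[i]
            heapq.heappush(ready, (runtime, idx))
            i += 1
        remaining, idx = heapq.heappop(ready)
        # Run until completion or until the next arrival forces a re-check.
        horizon = order[i][1][0] if i < len(order) else t + remaining
        step = min(remaining, horizon - t)
        t += step
        if step < remaining:
            heapq.heappush(ready, (remaining - step, idx))  # preemption point
        else:
            done[idx] = t
    return [done[k] for k in range(len(order))]

print(srt([(0, 8), (1, 4), (2, 9), (3, 5)]))   # [17.0, 5.0, 26.0, 10.0]
```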
Round Robin (RR)
- Neither job arrival times nor job run-times are known
- Run each job cyclically for a short time quantum
  - Approximates CPU sharing
[Figure: Jobs 1-3 arrive at different times and share the CPU in quantum-sized slices (1, 2, 3, 1, 2, 3, ...); each job terminates shortly after its final slice.]
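A round-robin sketch over a made-up workload; the queue discipline (preempted jobs go to the back, behind jobs that arrived during the slice) is one common convention:

```python
from collections import deque

def round_robin(jobs, quantum):
    """RR: jobs = (arrival, runtime) pairs in arrival order. Each ready
    job runs for at most one quantum, then goes to the back of the
    queue. Returns completion times by job index."""
    queue = deque()
    t, i, done = 0.0, 0, {}
    while queue or i < len(jobs):
        if not queue:                             # CPU idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            queue.append((i, jobs[i][1]))         # (index, remaining time)
            i += 1
        idx, remaining = queue.popleft()
        step = min(quantum, remaining)
        t += step
        while i < len(jobs) and jobs[i][0] <= t:  # jobs that arrived meanwhile
            queue.append((i, jobs[i][1]))
            i += 1
        if remaining > step:
            queue.append((idx, remaining - step)) # preempted: back of the queue
        else:
            done[idx] = t
    return [done[k] for k in range(len(jobs))]

# With quantum 1, the short job finishes at t = 2 instead of waiting
# behind the long one until t = 4:
print(round_robin([(0, 3), (0, 1)], quantum=1))   # [4.0, 2.0]
```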
Responsiveness (revisited)
[Figure: the responsiveness timelines from earlier; the interleaved schedule that lets the short Jobs 2 and 3 terminate early is exactly what round-robin time slicing produces.]
Priority Scheduling
- RR is oblivious to a process's past
  - I/O-bound processes are treated the same as CPU-bound processes
- Solution: prioritize processes according to their past CPU usage
- Estimate the next CPU burst with an exponential average:

  E_{n+1} = α·T_n + (1 − α)·E_n,  0 ≤ α ≤ 1

  With α = 1/2 this expands to

  E_{n+1} = (1/2)·T_n + (1/4)·T_{n−1} + (1/8)·T_{n−2} + (1/16)·T_{n−3} + ...

- T_n is the duration of the n-th CPU burst; E_{n+1} is the estimate of the next CPU burst
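The exponential average is a one-line recurrence; the burst lengths and initial estimate below are made-up numbers:

```python
def burst_estimates(bursts, alpha=0.5, e0=10.0):
    """Exponential average E_{n+1} = alpha*T_n + (1 - alpha)*E_n.
    Returns the sequence of estimates, starting from the initial guess e0."""
    e = e0
    history = [e]
    for t_n in bursts:
        e = alpha * t_n + (1 - alpha) * e
        history.append(e)
    return history

# With alpha = 1/2, each older burst's weight halves: 1/2, 1/4, 1/8, ...
print(burst_estimates([6, 4, 6, 4]))   # [10.0, 8.0, 6.0, 6.0, 5.0]
```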
Multilevel feedback queues
[Figure: new jobs enter the top queue (quantum = 10); a job that exhausts its quantum drops to the next queue (quantum = 20, then quantum = 40), and finally to an FCFS queue; jobs leave the system from whichever level they terminate at.]
Multilevel feedback queues
- Priorities are implicit in this scheme
- Very flexible
- Starvation is possible
  - Short jobs keep arriving => long jobs get starved
- Solutions:
  - Let it be
  - Aging
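A sketch of the queue mechanics in the figure, under the simplifying assumption that all jobs arrive at t = 0:

```python
from collections import deque

def mlfq(runtimes, quanta=(10, 20, 40)):
    """MLFQ: new jobs enter the top queue (quantum 10); a job that
    exhausts its quantum drops one level (quantum 20, then 40); the
    bottom level is FCFS (run to completion). Returns completion
    times by job index."""
    levels = [deque() for _ in range(len(quanta) + 1)]   # last level = FCFS
    for idx, runtime in enumerate(runtimes):
        levels[0].append((idx, runtime))
    t, done = 0.0, {}
    while any(levels):
        lvl = next(k for k, q in enumerate(levels) if q) # highest non-empty queue
        idx, remaining = levels[lvl].popleft()
        step = remaining if lvl == len(quanta) else min(quanta[lvl], remaining)
        t += step
        if remaining > step:
            levels[lvl + 1].append((idx, remaining - step))  # demote one level
        else:
            done[idx] = t
    return [done[k] for k in range(len(runtimes))]

# The short job never leaves the top queue; the long job sinks to FCFS:
print(mlfq([5, 100]))   # [5.0, 105.0]
```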
Priority scheduling in UNIX
- Multilevel feedback queues
  - The same quantum at each queue
  - A queue per priority
- Priority is based on past CPU usage: pri = cpu_use + base + nice
- cpu_use is dynamically adjusted
  - Incremented on each clock interrupt: 100 times per second
  - Halved for all processes: once per second
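A hypothetical trace of this increment-and-halve scheme for one continuously CPU-bound process; the base and nice constants are illustrative, not the actual UNIX values:

```python
def unix_priority_trace(seconds=5, ticks_per_sec=100, base=50, nice=0):
    """Per-second trace of pri = cpu_use + base + nice for a CPU-bound
    process: cpu_use gains one unit per clock interrupt (100 per second)
    and is halved once a second (larger pri value = lower priority)."""
    cpu_use = 0
    trace = []
    for _ in range(seconds):
        cpu_use += ticks_per_sec    # one increment per tick while running
        cpu_use //= 2               # once-a-second decay
        trace.append(cpu_use + base + nice)
    return trace

# cpu_use converges toward ticks_per_sec, so the priority value levels off:
print(unix_priority_trace())   # [100, 125, 137, 143, 146]
```

The decay makes the priority depend mostly on recent CPU usage, so a process that stops running regains priority within a few seconds.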
Fair Share scheduling
- Given a set of processes with associated weights, a fair share scheduler should allocate CPU to each process in proportion to its respective weight
- Achieves pre-defined goals
  - Administrative considerations: paying for machine usage, importance of the project, personal importance, etc.
  - Quality-of-service, soft real-time: video, audio
Perfect Fairness
A fair share scheduling algorithm achieves perfect fairness in the time interval (t1, t2) if

  W_A(t1, t2) = (t2 − t1) · S_A / Σ_i S_i

where
- S_A is the proportional share of client A
- W_A(t1, t2) is the amount of CPU received by process A during the interval (t1, t2)
Wellness criterion for FSS
- An ideal fair share scheduling algorithm achieves perfect fairness for all time intervals
- The goal of a practical FSS algorithm is to produce CPU allocations as close as possible to those of a perfect FSS algorithm
Fair Share scheduling: algorithms
- Weighted Round Robin
  - Shares are not uniformly spread in time
- Lottery scheduling
  - Each process gets a number of lottery tickets proportional to its CPU allocation
  - The scheduler picks a ticket at random and runs the winning client
  - Only statistically fair, and high complexity
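A lottery-scheduling sketch with made-up shares, showing why fairness is only statistical: the 3:1 split emerges over many draws, not in any short interval:

```python
import random
from collections import Counter

def lottery(shares, quanta, seed=0):
    """Each client holds tickets proportional to its share; each quantum
    the scheduler draws a ticket uniformly at random and runs its holder.
    Returns the number of quanta each client received."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    clients = list(shares)
    tickets = [shares[c] for c in clients]
    wins = Counter(rng.choices(clients, weights=tickets, k=quanta))
    return {c: wins[c] for c in clients}

# With a 3:1 ticket ratio, client A gets roughly 75% of the quanta:
alloc = lottery({"A": 3, "B": 1}, quanta=10_000)
print(alloc)
```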
Fair Share scheduling: VTRR
- Virtual Time Round Robin (VTRR)
  - Order the ready queue by decreasing share (the highest share at the head)
  - Run Round Robin as usual
  - Once a process that has exhausted its share is encountered, go back to the head of the queue