COSC 3407: Operating Systems Lecture 3: Processes


Page 1: COSC 3407: Operating Systems Lecture 3: Processes

Concurrency

Uniprogramming: one process at a time (e.g., MS/DOS, Macintosh)
– Easier for the operating system builder: get rid of the problem of concurrency by defining it away.
– For personal computers, the idea was that one user does only one thing at a time.
– Harder for the user: you can't work while waiting for the printer.

Multiprogramming: more than one process at a time (UNIX, OS/2, Windows NT/2000/XP)
– Note: this is often called multitasking, but multitasking sometimes has other meanings, so that term is not used in this course.

Page 2: Concurrency

The basic problem of concurrency involves resources:
– Hardware: a single CPU, a single DRAM, single I/O devices.
– Multiprogramming API: users think they have the machine to themselves.

The OS has to coordinate all the activity on a machine:
– multiple users, I/O, interrupts, etc.
– How can it keep all these things straight?

Answer: use the virtual machine abstraction.
– Decompose a hard problem into simpler ones.
– Abstract the notion of an executing program.
– Then worry about multiplexing these abstract machines.

Page 3: Recall: What happens during execution?

[Figure: a fetch/execute engine with registers R0…R31, F0…F30, and a PC, connected to memory holding data and instructions at addresses 0 through 2^32 – 1; the PC advances through the instructions.]

Execution sequence:
– Fetch the instruction at PC
– Decode
– Execute (possibly using registers)
– Write results to registers
– PC = NextInstruction(PC)
– Repeat

Page 4: How can we give the illusion of multiple processors?

[Figure: several virtual CPUs (CPU1, CPU2, CPU3) sharing one memory; a timeline shows CPU1, CPU2, CPU3, CPU1, CPU2 running one after another over time.]

How do we provide the illusion of multiple processors?
– Multiplex in time!

Each virtual "CPU" needs a structure to hold:
– Program Counter (PC)
– Registers (integer, floating point, others…?)

How do we switch from one CPU to the next?
– Save the PC and registers in the current state block.
– Load the PC and registers from the new state block.

What triggers the switch?
– A timer, a voluntary yield, I/O, other things.

Page 5: Properties of this simple multiprogramming technique

All virtual CPUs share the same non-CPU resources:
– The I/O devices are the same.
– The memory is the same.

Consequences of sharing:
– Each thread can access the data of every other thread (good for sharing, bad for protection).
– Threads can share instructions (good for sharing, bad for protection).
– Can threads overwrite OS functions?

This (unprotected) model is common in:
– Embedded applications
– Windows 3.1/Macintosh (switch only with yield)
– Windows 95/ME? (switch with both yield and timer)

Page 6: Modern Technique: SMT/Hyperthreading

A hardware technique:
– Exploits natural properties of superscalar processors to provide the illusion of multiple processors.
– Gives higher utilization of processor resources.

Can schedule each thread as if it were a separate CPU:
– However, not a linear speedup!
– If you have a multiprocessor, you should schedule each processor first.

The original technique was called "Simultaneous Multithreading":
– See http://www.cs.washington.edu/research/smt/
– Alpha, SPARC, Pentium 4 ("Hyperthreading"), Power 5

Page 7: How to protect threads from one another?

We need three important things:
– Protection of memory
» Every task does not have access to all memory.
– Protection of I/O devices
» Every task does not have access to every device.
– Preemptive switching from task to task
» Use of a timer.
» It must not be possible to disable the timer from user code.

Page 8: Recall: Program's Address Space

[Figure: a program's address space, drawn as a vertical column of addresses.]

An address space is the set of accessible addresses plus the state associated with them:
– For a 32-bit processor there are 2^32 ≈ 4 billion addresses.

What happens when you read or write to an address?
– Perhaps nothing.
– Perhaps it acts like regular memory.
– Perhaps it ignores writes.
– Perhaps it causes an I/O operation (memory-mapped I/O).
– Perhaps it causes an exception (fault).

Page 9: Providing the Illusion of a Separate Address Space: Load a New Translation Map on Switch

Page 10: Traditional Unix Process

Process: the operating system abstraction to represent what is needed to run a single program.
– Often called a "heavyweight process."
– Formally, a process is a sequential stream of execution in its own address space.

Two parts to a (traditional Unix) process:
1. Sequential program execution: the code in the process is executed as a single, sequential stream of execution (no concurrency inside a process).
– This is known as a thread of control.
2. State information: everything specific to a particular execution of a program:
– Encapsulates protection: address space
– CPU registers
– Main memory (contents of the address space)
– I/O state (in UNIX this is represented by file descriptors)

Page 11: Process != Program

There is more to a process than just a program:
– The program is just part of the process state.
– We both run Firefox: same program, different processes.

There is less to a process than a program:
– A program can invoke more than one process to get the job done.
– cc starts up cpp, cc1, cc2, as, and ld (each of which is a program itself).

[Figure: a PROGRAM is just the code (Main(){…}, A(){…}); a PROCESS adds the execution state: a stack with frames for Main and A, a heap, and the registers and PC.]

Page 12: Process states

As a process executes, it changes state:
– new: the process is being created.
– running: executing on the CPU.
– waiting: waiting for some event (I/O completion, a lock).
– ready: waiting to be assigned to the CPU.
– terminated: the process has finished execution.

Page 13: Process Control Block (PCB)

Information associated with each process:

Process state
– new, ready, running, waiting, halted

Program counter
– the address of the next instruction to be executed for this process

CPU registers
– accumulators, index registers, stack pointers, general-purpose registers, condition-code registers

CPU scheduling information
– process priority, pointers to scheduling queues, etc.

Page 14: Process Control Block (PCB) (Cont.)

Memory-management information
– values of the base and limit registers, page tables, segment tables

Accounting information
– amount of CPU and real time used, time limits, account numbers, job or process numbers

I/O status information
– the list of I/O devices allocated to this process, a list of open files

Page 15: How do we multiplex processes?

The current state of a process is held in its process control block (PCB):
– This is a "snapshot" of the execution and protection environment.
– Only one PCB is active at a time.

Give out CPU time to different processes (scheduling):
– Only one process is "running" at a time.
– Give more time to important processes.

Give pieces of resources to different processes (protection):
– Controlled access to non-CPU resources.
– Sample mechanisms:
» Memory mapping: give each process its own address space.
» Kernel/user duality: arbitrary multiplexing of I/O through system calls.

Page 16: CPU Switch From Process to Process

[Figure: the CPU switching from one process to another by saving state into one PCB and reloading state from another.]

This is also called a "context switch."
The kernel code executed during the switch is overhead:
– Overhead sets a minimum practical switching time.
– There is less overhead with SMT/hyperthreading, but… contention for resources instead.

Page 17: State Queues

PCBs are organized into queues according to their state:
– ready, waiting, etc.

An example of such an arrangement is given below.

Page 18: Representation of Process Scheduling

PCBs move from queue to queue as they change state:
– Decisions about which order to remove PCBs from queues are scheduling decisions.
– Many algorithms are possible (covered a few weeks from now).

Page 19: Context Switch

Very machine dependent. We must:
– save the state of the old process and load the saved state for the new process.

Pure overhead! Speed depends on:
– memory speed,
– the number of registers which must be copied,
– the existence of special instructions, such as a single instruction to load or store all registers.

Sometimes there is special hardware to speed it up: the SUN UltraSPARC provides multiple sets of registers.
– A context switch just changes the pointer to the current register set.

Page 20: Process Creation

A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.

A process needs certain resources (CPU time, memory, files, I/O devices) to accomplish its task.

Resource sharing options:
– Parent and children share all resources.
– Children share a subset of the parent's resources.
– Parent and child share no resources.

Also, initialization data may be passed to the child process, e.g. the name of a file to be displayed on the terminal.

Page 21: Process Creation (Cont.)

Execution options:
– Parent and children execute concurrently.
– Parent waits until the children terminate.

Address space options:
– The child process is a duplicate of the parent process.
– The child process has a new program loaded into it.

UNIX examples:
– The fork system call creates a new process.
– Each process has a process identifier (PID), which is a unique integer.
– A new process consists of a copy of the address space of the original process.

Page 22: Example: Unix

Both processes continue execution at the instruction after the fork.

The return code for the child process is zero, whereas the (nonzero) process identifier of the child is returned to the parent.

The execlp system call is used after a fork to replace the process's memory space with a new program.

The execlp system call loads a binary file into memory and starts its execution.

The parent can create more children, or it can issue a wait system call to move itself off the ready queue until the termination of the child.

Page 23: C program forking a separate process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {            /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(-1);
    } else if (pid == 0) {    /* child process */
        execlp("/bin/ls", "ls", NULL);
    } else {                  /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
        exit(0);
    }
    return 0;
}

Page 24: Multiple Processes Contributing to a Task

[Figure: three cooperating processes, Proc 1, Proc 2, Proc 3.]

High creation/memory overhead.
(Relatively) high context-switch overhead.
Need a communication mechanism:
– Separate address spaces isolate processes.
– Shared-memory mapping
» Accomplished by mapping addresses to common DRAM.
» Read and write through memory.
– Message passing
» send() and receive() messages.
» Works across a network.

Page 25: Shared Memory Communication

[Figure: two programs' virtual address spaces (each with its own code, data, heap, and stack) both mapping one shared region of physical memory.]

Communication occurs by "simply" reading/writing to a shared address page:
– Really low-overhead communication.
– Introduces complex synchronization problems.

Page 26: Inter-process Communication (IPC)

IPC is a mechanism for processes to communicate and to synchronize their actions.

Message system: processes communicate with each other without resorting to shared variables.

An IPC facility provides two operations:
– send(message): message size fixed or variable
– receive(message)

If P and Q wish to communicate, they need to:
– establish a communication link between them,
– exchange messages via send/receive.

Implementation of the communication link:
– physical (e.g., shared memory, hardware bus)
– logical (e.g., logical properties)