COP 4600 Operating Systems Spring 2011
Dan C. Marinescu
Office: HEC 304
Office hours: Tu-Th 5:00-6:00 PM
Last time: Midterm solutions
Today: Virtualization for the three abstractions
- Threads
- Virtual memory
- Bounded buffer
The kernel of an operating system
Threads: state, thread manager, thread state, kernel and application threads
Next time: Processor switching
Lecture 15 – Thursday, March 17, 2011
Virtualization – relating physical and virtual objects
Virtualization simulates the interface to a physical object by:
1. Multiplexing – creating multiple virtual objects from one instance of a physical object.
2. Aggregation – creating one virtual object from multiple physical objects.
3. Emulation – constructing a virtual object from a different type of physical object. Emulation in software is slow.
Method                     Physical resource                  Virtual resource
Multiplexing               processor                          thread
                           real memory                        virtual memory
                           communication channel              virtual circuit
                           processor                          server (e.g., a Web server)
Aggregation                disk                               RAID
                           core                               multi-core processor
Emulation                  disk                               RAM disk
                           system (e.g., Macintosh)           virtual machine (e.g., Virtual PC)
Multiplexing + emulation   real memory + disk                 virtual memory with paging
                           communication channel + processor  TCP protocol
Virtualization of the three abstractions
We analyze virtualization for the three abstractions:
1. Interpreter – threads
2. Communication link/channel – bounded buffer
3. Storage – virtual memory
(1) Virtualization of the interpreter – threads
A process/thread is a virtual processor. Multiplexing, or processor sharing, is possible because:
- there is a significant discrepancy between processor bandwidth and the bandwidth of memory and of I/O devices;
- threads spend a significant percentage of their lifetime waiting for external events.
This is called time-sharing, processor multiplexing, multiprogramming, or multitasking.
Processes versus threads: both represent a module in execution; a process may consist of multiple threads; a thread is a light-weight process, with less overhead to create.
Tight coupling between interpreter and storage
We need a memory enforcement mechanism to prevent a thread running the code of one module from overwriting the data of another module.
Address space – the range of memory addresses a thread is allowed to access.
There is a close relationship between a thread and an address space.
(2) Virtualization of storage - Virtual memory
Address space – the storage a thread is allowed to access.
Virtual address space – an address space of a standard size, regardless of the amount of physical memory; without it, the physical memory may be too small to fit an application, and each application would need to manage its own memory.
The size of the address space is a function of the number of bits in an address: 32-bit addresses allow an address space of 2^32 bytes (4 GB), while 64-bit addresses allow 2^64 bytes.
Virtual memory – a scheme that allows each thread to access only its own virtual address space (its collection of virtual addresses).
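The address-space arithmetic above can be checked directly; a minimal Python sketch (illustrative only, not part of the course code):

```python
# Number of distinct byte addresses reachable with a `bits`-wide address.
def address_space_bytes(bits):
    return 2 ** bits

print(address_space_bytes(32))           # 4294967296 bytes
print(address_space_bytes(32) // 2**30)  # 4 (GB)
print(address_space_bytes(64))           # 18446744073709551616 bytes
```

Note that the 64-bit space is so large that practical processors map only a subset of it to physical memory.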
[Figure: memory map of a process]
(3) Virtualization of communication link - Bounded buffers
Bounded buffers implement the communication channel abstraction.
Bounded – the buffer has a finite size. We assume that all messages are of the same size and that each fits into one buffer cell; a bounded buffer accommodates at most N messages.
Threads use the SEND and RECEIVE primitives.
[Figure: a circular buffer with cells 0, 1, …, N-2, N-1; RECEIVE reads from the cell pointed to by out, SEND writes to the cell pointed to by in]
shared structure buffer
    message instance message[N]
    integer in  initially 0
    integer out initially 0

procedure SEND (buffer reference p, message instance msg)
    while p.in - p.out = N do nothing      /* buffer full: wait */
    p.message[p.in modulo N] <- msg        /* insert message into the next free cell */
    p.in <- p.in + 1                       /* advance pointer to next free cell */

procedure RECEIVE (buffer reference p)
    while p.in = p.out do nothing          /* buffer empty: wait for a message */
    msg <- p.message[p.out modulo N]       /* copy message from the oldest cell */
    p.out <- p.out + 1                     /* advance pointer to next message */
    return msg
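The SEND/RECEIVE pseudocode can be turned into a runnable sketch. The Python class below is illustrative only: it follows the slides' in/out counters for a single producer and a single consumer and keeps the pseudocode's busy-wait loops (a real kernel would block the waiting thread instead). The names BoundedBuffer, send, and receive are my own; `inp` stands in for the slides' `in`, which is a reserved word in Python.

```python
import threading

class BoundedBuffer:
    """Single-producer/single-consumer bounded buffer with N cells,
    mirroring the slides' in/out counters and busy-wait loops."""
    def __init__(self, n):
        self.n = n
        self.message = [None] * n
        self.inp = 0   # next free cell (the slides' `in`)
        self.out = 0   # next message to read

    def send(self, msg):
        while self.inp - self.out == self.n:   # buffer full: wait
            pass
        self.message[self.inp % self.n] = msg  # insert message into a free cell
        self.inp += 1                          # advance pointer to next free cell

    def receive(self):
        while self.inp == self.out:            # buffer empty: wait for a message
            pass
        msg = self.message[self.out % self.n]  # copy message from the oldest cell
        self.out += 1                          # advance pointer to next message
        return msg

buf = BoundedBuffer(4)
received = []
consumer = threading.Thread(
    target=lambda: received.extend(buf.receive() for _ in range(8)))
consumer.start()
for i in range(8):        # producer fills the buffer; spins when it is full
    buf.send(i)
consumer.join()
print(received)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Because there is exactly one producer and one consumer, the two counters need no lock here; any other combination would require serialization of the updates.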
Basic concepts
Principle of least astonishment – study and understand simple phenomena or facts before moving to complex ones. For example, concurrency – an application requiring multiple threads that run at the same time – is tricky; understand sequential processing first.
Examine a simple operating system interface to the three abstractions. At the same time we need concepts that can be extended; recall the extension of the UNIX file system to NFS!
Create a framework that allows us to deal with more complex phenomena. Example – how to deal with concurrency:
- Serialization
- Critical sections – code that must be executed by a single process/thread to completion
- Locks – mechanisms to ensure serialization
Interrupts – external events that require suspension of the current thread, identification of the cause of the interrupt, and a reaction to the interrupt.
State – a critical concept that allows us to interrupt the execution of a thread and restart it at a later point in time.
Event – a change of the state of an interpreter (thread, processor).
Side effects of concurrency
Race condition – an error that occurs when a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
Race conditions depend on the exact timing of events and thus are not reproducible: a slight variation of the timing could either remove a race condition or create one. Such errors are very hard to debug.
Lock
Lock – a mechanism to guarantee that a program works correctly when multiple threads execute concurrently:
- a multi-step operation protected by a lock behaves like a single operation;
- it can be used to implement before-or-after atomicity;
- it is a shared variable acting as a flag (a traffic light) to coordinate access to a shared variable;
- it works only if all threads follow the rule: check the lock before accessing a shared variable.
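The "check the lock before accessing a shared variable" rule can be illustrated with Python's threading.Lock, used here as a stand-in for the course's lock primitives; the example is a sketch, not the kernel's implementation:

```python
import threading

counter = 0                 # shared variable updated by several threads
lock = threading.Lock()     # the flag coordinating access to `counter`

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # check/acquire the lock before touching shared state
            counter += 1    # the read-modify-write now behaves like one operation

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - every increment ran under the lock, so none is lost
```

If any one thread skipped the lock, two threads could read the same old value of `counter` and one increment would be lost, which is exactly the race condition described above.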
The kernel implements the three abstractions
The kernel is responsible for the management of all system resources, including the processor. It performs operations on behalf of users in response to system calls; in other words, it extends the instruction set of the processor with a number of operations.
Two modes of operation of a computing system: kernel/privileged mode and user mode.
Multiple functions: scheduling and thread management, event handling, memory management, communication management.
Thread and VM management – a virtual computer
The kernel supports thread and virtual memory management.
Thread management:
- creation and destruction of threads
- allocation of the processor to a ready-to-run thread
- handling of interrupts
- scheduling – deciding which of the ready-to-run threads should be allocated the processor
Virtual memory management maps the virtual address space of a thread to physical memory. Each module runs in its own address space; if one module runs multiple threads, all of them share one address space.
Thread + virtual memory = a virtual computer for each module.
Threads and the Thread Manager
Thread – a virtual processor that multiplexes a physical processor; a module in execution (a module may have several threads). The sequence of operations:
1. load the module's text;
2. create a thread and launch the execution of the module in that thread.
Scheduler – the system component that chooses the thread to run next.
Thread manager – implements the thread abstraction.
Interrupts are processed by the interrupt handler, which interacts with the thread manager.
Exceptions – interrupts caused by the running thread, processed by exception handlers.
Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.
The state of a thread; kernel versus application threads
Thread state:
- Thread Id – unique identifier of a thread
- Program Counter (PC) – the reference to the next computational step
- Stack Pointer (SP)
- PMAR – Page Table Memory Address Register
- other registers
Application threads – threads running on behalf of users.
Kernel threads – threads running on behalf of the kernel.
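The per-thread state listed above can be sketched as a record. The field names follow the slides, but the dataclass itself and the sample values are purely illustrative; a real thread manager stores this in a kernel data structure and fills it from hardware registers on a switch:

```python
from dataclasses import dataclass

@dataclass
class ThreadState:
    """State saved when a thread is suspended and restored when
    the thread is dispatched to the processor again."""
    thread_id: int   # unique identifier of the thread
    pc: int          # Program Counter: the next computational step
    sp: int          # Stack Pointer
    pmar: int        # Page Table Memory Address Register
    registers: tuple # remaining general-purpose registers

# Hypothetical saved state for a suspended thread.
saved = ThreadState(thread_id=7, pc=0x4004F6, sp=0x7FFDDE30,
                    pmar=0x1000, registers=(0,) * 8)
print(saved.thread_id)  # 7
```

Note that PMAR identifies the thread's address space: restoring it on a switch is what gives each module its own virtual memory.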
Virtual versus real; the state of a processor
Virtual objects need physical support. In addition to threads, we should be concerned with the processors or cores running a thread. A system may have multiple processors, and each processor may have multiple cores.
The state of a processor or core:
- Processor Id / Core Id – unique identifier of a processor/core
- Program Counter (PC) – the reference to the next computational step
- Stack Pointer (SP)
- PMAR – Page Table Memory Address Register
- other registers
[Figure: the thread layer (Thread 1, Thread 2, …, Thread 7) mapped onto the processor layer (Processor A, Processor B); each thread and each processor holds an ID, SP, PC, and PMAR]
Virtual machines
Virtual machines:
- allow multiple operating systems to run on the same processor;
- allow a processor with a certain instruction set to emulate one with a different instruction set – e.g., the Java Virtual Machine runs under Windows, Linux, Mac OS, etc.
VM Monitor
[Figure: a virtual machine monitor runs in kernel mode; Windows and GNU/Linux guests run above it, with user-mode applications such as Internet Explorer, Firefox, Word, and X Windows]
Basic primitives for processor virtualization
Memory: CREATE/DELETE_ADDRESS_SPACE, ALLOCATE/FREE_BLOCK, MAP/UNMAP
Interpreter: ALLOCATE_THREAD, DESTROY_THREAD, EXIT_THREAD, YIELD, AWAIT, ADVANCE, TICKET, ACQUIRE, RELEASE
Communication channel: ALLOCATE/DEALLOCATE_BOUNDED_BUFFER, SEND/RECEIVE
Switching the processor from one thread to another
Thread creation: thread_id <- ALLOCATE_THREAD(starting_address_of_procedure, address_space_id)
YIELD – a function implemented by the kernel that allows a thread to wait for an event:
1. save the state of the current thread;
2. schedule another thread;
3. start running the new thread – dispatch the processor to the new thread.
YIELD cannot be implemented in a high-level language; it must be implemented in machine language. It can, however, be called from the environment of the thread, e.g., from C, C++, or Java. It allows several threads running on the same processor to wait for a lock, replacing the busy-wait we have used before.
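While a real YIELD must save and restore registers in machine language, its control flow can be mimicked at a high level with Python generators, where each `yield` hands the processor back to a tiny round-robin scheduler. This is a sketch of the save/schedule/dispatch cycle, not the kernel's implementation:

```python
from collections import deque

out = []  # execution trace, to make the interleaving visible

def thread(name, steps):
    """A 'thread' that does one unit of work, then YIELDs."""
    for i in range(steps):
        out.append(f"{name}:{i}")
        yield          # YIELD: save this thread's state, return to the scheduler

def scheduler(threads):
    """Round-robin: dispatch the next ready thread until all have exited."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)             # dispatch the processor to the thread
            ready.append(t)     # it yielded, so it is still ready to run
        except StopIteration:
            pass                # the thread ran to completion (EXIT_THREAD)

scheduler([thread("A", 2), thread("B", 2)])
print(out)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The generator's suspended frame plays the role of the saved thread state: `next(t)` restores it and resumes execution exactly where the last `yield` left off.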
[Figure: thread states and state transitions]
[Figure: the state of a thread and its associated virtual address space]