Virtual Memory Prof. Sin-Min Lee Department of Computer Science

Uploaded by amelia-matthews, 13-Dec-2015

TRANSCRIPT

Page 1: Virtual Memory Prof. Sin-Min Lee Department of Computer Science

Virtual Memory

Prof. Sin-Min Lee

Department of Computer Science

Page 2:
Page 3:
Page 4:
Page 5:

Fixed (Static) Partitions

• Attempt at multiprogramming using fixed partitions:
– one partition for each job
– size of each partition designated by reconfiguring the system
– partitions can't be too small or too large

• Critical to protect each job's memory space.

• Entire program stored contiguously in memory during entire execution.

• Internal fragmentation is a problem.

Page 6:

Simplified Fixed Partition Memory Table (Table 2.1)

Partition size   Memory address   Access   Partition status
100K             200K             Job 1    Busy
25K              300K             Job 4    Busy
25K              325K                      Free
50K              350K             Job 2    Busy

Page 7:

Original State               After Job Entry
Partition 1  100K            Partition 1  Job 1 (30K)
Partition 2  25K             Partition 2  Job 4 (25K)
Partition 3  25K             Partition 3  (free)
Partition 4  50K             Partition 4  Job 2 (50K)

Job List: J1 30K, J2 50K, J3 30K, J4 25K

Main memory use during fixed partition allocation of Table 2.1. Job 3 must wait: it needs 30K, but the only free partition is 25K.

Page 8:

Dynamic Partitions

• Available memory kept in contiguous blocks; jobs are given only as much memory as they request when loaded.

• Improves memory use over fixed partitions.

• Performance deteriorates as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).

Page 9:

Dynamic Partitioning of Main Memory & Fragmentation (Figure 2.2)

Page 10:

Dynamic Partition Allocation Schemes

• First-fit: allocate the first partition that is big enough.
– Keep free/busy lists organized by memory location (low-order to high-order).
– Faster in making the allocation.

• Best-fit: allocate the smallest partition that is big enough.
– Keep free/busy lists ordered by size (smallest to largest).
– Produces the smallest leftover partition.
– Makes best use of memory.
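The two placement policies above can be sketched over a list of free block sizes. This is an illustrative sketch, not code from the slides: the function names and list layout are assumptions, and the sizes are taken from the memory-request tables a few slides ahead.

```python
# Sketch of the two placement policies; names are illustrative assumptions.
# Block sizes are in K.

def first_fit(free_blocks, request):
    """Index of the first free block large enough, or None."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None

def best_fit(free_blocks, request):
    """Index of the smallest free block large enough, or None."""
    best = None
    for i, size in enumerate(free_blocks):
        if size >= request and (best is None or size < free_blocks[best]):
            best = i
    return best

free = [105, 5, 600, 20, 205, 4050, 230, 1000]
print(first_fit(free, 200))  # 2: the 600-block is the first one >= 200
print(best_fit(free, 200))   # 4: the 205-block is the smallest one >= 200
```

First-fit stops at the first hit (faster); best-fit scans the whole list but leaves the smallest sliver, matching the trade-off the slide describes.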

Page 11:

First-Fit Allocation Example (Table 2.2)

Job List: J1 10K, J2 20K, J3 30K*, J4 10K   (* Job 3 must wait)

Memory     Memory       Job      Job              Internal
location   block size   number   size    Status   fragmentation
10240      30K          J1       10K     Busy     20K
40960      15K          J4       10K     Busy     5K
56320      50K          J2       20K     Busy     30K
107520     20K                           Free

Total Available: 115K   Total Used: 40K

Page 12:

Best-Fit Allocation Example (Table 2.3)

Job List: J1 10K, J2 20K, J3 30K, J4 10K

Memory     Memory       Job      Job              Internal
location   block size   number   size    Status   fragmentation
40960      15K          J1       10K     Busy     5K
107520     20K          J2       20K     Busy     None
10240      30K          J3       30K     Busy     None
56320      50K          J4       10K     Busy     40K

Total Available: 115K   Total Used: 70K

Page 13:

First-Fit Memory Request

Before request            After request
Beginning   Memory        Beginning   Memory
address     block size    address     block size
4075        105           4075        105
5225        5             5225        5
6785        600           *6985       400
7560        20            7560        20
7600        205           7600        205
10250       4050          10250       4050
15125       230           15125       230
24500       1000          24500       1000

A request for 200 is satisfied from the first block large enough: the 600-block at 6785 (leaving 400 free at 6985).

Page 14:

Best-Fit Memory Request

Before request            After request
Beginning   Memory        Beginning   Memory
address     block size    address     block size
4075        105           4075        105
5225        5             5225        5
6785        600           6785        600
7560        20            7560        20
7600        205           *7800       5
10250       4050          10250       4050
15125       230           15125       230
24500       1000          24500       1000

The same request for 200 is satisfied from the smallest block large enough: the 205-block at 7600 (leaving only 5 free at 7800).

Page 15:

Best-Fit vs. First-Fit

First-Fit
• Increases memory use
• Memory allocation takes less time
• Increases internal fragmentation
• Discriminates against large jobs

Best-Fit
• More complex algorithm
• Searches entire table before allocating memory
• Results in a smaller "free" space (sliver)

Page 16:

Release of Memory Space: Deallocation

• Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to "free".

• Deallocation for dynamic partitions tries to combine free areas of memory whenever possible:
– Is the block adjacent to another free block?
– Is the block between 2 free blocks?
– Is the block isolated from other free blocks?
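The merging step can be sketched over a free list of (start, size) pairs kept sorted by address. This is a minimal sketch under assumed names and layout, using the Case 1 numbers from the following slide.

```python
# Sketch of dynamic-partition deallocation with coalescing. The free list is
# a list of (start, size) pairs; names and layout are illustrative.

def deallocate(free_list, start, size):
    """Insert the freed block and merge it with any adjacent free blocks."""
    free_list = sorted(free_list + [(start, size)])
    merged = [free_list[0]]
    for s, sz in free_list[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:            # adjacent: join into one block
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    return merged

# Case 1: freeing the 200-block at 7600 joins the 5-block at 7800,
# while the 20-block at 7560 (ending at 7580) stays separate.
print(deallocate([(7560, 20), (7800, 5)], 7600, 200))
# [(7560, 20), (7600, 205)]
```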

Page 17:

Case 1: Joining 2 Free Blocks

Before Deallocation                     After Deallocation
Beginning   Memory       Status         Beginning   Memory       Status
address     block size                  address     block size
4075        105          Free           4075        105          Free
5225        5            Free           5225        5            Free
6785        600          Free           6785        600          Free
7560        20           Free           7560        20           Free
(7600)      (200)        (Busy)¹        *7600       205          Free
*7800       5            Free           10250       4050         Free
10250       4050         Free           15125       230          Free
15125       230          Free           24500       1000         Free
24500       1000         Free

¹ The 200-block at 7600 is being released; it merges with the adjacent 5-block at 7800 into a single 205-block free at 7600.

Page 18:

Case 2: Joining 3 Free Blocks

Before Deallocation                     After Deallocation
Beginning   Memory       Status         Beginning   Memory       Status
address     block size                  address     block size
4075        105          Free           4075        105          Free
5225        5            Free           5225        5            Free
6785        600          Free           6785        600          Free
7560        20           Free           7560        245          Free
(7600)      (200)        (Busy)¹        * (null)
*7800       5            Free           10250       4050         Free
10250       4050         Free           15125       230          Free
15125       230          Free           24500       1000         Free
24500       1000         Free

¹ The released block at 7600 merges with the free blocks on both sides into a single free block at 7560, and the extra list entry becomes null.

Page 19:

Case 3: Deallocating an Isolated Block

Busy List Before                        Busy List After
Beginning   Memory       Status         Beginning   Memory       Status
address     block size                  address     block size
7805        1000         Busy           7805        1000         Busy
*8805       445          Busy           * (null entry)
9250        1000         Busy           9250        1000         Busy

The released block at 8805 is not adjacent to any free block, so it simply becomes a new entry on the free list.

Page 20:

Relocatable Dynamic Partitions

• Memory Manager relocates programs to gather all empty blocks and compact them to make 1 memory block.

• Memory compaction (garbage collection, defragmentation) performed by OS to reclaim fragmented sections of memory space.

• Memory Manager optimizes use of memory & improves throughput by compacting & relocating.

Page 21:

Compaction Steps

• Relocate every program in memory so they're contiguous.

• Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.

• Must leave alone all other values within the program (e.g., data values).
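The relocation step above can be sketched as follows: pack the jobs down to contiguous addresses and record each job's offset, which is what a relocation register would hold. The job names and sizes here are illustrative assumptions.

```python
# Sketch of compaction with relocation offsets (the value a relocation
# register would hold for each job). Job names and sizes are illustrative.

def compact(jobs):
    """jobs: list of (name, start, size). Returns (layout, offsets) with all
    jobs packed contiguously from address 0, preserving address order."""
    next_free, layout, offsets = 0, [], {}
    for name, start, size in sorted(jobs, key=lambda j: j[1]):
        layout.append((name, next_free, size))
        offsets[name] = next_free - start  # add this to every address in the job
        next_free += size
    return layout, offsets

layout, offsets = compact([("J1", 0, 30), ("J2", 50, 50), ("J4", 120, 25)])
print(layout)    # [('J1', 0, 30), ('J2', 30, 50), ('J4', 80, 25)]
print(offsets)   # {'J1': 0, 'J2': -20, 'J4': -40}
```

Every address-valued word inside J2 must have -20 added to it; plain data values are left untouched, which is why compaction needs to distinguish addresses from data.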

Page 22:

Memory Before & After Compaction (Figure 2.5)

Page 23:

Contents of relocation register & close-up of Job 4 memory area (a) before relocation & (b) after relocation and compaction (Figure 2.6)

Page 24:

Virtual Memory

Virtual Memory (VM) = the ability of the CPU and the operating system software to use the hard disk drive as additional RAM when needed (a safety net).

Good: no longer get "insufficient memory" errors.
Bad: performance is very slow when accessing VM.
Solution: more RAM.

Page 25:

Motivations for Virtual Memory

• Use physical DRAM as a cache for the disk
– Address space of a process can exceed physical memory size
– Sum of address spaces of multiple processes can exceed physical memory

• Simplify memory management
– Multiple processes resident in main memory; each process has its own address space
– Only "active" code and data is actually in memory; allocate more memory to a process as needed

• Provide protection
– One process can't interfere with another, because they operate in different address spaces
– User processes cannot access privileged information: different sections of address spaces have different permissions

Page 26:

Virtual Memory

Page 27:

Levels in Memory Hierarchy

CPU regs → Cache → Memory → Disk   (larger, slower, cheaper →)

            Register    Cache        Memory     Disk Memory
size:       32 B        32 KB-4 MB   128 MB     20 GB
speed:      1 ns        2 ns         50 ns      8 ms
$/Mbyte:                $100/MB      $1.00/MB   $0.006/MB
line size:  8 B         32 B         4 KB

Transfer units between levels: 8 B (registers-cache), 32 B (cache-memory), 4 KB (memory-disk); the cache manages the first boundary, virtual memory the second.

Page 28:

DRAM vs. SRAM as a "Cache"

• DRAM vs. disk is more extreme than SRAM vs. DRAM
– Access latencies:
• DRAM ~10X slower than SRAM
• Disk ~100,000X slower than DRAM
– Importance of exploiting spatial locality:
• First byte is ~100,000X slower than successive bytes on disk
– vs. ~4X improvement for page-mode vs. regular accesses to DRAM
– Bottom line: design decisions for DRAM caches are driven by the enormous cost of misses

Page 29:

Locating an Object in a "Cache" (cont.)

[Figure: a page table maps each object name to a location in the DRAM "cache" or to a position on disk.]

• DRAM cache
– Each allocated page of virtual memory has an entry in the page table
– Mapping from virtual pages to physical pages (from uncached form to cached form)
– A page table entry exists even if the page is not in memory; it then specifies the disk address
– OS retrieves the information

Page 30:

A System with Physical Memory Only

• Examples: most Cray machines, early PCs, nearly all embedded systems, etc.

Addresses generated by the CPU point directly to bytes in physical memory.

[Figure: CPU issues physical addresses 0 ... N-1 straight into memory.]

Page 31:

A System with Virtual Memory

• Examples: workstations, servers, modern PCs, etc.

Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table).

[Figure: CPU issues virtual addresses 0 ... N-1; the page table maps them to physical addresses 0 ... P-1 in memory or to locations on disk.]

Page 32:

Page Faults (Similar to "Cache Misses")

• What if an object is on disk rather than in memory?
– The page table entry indicates the virtual address is not in memory
– The OS exception handler is invoked to move data from disk into memory
• the current process suspends; others can resume
• OS has full control over placement, etc.

[Figure: before the fault, the page table entry points to disk; after the fault, it points to the page's new location in memory.]

Page 33:

Terminology

• Cache: a small, fast “buffer” that lies between the CPU and the Main Memory which holds the most recently accessed data.

• Virtual Memory: Program and data are assigned addresses independent of the amount of physical main memory storage actually available and the location from which the program will actually be executed.

• Hit ratio: probability that the next memory access is found in the cache.

• Miss ratio: 1.0 – hit ratio.

Page 34:

Importance of Hit Ratio

• Given:
– h = hit ratio
– Ta = average effective memory access time
– Tc = cache access time
– Tm = main memory access time

• Effective memory access time: Ta = h·Tc + (1 – h)·Tm

• Speedup due to the cache: Sc = Tm / Ta

• Example: main memory access time of 100 ns, cache access time of 10 ns, hit ratio of 0.9:
Ta = 0.9(10 ns) + (1 – 0.9)(100 ns) = 19 ns
Sc = 100 ns / 19 ns = 5.26

Same as above, only the hit ratio is now 0.95:
Ta = 0.95(10 ns) + (1 – 0.95)(100 ns) = 14.5 ns
Sc = 100 ns / 14.5 ns = 6.9
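The hit-ratio arithmetic above can be checked numerically; this is just the slide's two formulas expressed as functions (the names are my own).

```python
# The slide's formulas, checked numerically (times in nanoseconds).

def effective_access(h, tc, tm):
    """Ta = h*Tc + (1 - h)*Tm"""
    return h * tc + (1 - h) * tm

def speedup(h, tc, tm):
    """Sc = Tm / Ta"""
    return tm / effective_access(h, tc, tm)

print(round(effective_access(0.9, 10, 100), 2))   # 19.0
print(round(speedup(0.9, 10, 100), 2))            # 5.26
print(round(effective_access(0.95, 10, 100), 2))  # 14.5
print(round(speedup(0.95, 10, 100), 2))           # 6.9
```

Note how a 5-point improvement in hit ratio (0.9 to 0.95) lifts the speedup from 5.26 to 6.9: with a 10:1 speed gap, the misses dominate Ta.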

Page 35:

Cache vs. Virtual Memory

• Primary goal of cache: increase speed.

• Primary goal of virtual memory: increase space.

Page 36:

Cache Replacement Algorithms

• The replacement algorithm determines which block in the cache is removed to make room.

• 2 main policies used today:
– Least Recently Used (LRU): the block replaced is the one unused for the longest time.
– Random: the block replaced is completely random – a counter-intuitive approach.

Page 37:

LRU vs. Random

• As the cache size increases there are more blocks to choose from, so the choice is less critical: the probability of replacing the block that's needed next is relatively low.

• Below is a sample table comparing miss rates for both LRU and Random.

Cache Size   Miss Rate: LRU   Miss Rate: Random
16KB         4.4%             5.0%
64KB         1.4%             1.5%
256KB        1.1%             1.1%

Page 38:

Virtual Memory Replacement Algorithms

1) Optimal

2) First In First Out (FIFO)

3) Least Recently Used (LRU)


Page 39:

Optimal

• Replace the page which will not be used for the longest (future) period of time.

Reference string: 1 2 3 4 1 2 5 1 2 5 3 4 5

Faults are shown in boxes; hits are not shown. 7 page faults occur.

Page 40:

Optimal

• A theoretically "best" page replacement algorithm for a given fixed size of VM.
• Produces the lowest possible page fault rate.
• Impossible to implement, since it requires future knowledge of the reference string.
• Used only to gauge the performance of real algorithms against the theoretical best.

Page 41:

FIFO

• When a page fault occurs, replace the page that was brought in first.

Reference string: 1 2 3 4 1 2 5 1 2 5 3 4 5

Faults are shown in boxes; hits are not shown. 9 page faults occur.

Page 42:

FIFO

• Simplest page replacement algorithm.

• Problem: can exhibit inconsistent behavior known as Belady's anomaly.
– The number of faults can increase if the job is given more physical memory
– i.e., not predictable

Page 43:

Example of FIFO Inconsistency

• Same reference string as before, only with 4 frames instead of 3.

Reference string: 1 2 3 4 1 2 5 1 2 5 3 4 5

Faults are shown in boxes; hits are not shown. 10 page faults occur.

Page 44:

LRU

• Replace the page which has not been used for the longest period of time.

Reference string: 1 2 3 4 1 2 5 1 2 5 3 4 5

Faults are shown in boxes; hits only rearrange the stack. 9 page faults occur.

Page 45:

LRU

• More expensive to implement than FIFO, but more consistent.

• Does not exhibit Belady's anomaly.

• More overhead is needed, since the stack must be updated on each access.

Page 46:

Example of LRU Consistency

• Same reference string as before, only with 4 frames instead of 3.

Reference string: 1 2 3 4 1 2 5 1 2 5 3 4 5

Faults are shown in boxes; hits only rearrange the stack. 7 page faults occur.
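The fault counts on the preceding slides can be reproduced with a small simulator. This is a sketch under assumed names; OPT breaks ties between pages never referenced again arbitrarily, which does not change the fault count here.

```python
# Simulators for the three policies on the slides' reference string.
# With 3 frames: OPT 7 faults, FIFO 9, LRU 9; with 4 frames FIFO rises
# to 10 (Belady's anomaly) while LRU drops to 7.

def count_faults(refs, nframes, policy):
    frames, faults, queue = [], 0, []   # queue: load order (FIFO) or recency (LRU)
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":                      # a hit refreshes recency
                queue.remove(page); queue.append(page)
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page); queue.append(page)
            continue
        if policy in ("FIFO", "LRU"):
            victim = queue.pop(0)                    # oldest / least recently used
        else:                                        # OPT: farthest next use
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future))
            queue.remove(victim)
        frames[frames.index(victim)] = page
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 5, 3, 4, 5]
print(count_faults(refs, 3, "OPT"))    # 7
print(count_faults(refs, 3, "FIFO"))   # 9
print(count_faults(refs, 3, "LRU"))    # 9
print(count_faults(refs, 4, "FIFO"))   # 10  (more frames, more faults)
print(count_faults(refs, 4, "LRU"))    # 7
```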

Page 47:

Servicing a Page Fault

• Processor signals controller
– Read block of length P starting at disk address X and store starting at memory address Y

• Read occurs
– Direct Memory Access (DMA)
– Under control of I/O controller

• I/O controller signals completion
– Interrupt processor
– OS resumes suspended process

[Figure: (1) processor initiates block read at the I/O controller; (2) DMA transfer from disk to memory over the memory-I/O bus; (3) "read done" interrupt to the processor.]

Page 48:

Handling Page Faults

• A memory reference causes a fault – called a page fault
• A page fault can happen at any time and place
– Instruction fetch
– In the middle of an instruction's execution
• System must save all state
• Move the page from disk to memory
• Restart the faulting instruction
– Restore state
– Back up the PC – not easy to find out by how much; needs hardware help

Page 49:

Page Fault

• The first reference to a non-resident page traps to the OS (a page fault):
1. Hardware traps to kernel
2. General registers saved
3. OS determines which virtual page is needed
4. OS checks validity of address, seeks a page frame
5. If the selected frame is dirty, write it to disk
6. OS schedules the new page in from disk
7. Page tables updated
8. Faulting instruction backed up to where it began
9. Faulting process scheduled
10. Registers restored
11. Program continues

Page 50:

What to Page In

• Demand paging brings in only the faulting page
– To bring in additional pages, we would need to know the future

• Users don't really know the future, but some OSs have user-controlled pre-fetching

• In real systems:
– Load the initial page
– Start running
– Some systems (e.g., WinNT) will bring in additional neighboring pages (clustering)

Page 51:

VM Page Replacement

• If there is an unused page frame, use it.
• If no frames are available, select one (policy?) and:
– If it is dirty (M == 1), write it to disk
– Invalidate its PTE and TLB entry
– Load in the new page from disk
– Update the PTE and TLB entry!
– Restart the faulting instruction

• What is the cost of replacing a page?
• How does the OS select the page to be evicted?

Page 52:

Measuring Demand Paging Performance

• Page fault rate p: 0 ≤ p ≤ 1.0 (from no page faults to every reference is a fault)

• Page fault overhead = fault service overhead + read page + restart process overhead
– Dominated by the time to read the page in

• Effective Access Time = (1 – p) × (memory access) + p × (page fault overhead)

Page 53:

Performance Example

• Memory access time = 100 nanoseconds
• Page fault overhead = 25 milliseconds (msec)
• Page fault rate = 1/1000
• EAT = (1 – p) × 100 + p × (25 msec)
= (1 – p) × 100 + p × 25,000,000
= 100 + 24,999,900 × p
= 100 + 24,999,900 × 1/1000 ≈ 25 microseconds!

• Want less than 10% degradation:
110 > 100 + 24,999,900 × p
10 > 24,999,900 × p
p < 0.0000004, or 1 fault in 2,500,000 accesses!
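The arithmetic above is easy to check numerically; the function below is just the slide's EAT formula with assumed parameter names.

```python
# The slide's effective-access-time arithmetic, in nanoseconds
# (the 25 msec fault overhead is 25,000,000 ns).

def eat_ns(p, mem_ns=100, fault_ns=25_000_000):
    """Effective access time: (1 - p)*memory access + p*fault overhead."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat_ns(1 / 1000))     # ~25,100 ns, i.e. about 25 microseconds
# Largest fault rate giving < 10% degradation (EAT < 110 ns):
p_max = 10 / 24_999_900
print(p_max)                # ~4e-07, about 1 fault per 2,500,000 accesses
```

The takeaway matches the slide: because a fault costs 250,000 memory accesses, even a 1-in-1000 fault rate slows memory down by a factor of 250.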

Page 54:

Page Replacement Algorithms

• Want the lowest page-fault rate.
• Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
• Reference string – ordered list of pages accessed as the process executes.

Ex. Reference string: A B C A B D A D B C B

Page 55:

The Best Page to Replace

• The best page to replace is the one that will never be accessed again.

• Optimal Algorithm – Belady's Algorithm
– Lowest fault rate for any reference string
– Basically, replace the page that will not be used for the longest time in the future
– If you know the future, please see me after class!!
– Belady's Algorithm is a yardstick; we want to find close approximations

Page 56:

Page Replacement – FIFO

• FIFO is simple to implement
– When a page comes in, place its id at the end of the list
– Evict the page at the head of the list

• Might be good? The page to be evicted has been in memory the longest time.

• But?
– Maybe it is being used
– We just don't know

• FIFO suffers from Belady's anomaly – the fault rate may increase when there is more physical memory!

Page 57:

FIFO vs. Optimal

• Reference string – ordered list of pages accessed as the process executes.
Ex. Reference string: A B C A B D A D B C B (system has 3 page frames)

OPTIMAL: when D faults, toss C; when the final C faults, toss A or D. 5 faults.

FIFO: pages are loaded in the order A B C D A B C; when D faults, toss A (the oldest). 7 faults.

Page 58:

Second Chance

• Maintain a FIFO page list.

• On a page fault, check the reference bit of the page at the head of the list:
– If R == 1, move the page to the end of the list and clear R
– If R == 0, evict the page

Page 59:

Clock Replacement

• Create a circular list of PTEs in FIFO order.

• One-handed clock – the pointer starts at the oldest page
– Algorithm: FIFO, but check the Reference bit
• If R == 1, set R = 0 and advance the hand
• Evict the first page with R == 0
– Looks like a clock hand sweeping PTE entries
– Fast, but the worst case may take a lot of time

• Two-handed clock – add a 2nd hand that is n PTEs ahead
– The 2nd hand clears the Reference bit

Page 60:

Not Recently Used (NRU) Page Replacement Algorithm

• Each page has a Reference bit and a Modified bit
– The bits are set when the page is referenced or modified

• Pages are classified:
1. not referenced, not modified
2. not referenced, modified
3. referenced, not modified
4. referenced, modified

• NRU removes a page at random from the lowest-numbered non-empty class.
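The classification can be sketched in a few lines. Note the code numbers classes 0-3 (class = 2·R + M) where the slide numbers them 1-4; the names are illustrative assumptions.

```python
# Sketch of NRU victim selection from (R, M) bits: class = 2*R + M,
# victim chosen at random from the lowest non-empty class.
import random

def nru_victim(pages):
    """pages: dict of page -> (R, M) bits. Returns a page from the lowest class."""
    classes = {page: 2 * r + m for page, (r, m) in pages.items()}
    lowest = min(classes.values())
    return random.choice([p for p, c in classes.items() if c == lowest])

pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}
print(nru_victim(pages))   # B or D: both sit in the lowest occupied class
```

The ordering encodes a judgment call: an unreferenced-but-dirty page is a better victim than a recently referenced clean one, because recency of use predicts reuse better than cleanliness.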

Page 61:

Least Recently Used (LRU)

• Replace the page that has not been used for the longest time.

3 page frames. Reference string: A B C A B D A D B C

LRU – 5 faults

Page 62:

LRU

• Past experience may indicate future behavior.
• Perfect LRU requires some form of timestamp to be associated with a PTE on every memory reference!!!
• Counter implementation
– Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
– When a page needs to be replaced, look at the counters to find the oldest.
• Stack implementation – keep a stack of page numbers in doubly linked form:
– Page referenced: move it to the top
– No search needed for replacement

Page 63:

LRU Approximations

• Aging
– Keep a counter for each PTE
– Periodically check the Reference bit:
• If R == 0, increment the counter (page has not been used)
• If R == 1, clear the counter (page has been used)
• Set R = 0
– The counter contains the number of intervals since the last access
– Replace the page with the largest counter value

• Clock replacement
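The aging scheme described above can be sketched as follows; the per-page dictionary and names are illustrative assumptions.

```python
# Sketch of the aging LRU approximation: per-page counters track idle
# intervals; a reference resets the counter, an idle interval increments it,
# and the page with the largest counter is replaced.

def age_tick(counters, referenced):
    """One periodic check. counters: page -> intervals since last use;
    referenced: pages whose R bit was 1 this interval (R is then cleared)."""
    for page in counters:
        counters[page] = 0 if page in referenced else counters[page] + 1
    return counters

def aging_victim(counters):
    """Replace the page that has been idle for the most intervals."""
    return max(counters, key=counters.get)

c = {"A": 0, "B": 0, "C": 0}
age_tick(c, {"A"})          # B and C idle one interval
age_tick(c, {"A", "C"})     # only B is still idle
print(aging_victim(c))      # B: idle for 2 intervals
```

This trades precision for cost: instead of a timestamp on every reference, the OS pays one pass over the page table per interval.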

Page 64:

Contrast: Macintosh Memory Model

• MAC OS 1–9
– Does not use traditional virtual memory

• All program objects are accessed through "handles"
– Indirect reference through a pointer table
– Objects stored in a shared global address space

[Figure: processes P1 and P2 each have a pointer table; handles in the tables point at objects A–E in the shared address space.]

Page 65:

Macintosh Memory Management

• Allocation / deallocation
– Similar to free-list management of malloc/free

• Compaction
– Can move any object and just update the (unique) pointer in the pointer table

Page 66:

Mac vs. VM-Based Memory Management

• Allocating, deallocating, and moving memory: can be accomplished by both techniques

• Block sizes:
– Mac: variable-sized; may be very small or very large
– VM: fixed-size; size is equal to one page (4KB on x86 Linux systems)

• Allocating contiguous chunks of memory:
– Mac: contiguous allocation is required
– VM: can map a contiguous range of virtual addresses to disjoint ranges of physical addresses

• Protection:
– Mac: a "wild write" by one process can corrupt another's data

Page 67:

MAC OS X

• A "modern" operating system
– Virtual memory with protection
– Preemptive multitasking
• Other versions of MAC OS require processes to voluntarily relinquish control

• Based on the MACH OS
– Developed at CMU in the late 1980s

Page 68:
Page 69:
Page 70:
Page 71:
Page 72:
Page 73:
Page 74:
Page 75:

Page Replacement Policy

• Working set:
– Set of pages used actively & heavily
– Kept in memory to reduce page faults

• The set is found/maintained dynamically by the OS.

• Replacement: the OS tries to predict which page would have the least impact on the running program.

Common replacement schemes:
– Least Recently Used (LRU)
– First-In-First-Out (FIFO)

Page 76:

Page Replacement Policies

• Least Recently Used (LRU)
– Generally works well
– TROUBLE: when the working set is larger than main memory

Example: working set = 9 pages, pages executed in sequence (0–8, repeating). With fewer than 9 frames, every reference evicts a page that is about to be needed again – THRASHING.

Page 77:

Page Replacement Policies

• First-In-First-Out (FIFO)
– Removes the least recently loaded page
– Does not depend on use
– Evaluated by the number of page faults seen

Page 78:

Page Replacement Policies

• Upon replacement
– Need to know whether to write data back
– Add a dirty bit:
• Dirty bit = 0: page is clean; no writing needed
• Dirty bit = 1: page is dirty; write it back to disk