Chapter 10 Virtual Memory (cc.ee.ntu.edu.tw)
Contents
1. Introduction
2. Computer-System Structures
3. Operating-System Structures
4. Processes
5. Threads
6. CPU Scheduling
7. Process Synchronization
8. Deadlocks
9. Memory Management
10. Virtual Memory
11. File Systems
Chapter 10 Virtual Memory

Virtual Memory
A technique that allows the execution of a process that may not be completely in memory.
Motivation: an entire program in execution may not all be needed at the same time!
e.g., error-handling routines, a large array, certain program features, etc.
Virtual Memory – Potential Benefits
- Programs can be much larger than the amount of physical memory, so users can concentrate on programming their problem.
- The level of multiprogramming increases because processes occupy less physical memory.
- Each user program may run faster because less I/O is needed for loading or swapping user programs.
Implementation: demand paging, demand segmentation (more difficult), etc.
Demand Paging – Lazy Swapping
A process image may reside on the backing store. Rather than swapping the entire process image into memory, a lazy swapper swaps in a page only when it is needed!
Pure Demand Paging – Pager vs. Swapper
A mechanism is required to recover from references to missing, non-resident pages.
A page fault occurs when a process references a non-memory-resident page.
Demand Paging – Lazy Swapping

[Figure: a logical memory of pages A–F, a page table with a valid-invalid bit per entry (v = memory-resident, i = invalid or non-memory-resident), and a physical memory of frames 0–9 in which only A, C, and F are resident (frames 4, 6, and 9); a reference to an invalid page traps to the OS.]
A Procedure to Handle a Page Fault

1. Reference a page.
2. Trap to the OS (a valid but disk-resident page).
3. Issue a "read" instruction & find a free frame.
4. Bring in the missing page.
5. Reset the page table.
6. Return to re-execute the instruction.
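The six steps above can be sketched as a minimal user-level simulation. All names here (make_page_table, backing_store, etc.) are invented for illustration; in a real system the MMU raises the trap and the kernel services it.

```python
# Minimal sketch of demand paging: a page table of valid bits, a backing
# store, and a fault handler that loads a page on first reference.

def make_page_table(num_pages):
    # Each entry: {"valid": bool, "frame": frame number or None}
    return [{"valid": False, "frame": None} for _ in range(num_pages)]

def access(page_table, backing_store, memory, free_frames, page):
    """Return the frame holding `page`, servicing a page fault if needed."""
    entry = page_table[page]
    if not entry["valid"]:                            # steps 1-2: trap on invalid bit
        frame = free_frames.pop()                     # step 3: find a free frame
        memory[frame] = backing_store[page]           # step 4: bring in the page
        entry["valid"], entry["frame"] = True, frame  # step 5: reset the page table
    return entry["frame"]                             # step 6: restart the reference

backing_store = {0: "A", 1: "B", 2: "C"}
memory = {}
pt = make_page_table(3)
free = [2, 1, 0]
f = access(pt, backing_store, memory, free, 1)  # first reference: page fault
```

A second access to the same page finds the valid bit set and returns the same frame without consuming another free frame.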
A Procedure to Handle a Page Fault

Pure Demand Paging: never bring a page into memory until it is required!

Pre-Paging: bring into memory all of the pages that "will" be needed at one time! (locality of reference)
Hardware Support for Demand Paging

- New bits in the page table: to indicate whether a page is in memory or not.
- Secondary storage: swap space in the backing store — a contiguous section of space in the secondary storage, for better performance.
Crucial Issues

Example 1 – Cost of restarting an instruction
Assembly instruction: ADD a, b, c

Only a short job: re-fetch the instruction, decode, fetch operands, execute, save, etc.

Strategy: get all pages and restart the instruction from the beginning!
Crucial Issues

Example 2 – Block-moving assembly instruction
MVC x, y, 256 (IBM System 360/370)

Characteristics:
- More expensive to restart.
- "Self-modifying" operands: if x and y overlap, a page fault partway through the move (e.g., MVC x, y, 4 copying ABCD) destroys x — the instruction cannot simply be returned to and restarted.

Solutions:
- Pre-load pages.
- Pre-save & recover before page-fault services.
Crucial Issues

Example 3 – Addressing modes
MOV (R2)+, -(R3)

The instruction auto-increments R2 and auto-decrements R3. If a page fault occurs after the registers have been modified, their effects must be undone before the instruction is restarted!
Performance of Demand Paging

Effective Access Time:
- ma: memory access time for paging
- p: probability of a page fault
- pft: page fault time

EAT = (1 - p) * ma + p * pft
Performance of Demand Paging

Page fault time – major components:
- Components 1 & 3 (about 10^3 ns ~ 10^5 ns): service the page-fault interrupt; restart the process.
- Component 2 (about 25 ms): read in the page (multiprogramming complicates this — however, let's get a taste!)
  pft ≈ 25 ms = 25,000,000 ns

Effective Access Time (when ma = 100 ns):
(1 - p) * 100 ns + p * 25,000,000 ns = 100 ns + 24,999,900 ns * p
Performance of Demand Paging

Example (when ma = 100 ns): if p = 1/1000,
Effective Access Time ≈ 25,000 ns
→ Slowed down by a factor of 250!

How to allow only a 10% slowdown?
110 > 100 + 25,000,000 * p
p < 0.0000004, i.e., p < 1 / 2,500,000
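The arithmetic above can be checked with a one-line function (the constants ma = 100 ns and pft = 25 ms are the slide's values):

```python
# Effective access time for demand paging, in nanoseconds.
def effective_access_time(p, ma_ns=100, pft_ns=25_000_000):
    """EAT = (1 - p) * ma + p * pft."""
    return (1 - p) * ma_ns + p * pft_ns

print(effective_access_time(1 / 1000))       # ~25,100 ns: a ~250x slowdown
print(effective_access_time(1 / 2_500_000))  # ~110 ns: only a 10% slowdown
```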
Performance of Demand Paging

How to keep the page fault rate low?
Effective Access Time ≈ 100 ns + 24,999,900 ns * p
Handling of Swap Space – A Way to Reduce Page Fault Time (pft)

Disk I/O to swap space is generally faster than that to the file system.
- Preload processes into the swap space before they start up.
- Demand-page from the file system, but do page replacement to the swap space. (BSD UNIX)
Process Creation – Copy-on-Write

Rapid process creation and reduction of new pages for the new process:
- fork(); execve()
- Shared pages → copy-on-write pages
- Only the pages that are modified are copied!
[Figure: P1 and P2 have identical page tables mapping pages 0–3 to frames 3, 4, 6, 1, which hold data1, ed1, ed2, ed3; the shared pages are marked copy-on-write, and a write (e.g., to data1) triggers copying of only that page.]

* Windows 2000, Linux, and Solaris 2 support this feature!
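The share-until-written behavior can be sketched with a toy model. The class and its structure are invented for illustration; real kernels implement this with page-table entries, protection bits, and frame reference counts.

```python
# Toy copy-on-write model: address spaces map pages to shared "frames"
# (one-element lists); a reference count decides whether a write must copy.

class COWSpace:
    def __init__(self, frames):
        self.frames = frames            # page number -> shared frame [value]
        self.refs = {}                  # id(frame) -> reference count

    def fork(self):
        child = COWSpace(dict(self.frames))   # share every frame
        for fr in self.frames.values():
            self.refs[id(fr)] = self.refs.get(id(fr), 1) + 1
        child.refs = self.refs                # share the refcount table
        return child

    def write(self, page, value):
        fr = self.frames[page]
        if self.refs.get(id(fr), 1) > 1:      # shared: copy only this page
            self.refs[id(fr)] -= 1
            fr = [fr[0]]                       # private copy of the frame
            self.frames[page] = fr
        fr[0] = value

    def read(self, page):
        return self.frames[page][0]

parent = COWSpace({0: ["data1"], 1: ["ed1"]})
child = parent.fork()
child.write(0, "modified")   # only page 0 is copied; page 1 stays shared
```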
Process Creation

Copy-on-Write & zero-fill-on-demand:
- Zero-filled pages, e.g., those for the stack or heap.

vfork() vs. fork() with copy-on-write:
- vfork() lets the parent and child processes share the page table and pages.

Where do we keep the copy-on-write information for pages?
Memory-Mapped Files

File writes might not cause any disk write!
Solaris 2 uses memory-mapped files for open(), read(), write(), etc.
[Figure: pages 1–6 of a disk file are mapped into the virtual memories of processes P1 and P2; both address spaces reference the same file pages in physical memory.]
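A user-level analogue uses the mmap module from Python's standard library: reads and writes go through memory, and the OS propagates dirty pages to the file lazily, without explicit write() calls. The file path here is a temporary file created just for the demo.

```python
# Memory-mapped file I/O: the mapping, not read()/write() calls, moves data.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        first = mm[0:5]                     # read through memory
        mm[0:5] = b"HELLO"                  # write through memory

with open(path, "rb") as f:
    data = f.read()                         # the store reached the file
```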
Page Replacement

Demand paging increases the multiprogramming level of a system by "potentially" over-allocating memory.
- Total physical memory = 40 frames.
- Run six processes, each of size 10 frames but each actually using only five frames ⇒ 10 spare frames.

Most of the time, the average memory usage is close to the physical memory size if we increase a system's multiprogramming level!
Page Replacement

Q: Should we run a 7th process? What if the six processes start to ask for their full shares?

What to do if all memory is in use and more memory is needed? Answers:
- Kill a user process! (But paging should be transparent to users.)
- Swap out a process!
- Do page replacement!
Page Replacement

A page-fault service:
1. Find the desired page on the disk!
2. Find a free frame; select a victim and write the victim page out when there is no free frame!
3. Read the desired page into the selected frame.
4. Update the page and frame tables, and restart the user process.
[Figure: physical memory frames 2–7 hold D, H, B, J, A, E (frames 0–1 hold the OS); P1 (pages H, Load M, J, B) and P2 (pages A, B, D, E) have page tables whose valid-invalid bits mark non-resident pages invalid. When P1 executes "Load M" and no frame is free, victim B in frame 4 is swapped out and M is swapped in.]
Page Replacement

Two page transfers per page fault if no frame is available!

Add a modify (dirty) bit to each page-table entry, alongside the valid-invalid bit. The modify bit is set by the hardware automatically whenever the page is written. If the victim page is clean (modify bit = 0), its "swap out" can be eliminated ⇒ reduce I/O time by one half.
Page Replacement

Two major pieces for demand paging:
- Frame-allocation algorithms: how many frames are allocated to a process?
- Page-replacement algorithms: when page replacement is required, select the frame that is to be replaced!

Goal: a low page-fault rate!
Note that a bad replacement choice does not cause any incorrect execution!
Page Replacement Algorithms

Evaluation of algorithms: calculate the number of page faults on strings of memory references, called reference strings, for a set of algorithms.

Sources of reference strings:
- Reference strings are generated artificially.
- Reference strings are recorded as system traces. How do we reduce the amount of data?
Page Replacement Algorithms

Two observations to reduce the amount of data:
1. Consider only the page numbers if the page size is fixed: reduce memory references to page references.
2. If a page p is referenced, any immediately following references to page p will never cause a page fault: reduce consecutive references to page p into one page reference.
Page Replacement Algorithms

Does the number of page faults decrease when the number of available page frames increases?

Example (each address splits into page# and offset, page size = 100):
- Memory references: 0100, 0432, 0101, 0612, 0103, 0104, 0101, 0611
- Page references: 01, 04, 01, 06, 01, 01, 01, 06
- After collapsing consecutive repeats: 01, 04, 01, 06, 01, 06
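The two reductions above are a few lines of code (page size 100, as in the example):

```python
# Reduce raw memory references to a page-reference string and collapse
# immediately repeated pages.
from itertools import groupby

def to_page_refs(addresses, page_size=100):
    pages = [a // page_size for a in addresses]      # observation 1
    return [p for p, _ in groupby(pages)]            # observation 2

refs = [100, 432, 101, 612, 103, 104, 101, 611]
print(to_page_refs(refs))   # [1, 4, 1, 6, 1, 6]
```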
FIFO Algorithm

A FIFO implementation:
1. Each page is given a time stamp when it is brought into memory.
2. Select the oldest page for replacement!
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

[Figure: FIFO with 3 page frames — the FIFO queue after each fault is 7 / 70 / 701 / 012 / 123 / 230 / 304 / 042 / 423 / 230 / 301 / 012 / 127 / 270 / 701, for a total of 15 page faults.]
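The trace above can be reproduced by a short simulation:

```python
# FIFO page replacement: evict pages in the order they arrived.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 page faults
```

The same function also reproduces Belady's anomaly discussed below: on the string 1 2 3 4 1 2 5 1 2 3 4 5 it yields 9 faults with 3 frames but 10 with 4.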
FIFO Algorithm

The idea behind FIFO: the oldest page is unlikely to be used again.
?? Shouldn't we instead save the page that will be used in the near future ??

Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases.
FIFO Algorithm

Run the FIFO algorithm on the following reference string: 1 2 3 4 1 2 5 1 2 3 4 5

- With 3 frames: 9 page faults.
- With 4 frames: 10 page faults — FIFO pushes out pages that will be used later!
Optimal Algorithm (OPT)

Optimality: the one with the lowest page-fault rate.

Replace the page that will not be used for the longest period of time — future prediction!

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

[Figure: OPT with 3 page frames — on each fault the victim is the resident page whose next use (e.g., "next 7", "next 0", "next 1") is farthest in the future; the string incurs only 9 page faults.]
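OPT can be simulated whenever the whole reference string is known in advance (which is exactly why it is unrealizable online):

```python
# Optimal (OPT) replacement: evict the resident page whose next use is
# farthest away (or that is never used again).

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # position of next reference
                except ValueError:
                    return float("inf")           # never used again
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9 page faults
```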
Least-Recently-Used Algorithm (LRU)

The idea: OPT is concerned with when a page will next be used, but we "don't have knowledge about the future"?!
Use the history of page references in the past to predict the future!

(LRU on a string S behaves like OPT on S^R, where S^R is the reverse of S!)
LRU Algorithm

Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

[Figure: LRU with 3 page frames — the LRU queue is reordered at every reference, and the string incurs 12 page faults; evicting a page just before it is referenced again is a wrong prediction!]

Remark: LRU is like OPT that "looks backward" in time.
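An ordered dictionary makes a compact LRU queue:

```python
# LRU page replacement: the least-recently-used page is evicted; every
# reference moves its page to the most-recent end of the queue.
from collections import OrderedDict

def lru_faults(refs, nframes):
    queue, faults = OrderedDict(), 0      # most recently used at the end
    for page in refs:
        if page in queue:
            queue.move_to_end(page)       # refresh on every reference
        else:
            faults += 1
            if len(queue) == nframes:
                queue.popitem(last=False) # evict the LRU page
            queue[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 page faults
```

Note the cost the slides emphasize: the queue must be updated on every reference, not only on faults.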
LRU Implementation – Counters

[Figure: the page table for Pi is extended with a time-of-use field per entry; a logical clock (counter) is incremented on every memory reference (cnt++), and its value is written into the time-of-use field of the referenced page — the "time of last use".]
LRU Implementation – Counters

Overheads:
- The logical clock is incremented for every memory reference.
- The "time-of-use" field is updated for every page reference.
- The LRU page must be searched for at replacement time.
- Overflow prevention for the clock & maintenance of the "time-of-use" field of each page-table entry.
LRU Implementation – Stack

[Figure: an LRU stack of page numbers; on every reference, the referenced page is moved to the head of the stack, so the tail is always the LRU page.]

Overheads: stack maintenance on every memory reference — but no search is needed at page-replacement time!
A Stack Algorithm

- Needs hardware support for an efficient implementation.
- Note that LRU maintenance must be done for every memory reference.
- LRU is a stack algorithm: the set of memory-resident pages with n frames available is always a subset (⊆) of the set with (n + 1) frames available — so LRU does not suffer from Belady's anomaly.
LRU Approximation Algorithms

Motivation: no sufficient hardware support. Most systems provide only a "reference bit", which indicates only whether a page has been used or not — not the order of use.

- Additional-Reference-Bits Algorithm
- Second-Chance Algorithm
- Enhanced Second-Chance Algorithm
- Counting-Based Page Replacement
Additional-Reference-Bits Algorithm

Motivation: keep a history of reference bits — one byte per page in memory serves as a history register. At each regular interval, the OS shifts every history register right by one bit and copies the page's reference bit into the high-order bit.

- 00000000: not used in any of the last 8 intervals.
- 11111111: used at least once in every interval.
- Smaller value ⇒ closer to LRU; larger value ⇒ closer to MRU.

Fast but cost-effective! But how many bits per history register should be used? The more bits, the better the approximation.
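The shift-and-fold step can be simulated directly (the function name and data layout are invented for illustration):

```python
# Additional-reference-bits aging: at each timer interval, shift each page's
# history byte right and OR the current reference bit into the high bit.

def tick(history, ref_bits, nbits=8):
    """One timer interval: fold reference bits into the history registers."""
    high = 1 << (nbits - 1)
    for page in history:
        history[page] = (history[page] >> 1) | (high if ref_bits.get(page) else 0)
        ref_bits[page] = 0                 # OS clears the hardware bit
    return history

history = {0: 0, 1: 0}
# Page 0 is referenced in intervals 1-2; page 1 in intervals 2-3.
for bits in [{0: 1, 1: 0}, {0: 1, 1: 1}, {0: 0, 1: 1}]:
    tick(history, bits)
# history[0] == 0b01100000, history[1] == 0b11000000:
# page 0 has the smaller value, so it is the better (older) victim.
```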
Second-Chance (Clock) Algorithm

Motivation: use the reference bit only.

Basic data structure: circular FIFO queue.

Basic mechanism: when a page is selected,
- take it as a victim if its reference bit = 0;
- otherwise, clear the bit and advance to the next page.

[Figure: before and after a victim search on a circular queue of pages with reference bits — the hand sweeps past pages with bit = 1, clearing each, and stops at the first page with bit = 0.]
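The sweep can be written as a small victim-selection routine (names invented for illustration):

```python
# Second-chance (clock) victim selection over a circular list of pages
# with per-page reference bits.

def clock_select(pages, ref_bits, hand):
    """Return (victim_index, new_hand); clears bits the hand sweeps past."""
    n = len(pages)
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % n   # victim found
        ref_bits[hand] = 0                # give the page a second chance
        hand = (hand + 1) % n

pages = ["A", "B", "C", "D"]
bits = [1, 1, 0, 1]
victim, hand = clock_select(pages, bits, 0)  # sweeps past A and B, picks C
```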
Enhanced Second-Chance Algorithm

Motivation: also consider the cost of swapping out pages.

Four classes (reference bit, modify bit), from low to high replacement priority:
- (0,0) – not recently used and not "dirty" (lowest class: best victim)
- (0,1) – not recently used but "dirty"
- (1,0) – recently used but not "dirty"
- (1,1) – recently used and "dirty" (highest class)

Use the second-chance algorithm, but replace the first page encountered in the lowest nonempty class ⇒ may have to scan the circular queue several times before finding the right page.

Used in Macintosh virtual memory management.
Counting-Based Algorithms

Motivation: count the number of references made to each page, instead of tracking their reference times.

Least Frequently Used (LFU): LFU pages are less actively used pages!
- Potential hazard: some heavily used pages may no longer be needed, yet keep high counts.
- A solution – aging: shift the counters right by one bit at each regular interval.

Most Frequently Used (MFU): pages with the smallest number of references were probably just brought in and have yet to be used!

LFU & MFU replacement schemes can be fairly expensive, and they do not approximate OPT very well!
Page Buffering

Basic idea:
a. Systems keep a pool of free frames.
b. Desired pages are first "swapped in" to frames from the pool.
c. When the selected page (victim) is later written out, its frame is returned to the pool.

Variation 1:
a. Maintain a list of modified pages.
b. Whenever the paging device is idle, a modified page is written out and its "modify bit" reset.
Page Buffering

Variation 2:
a. Remember which page was in each frame of the pool.
b. When a page fault occurs, first check whether the desired page is already there.
   - Pages in frames of the pool must be "clean".
   - "Swapping-in" time is saved!

VAX/VMS adopts this with the FIFO replacement algorithm to improve FIFO's performance.
Frame Allocation – Single User

Basic strategy:
- A user process is allocated any free frame.
- The user process requests free frames from the free-frame list.
- When the free-frame list is exhausted, page replacement takes place.
- All allocated frames are released by the ending process.

Variations:
- The O.S. can share with users some free frames for special purposes.
- Page buffering – frames reserved to save "swapping" time.
Frame Allocation – Multiple Users

Fixed allocation:
a. Equal allocation: m frames, n processes ⇒ m/n frames per process.
b. Proportional allocation:
   1. Ratios of frames ∝ size: S = Σ S_i, A_i ≈ (S_i / S) × m, where Σ A_i ≤ m and each A_i ≥ the minimum number of frames required.
   2. Ratios of frames ∝ priority: S_i = relative importance.
   3. Combinations, or others.
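Proportional allocation is a few lines once a policy for rounding leftovers is chosen; handing leftovers out round-robin, as below, is just one simple choice.

```python
# Proportional frame allocation: A_i = (S_i / S) * m, rounded down, with
# rounding leftovers distributed round-robin so the total equals m.

def proportional_allocation(sizes, m):
    total = sum(sizes)
    alloc = [s * m // total for s in sizes]
    for i in range(m - sum(alloc)):        # hand out the leftover frames
        alloc[i % len(alloc)] += 1
    return alloc

# Example: 62 frames shared by processes of 10 and 127 pages.
print(proportional_allocation([10, 127], 62))   # [5, 57]
```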
Frame Allocation – Multiple Users

Dynamic allocation:
a. Allocated frames ∝ the multiprogramming level.
b. Allocated frames ∝ others.

The minimum number of frames required for a process is determined by the instruction-set architecture:
- ADD A, B, C ⇒ 4 frames needed.
- ADD (A), (B), (C) ⇒ 1 + 2 + 2 + 2 = 7 frames, where (A) is an indirect address.
Frame Allocation – Multiple Users

Minimum number of frames (continued):
- How many levels of indirect addressing should be supported? Unbounded indirection may touch every page in the logical address space of a process ⇒ virtual memory collapses! (e.g., a 16-bit address word where 1 marks an indirect address and 0 a direct one.)
- A long instruction may cross a page boundary: MVC X, Y, 256 ⇒ 2 + 2 + 2 = 6 frames — the spanning of the instruction and the two operands.
Frame Allocation – Multiple Users

Global allocation: processes can take frames from others. For example, high-priority processes can increase their frame allocation at the expense of low-priority processes!

Local allocation: processes can only select frames from their own allocation ⇒ fixed allocation. The set of pages in memory for a process is affected by the paging behavior of only that process.
Frame Allocation – Multiple Users

Remarks:
a. Global replacement generally results in better system throughput.
b. Under global replacement, processes cannot control their own page-fault rates, so processes can easily affect one another.
Thrashing

Thrashing – a high paging activity: a process is thrashing if it is spending more time paging than executing.

Why thrashing? Too few frames allocated to a process!

[Figure: CPU utilization vs. degree of multiprogramming — utilization rises with multiprogramming, peaks, then collapses when thrashing sets in; under a global page-replacement algorithm, the OS sees low CPU utilization and dispatches yet another process, making things worse.]
Thrashing

Solutions:
- Decrease the multiprogramming level: swap out processes!
- Use local page-replacement algorithms: this only limits thrashing effects "locally" — page faults of other processes still slow everyone down.
- Give processes as many frames as they need! But how do you know the right number of frames for a process?
Locality Model

A program is composed of several different (overlapping) localities. Localities are defined by the program structures and data structures (e.g., an array, hash tables).

locality_i = {P_i,1, P_i,2, …, P_i,ni}  --control flow-->  locality_j = {P_j,1, P_j,2, …, P_j,nj}

How do we know that we have allocated enough frames to a process to accommodate its current locality?
Working-Set Model

The working set is an approximation of a program's locality: the set of pages referenced in the most recent Δ references (the working-set window).

Page references: … 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 …
- Working-set window ending at t1: working-set(t1) = {1, 2, 5, 6, 7}
- Working-set window ending at t2: working-set(t2) = {3, 4}

The working-set size gives the minimum allocation. As Δ → ∞, the working set contains all touched pages and may cover several localities.
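Computing a working set is a one-liner over a sliding window; the example below uses Δ = 10, which matches the window drawn in the slide.

```python
# Working set at time t: the distinct pages among the last `delta` references.
def working_set(refs, t, delta):
    return set(refs[max(0, t - delta):t])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4,
        3, 4, 3, 4, 4, 4]
print(working_set(refs, 10, 10))   # {1, 2, 5, 6, 7}  (the slide's t1)
print(working_set(refs, 26, 10))   # {3, 4}           (the slide's t2)
```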
Working-Set Model

D = Σ working-set-size_i ≤ M, where M is the total number of available frames.

- If D > M: suspend some processes and swap out their pages.
- If D ≤ M: "safe" — when extra frames are available, initiate new processes.
Working-Set Model

The exact maintenance of working sets is expensive! Approximate it with a timer and the reference bit: at each timer interrupt, shift (or copy) each page's reference bit into its in-memory history bits and then clear it.

Accuracy vs. timeout interval!
Page-Fault Frequency

Motivation: control thrashing directly through observation of the page-fault rate!

[Figure: page-fault rate vs. number of allocated frames — if the rate rises above an upper bound, increase the number of frames; if it falls below a lower bound, decrease the number of frames. Processes are suspended and swapped out if the number of available frames drops below the minimum needed.]
OS Examples – NT

Virtual memory – demand paging with clustering: clustering brings in more pages surrounding the faulting page!

Working set: min and max bounds per process.
- Local page replacement when the max number of frames is allocated.
- Automatic working-set trimming reduces a process's allocated frames to its min when the system threshold on available frames is reached.
OS Examples – Solaris

- The pageout process first clears the reference bit of all pages to 0 and then, after a delay (the handspread), returns to the system all pages whose reference bit is still 0.
- pageout runs at 4 Hz, rising to 100 Hz when desfree is reached!
- Swapping starts when desfree fails to be met for 30 s.
- pageout runs for every request for a new page when minfree is reached.

[Figure: page-scan rate vs. free memory — the scan rate grows from slowscan (100) at lotsfree toward fastscan (8192) as free memory falls; thresholds: lotsfree > desfree > minfree.]
Other ConsiderationsOther Considerations
PrePre--PagingPagingBring into memory at one time all the pages Bring into memory at one time all the pages that will be needed! that will be needed!
IssueIssuePre-Paging Cost Cost of Page Fault Services
[Figure: ready processes are swapped out to become suspended processes; when resumed, their remembered working sets can be pre-paged back in.]
Do pre-paging if the working set is known!
Not every page in the working set will be used!
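The trade-off can be made concrete with a back-of-the-envelope comparison; both cost figures are assumptions for illustration:

```python
# Pre-paging s pages pays off when one bulk transfer of s pages is cheaper
# than the s*alpha individual demand faults it avoids (alpha = fraction of
# the pre-paged pages actually used).  Costs below are illustrative.

FAULT_COST = 8.0      # ms per individual demand-page fault (assumed)
PREPAGE_COST = 1.0    # ms per page inside one bulk pre-page I/O (assumed)

def prepaging_wins(s, alpha):
    """True if pre-paging s pages, a fraction alpha of them used, pays off."""
    prepage_cost = s * PREPAGE_COST        # bring in all s pages up front
    demand_cost = s * alpha * FAULT_COST   # fault individually on used pages
    return prepage_cost < demand_cost

print(prepaging_wins(100, 0.9))    # True: most pre-paged pages get used
print(prepaging_wins(100, 0.05))   # False: nearly all the pre-paging is wasted
```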
Other Considerations
Page Size
Trend – larger page sizes
∵ CPU speed and memory capacity grow much faster than disk speed!
[Figure: trade-offs in choosing the page size p (and displacement d) – a small page size gives better resolution for locality and less internal fragmentation; a large page size gives a smaller page table and better I/O efficiency. Typical sizes range from 512 B (2^9) to 16,384 B (2^14).]
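The figure's trade-off has a classic quantitative form: per-process space overhead is roughly the page-table size s*e/p plus p/2 of expected internal fragmentation, minimized at p = sqrt(2*s*e). The process and entry sizes below are assumptions:

```python
import math

s = 1 << 20   # average process size in bytes (assumed: 1 MB)
e = 8         # bytes per page-table entry (assumed)

def overhead(p):
    """Space overhead for page size p: page table + half a page wasted."""
    return s * e / p + p / 2

p_opt = math.sqrt(2 * s * e)   # sqrt(2 * 2**20 * 8) = 4096
print(p_opt)                   # 4096.0
print(overhead(4096))          # 4096.0, the minimum
print(overhead(512), overhead(16384))   # both larger than the minimum
```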
Other Considerations
TLB Reach = TLB entry count * page size
Wish: the entire working set of each process is stored in the TLB!
Solutions: Increase the page size
Have multiple page sizes – UltraSparc II (8KB - 4MB) + Solaris 2 (8KB or 4MB)
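TLB reach is just entry count times page size; a quick sketch with assumed hardware parameters shows why larger (or mixed) page sizes help:

```python
def tlb_reach(entries, page_size):
    """Amount of memory accessible without a TLB miss."""
    return entries * page_size

KB, MB = 1 << 10, 1 << 20
print(tlb_reach(64, 8 * KB) // KB)   # 512: 64 entries of 8KB pages reach 512 KB
print(tlb_reach(64, 4 * MB) // MB)   # 256: 64 entries of 4MB pages reach 256 MB
```

Mixed sizes, as on UltraSparc II with Solaris, let a few 4MB entries map large regions while 8KB entries keep internal fragmentation low for small ones.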
Other Considerations
Inverted Page Table – the objective is to reduce the amount of physical memory spent on page tables, but external per-process page tables are still needed when a page fault occurs!
More page faults, now for the page tables themselves, may occur!!!
Other Considerations
Program Structure – Motivation: improve system performance through an awareness of the underlying demand paging.
var A: array [1..128, 1..128] of integer;
for j := 1 to 128 do
    for i := 1 to 128 do
        A[i, j] := 0;

[Figure: A is stored in row-major order – each row A[i,1]..A[i,128] is 128 words and fills exactly one page, 128 pages in total.]

128x128 page faults may occur if the process has fewer than 128 frames, because the column-major loop touches a different page on every assignment!
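The 128x128 claim can be checked with a small simulator; the LRU policy and the 64-frame allocation are assumptions for illustration:

```python
from collections import OrderedDict

PAGE_WORDS = 128   # one 128-word row of A fills exactly one page
N = 128            # array dimension

def count_faults(accesses, frames):
    """Simulate demand paging with LRU replacement; return the fault count."""
    resident = OrderedDict()               # page -> None, ordered by recency
    faults = 0
    for page in accesses:
        if page in resident:
            resident.move_to_end(page)     # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)   # evict least recently used
            resident[page] = None
    return faults

def page_of(i, j):                     # row-major word address -> page number
    return (i * N + j) // PAGE_WORDS   # equals i, since each row is one page

column_major = [page_of(i, j) for j in range(N) for i in range(N)]
row_major    = [page_of(i, j) for i in range(N) for j in range(N)]

print(count_faults(column_major, frames=64))   # 16384: every access faults
print(count_faults(row_major,    frames=64))   # 128: one fault per row/page
```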
Other Considerations
Program Structures: Data Structures
Locality: stack, hash table, etc.
Search speed, # of memory references, # of pages touched, etc.
Programming Language: LISP, PASCAL, etc.
Compiler & Loader: separate code and data
Pack inter-related routines into the same page
Routine placement (across page boundaries?)
I/O Interlock
[Figure: a drive performs DMA directly into a buffer in physical memory.]
DMA gets the following information about the buffer: base address in memory and chunk size.
Could the buffer-residing pages be swapped out (before the transfer completes)?
I/O Interlock
Solutions: Perform I/O only between the I/O device and system memory, then copy to user memory.
Extra data copying!!
Lock pages into memory – the lock bit of a page-faulting page is set until the faulting process is dispatched!
Lock bits might never be turned off!
Multi-user systems usually take locks as “hints” only!
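The lock-bit mechanism can be sketched as follows; the frame-table layout and function names are made up for illustration:

```python
# Frames holding a DMA buffer are pinned with a lock bit so the page
# replacement algorithm skips them until the transfer completes.

def start_io(frame_table, buffer_frames):
    for f in buffer_frames:
        frame_table[f]['locked'] = True    # pin for the pending transfer

def complete_io(frame_table, buffer_frames):
    for f in buffer_frames:
        frame_table[f]['locked'] = False   # eligible for replacement again

def choose_victim(frame_table):
    """Replacement skips locked frames; returns a frame number or None."""
    for f, info in frame_table.items():
        if not info['locked']:
            return f
    return None

table = {0: {'locked': False}, 1: {'locked': False}}
start_io(table, [0])
print(choose_victim(table))   # 1: frame 0 is pinned by the transfer
complete_io(table, [0])
print(choose_victim(table))   # 0: the lock bit has been cleared
```

A multi-user system would treat these lock bits as hints only, since a process that never clears them could otherwise pin memory forever.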
Real-Time Processing
Solution: go beyond locking hints – allow privileged users to require that pages be locked into memory!
Predictable Behavior
Virtual memory introduces unexpected, long-term delays in the execution of a program.
Demand Segmentation – Motivation
Segmentation captures the logical structure of a process better!
Demand paging needs a significant amount of hardware!
Mechanism: like demand paging!
However, compaction may be needed! Considerable overheads!