Computer Architecture: Virtual Memory


Page 1: Computer Architecture  Virtual Memory


Computer Architecture

Virtual Memory

Dr. Lihu Rappoport

Page 2: Computer Architecture  Virtual Memory


Virtual Memory
· Provides the illusion of a large memory
· Different machines have different amounts of physical memory
– Allows programs to run regardless of actual physical memory size
· The amount of memory consumed by each process is dynamic
– Allows adding memory as needed
· Many processes can run on a single machine
– Provides each process its own memory space
– Prevents a process from accessing the memory of other processes running on the same machine
– Allows the sum of the memory spaces of all processes to be larger than physical memory
· Basic terminology
– Virtual Address Space: the address space used by the programmer
– Physical Address: the actual physical memory address space

Page 3: Computer Architecture  Virtual Memory


Virtual Memory: Basic Idea
· Divide memory (virtual and physical) into fixed-size blocks
– Pages in virtual space, frames in physical space
– Page size = frame size
– Page size is a power of 2: page size = 2^k
· All pages in the virtual address space are contiguous
· Pages can be mapped into physical frames in any order
· Some of the pages are in main memory (DRAM), some of the pages are on disk
· All programs are written using the virtual memory address space
· The hardware does on-the-fly translation between the virtual and physical address spaces
– Uses a page table to translate between virtual and physical addresses

Page 4: Computer Architecture  Virtual Memory


Virtual Memory
· Main memory can act as a cache for the secondary storage (disk)
· Advantages:
– Illusion of having more physical memory
– Program relocation
– Protection

[Figure: address translation maps virtual addresses to physical addresses in main memory or to disk addresses]

Page 5: Computer Architecture  Virtual Memory


Virtual to Physical Address Translation
· Page size: 2^12 bytes = 4 KB, so the page offset is bits [11:0]

[Figure: the virtual address (bits [47:12] = virtual page number, bits [11:0] = page offset) indexes the page table, starting from the page table base register. Each entry holds a valid (V) bit, a dirty (D) bit, access-control (AC) bits (memory type: WB, WT, UC, WP …; user/supervisor), and a physical page number. The physical address is the physical page number (bits [39:12]) concatenated with the unchanged page offset]

Page 6: Computer Architecture  Virtual Memory


Page Tables

[Figure: the virtual page number indexes the page table; each entry has a valid bit and holds either a physical page number (valid = 1, page resident in physical memory) or a disk address (valid = 0, page on disk)]

Page 7: Computer Architecture  Virtual Memory


Address Mapping Algorithm
· If V = 1, the page is in main memory at the frame address stored in the table → fetch the data
· Else (page fault): the page must be fetched from disk
– Causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from disk
· Access control (R = read-only, R/W = read/write, X = execute-only)
– If the kind of access is not compatible with the specified access rights → protection violation fault
– Causes a trap to a hardware or software fault handler
· A missing item is fetched from secondary memory only on the occurrence of a fault → demand load policy
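The mapping algorithm can be written compactly in code. Below is a minimal C sketch assuming a simple single-level page table with the V / D / AC / frame-number fields shown on the previous slides; the PTE layout and the two trap helpers are illustrative, not the exact x86 format.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                          /* 4 KB pages: page size = 2^12 */
#define PAGE_MASK  ((1ull << PAGE_SHIFT) - 1)

/* Illustrative single-level PTE with the V / D / AC / frame fields shown above. */
typedef struct {
    uint32_t frame;          /* physical page (frame) number              */
    bool     valid;          /* V: page is resident in main memory        */
    bool     dirty;          /* D: page was modified since it was loaded  */
    bool     writable;       /* access-control (AC) bits ...              */
    bool     user;           /* ... user / supervisor                     */
} pte_t;

/* Trap handlers are placeholders for the OS services described on this slide. */
uint64_t page_fault(uint64_t vaddr);                 /* fetch page from disk (demand load) */
uint64_t protection_violation_fault(uint64_t vaddr); /* access rights do not match         */

uint64_t translate(const pte_t *page_table, uint64_t vaddr, bool is_write, bool is_user)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;     /* virtual page number      */
    uint64_t offset = vaddr &  PAGE_MASK;      /* offset within the page   */
    pte_t    pte    = page_table[vpn];

    if (!pte.valid)
        return page_fault(vaddr);              /* V = 0: page is on disk   */
    if ((is_write && !pte.writable) || (is_user && !pte.user))
        return protection_violation_fault(vaddr);

    return ((uint64_t)pte.frame << PAGE_SHIFT) | offset;  /* frame# concatenated with offset */
}
```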

Page 8: Computer Architecture  Virtual Memory


Page Replacement Algorithm
· Not Recently Used (NRU)
– Associated with each page is a reference flag: ref flag = 1 if the page has been referenced in the recent past
· If replacement is needed, choose any page frame whose reference bit is 0
– This is a page that has not been referenced in the recent past
· Clock implementation of NRU (a C sketch follows this slide):
 While (PT[LRP].NRU == 1) { PT[LRP].NRU = 0; LRP++ (mod table size) }

[Figure: page table entries with their ref bits, swept circularly by the clock hand]
· Possible optimization: search for a page that is both not recently referenced AND not dirty
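A C sketch of the clock sweep above; the entry layout and the names (ref, hand) are assumptions for illustration.

```c
#include <stddef.h>
#include <stdbool.h>

/* Illustrative page-table entry as seen by the clock sweep. */
typedef struct {
    bool ref;      /* reference flag, set by HW when the page is touched   */
    bool dirty;    /* used by the "not referenced AND not dirty" variant   */
} clock_entry_t;

/* Starting at the clock hand (the last-replaced position), clear reference
 * bits until an entry with ref == 0 is found; that page has not been
 * referenced in the recent past and is returned as the replacement victim. */
size_t nru_clock_select(clock_entry_t *pt, size_t table_size, size_t *hand)
{
    while (pt[*hand].ref) {
        pt[*hand].ref = false;               /* give the page a second chance */
        *hand = (*hand + 1) % table_size;    /* LRP++ (mod table size)        */
    }
    return *hand;
}
```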

Page 9: Computer Architecture  Virtual Memory


Page Faults
· Page fault: the data is not in memory → retrieve it from disk
– The CPU must detect the situation
– The CPU cannot remedy the situation (it has no knowledge of the disk)
– The CPU must trap to the operating system so that it can remedy the situation
– Pick a page to discard (possibly writing it to disk)
– Load the page in from disk
– Update the page table
– Resume the program, so the HW will retry the access and succeed (see the sketch after this slide)
· A page fault incurs a huge miss penalty
– Pages should be fairly large (e.g., 4 KB)
– The faults can be handled in software instead of hardware
– A page fault causes a context switch
– Write-through is too expensive, so write-back is used
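The OS-side steps listed above, as a schematic C sketch; every helper function is a placeholder standing in for real OS machinery (frame allocator, disk driver, TLB control), not an actual API.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12

/* Placeholders for real OS machinery. */
typedef uint64_t frame_t;
frame_t  pick_victim_frame(void);                 /* e.g. the NRU clock sweep above */
bool     frame_dirty(frame_t f);
void     write_frame_to_disk(frame_t f);
void     read_page_from_disk(uint64_t vpn, frame_t f);
void     pte_unmap_frame(frame_t f);              /* V = 0 in the old owner's PTE   */
void     pte_map(uint64_t vpn, frame_t f);        /* V = 1, frame number, flags     */
void     tlb_invalidate(uint64_t vaddr);

/* Schematic handler following the steps on the slide above. */
void handle_page_fault(uint64_t faulting_vaddr)
{
    uint64_t vpn    = faulting_vaddr >> PAGE_SHIFT;
    frame_t  victim = pick_victim_frame();        /* pick a page to discard        */

    if (frame_dirty(victim))
        write_frame_to_disk(victim);              /* possibly write it to disk     */
    pte_unmap_frame(victim);

    read_page_from_disk(vpn, victim);             /* load the page in from disk    */
    pte_map(vpn, victim);                         /* update the page table         */
    tlb_invalidate(faulting_vaddr);               /* drop any stale translation    */
    /* return from the trap: HW retries the faulting access and now succeeds */
}
```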

Page 10: Computer Architecture  Virtual Memory


Optimal Page Size
· Minimize wasted storage
– A small page minimizes internal fragmentation
– A small page increases the size of the page table
· Minimize transfer time
– Large pages (multiple disk sectors) amortize access cost
– Sometimes transfer unnecessary info
– Sometimes prefetch useful data
– Sometimes discard useless data early
· General trend toward larger pages because of
– Big cheap RAM
– Increasing memory/disk performance gap
– Larger address spaces

Page 11: Computer Architecture  Virtual Memory


Translation Lookaside Buffer (TLB)
· The page table resides in memory → each translation requires an extra memory access
· The TLB caches recently used PTEs to speed up translation
– Typically 128 to 256 entries, 4- to 8-way set associative
· TLB indexing: the virtual page number is split into a tag and a set index (the page offset is not used)
· On a TLB miss
– Call the PMH (page miss handler) to get the PTE from memory

[Figure: the virtual address is looked up in the TLB; on a hit the physical address is produced directly, on a miss the page table in memory is accessed]
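A minimal C model of the TLB lookup described above, assuming a 256-entry, 4-way set-associative TLB; the data structure, the sizes and the pmh_page_walk helper are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_SETS   64                     /* 256 entries / 4 ways                  */
#define TLB_WAYS   4
#define PAGE_SHIFT 12

typedef struct {
    bool     valid;
    uint64_t tag;                         /* upper bits of the virtual page number */
    uint64_t pte;                         /* cached page-table entry               */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

uint64_t pmh_page_walk(uint64_t vpn);     /* page miss handler: fetch PTE from memory */

/* Look up a virtual address: set index and tag come from the VPN, not the offset. */
uint64_t tlb_translate(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    uint64_t set = vpn % TLB_SETS;        /* low VPN bits select the set           */
    uint64_t tag = vpn / TLB_SETS;        /* remaining VPN bits are the tag        */

    for (int way = 0; way < TLB_WAYS; way++)        /* compared in parallel in HW  */
        if (tlb[set][way].valid && tlb[set][way].tag == tag)
            return tlb[set][way].pte;               /* TLB hit                     */

    uint64_t pte = pmh_page_walk(vpn);              /* TLB miss: walk the page table */
    tlb[set][0] = (tlb_entry_t){ true, tag, pte };  /* fill (replacement policy omitted) */
    return pte;
}
```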

Page 12: Computer Architecture  Virtual Memory


Making Address Translation Fast
· The TLB is a cache for recent address translations

[Figure: the virtual page number is looked up first in the TLB (valid, tag, physical page) and, on a miss, in the full page table, whose entries point to a physical page or a disk address]

Page 13: Computer Architecture  Virtual Memory


TLB Access

[Figure: the virtual page number is split into a tag and a set index; the set index selects a TLB set, the tags of all 4 ways are compared in parallel, a way mux selects the hitting PTE, and the resulting physical page number is concatenated with the page offset]

Page 14: Computer Architecture  Virtual Memory


Processor Caches
· L2 and L3 are unified, like the memory – they hold both data and instructions
· In case of an STLB (second-level TLB) miss, the PMH accesses the data cache for the page walk

[Figure: the core contains the L1 instruction cache with its instruction TLB, the L1 data cache with its data TLB, a shared STLB, and the PMH; the unified L2 and L3 caches sit in front of memory, and the PTEs fetched by page walks flow through the data-cache hierarchy]

Page 15: Computer Architecture  Virtual Memory


Virtual Memory And Cache

· TLB access is serial with the cache access
· Page table entries are cached in the L1 data cache, L2 cache and L3 cache (as data)

[Figure: flow – access the TLB; on a TLB miss access the STLB; on an STLB miss do a page walk to get the PTE from the memory hierarchy; with the physical address, access the L1 cache, then L2, then memory until the data is found]

Page 16: Computer Architecture  Virtual Memory


Overlapped TLB & Cache Access

· The set index (#Set) is not contained within the page offset → the #Set is not known until the physical page number is known → the cache can be accessed only after the address translation is done

[Figure: virtual-memory view of a physical address – physical page number [29:12], page offset [11:0]; cache view of the same address – tag [29:14], set [13:6], displacement [5:0]]

Page 17: Computer Architecture  Virtual Memory


Overlapped TLB & Cache Access (cont)

· In the example shown in the figure below, the #Set is contained within the page offset → the #Set is known immediately
· The cache can be accessed in parallel with the address translation
· Once the translation is done, the upper bits are matched against the tags
· Limitation: cache size ≤ (page size × associativity)

[Figure: virtual-memory view of a physical address – physical page number [29:12], page offset [11:0]; cache view – tag [29:12], set [11:6], displacement [5:0]]

Page 18: Computer Architecture  Virtual Memory


Overlapped TLB & Cache Access (cont)

[Figure: the set index, taken from the page offset, indexes the cache while the virtual page number (tag + set) indexes the TLB in parallel; the TLB delivers the physical page number, which is compared against the cache tags of all ways, and a way mux selects the hitting line and delivers the data]

Page 19: Computer Architecture  Virtual Memory


Overlapped TLB & Cache Access (cont)
· Assume the cache is 32 KB, 2-way set associative, 64 bytes/line
– (2^15 / 2 ways) / (2^6 bytes/line) = 2^(15-1-6) = 2^8 = 256 sets
· In order to still allow overlap between the set access and the TLB access
– Take the upper two bits of the set number from bits [1:0] of the VPN
· Physical_addr[13:12] may be different than virtual_addr[13:12]
– The tag is comprised of bits [31:12] of the physical address
– The tag may thus mismatch on bits [13:12] of the physical address
– On a cache miss → allocate the missing line according to its virtual set address and physical tag

[Figure: physical page number [29:12], page offset [11:0]; cache view – tag [29:14], set [13:6] with bits [13:12] taken from VPN[1:0], displacement [5:0]]
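The index arithmetic of this example in C, assuming the 32 KB / 2-way / 64-byte-line geometry above (a sketch only):

```c
#include <stdint.h>

/* Cache geometry from the example: 32 KB, 2-way, 64-byte lines -> 256 sets. */
#define LINE_SHIFT   6                     /* 2^6-byte lines                  */
#define NUM_SETS     256                   /* set index = address bits [13:6] */
#define PAGE_SHIFT   12                    /* 4 KB pages                      */

/* Virtual set index used to start the cache access before translation is
 * done: bits [11:6] come from the page offset, bits [13:12] from VPN[1:0]. */
static inline uint32_t virtual_set_index(uint64_t vaddr)
{
    return (vaddr >> LINE_SHIFT) & (NUM_SETS - 1);   /* vaddr[13:6]          */
}

/* The tag check still uses the physical address bits [31:12] once the TLB
 * delivers the physical page number, so a mismatch in bits [13:12] is caught. */
static inline uint32_t physical_tag(uint64_t paddr)
{
    return (uint32_t)(paddr >> PAGE_SHIFT) & 0xFFFFF; /* paddr[31:12]        */
}
```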

Page 20: Computer Architecture  Virtual Memory


Overlapped TLB & Cache Access (cont)
· Example: a virtual page number ending in …10 is mapped to a physical page number ending in …01
– Access set 10100100₂ in the cache and read out all the tags from the set, while accessing the TLB in parallel
– Match all the read tags against the full physical page number – including also bits [1:0]
· Two virtual pages can be mapped to the same physical page
– With virtual indexing, 2 cache lines can map to the same physical address
– Solution: allow only one virtual alias in the cache at any given time
 On a cache miss, before writing the missed entry to the cache, search for virtual aliases already in the cache and evict them first
 No special work is necessary during a cache hit
· An external snoop supplies only the physical address
– With virtual indexing, all the possible sets must be snooped

Page 21: Computer Architecture  Virtual Memory


More On Page Swap-out
· DMA copies the page to the disk controller
– Reads each byte: executes a snoop-invalidate for each byte in the cache (both L1 and L2)
 If the byte resides in the cache:
  if it is modified, its line is read from the cache into memory
  the line is invalidated
– Writes the byte to the disk controller
– This means that when a page is swapped out of memory
 All data in the caches which belongs to that page is invalidated
 The page on the disk is up to date
· The TLB is snooped
– If the TLB hits for the swapped-out page, the TLB entry is invalidated
· In the page table
– The valid bit in the PTE of the swapped-out page is set to 0
– All the rest of the bits in the PTE may be used by the operating system for keeping the location of the page on the disk

Page 22: Computer Architecture  Virtual Memory


Context Switch
· Each process has its own address space
– Each process has its own page table
– The OS allocates frames in physical memory to each process and updates the page table of each process
– A process cannot access physical memory allocated to another process
 Unless the OS deliberately allocates the same physical frame to 2 processes (for memory sharing)
· On a context switch (sketched below)
– Save the current architectural state to memory
 Architectural registers
 The register that holds the page table base address in memory
– Flush the TLB
– Load the new architectural state from memory
 Architectural registers
 The register that holds the page table base address in memory
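A schematic C sketch of the switch sequence above; the helper functions stand in for the privileged register accesses (on x86 the page-table base register is CR3, and loading it flushes the non-global TLB entries, as discussed later in the deck).

```c
#include <stdint.h>

/* Illustrative per-process saved state. */
typedef struct {
    uint64_t regs[16];          /* architectural registers              */
    uint64_t page_table_base;   /* register holding the page-table base */
} context_t;

/* Placeholders for the real register save/restore and privileged accesses. */
void     save_registers(uint64_t *regs);
void     load_registers(const uint64_t *regs);
uint64_t read_page_table_base(void);
void     write_page_table_base(uint64_t base);  /* on x86 this load also flushes the
                                                   non-global TLB entries            */

void context_switch(context_t *old, const context_t *next)
{
    save_registers(old->regs);                        /* save current architectural state */
    old->page_table_base = read_page_table_base();

    write_page_table_base(next->page_table_base);     /* switch address space; TLB flushed */
    load_registers(next->regs);                       /* load the new architectural state  */
}
```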

Page 23: Computer Architecture  Virtual Memory


Virtually-Addressed Cache
· The cache uses virtual addresses (the tags are virtual)
· Address translation is required only on a cache miss
– The TLB is not in the path to a cache hit
· Aliasing: 2 different virtual addresses mapped to the same physical address
– Two different cache entries holding data for the same physical address
– Must update all cache entries with the same physical address

[Figure: the CPU sends the virtual address to the cache; only on a miss is it translated to a physical address and sent to main memory]

Page 24: Computer Architecture  Virtual Memory


Virtually-Addressed Cache (cont.)
· The cache must be flushed at a task switch
– Solution: include the process ID (PID) in the tag
· How to share memory among processes
– Permit multiple virtual pages to refer to the same physical frame
· Problem: incoherence if they map to different cache entries
– Solution: require sufficiently many common virtual LSBs
– With a direct-mapped cache, it is then guaranteed that they all map to the same cache entry

Page 25: Computer Architecture  Virtual Memory


Paging in x86

Page 26: Computer Architecture  Virtual Memory


x86 Paging – Virtual Memory
· A page can be
– Not yet loaded
– Loaded
– On disk
· A loaded page can be
– Dirty
– Clean
· When a page is not loaded (P bit clear) → a page fault occurs
– It may require throwing out a loaded page to insert the new one
 The OS prioritizes the eviction by LRU and the dirty/clean/avail bits
 A dirty page must be written to disk; a clean page need not be
– The new page is either loaded from disk or "initialized"
– The CPU sets the page "accessed" flag when the page is accessed, and the "dirty" flag when it is written

Page 27: Computer Architecture  Virtual Memory


Page Tables and Directories in 32-bit Mode
· 32-bit mode supports both 4-KByte and 4-MByte pages
– Bit CR4.PSE = 1 (Page Size Extensions) enables using the large page size
· CR3 points to the current page directory (changed per process)
· Page directory
– Holds 1024 page-directory entries (PDEs), each 32 bits
– Each PDE contains a PS (page size) flag
 0 – the entry points to a page table whose entries point to 4-KByte pages
 1 – the entry points directly to a 4-MByte page
· Page table
– Holds up to 1024 page-table entries (PTEs), each 32 bits
– Each PTE points to a 4-KB page in physical memory

Page 28: Computer Architecture  Virtual Memory


32-bit Mode: 4-KB Page Mapping
· 2-level hierarchical mapping
– Page directory and page tables
– Pages / page tables are 4 KB aligned
– Addresses up to 2^20 4-KB pages
· The linear address is divided into
– Directory (10 bits) – points to a PDE in the page directory
 PS bit in the PDE = 0
 The PDE provides a 20-bit, 4-KB-aligned base physical address of a page table
 Present bit in the PDE = 0 → page fault
– Table (10 bits) – points to a PTE in the page table
 The PTE provides a 20-bit, 4-KB-aligned base physical address of a 4-KB page
– Offset (12 bits) – offset within the selected 4-KB page

[Figure: CR3 (PDBR) → 1K-entry page directory → PDE → 1K-entry page table → PTE → 4-KB page → data; the 32-bit linear address is split DIR [31:22] | TABLE [21:12] | OFFSET [11:0]]
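A C sketch of the two-level walk above; it assumes the page directory and page tables can be read directly (e.g., identity-mapped) and omits the present and PS checks described earlier.

```c
#include <stdint.h>

/* 32-bit mode, 4 KB pages: linear address = DIR(10) | TABLE(10) | OFFSET(12). */
static inline uint32_t dir_index(uint32_t lin)    { return (lin >> 22) & 0x3FF; }
static inline uint32_t table_index(uint32_t lin)  { return (lin >> 12) & 0x3FF; }
static inline uint32_t page_offset(uint32_t lin)  { return  lin        & 0xFFF; }

/* Schematic two-level walk: CR3 -> page directory -> page table -> 4 KB page. */
uint32_t walk_4kb(const uint32_t *page_dir /* from CR3 */, uint32_t lin)
{
    uint32_t pde = page_dir[dir_index(lin)];             /* PS = 0: points to a page table */
    const uint32_t *page_table =
        (const uint32_t *)(uintptr_t)(pde & 0xFFFFF000u); /* 20-bit, 4-KB-aligned base      */

    uint32_t pte = page_table[table_index(lin)];
    return (pte & 0xFFFFF000u) | page_offset(lin);        /* physical address               */
}
```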

Page 29: Computer Architecture  Virtual Memory


32-bit Mode: 4-MB Page Mapping
· A PDE directly maps up to 1024 4-MB pages
· The linear address is divided into
– Dir (10 bits) – points to a PDE in the page directory
 PS bit in the PDE = 1
 The PDE provides a 10-bit, 4-MB-aligned base physical address of a 4-MB page
 Present bit in the PDE = 0 → page fault
– Offset (22 bits) – offset within the selected 4-MB page
· Mixing 4-KByte and 4-MByte pages
– When CR4.PSE = 1, both 4-MByte pages and page tables for 4-KByte pages are supported
 If CR4.PSE = 0, 4-MB pages are not supported (the PS flag setting in PDEs is ignored)
– The processor maintains 4-MByte page entries and 4-KByte page entries in separate TLBs

[Figure: CR3 (PDBR) → 1K-entry page directory → PDE → 4-MByte page → data; the linear address is split DIR [31:22] | OFFSET [21:0]]

Page 30: Computer Architecture  Virtual Memory


32-bit Mode: PDE and PTE Format
· 20-bit pointer to a 4-KB-aligned address
· Virtual memory
– Present
– Accessed
– Dirty (in PTE only)
– Page size (in PDE only)
– Global
· Protection
– Writable (R/W)
– User / Supervisor (U/S)
 Only 2 levels per type
· Caching
– Page write-through (PWT)
– Page cache disabled (PCD)
– PAT
· 3 bits for OS usage (AVAIL)

[Figure: PDE (pointing to a 4-KB page table) – page frame address [31:12], AVAIL [11:9], G, PS (0: 4 KByte), A, PCD, PWT, U/S, R/W, P; PTE – page frame address [31:12], AVAIL [11:9], G, PAT, D, A, PCD, PWT, U/S, R/W, P]

Page 31: Computer Architecture  Virtual Memory


PTE (4-KByte Page) Format

[Figure: page base address [31:12], AVAIL [11:9], G (global page), PAT (page table attribute index), D (dirty), A (accessed), PCD (cache disable), PWT (write-through), U/S, R/W, P (present)]

Page 32: Computer Architecture  Virtual Memory


PDE (4-KByte Page Table) Format

[Figure: page table base address [31:12], AVAIL [11:9], G (global page, ignored), PS (page size, 0 indicates 4 KBytes), AVL (available), A (accessed), PCD (cache disable), PWT (write-through), U/S, R/W, P (present)]

Page 33: Computer Architecture  Virtual Memory


PDE (4-MByte Page) Format

[Figure: page base address [31:22], reserved [21:13], PAT (bit 12), AVAIL [11:9], G (global page, ignored), PS (page size, 1 indicates 4 MBytes), D (dirty), A (accessed), PCD (cache disable), PWT (write-through), U/S, R/W, P (present)]

Page 34: Computer Architecture  Virtual Memory


Page Table – Virtual Memory Attributes
· Present (P) flag
– When set, the page is in physical memory and address translation is carried out
– When clear, the page is not in memory
 If the processor attempts to access the page, it generates a page-fault exception
 Bits 1 through 31 are then available to software
– The processor does not set or clear this flag; it is up to the OS to maintain the state of the flag
– If the processor generates a page-fault exception, the OS generally needs to carry out the following operations:
1. Copy the page from disk into physical memory
2. Load the page address into the PTE/PDE and set its present flag
 Other flags, such as the dirty and accessed flags, may also be set at this time
3. Invalidate the current PTE in the TLB
4. Return from the page-fault handler to restart the interrupted program (or task)
· Page size (PS) flag, in PDEs only
– Determines the page size
– When clear, the page size is 4 KBytes and the PDE points to a page table
– When set, the page size is 4 MByte / 2 MByte (PAE = 1), and the PDE points to a page

Page 35: Computer Architecture  Virtual Memory


Page Table – Virtual Memory Attributes
· Accessed (A) flag and Dirty (D) flag
– The OS typically clears these flags when a page / page table is initially loaded into physical memory
– The processor sets the A flag the first time a page / page table is accessed (read or written)
– The processor sets the D flag the first time a page is accessed for a write operation
 The D flag is not used in PDEs that point to page tables
– Both the A and D flags are sticky
 Once set, the processor does not implicitly clear them – only software can clear them
– Used by the OS to manage the transfer of pages / page tables into and out of physical memory
· Global (G) flag
– Indicates a global page when set (and the page global enable (PGE) flag in register CR4 is set)
– 1: the PTE/PDE is not invalidated in the TLB when register CR3 is loaded / on a task switch
 Prevents frequently used pages (e.g., the OS) from being flushed from the TLB
– Only software can set or clear this flag
– Ignored for PDEs that point to page tables (the global attribute of a page is set in PTEs)

Page 36: Computer Architecture  Virtual Memory


Page Table – Caching Attributes
· Page-level write-through (PWT) flag
– Controls the write-through or write-back caching policy of the page / page table
– 1: enables write-through caching
– 0: enables write-back caching
– Ignored if the CD (cache disable) flag in CR0 is set
· Page-level cache disable (PCD) flag
– Controls the caching of individual pages / page tables
– 1: caching of the associated page / page table is prevented
 Used for pages that contain memory-mapped I/O ports or that do not provide a performance benefit when cached
– 0: the page / page table can be cached
– Ignored (assumed as set) if the CD (cache disable) flag in CR0 is set
· Page attribute table index (PAT) flag
– Used along with the PCD and PWT flags to select an entry in the PAT, which in turn selects the memory type for the page

Page 37: Computer Architecture  Virtual Memory


Page Table – Protection Attributes
· Read/write (R/W) flag
– Specifies the read/write privileges for a page or group of pages (in the case of a PDE that points to a page table)
– 0: the page is read-only
– 1: the page can be read and written
· User/supervisor (U/S) flag
– Specifies the user/supervisor privileges for a page or group of pages (in the case of a PDE that points to a page table)
– 0: supervisor privilege level
– 1: user privilege level

Page 38: Computer Architecture  Virtual Memory


Misc Issues
· Memory aliasing
– Memory aliasing is supported by allowing two PDEs to point to a common PTE
– Software that implements memory aliasing must manage the consistency of the accessed and dirty bits in the PDE and PTE
 Inconsistency may lead to a processor deadlock
· Base address of the page directory
– The physical address of the current page directory is stored in the CR3 register
 Also called the page directory base register or PDBR
– The PDBR is typically loaded with a new value as part of a task switch
– The page directory pointed to by the PDBR may be swapped out of physical memory
– The OS must ensure that the page directory indicated by the PDBR image in a task's TSS is present in physical memory before the task is dispatched
– The page directory must also remain in memory as long as the task is active

Page 39: Computer Architecture  Virtual Memory


PAE – Physical Address Extension
· When the PAE (physical address extension) flag in CR4 is set
– Physical addresses are extended to up to 52 bits
 The linear address remains 32 bits
– Each page-table entry becomes 64 bits to hold the extra physical address bits
 The page directory and page tables remain 4 KB in size
 The number of entries in each is halved to 512, indexed by 9 instead of 10 bits
· A new 4-entry page directory pointer table is added
– Indexed by bits [31:30] of the linear address
– Each entry points to a page directory
– CR3 contains the page-directory-pointer-table base address
 It provides the most significant bits of the physical address of the first byte of the page-directory-pointer table, forcing the table to be located on a 32-byte boundary

Page 40: Computer Architecture  Virtual Memory


4-KB Page Mapping with PAE
· The linear address is divided into
– Dir Ptr (2 bits) – points to a page-directory-pointer-table entry
 The selected entry provides the base physical address of a page directory
 The base is M-12 bits, 4 KB aligned
– Dir (9 bits) – points to a PDE in the page directory
 PS in the PDE = 0
 The PDE provides the base physical address of a page table: M-12 bits, 4 KB aligned
– Table (9 bits) – points to a PTE in the page table
 The PTE provides the base physical address of a 4-KB page: M-12 bits, 4 KB aligned
– Offset (12 bits) – offset within the selected 4-KB page

[Figure: CR3 (32 bits, 32-byte aligned) → 4-entry page directory pointer table → 512-entry page directory → PDE → 512-entry page table → PTE → 4-KByte page → data; the linear address is split DIR PTR [31:30] | DIR [29:21] | TABLE [20:12] | OFFSET [11:0]]

Page 41: Computer Architecture  Virtual Memory


2-MB Page Mapping with PAE
· The linear address is divided into
– Dir Ptr (2 bits) – points to a page-directory-pointer-table entry
 The selected entry provides the base physical address of a page directory
 The base is M-12 bits, 4 KB aligned
– Dir (9 bits) – points to a PDE in the page directory
 PS in the PDE = 1
 The PDE provides the base physical address of a 2-MB page
 The base is M-21 bits, 2 MB aligned
– Offset (21 bits) – offset within the selected 2-MB page

[Figure: CR3 (32 bits, 32-byte aligned) → 4-entry page directory pointer table → 512-entry page directory → PDE → 2-MByte page → data; the linear address is split DIR PTR [31:30] | DIR [29:21] | OFFSET [20:0]]

Page 42: Computer Architecture  Virtual Memory


PTE/PDE/PDP Entry Format with PAE

Page 43: Computer Architecture  Virtual Memory


Execute-Disable Bit
· Supported only with PAE enabled / in 64-bit mode
– Bit [63] in the PML4 entry, PDP entry, PDE, and PTE
· If the execute-disable bit of a memory page is set
– The page can be used only as data
– An attempt to execute code from a memory page with the execute-disable bit set causes a page-fault exception
– Setting the execute-disable bit at any level of the paging structure protects all pages pointed to from this entry

Page 44: Computer Architecture  Virtual Memory


Paging in 64-bit Mode
· A 4th page-mapping table is added: the page map level 4 table (PML4)
– The base physical address of the PML4 is stored in CR3
– A PML4 entry contains the base physical address of a page directory pointer table
· The page directory pointer table is expanded to 512 8-byte entries
– Indexed by 9 bits of the linear address
· The size of the PDE/PTE tables remains 512 eight-byte entries
– Each indexed by nine linear-address bits
· The total number of linear-address index bits becomes 48
· The PS flag in PDEs selects between 4-KByte and 2-MByte page sizes
– The CR4.PSE bit is ignored
· Each entry in the PML4, PDP and DIR provides the base address of the next-level table
– maxphyaddr - 12 bits, 4 KB aligned (for maxphyaddr = 40 → 28 bits)
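The resulting linear-address split, as small C helpers (a sketch for the 4-KByte page case):

```c
#include <stdint.h>

/* 64-bit mode: 48 linear-address bits are used, split 9 (PML4) | 9 (PDP) |
 * 9 (DIR) | 9 (TABLE) | 12 (OFFSET); bits [63:48] are a sign extension of bit 47. */
static inline unsigned pml4_index(uint64_t lin) { return (lin >> 39) & 0x1FF; }
static inline unsigned pdpt_index(uint64_t lin) { return (lin >> 30) & 0x1FF; }
static inline unsigned pd_index(uint64_t lin)   { return (lin >> 21) & 0x1FF; }
static inline unsigned pt_index(uint64_t lin)   { return (lin >> 12) & 0x1FF; }
static inline unsigned offset_4k(uint64_t lin)  { return  lin        & 0xFFF; }
```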

Page 45: Computer Architecture  Virtual Memory


4-KB Page Mapping in 64-bit Mode

[Figure: CR3 (40 bits, 4 KB aligned) → 512-entry PML4 table → PML4 entry → 512-entry page directory pointer table → PDP entry → 512-entry page directory → PDE → 512-entry page table → PTE → 4-KByte page → data; the linear address is split sign ext. [63:48] | PML4 [47:39] | PDP [38:30] | DIR [29:21] | TABLE [20:12] | OFFSET [11:0], and each entry provides an M-12-bit, 4-KB-aligned base address of the next level]

Page 46: Computer Architecture  Virtual Memory


2-MB Page Mapping in 64-bit Mode

[Figure: CR3 (40 bits, 4 KB aligned) → 512-entry PML4 table → PML4 entry → 512-entry page directory pointer table → PDP entry → 512-entry page directory → PDE → 2-MByte page → data; the linear address is split sign ext. [63:48] | PML4 [47:39] | PDP [38:30] | DIR [29:21] | OFFSET [20:0]]

Page 47: Computer Architecture  Virtual Memory


1-GB Page Mapping in 64-bit Mode

[Figure: CR3 (40 bits, 4 KB aligned) → 512-entry PML4 table → PML4 entry → 512-entry page directory pointer table → PDP entry → 1-GByte page → data; the linear address is split sign ext. [63:48] | PML4 [47:39] | PDP [38:30] | OFFSET [29:0]]

Page 48: Computer Architecture  Virtual Memory


PTE/PDE/PDP/PML4 Entry Format

Page 49: Computer Architecture  Virtual Memory


TLBs
· The processor saves the most recently used PDEs and PTEs in TLBs
– Separate TLBs for the data and instruction caches
– Separate TLBs for 4-KByte and 2/4-MByte page sizes
· The OS, running at privilege level 0, can invalidate TLB entries
– The INVLPG instruction invalidates a specific PTE in the TLB
 This instruction ignores the setting of the G flag
– Whenever a PDE/PTE is changed (including when the present flag is set to zero), the OS must invalidate the corresponding TLB entry (see the sketch below)
– All (non-global) TLB entries are automatically invalidated when CR3 is loaded
· The global (G) flag prevents frequently used pages from being automatically invalidated on a task switch
– The entry remains in the TLB indefinitely
– Only INVLPG can invalidate a global page entry
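For example, after clearing the present bit of a PTE the OS can issue INVLPG on the corresponding linear address (privilege level 0 only). A GCC-style inline-assembly sketch:

```c
#include <stdint.h>

/* INVLPG takes the linear address whose translation should be invalidated. */
static inline void invlpg(void *linear_addr)
{
    __asm__ volatile ("invlpg (%0)" :: "r" (linear_addr) : "memory");
}

/* Illustrative use: clear the present bit of a (64-bit) PTE, then drop any
 * stale copy of the translation from the TLB.                               */
void unmap_page(volatile uint64_t *pte, void *linear_addr)
{
    *pte &= ~1ull;          /* P (present) = 0                               */
    invlpg(linear_addr);    /* the corresponding TLB entry must be invalidated */
}
```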

Page 50: Computer Architecture  Virtual Memory


Paging in VAX

Page 51: Computer Architecture  Virtual Memory


VM in VAX: Address Format

· Page size: 2^9 bytes = 512 bytes
· Virtual address: bits [31:30] select the space, bits [29:9] are the virtual page number, bits [8:0] are the page offset
– 00 – P0 process space (code and data)
– 01 – P1 process space (stack)
– 10 – S0 system space
– 11 – S1
· Physical address: bits [29:9] are the physical frame number, bits [8:0] are the page offset

Page 52: Computer Architecture  Virtual Memory


VM in VAX: Virtual Address Spaces

[Figure: each process (Process0 … Process3) has its own P0 and P1 spaces in addresses 0–7FFFFFFF, while the S0 system space starting at 80000000 is shared]
· P0 process code & global vars grow upward
· P1 process stack & local vars grow downward
· S0 system space grows upward, generally static

Page 53: Computer Architecture  Virtual Memory


Page Table Entry (PTE)

[Figure: VAX PTE – V | PROT | M | Z | OWN | … | Physical Frame Number [20:0]]
· Valid bit: = 1 if the page is mapped to main memory; otherwise page fault:
 • The page is in the disk swap area
 • The address indicates the page location on the disk
· 4 protection bits (PROT)
· Modified bit (M)
· Zero bit (Z): indicates if the page was cleaned (zeroed)
· 3 ownership bits (OWN)

Page 54: Computer Architecture  Virtual Memory


System Space Address Translation

[Figure: for a system-space (S0) virtual address, the PTE physical address is computed as SBR (the system page table base physical address) + 4 × VPN; the PTE is read from memory, and the physical address is the PFN from the PTE concatenated with the unchanged page offset [8:0]]

Page 55: Computer Architecture  Virtual Memory


System Space Address Translation (cont)

[Figure: the S0 virtual address (bits [31:30] = 10, VPN, offset) is translated by adding VPN × 4 to SBR, reading the PTE, and concatenating the PFN with the page offset]

Page 56: Computer Architecture  Virtual Memory


P0 Space Address Translation

[Figure: for a P0-space virtual address (bits [31:30] = 00, VPN, offset), the S0-space virtual address of the PTE is computed as P0BR (the P0 page table base virtual address) + 4 × VPN; the PTE is fetched using the system-space translation algorithm, and the physical address is the PFN from the PTE concatenated with the page offset]

Page 57: Computer Architecture  Virtual Memory


P0 Space Address Translation (cont)

[Figure: P0BR + VPN × 4 yields an S0 virtual address (bits [31:30] = 10, VPN′, offset′); the system page table at SBR + VPN′ × 4 gives PFN′, which forms the physical address of the PTE; reading that PTE gives the PFN of the requested page, which is concatenated with the original page offset]

Page 58: Computer Architecture  Virtual Memory


P0 Space Address Translation Using TLB

[Figure: flow – access the process TLB with the P0 virtual address; on a hit, get the PTE of the requested page from the process TLB; on a miss, calculate the PTE virtual address (in S0) as P0BR + 4 × VPN and access the system TLB; on a system-TLB hit get that PTE from the system TLB, otherwise access the system page table at SBR + 4 × VPN(PTE); then get the PTE of the requested page from the process page table in memory; finally use the PFN to calculate the physical address and access memory]
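A C sketch of this flow; the 21-bit PFN field in bits [20:0] follows the PTE format slide, and the TLB lookup and physical-memory read helpers are placeholders.

```c
#include <stdint.h>
#include <stdbool.h>

/* VAX: 512-byte pages, so offset = bits [8:0] and VPN = bits [29:9]. */
#define VAX_PAGE_SHIFT  9
#define VAX_OFFSET_MASK ((1u << VAX_PAGE_SHIFT) - 1)
#define VAX_PFN_MASK    0x001FFFFFu          /* PFN in PTE bits [20:0]        */

uint32_t P0BR;   /* P0 page-table base: a virtual address in S0 space          */
uint32_t SBR;    /* system page-table base: a physical address                 */

bool     tlb_lookup(uint32_t vaddr, uint32_t *pte);  /* process or system TLB  */
uint32_t phys_read32(uint32_t paddr);                /* memory-access placeholder */

/* Schematic P0-space translation following the flow chart above. */
uint32_t vax_translate_p0(uint32_t vaddr)
{
    uint32_t pte;
    if (!tlb_lookup(vaddr, &pte)) {
        /* Process-TLB miss: the PTE itself lives at a virtual S0 address.     */
        uint32_t pte_vaddr = P0BR + 4 * (vaddr >> VAX_PAGE_SHIFT);
        uint32_t sys_pte;
        if (!tlb_lookup(pte_vaddr, &sys_pte)) {
            /* System-TLB miss: read the system page table at SBR + 4*VPN.     */
            sys_pte = phys_read32(SBR + 4 * ((pte_vaddr & 0x3FFFFFFF) >> VAX_PAGE_SHIFT));
        }
        uint32_t pte_paddr = ((sys_pte & VAX_PFN_MASK) << VAX_PAGE_SHIFT)
                           | (pte_vaddr & VAX_OFFSET_MASK);
        pte = phys_read32(pte_paddr);        /* PTE of the requested page      */
    }
    return ((pte & VAX_PFN_MASK) << VAX_PAGE_SHIFT) | (vaddr & VAX_OFFSET_MASK);
}
```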

Page 59: Computer Architecture  Virtual Memory


Backup

Page 60: Computer Architecture  Virtual Memory


Why Virtual Memory?
· Generality
– Ability to run programs larger than the size of physical memory
· Storage management
– Allocation/deallocation of variable-sized blocks is costly and leads to (external) fragmentation
· Protection
– Regions of the address space can be read-only, execute-only, …
· Flexibility
– Portions of a program can be placed anywhere, without relocation
· Storage efficiency
– Retain only the most important portions of the program in memory
· Concurrent I/O
– Execute other processes while loading/dumping a page
· Expandability
– Can leave room in the virtual address space for objects to grow
· Performance

Page 61: Computer Architecture  Virtual Memory


Address Translation with a TLB

[Figure: the virtual address (virtual page number [n-1:p], page offset [p-1:0]) is looked up in the TLB (valid, tag, physical page number); on a TLB hit the physical address is formed and split into cache tag, index and byte offset; the cache compares its tag and, on a cache hit, delivers the data]