aos lab 7: page tables


Lab 7: Page tables
Advanced Operating Systems

Zubair Nabi

zubair.nabi@itu.edu.pk

March 27, 2013

Introduction

Page tables allow the OS to:

• Multiplex the address spaces of different processes onto a single physical memory space

• Protect the memories of different processes

• Map the same kernel memory in several address spaces

• Map the same user memory more than once in one address space (user pages are also mapped into the kernel’s physical view of memory)


Page table structure

• An x86 page table contains 2^20 page table entries (PTEs)

• Each PTE contains a 20-bit physical page number (PPN) and some flags

• The paging hardware translates virtual addresses to physical ones by:

1. Using the top 20 bits of the virtual address to index into the page table to find a PTE
2. Replacing the top 20 bits with the PPN in the PTE
3. Copying the lower 12 bits verbatim from the virtual address to the physical address

• Translation takes place at the granularity of 2^12-byte (4 KB) chunks, called pages
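
A minimal sketch of this flat view of translation in C (the lookup helper and the 2^20-entry array are illustrative only; presence and permission checks are omitted):

#include <stdint.h>

// pt points to an array of 2^20 PTEs; each PTE holds a 20-bit PPN in its
// upper bits and flag bits in the low 12 bits.
uint32_t translate(const uint32_t *pt, uint32_t va)
{
    uint32_t pte = pt[va >> 12];        // top 20 bits index the page table
    uint32_t ppn = pte & 0xFFFFF000;    // physical page number from the PTE
    return ppn | (va & 0xFFF);          // keep the low 12 bits (page offset)
}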


Page table structure (2)

• A page table is stored in physical memory as a two-level tree

• Root of the tree: a 4 KB page directory

• Each page directory entry (PDE) points to one page table page
• Each page table page contains 1024 32-bit PTEs

• 1024 × 1024 = 2^20 PTEs in total


Translation

• Use top 10 bits of the virtual address to index the page directory

• If the PDE is present, use the next 10 bits to index the page table page and obtain a PTE

• If either the PDE or the PTE is missing, raise a fault

• This two-level structure increases efficiency

• How?
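
Concretely, the split of a 32-bit virtual address looks like this (macros written to mirror xv6's PDX/PTX from mmu.h; shown as a sketch, with uint standing for xv6's unsigned int typedef):

// | 10-bit directory index | 10-bit table index | 12-bit page offset |
#define PDX(va)   (((uint)(va) >> 22) & 0x3FF)   // index into the page directory
#define PTX(va)   (((uint)(va) >> 12) & 0x3FF)   // index into the page table page
#define PGOFF(va) ((uint)(va) & 0xFFF)           // byte offset within the page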


Permissions

Each PTE contains associated flags

Flag      Description
PTE_P     Whether the page is present
PTE_W     Whether the page can be written to
PTE_U     Whether user programs can access the page
PTE_PWT   Whether write-through or write-back
PTE_PCD   Whether caching is disabled
PTE_A     Whether the page has been accessed
PTE_D     Whether the page is dirty
PTE_PS    Page size
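
For instance, a writable, user-accessible mapping combines the physical page address with these flags (a sketch; the flag values are the standard x86 bit positions, as defined in xv6's mmu.h):

#define PTE_P  0x001   // present
#define PTE_W  0x002   // writable
#define PTE_U  0x004   // accessible from user mode

// Build a PTE for a user page: 20-bit physical page address plus flags.
uint make_user_pte(uint pa)
{
  return (pa & ~0xFFF) | PTE_P | PTE_W | PTE_U;
}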

Process address space

• Each process has a private address space which is switched on a context switch (via switchuvm)

• Each address space starts at 0 and goes up to KERNBASE, allowing 2 GB of space (specific to xv6)

• Each time a process requests more memory, the kernel:
1. Finds free physical pages
2. Adds PTEs that point to these physical pages in the process’ page table
3. Sets PTE_U, PTE_W, and PTE_P
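
A rough sketch of those three steps, in the spirit of xv6's allocuvm (simplified: no page-rounding and no cleanup on failure; kalloc, mappages, V2P, PGSIZE, and the PTE_* flags are assumed to be the xv6 definitions):

// Grow a process's address space from oldsz to newsz (both page-aligned here).
int grow_process(pde_t *pgdir, uint oldsz, uint newsz)
{
  uint a;
  for(a = oldsz; a < newsz; a += PGSIZE){
    char *mem = kalloc();                 // 1. find a free physical page
    if(mem == 0)
      return -1;
    memset(mem, 0, PGSIZE);
    // 2. and 3. add a PTE for this page; mappages sets PTE_P itself
    if(mappages(pgdir, (char*)a, PGSIZE, V2P(mem), PTE_W|PTE_U) < 0)
      return -1;
  }
  return 0;
}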


Process address space (2)

Each process’ address space also contains mappings (above KERNBASE) for the kernel to run. Specifically:

• KERNBASE:KERNBASE+PHYSTOP is mapped to 0:PHYSTOP

• The kernel can use its own instructions and data

• The kernel can directly write to physical memory (for instance,when creating page table pages)

• A shortcoming of this approach is that the kernel can only make use of 2 GB of memory

• PTE_U is not set for all entries above KERNBASE
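
Because of this direct mapping, converting between kernel virtual and physical addresses is simple arithmetic; the macros below mirror xv6's V2P/P2V from memlayout.h (shown as a sketch):

#define KERNBASE 0x80000000                           // first kernel virtual address
#define V2P(a) (((uint)(a)) - KERNBASE)               // kernel virtual -> physical
#define P2V(a) ((void *)(((char *)(a)) + KERNBASE))   // physical -> kernel virtual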


Example: Creating an address space for main

• main makes a call to kvmalloc

• kvmalloc creates a page table with kernel mappings above KERNBASE and switches to it

void kvmalloc(void)
{
  kpgdir = setupkvm();
  switchkvm();
}


setupkvm

1. Allocates a page of memory to hold the page directory
2. Calls mappages to install kernel mappings (kmap):

• Instructions and data
• Physical memory up to PHYSTOP
• Memory ranges for I/O devices

Does not install mappings for user memory


Code: kmap

static struct kmap {
  void *virt;
  uint phys_start;
  uint phys_end;
  int perm;
} kmap[] = {
  { (void*)KERNBASE, 0,             EXTMEM,    PTE_W},  // I/O space
  { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0},      // kern text
  { (void*)data,     V2P(data),     PHYSTOP,   PTE_W},  // kern data
  { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W},  // more devices
};

Code: setupkvm

pde_t* setupkvm(void)
{
  pde_t *pgdir;
  struct kmap *k;

  if((pgdir = (pde_t*)kalloc()) == 0)
    return 0;
  memset(pgdir, 0, PGSIZE);

  for(k = kmap; k < &kmap[NELEM(kmap)]; k++)
    if(mappages(pgdir, k->virt, k->phys_end - k->phys_start,
                (uint)k->phys_start, k->perm) < 0)
      return 0;
  return pgdir;
}

mappages

• Installs virtual-to-physical mappings for a range of addresses

• For each virtual address:
1. Calls walkpgdir to find the address of the PTE for that address
2. Initializes the PTE with the relevant PPN and the desired permissions
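
A simplified sketch of that loop, close in spirit to xv6's mappages but with address rounding and the duplicate-mapping check removed (va, size, and pa are assumed page-aligned):

static int mappages(pde_t *pgdir, void *va, uint size, uint pa, int perm)
{
  char *a = (char*)va;
  char *last = (char*)va + size - PGSIZE;

  for(;;){
    pte_t *pte = walkpgdir(pgdir, a, 1);   // find (or create) the PTE slot
    if(pte == 0)
      return -1;
    *pte = pa | perm | PTE_P;              // install PPN + permissions, mark present
    if(a == last)
      break;
    a += PGSIZE;
    pa += PGSIZE;
  }
  return 0;
}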


walkpgdir

1. Uses the upper 10 bits of the virtual address to find the PDE

2. Uses the next 10 bits to find the PTE
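
A sketch of walkpgdir following those two steps (simplified from the xv6 version; PTE_ADDR strips the flag bits from an entry, and the alloc parameter controls whether a missing page table page is created):

static pte_t* walkpgdir(pde_t *pgdir, const void *va, int alloc)
{
  pde_t *pde = &pgdir[PDX(va)];             // step 1: upper 10 bits -> PDE
  pte_t *pgtab;

  if(*pde & PTE_P){
    pgtab = (pte_t*)P2V(PTE_ADDR(*pde));    // page table page already present
  } else {
    if(!alloc || (pgtab = (pte_t*)kalloc()) == 0)
      return 0;                             // missing and unable (or not allowed) to allocate
    memset(pgtab, 0, PGSIZE);
    *pde = V2P(pgtab) | PTE_P | PTE_W | PTE_U;  // install the new page table page
  }
  return &pgtab[PTX(va)];                   // step 2: next 10 bits -> PTE
}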


Physical memory allocation

• Physical memory between the end of the kernel and PHYSTOP is allocated on the fly

• Free pages are maintained through a linked list (struct run *freelist) protected by a spinlock

1. Allocation: remove a page from the list: kalloc()
2. Deallocation: add the page to the list: kfree()

struct {
  struct spinlock lock;
  int use_lock;
  struct run *freelist;
} kmem;
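
Allocation and deallocation then reduce to popping and pushing on kmem.freelist; a simplified sketch (locking and xv6's fill-with-junk debugging step omitted):

struct run { struct run *next; };   // each free page stores the list link in itself

// Allocate one 4096-byte page; returns 0 if no memory is available.
char* kalloc(void)
{
  struct run *r = kmem.freelist;
  if(r)
    kmem.freelist = r->next;          // pop the head of the free list
  return (char*)r;
}

// Return the page starting at v (page-aligned) to the free list.
void kfree(char *v)
{
  struct run *r = (struct run*)v;
  r->next = kmem.freelist;            // push onto the free list
  kmem.freelist = r;
}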


exec

• Creates the user part of an address space from the program binary, in Executable and Linkable Format (ELF)

• Initializes instructions, data, and stack


Today’s task

• Most operating systems implement “anticipatory paging” in which, on a page fault, the next few consecutive pages are also loaded to preemptively reduce page faults

• Chalk out a design to implement this strategy in xv6
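
As a starting point only (stock xv6 does not do demand paging, so every name below is hypothetical rather than an existing xv6 function), the core of such a design is a fault handler that maps a small window of pages instead of a single one:

#define PREFETCH 4   // hypothetical policy: pages to bring in per fault

// Hypothetical handler invoked from trap() on a page fault at fault_va.
void handle_page_fault(pde_t *pgdir, uint fault_va)
{
  uint start = PGROUNDDOWN(fault_va);
  uint va;
  // Map the faulting page plus the next few consecutive pages so that
  // sequential accesses do not fault again immediately.
  for(va = start; va < start + PREFETCH*PGSIZE; va += PGSIZE){
    if(page_is_mapped(pgdir, va))        // hypothetical helper
      continue;
    char *mem = kalloc();
    if(mem == 0)
      break;                             // out of memory: stop prefetching
    memset(mem, 0, PGSIZE);
    mappages(pgdir, (char*)va, PGSIZE, V2P(mem), PTE_W|PTE_U);
  }
}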

Reading(s)

• Chapter 2, “Page tables” from “xv6: a simple, Unix-like teachingoperating system”
