Address Translation


TRANSCRIPT

Page 1: Address Translation

1

Address Translation

• Memory Allocation
  – Linked lists
  – Bit maps

• Options for managing memory
  – Base and Bound
  – Segmentation
  – Paging
    • Paged page tables
    • Inverted page tables
  – Segmentation with Paging

Page 2: Address Translation

2

Memory Management with Linked Lists

• Hole – a block of available memory; holes of various sizes are scattered throughout memory.

• When a process arrives, it is allocated memory from a hole large enough to accommodate it.

• The operating system maintains two linked lists: a) allocated partitions, b) free partitions (holes)

[Figure: successive snapshots of memory as processes come and go, each showing the OS region, allocated partitions for processes 2, 5, 8, 9, and 10, and the holes left between them.]
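
As a concrete illustration (not from the slides), below is a minimal C sketch of the two lists the OS might keep; the struct and field names are invented for the example.

```c
#include <stddef.h>

/* One node per memory region, kept on either the allocated list or
   the free list. Names are illustrative, not a real OS's API. */
struct region {
    size_t start;          /* first address of the region  */
    size_t length;         /* size of the region in bytes  */
    struct region *next;   /* next region on the same list */
};

struct region *allocated_list;  /* partitions currently in use    */
struct region *free_list;       /* holes available for allocation */
```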

Page 3: Address Translation

3

Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes:

• First-fit: Allocate the first hole that is big enough.

• Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.

• Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in both speed and storage utilization.
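
A minimal first-fit sketch, assuming the illustrative `struct region` free list from the sketch above; it simply shrinks the chosen hole from the front (a real allocator would also unlink holes that become empty and coalesce neighboring holes on free).

```c
/* Allocate n bytes from the first hole that is big enough.
   Returns the start address, or (size_t)-1 if no hole fits. */
size_t first_fit_alloc(struct region *free_list, size_t n)
{
    for (struct region *h = free_list; h != NULL; h = h->next) {
        if (h->length >= n) {
            size_t start = h->start;
            h->start  += n;        /* carve the request off the front */
            h->length -= n;        /* an empty hole would be unlinked */
            return start;
        }
    }
    return (size_t)-1;             /* no hole is big enough */
}
```

Best-fit would instead remember the smallest hole with `length >= n` seen over the whole list; worst-fit would remember the largest.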

Page 4: Address Translation

4

Memory Allocation with Bit Maps

• Part of memory with 5 processes and 3 holes
  – tick marks show allocation units
  – shaded regions are free

• Corresponding bit map

• Same information as a list
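
A rough sketch of bitmap allocation in C, assuming one bit per allocation unit (1 = allocated); the sizes and names are invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_UNITS 1024                 /* allocation units being tracked */
static uint8_t bitmap[NUM_UNITS / 8];  /* 1 bit per unit: 1 = allocated  */

static int  is_free(size_t u)        { return !(bitmap[u / 8] & (1u << (u % 8))); }
static void mark_allocated(size_t u) { bitmap[u / 8] |= (1u << (u % 8)); }

/* Find a run of n consecutive free units, mark it allocated, and
   return the index of its first unit, or -1 if no such run exists. */
long bitmap_alloc(size_t n)
{
    size_t run = 0;
    for (size_t u = 0; u < NUM_UNITS; u++) {
        run = is_free(u) ? run + 1 : 0;
        if (run == n) {
            size_t first = u - n + 1;
            for (size_t i = first; i <= u; i++)
                mark_allocated(i);
            return (long)first;
        }
    }
    return -1;
}
```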

Page 5: Address Translation

5

Fragmentation

• External Fragmentation
  – There is enough total memory, but it is not contiguous
  – So some requests cannot be satisfied
  – Ex: first fit wastes about 1/3 of memory on average

• Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.

• Reduce external fragmentation by compaction
  – Shuffle memory contents to place all free memory together in one large block.

Page 6: Address Translation

6

Segmentation

• A segment is a region of logically contiguous memory.

• The idea is to generalize base and bounds by allowing a table of base-and-bound pairs.

• Break virtual memory into segments

• Use contiguous physical memory per segment

• Divide each virtual address into:
  – segment number
  – segment offset
  – Note: the compiler does this, not the hardware

Page 7: Address Translation

7

Segmentation – Address Translation

• Use the segment # to index into the segment table
  – the segment table is a table of base/limit pairs

• Compare the offset to the limit, then add the base – exactly the same as the base/bound scheme

• If the offset is greater than the limit (or the access is illegal)
  – trap: segmentation fault (see the sketch below)
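
The check-and-add logic as a rough C sketch; the table layout and the use of -1 to stand in for the trap are assumptions for illustration, not any particular hardware's format.

```c
#include <stdint.h>
#include <stdbool.h>

struct seg_entry {
    uint32_t base;    /* physical start of the segment  */
    uint32_t limit;   /* segment length in bytes        */
    bool     valid;   /* is this segment mapped at all? */
};

#define NUM_SEGMENTS 4                     /* e.g. a 2-bit segment number */
struct seg_entry seg_table[NUM_SEGMENTS];

/* Translate (segment #, offset); -1 stands in for the trap that
   raises a segmentation fault. */
int64_t seg_translate(uint32_t seg, uint32_t offset)
{
    if (seg >= NUM_SEGMENTS || !seg_table[seg].valid)
        return -1;                         /* illegal segment: fault */
    if (offset >= seg_table[seg].limit)
        return -1;                         /* past the limit: fault  */
    return (int64_t)seg_table[seg].base + offset;
}
```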

Page 8: Address Translation

8

Segmentation Hardware

Page 9: Address Translation

9

For example, what does memory look like with this segment table, in virtual memory and in physical memory? Assume a 2-bit segment ID and a 12-bit segment offset.

virtual segment #    physical segment start    segment size
code                 0x4000                    0x700
data                 0                         0x500
-                    0                         0
stack                0x2000                    0x1000
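
For instance, with a 2-bit segment number and a 12-bit offset, virtual address 0x3200 has segment number 3 (the stack) and offset 0x200; the offset is below the 0x1000 limit, so the physical address is 0x2000 + 0x200 = 0x2200. Virtual address 0x2800 falls in segment 2, which is unmapped, so it would fault.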

Page 10: Address Translation

10

[Figure: the resulting layout. Virtual memory: code at 0x0–0x6ff, data at 0x1000–0x14ff, stack at 0x3000–0x3fff. Physical memory: data at 0x0–0x4ff, stack at 0x2000–0x2fff, code at 0x4000–0x46ff.]

Page 11: Address Translation

11

Segmentation (cont.)

• This should seem a bit strange: the virtual address space has gaps in it! Each segment gets mapped to contiguous locations in physical memory, but there may be gaps between segments.

• But a correct program will never address the gaps; if it does, the hardware traps to the kernel, which then core dumps the process.

• Minor exception: the stack and heap can grow. In UNIX, sbrk() increases the size of the heap segment. For the stack, just take the fault and the system automatically increases the size of the stack.

• Detail: Need protection bits in the segment table. For example, the code segment would be read-only; the data and stack segments would be read-write.

Page 12: Address Translation

12

Segmentation Pros & Cons

+ Efficient for sparse address spaces
  • Can keep the segment table in registers

+ Easy to share whole segments (for example, code segment)

+ Easy for address space to grow

– Complicated memory allocation: still need first fit, best fit, etc., and re-shuffling to coalesce free fragments if no single free space is big enough for a new segment

How do we make memory allocation simple and easy?

Page 13: Address Translation

13

Paging

• Allocate physical memory in fixed-size chunks of memory, or pages.

• Simpler, because it allows the use of a bitmap. What is a bitmap?

  001111100000001100

  Each bit represents one page of physical memory – 1 means allocated, 0 means unallocated. Much simpler than base & bounds or segmentation.

• OS controls mapping: any page of virtual memory can go anywhere in physical memory

Page 14: Address Translation

14

Paging (cont.)

• Avoids fitting variable-sized memory units

• Break physical memory into frames
  – Frame size determined by hardware

• Break virtual memory into pages
  – Pages and frames are the same size

• Divide each virtual address into:
  – Page number
  – Page offset

• Note:
  – With paging, the hardware splits the address
  – With segmentation, the compiler generates segmented code.

Page 15: Address Translation

15

Paging – Address Translation

• Index into the page table with the high-order bits of the virtual address
  – Get the physical frame number

• Append the page offset to that frame number

• Present the resulting address to memory

Note: the kernel keeps track of free frames; this can be done with a bitmap.
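
A sketch of the split-and-append step, assuming a one-level table, 4K pages, and no valid/protection bits (all simplifications for illustration):

```c
#include <stdint.h>

#define PAGE_SHIFT  12                       /* 4 KB pages, for example */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

extern uint32_t page_table[];   /* index = virtual page number,
                                   value = physical frame number */

uint32_t page_translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;   /* high-order bits        */
    uint32_t offset = vaddr & OFFSET_MASK;   /* low-order bits         */
    uint32_t frame  = page_table[vpn];       /* page-table lookup      */
    return (frame << PAGE_SHIFT) | offset;   /* append offset to frame */
}
```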

Page 16: Address Translation

16

Address Translation Architecture

Page 17: Address Translation

17

Paging Example

[Figure: virtual memory holds the 12 bytes A–L as three 4-byte pages; a page table with entries 4, 3, 1 maps them onto scattered frames of physical memory.]

Where is virtual address 6? 9?

Note: Page size is 4 bytes
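
Reading the page table in the figure as mapping pages 0, 1, 2 to frames 4, 3, 1: virtual address 6 is page 1 (6 ÷ 4) with offset 2, so it maps to frame 3 and physical address 3 × 4 + 2 = 14; virtual address 9 is page 2 with offset 1, so it maps to frame 1 and physical address 1 × 4 + 1 = 5.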

Page 18: Address Translation

18

Paging Issues

• Fragmentation

• Page Size

• Protection and Sharing

• Page Table Size

• What if page table too big for main memory?

Page 19: Address Translation

19

Fragmentation and Page Size

• Fragmentation
  – No external fragmentation
    • A page can go anywhere in main memory
  – Internal fragmentation
    • Average: ½ page per address space

• Page Size
  – Small size?
    • Reduces (on average) internal fragmentation
  – Large size?
    • Better for page table size
    • Better for disk I/O
    • Better for number of page faults
  – Typical page sizes: 4K, 8K, 16K

Page 20: Address Translation

20

Protection and Sharing

• Page protection – use a bit
  – code: read only
  – data: read/write

• Sharing
  – Just "map pages in" to your address space
  – Ex: if "vi" is at frames 0 through 10, all processes can adjust their page tables to "point to" those frames.

Page 21: Address Translation

21

Page Tables can be Large

• The page table is kept in main memory.

• A page-table base register (PTBR) points to the page table.

• A page-table length register (PTLR) indicates the size of the page table.

• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.

• The page table can be huge (a million or more entries; see the example below) – use a multi-level page table
  – 2 page numbers and one offset
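
For example, with 32-bit virtual addresses and 4K pages there are 2^32 / 2^12 = 2^20 (about a million) pages; at 4 bytes per entry, a flat page table would occupy about 4 MB per process, which is why the page table itself is paged.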

Page 22: Address Translation

22

Two-Level Paging Example

• A logical address (on a 32-bit machine with a 4K page size) is divided into:
  – a page number consisting of 20 bits.
  – a page offset consisting of 12 bits.

• Since the page table is paged, the page number is further divided into:
  – a 10-bit page number.
  – a 10-bit page offset.

• Thus, a logical address is as follows:

    page number       page offset
    p1        p2      d
    10        10      12

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.
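
A minimal sketch of the two-level walk for this 10/10/12 split; the table representation (an array of pointers to inner tables of plain frame numbers) is an illustration, with valid and protection bits omitted.

```c
#include <stdint.h>

extern uint32_t *outer_table[1024];  /* indexed by p1; each entry points
                                        at an inner table of 1024 frame
                                        numbers, indexed by p2 */

uint32_t translate_two_level(uint32_t vaddr)
{
    uint32_t p1 = (vaddr >> 22) & 0x3FF;     /* top 10 bits        */
    uint32_t p2 = (vaddr >> 12) & 0x3FF;     /* next 10 bits       */
    uint32_t d  =  vaddr        & 0xFFF;     /* 12-bit page offset */

    uint32_t frame = outer_table[p1][p2];    /* two memory lookups */
    return (frame << 12) | d;
}
```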

Page 23: Address Translation

23

Address-Translation Scheme

• Address-translation scheme for a two-level 32-bit paging architecture

Page 24: Address Translation

24

Inverted Page Table (“Core Map”)

• One entry for each real page (frame) of memory.

• Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.

• Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs.

• Use hash table to limit the search to one — or at most a few — page-table entries.

• Good for page replacement
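
A rough sketch of the hash-anchored lookup; the entry layout, hash function, and chaining scheme are all assumptions made for illustration.

```c
#include <stdint.h>

#define NUM_FRAMES 1024            /* one entry per physical frame */

struct ipt_entry {
    uint32_t pid;                  /* owning process                    */
    uint32_t vpn;                  /* virtual page stored in this frame */
    int      next;                 /* next frame on the same hash chain,
                                      or -1 at the end of the chain     */
};

struct ipt_entry ipt[NUM_FRAMES];
int hash_anchor[NUM_FRAMES];       /* hash bucket -> first frame (initialized to -1) */

static unsigned ipt_hash(uint32_t pid, uint32_t vpn) {
    return (pid * 31u + vpn) % NUM_FRAMES;
}

/* Return the frame holding (pid, vpn), or -1 if it is not resident
   (which would trigger a page fault). */
int ipt_lookup(uint32_t pid, uint32_t vpn)
{
    for (int f = hash_anchor[ipt_hash(pid, vpn)]; f != -1; f = ipt[f].next)
        if (ipt[f].pid == pid && ipt[f].vpn == vpn)
            return f;
    return -1;
}
```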

Page 25: Address Translation

25

Inverted Page Table Architecture

Page 26: Address Translation

26

Paging the Segments

• Divide the address into three parts
  – Segment number
  – Page number
  – Offset

• The segment table entry contains the address of that segment's page table

• Use that page table and the page # to get the frame #

• Combine the frame number and the offset
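
Putting the three-part split together in a sketch; the field names, the 8-bit page field, and the 4K page size are assumptions chosen for the example.

```c
#include <stdint.h>

#define PG_SHIFT 12                /* 4 KB pages, for example      */
#define PG_BITS   8                /* up to 2^8 pages per segment  */

struct seg_desc {
    uint32_t *page_table;          /* page table for this segment  */
    uint32_t  num_pages;           /* segment length, in pages     */
};

extern struct seg_desc seg_table[];

/* Translate (segment #, address within segment); -1 stands in for a
   fault (offset beyond the end of the segment). */
int64_t segpage_translate(uint32_t seg, uint32_t addr)
{
    uint32_t page   = (addr >> PG_SHIFT) & ((1u << PG_BITS) - 1);
    uint32_t offset =  addr & ((1u << PG_SHIFT) - 1);

    if (page >= seg_table[seg].num_pages)
        return -1;                                   /* beyond the segment */
    uint32_t frame = seg_table[seg].page_table[page];
    return ((int64_t)frame << PG_SHIFT) | offset;    /* frame # + offset   */
}
```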

Page 27: Address Translation

27

What does this buy you?

• Simple management of physical memory
  – Paging (just a bitmap)

• Maintain logical structure
  – Segmentation

• However,
  – Possibly 3 memory accesses to get to memory!

Page 28: Address Translation

28

MULTICS Address Translation Scheme