CMPT 300 Operating System I Chapter 4 Memory Management

Upload: petra-sutton

Post on 09-Mar-2016



TRANSCRIPT

Page 1: CMPT 300 Operating System I

CMPT 300 Operating System I

Chapter 4: Memory Management

Page 2: CMPT 300 Operating System I

Why Memory Management?

- Why money management? Not enough money. The same holds for memory.
- Parkinson's law: programs expand to fill the memory available to hold them
- "640KB of memory is enough for everyone" – Bill Gates
- Programmers' ideal: an infinitely large, infinitely fast, nonvolatile memory
- Reality: a memory hierarchy, from fast and small to slow and large:
  - Registers
  - Cache
  - Main memory
  - Magnetic disk
  - Magnetic tape

Page 3: CMPT 300 Operating System I

What Is Memory Management?

- Memory manager: the part of the OS that manages the memory hierarchy
  - Keeps track of which parts of memory are in use and which are free
  - Allocates and de-allocates memory to processes
  - Manages swapping between main memory and disk
- Basic memory management: every program is loaded and run in main memory as a whole
- Swapping & paging: move processes back and forth between main memory and disk

Page 4: CMPT 300 Operating System I

Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Page 5: CMPT 300 Operating System I

Monoprogramming

- One program at a time, sharing memory with the OS
- The OS loads the program from disk into memory
- Three variations (figure): (a) OS in RAM at the bottom of memory, user program above it; (b) OS in ROM at the top of memory, user program below it; (c) device drivers in ROM at the top, user program in the middle, OS in RAM at the bottom

Page 6: CMPT 300 Operating System I

Multiprogramming With Fixed Partitions

- Advantages of multiprogramming? Scenario: multiple programs at a time
- Problem: how to allocate memory?
- Divide memory into n partitions; each partition holds at most one program (process)
  - Equal partitions vs. unequal partitions
  - Each partition has a job queue
  - Partitioning can be done manually when the system is brought up
- When a job arrives, put it into the input queue for the smallest partition large enough to hold it
- Any space in a partition not used by its job is lost

Page 7: CMPT 300 Operating System I

Example: Multiprogramming With Fixed Partitions

[Figure: memory from 0 to 800K; the OS occupies 0–100K, Partition 1 is 100K–200K, Partition 2 is 200K–400K, Partition 3 is 400K–700K, Partition 4 is 700K–800K; each partition has its own input queue of waiting jobs (e.g., A and B)]

Page 8: CMPT 300 Operating System I

Single Input Queue

- Disadvantage of multiple input queues: small jobs may wait in a long queue while a queue for a larger partition sits empty (e.g., a 10K job stuck behind a 250K job)
- Solution: a single input queue shared by all partitions

[Figure: the same 800K memory layout as before, now with one shared input queue feeding all four partitions]

Page 9: CMPT 300 Operating System I

How to Pick Jobs?

- Pick the first job in the queue that fits an empty partition
  - Fast, but may waste a large partition on a small job
- Pick the largest job that fits an empty partition
  - Memory efficient
  - But the smallest jobs may be interactive ones needing the best service, and they get slow service
- Policies for efficiency and fairness
  - Keep at least one small partition around
  - A job may not be skipped more than k times
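The fairness policy above can be sketched in code. This is my own illustration, not from the course: each queued job carries a skip counter, and a job that has already been skipped k times blocks the scan so it cannot starve.

```python
# Hypothetical sketch of the "skip at most k times" policy for a single
# input queue. queue holds (size, skips) pairs; partition_size is the
# free partition being filled.

def pick_job(queue, partition_size, k=3):
    """Return the index of the job to run, or None if none may run."""
    for i, (size, skips) in enumerate(queue):
        if size <= partition_size:
            return i
        if skips >= k:           # this job may not be skipped again
            return None
        queue[i] = (size, skips + 1)
    return None

q = [(250, 0), (10, 0)]
print(pick_job(q, 100))  # 1 -- the 250K job is skipped once, the 10K job runs
print(q)                 # [(250, 1), (10, 0)]
```

Once the large job's skip counter reaches k, `pick_job` returns None and the partition waits, trading some efficiency for fairness.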

Page 10: CMPT 300 Operating System I

A Naïve Model for Multiprogramming

- Goal: determine how many processes must be in main memory to keep the CPU busy
- Multiprogramming improves CPU utilization
- If, on average, a process computes 20% of the time it sits in memory, then 5 processes can keep the CPU busy all the time
- This assumes the processes never all wait for I/O at the same time. Too optimistic!

Page 11: CMPT 300 Operating System I

A Probabilistic Model

- A process spends a fraction p of its time waiting for I/O to complete, 0 < p < 1
- With n processes in memory at once:
  - Probability that all n processes are waiting for I/O: p^n
  - CPU utilization: 1 − p^n
- Assumes the processes are independent of each other
  - Not true in reality: a process has to wait for another process to give up the CPU
  - A more accurate treatment would use queueing theory
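The model above is a one-line formula; a small sketch makes the numbers concrete:

```python
# CPU utilization under the probabilistic multiprogramming model: with n
# independent processes, each waiting for I/O a fraction p of the time,
# the CPU idles only when all n wait at once (probability p**n).

def cpu_utilization(p: float, n: int) -> float:
    """Return 1 - p**n, the fraction of time the CPU is busy."""
    return 1 - p ** n

# With p = 80% I/O wait, one process keeps the CPU only 20% busy;
# ten processes already reach about 89%.
print(round(cpu_utilization(0.8, 1), 2))   # 0.2
print(round(cpu_utilization(0.8, 10), 2))  # 0.89
```

Note the contrast with the naïve model: at p = 0.8, five processes give 1 − 0.8^5 ≈ 67% utilization, not the 100% the naïve sum suggests.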

Page 12: CMPT 300 Operating System I

CPU Utilization: 1 − p^n

[Figure: CPU utilization (in percent) versus degree of multiprogramming (0–10), plotted for p = 20%, 50%, and 80%; the larger the I/O wait fraction p, the more processes are needed to keep the CPU busy]

Page 13: CMPT 300 Operating System I

Memory Management for Multiprogramming

- Relocation
  - When a program is compiled, it assumes its starting address is 0 (logical addresses)
  - When it is loaded into memory, it could start at any address (physical addresses)
  - How do we map logical addresses to physical addresses?
- Protection
  - A program's accesses should be confined to its own area

Page 14: CMPT 300 Operating System I

Relocation & Protection

- Logical addresses are used for programming, e.g., call a procedure at logical address 100
- Physical address: when that procedure sits in partition 1 (starting at physical address 100K), the procedure is at 100K + 100
- Relocation problem: translating between logical and physical addresses
- Protection problem: a malicious program can jump into space belonging to other users, e.g., by generating a new instruction on the fly that reads or writes any word in memory

Page 15: CMPT 300 Operating System I

Relocation/Protection Using Registers

- Base register: holds the start address of the partition
  - Every memory address generated has the content of the base register added to it
  - Base register = 100K: CALL 100 becomes CALL 100K + 100
- Limit register: holds the length of the partition
  - Every address is checked against the limit register
- Disadvantage: an addition and a comparison on every memory reference
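As an illustration only (the real check happens in MMU hardware, not software), the base/limit scheme amounts to:

```python
# Sketch of base/limit relocation: check the logical address against the
# limit register, then add the base register to form the physical address.

def translate(logical: int, base: int, limit: int) -> int:
    if not 0 <= logical < limit:
        # stands in for the hardware protection trap
        raise ValueError(f"protection fault: address {logical} outside partition of size {limit}")
    return base + logical

# CALL 100 in a partition based at 100K becomes physical address 100K + 100.
K = 1024
print(translate(100, 100 * K, 300 * K))  # 102500
```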

Page 16: CMPT 300 Operating System I

Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Page 17: CMPT 300 Operating System I

In Time-Sharing/Interactive Systems…

- There is not enough main memory to hold all currently active processes
- Intuition: excess processes must be kept on disk and brought in to run dynamically
- Swapping: bring each process in in its entirety
  - Assumption: each process fits in main memory, but cannot finish in one run
- Virtual memory: allow programs to run even when they are only partially in main memory
  - No assumption about program size

Page 18: CMPT 300 Operating System I

Swapping

[Figure: memory allocation changing over time as processes come and go. A is loaded above the OS; B and C are loaded above A; A is swapped out, leaving a hole; D is loaded into part of that hole; B is swapped out; finally A is swapped back in. Shaded regions are unused holes]

Page 19: CMPT 300 Operating System I

Swapping vs. Fixed Partitions

- With swapping, the number, location, and size of partitions vary dynamically
  - Flexibility improves memory utilization
  - But allocating, de-allocating, and keeping track of memory become more complicated
- Memory compaction: combine the "holes" in memory into one big hole
  - Makes allocation more efficient
  - Requires a lot of CPU time
  - Rarely used in real systems

Page 20: CMPT 300 Operating System I

Enlarging Memory for a Process

- Fixed-size process: easy
- Growing process:
  - Expand into the adjacent hole, if there is one
  - Otherwise, wait, or swap some processes out to create a large enough hole
  - If the swap area on disk is full, wait or be killed
- Allocate extra room for growth whenever a process is swapped in or moved

Page 21: CMPT 300 Operating System I

Handling Growing Processes

[Figure: (a) processes with one growing data segment: A and B are each allocated with spare room for growth above them; (b) processes with growing data and stack segments: program and data at the bottom, stack at the top, with the room for growth between them consumed from both ends]

Page 22: CMPT 300 Operating System I

Memory Management With Bitmaps

- Two ways to keep track of memory usage: bitmaps and free lists
- Bitmaps:
  - Memory is divided into allocation units
  - One bit per unit: 0 = free, 1 = occupied

[Figure: a stretch of memory holding segments A–E with holes between them, and the corresponding bitmap: 11111000 11111111 11001111 11111000]

Page 23: CMPT 300 Operating System I

Size of Allocation Units

- Example: 4-byte units; 1 bit in the map covers 32 bits of memory, so the bitmap takes 1/33 of memory
- Trade-off between allocation unit size and memory utilization
  - Smaller allocation unit: larger bitmap
  - Larger allocation unit: smaller bitmap, but on average half of a process's last unit is wasted
- To bring a k-unit process into memory, we need a hole of k units: search for k consecutive 0 bits, possibly scanning the entire map
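The search for k consecutive free units is the expensive part of bitmaps; a minimal sketch of that linear scan:

```python
# Sketch of bitmap allocation: find k consecutive free (0) bits by a
# single linear scan of the map, as the slide describes.

def find_hole(bitmap, k):
    """Return the index of the first run of k zero bits, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

bitmap = [1,1,1,1,1,0,0,0,1,1,0,0,0,0,1,1]
print(find_hole(bitmap, 4))  # 10 -- first run of four free units
print(find_hole(bitmap, 7))  # -1 -- no hole that large
```

The scan is O(size of memory / unit size) in the worst case, which is why the next slides turn to linked lists.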

Page 24: CMPT 300 Operating System I

Memory Management With Linked Lists

- Two types of entries: hole (H) and process (P)
- Each entry records type, start address, and length: e.g., "P 20 6" is a process starting at address 20 with length 6
- Example list for segments A–E: P 0 5, H 5 3, P 8 6, P 14 4, H 18 2, P 20 6, P 26 3, H 29 3
- The list is kept sorted by address

Page 25: CMPT 300 Operating System I

Updating Linked Lists

- When a process X terminates, combine adjacent holes if possible (not necessary with a bitmap)
- Four cases for X's neighbours: (a) process A, X, process B: X simply becomes a hole; (b) process A, X, hole: merge X with the hole after it; (c) hole, X, process B: merge X with the hole before it; (d) hole, X, hole: all three merge into one hole
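The four merge cases above can be sketched with the list-of-tuples representation from the previous slide (my own encoding, not the course's):

```python
# Sketch of freeing a process in an address-sorted list of
# (kind, start, length) entries. Entry i becomes a hole, then is merged
# with a hole neighbour on either side -- the four cases in the figure.

def free_entry(entries, i):
    kind, start, length = entries[i]
    entries[i] = ('H', start, length)
    # case: hole after X -- absorb it
    if i + 1 < len(entries) and entries[i + 1][0] == 'H':
        entries[i] = ('H', start, length + entries[i + 1][2])
        del entries[i + 1]
    # case: hole before X -- fold X into it
    if i > 0 and entries[i - 1][0] == 'H':
        _, pstart, plen = entries[i - 1]
        entries[i - 1] = ('H', pstart, plen + entries[i][2])
        del entries[i]
    return entries

lst = [('P', 0, 5), ('H', 5, 3), ('P', 8, 6), ('P', 14, 4)]
print(free_entry(lst, 2))  # [('P', 0, 5), ('H', 5, 9), ('P', 14, 4)]
```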

Page 26: CMPT 300 Operating System I

Allocating Memory for New Processes

- First fit: take the first hole that is big enough
  - Break the hole into two pieces: the process plus a smaller hole (e.g., H 2 6 becomes P 2 3 and H 5 3)
- Next fit: like first fit, but start searching from where the last fit ended
  - Empirical evidence: slightly worse performance than first fit
- Best fit: take the smallest hole that is adequate
  - Slower, and tends to generate tiny useless holes
- Worst fit: always take the largest hole
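A small sketch contrasting two of the strategies above, over holes represented as (start, length) pairs; a real allocator would also split and update the list:

```python
# Sketch of first fit vs. best fit over a list of (start, length) holes.

def first_fit(holes, need):
    """Return the start of the first hole big enough, or None."""
    for start, length in holes:
        if length >= need:
            return start
    return None

def best_fit(holes, need):
    """Return the start of the smallest adequate hole, or None."""
    fits = [(length, start) for start, length in holes if length >= need]
    return min(fits)[1] if fits else None

holes = [(2, 6), (12, 3), (20, 5)]
print(first_fit(holes, 3))  # 2  -- first adequate hole, splits a big one
print(best_fit(holes, 3))   # 12 -- smallest adequate hole, exact fit here
```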

Page 27: CMPT 300 Operating System I

Using Distinct Lists

- Keep distinct lists for processes and holes
  - The hole list can be sorted by size, so best fit becomes faster
  - Problem: how do we free a process? Merging holes is very costly
- Quick fit: group holes by size
  - Different lists for different sizes, e.g., list 1 for 4KB holes, list 2 for 8KB holes. Where does a 5KB hole go?
  - Speeds up the search
  - Merging holes is still costly

Page 28: CMPT 300 Operating System I

Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Page 29: CMPT 300 Operating System I

Why Virtual Memory?

- If the program is too big to fit in memory:
  - Split the program into pieces called overlays, and swap overlays in and out
  - Problem: the programmer does the work of splitting the program into pieces
- Virtual memory: the OS takes care of everything
  - The program can be larger than the physical memory available
  - Keep the parts currently in use in memory; put the other parts on disk

Page 30: CMPT 300 Operating System I

Virtual and Physical Addresses

- Virtual addresses (VA) are used/generated by programs
  - Each process has its own virtual address space, e.g., MOV REG, 1000 ;1000 is a VA
- Physical addresses (PA) are used in execution
- The MMU maps VAs to PAs

[Figure: the CPU package contains the CPU and the MMU; the MMU sits between the CPU and the bus, which connects to memory and the disk controller]

Page 31: CMPT 300 Operating System I

Paging

- The virtual address space is divided into pages; memory is allocated in units of a page
- Physical memory is divided into page frames
- Pages and page frames are always the same size, usually from 512B to 64KB
- #pages > #page frames: on a 32-bit PC the VA space could be as large as 4GB, while physical memory may be under 1GB
- In hardware, a present/absent bit keeps track of which pages are physically present in memory
- Page fault: an unmapped page is requested
  - The OS picks a little-used page frame and writes its content back to disk
  - It then fetches the wanted page into the page frame just freed

Page 32: CMPT 300 Operating System I

Paging: An Example

- Pages are 4KB: page 0 covers VAs 0–4095, page frame 2 covers PAs 8192–12287, and so on
- VA 0: page 0 → page frame 2 → PA 8192
- VA 8192: page 2 → page frame 6 → PA 24576
- VA 8199: page 2, offset 7 → page frame 6, offset 7 → PA 24576 + 7 = 24583
- VA 32789: page 8 → unmapped → page fault

[Figure: virtual address space of 16 pages (0–64K) against a physical address space of 8 page frames (0–32K); page → frame mapping: 0→2, 1→1, 2→6, 3→0, 4→4, 5→3, 9→5, 11→7, all other pages (6, 7, 8, 10, 12–15) unmapped (X)]
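The arithmetic in the example above is just a divide and a table lookup, which a short sketch makes explicit (using the page table from the figure, with -1 marking an unmapped page):

```python
# Sketch of paged address translation with 4KB pages and the page table
# from the example figure (-1 = unmapped).

PAGE = 4096
page_table = [2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1]

def translate(va):
    page, offset = divmod(va, PAGE)   # split VA into page number + offset
    frame = page_table[page]
    if frame < 0:
        raise LookupError(f"page fault at VA {va}")
    return frame * PAGE + offset

print(translate(0))     # 8192  -- page 0 -> frame 2
print(translate(8199))  # 24583 -- page 2, offset 7 -> frame 6
```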

Page 33: CMPT 300 Operating System I

The Magic in the MMU

[Figure: how the MMU translates a virtual address into a physical address]

Page 34: CMPT 300 Operating System I

Page Table

- Maps virtual pages onto page frames
- The VA is split into a page number and an offset; each page number has one entry in the page table
- The page table can be extremely large: with 32-bit virtual addresses and 4KB pages, there are 1M pages. How about 64-bit VAs?
- Each process needs its own page table

Page 35: CMPT 300 Operating System I

Typical Page Table Entry

- Entry size: usually 32 bits
- Page frame number: the goal of the mapping
- Present/absent bit: is the page in memory?
- Protection: what kinds of access are permitted
- Modified ("dirty") bit: has the page been written? If so, it must be written back to disk later
- Referenced bit: has the page been referenced?
- Caching disabled: bypass the cache and read from the device for this page

Page 36: CMPT 300 Operating System I

Fast Mapping

- The virtual-to-physical mapping must be fast: there are several page table references per instruction
- So it is unacceptable to keep the entire page table in main memory
- We have to look to hardware for solutions

Page 37: CMPT 300 Operating System I

Two Simple Designs for a Page Table

- Use fast hardware registers for the page table
  - A single physical page table in the MMU: an array of fast registers, one entry per virtual page
  - Requires no memory references during mapping
  - But the registers must be reloaded at every process switch
  - Expensive if the page table is large: hardware cost plus context-switch overhead
- Put the whole table in main memory
  - Only one register is needed, pointing to the start of the table, so switching is fast
  - But mapping now costs several memory references per instruction
- A pure memory solution is slow and a pure register solution is expensive, so …

Page 38: CMPT 300 Operating System I

Translation Lookaside Buffers (TLBs)

- Observation: most programs make a large number of references to a small number of pages
- Put the heavily used fraction of the entries in registers: the TLB, or associative memory
- For each virtual address, check the TLB first: if the entry is found, use it directly; if not, walk the page table and fill the TLB with the result
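A toy model of the lookup path just described (my own sketch; a real TLB is associative hardware, and the FIFO eviction here is an arbitrary assumption):

```python
# Sketch of a TLB in front of a page table: a small cache of recent
# page -> frame mappings; on a miss, fall back to the (slow) page table
# and cache the result, evicting the oldest entry (naive FIFO).

from collections import OrderedDict

class TLB:
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.cache:
            self.hits += 1
            return self.cache[page]
        self.misses += 1
        frame = self.page_table[page]        # slow path: memory reference
        self.cache[page] = frame
        if len(self.cache) > self.capacity:  # evict the oldest entry
            self.cache.popitem(last=False)
        return frame

tlb = TLB(2, {0: 2, 1: 1, 2: 6})
for page in [0, 0, 2, 0, 1]:
    tlb.lookup(page)
print(tlb.hits, tlb.misses)  # 2 3
```

Locality is what makes this pay off: the repeated references to page 0 hit in the TLB and never touch the page table.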

Page 39: CMPT 300 Operating System I

Outline
- Basic memory management
- Swapping
- Virtual memory
- Page replacement algorithms
- Modeling page replacement algorithms
- Design issues for paging systems
- Implementation issues
- Segmentation

Page 40: CMPT 300 Operating System I

Page Replacement

- When a page fault occurs and all page frames are full, choose one page to remove
  - If it has been modified (a dirty page), its disk copy must be updated first
  - Better to choose an unmodified page
  - Better to choose a rarely used page
- Many similar problems arise elsewhere in computer systems: memory cache replacement, web page cache replacement in web servers
- Revisit: the page table entry

Page 41: CMPT 300 Operating System I

Typical Page Table Entry (Revisited)

- Entry size: usually 32 bits
- Page frame number: the goal of the mapping
- Present/absent bit: is the page in memory?
- Protection: what kinds of access are permitted
- Modified ("dirty") bit: has the page been written? If so, it must be written back to disk later
- Referenced bit: has the page been referenced?
- Caching disabled: bypass the cache and read from the device for this page

Page 42: CMPT 300 Operating System I

Optimal Algorithm

- Label each page in main memory with the number of instructions that will be executed before its next reference
  - E.g., a page labeled "1" will be referenced by the very next instruction
- Remove the page with the highest label: this puts off page faults as long as possible
- Unrealizable! Why? (Compare SJF process scheduling and the Banker's Algorithm for deadlock avoidance: all require knowledge of the future)
- Still useful as a benchmark for other algorithms
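Because the algorithm needs the future reference string, it can only be run offline over a recorded trace, which is exactly how it serves as a benchmark. A small simulation sketch (my own code):

```python
# Sketch of the optimal (Belady) replacement algorithm over a recorded
# reference string: evict the resident page whose next use lies farthest
# in the future (or that is never used again).

def opt_faults(refs, frames):
    """Count page faults for reference string refs with `frames` frames."""
    resident, faults = [], 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) < frames:
            resident.append(page)
            continue
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else len(future)
        victim = max(resident, key=next_use)
        resident[resident.index(victim)] = page
    return faults

print(opt_faults([0, 1, 2, 0, 3, 0, 1, 2], 3))  # 5
```

Running a real algorithm's fault count against `opt_faults` on the same trace shows how far it is from the unachievable optimum.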

Page 43: CMPT 300 Operating System I

Removing Not Recently Used Pages

- The R and M bits are initially 0
  - The hardware sets R when a page is referenced and M when it is modified
  - The OS clears the R bits periodically in software
- Four classes of pages at a page fault:
  - Class 0 (R=0, M=0): not referenced, not modified
  - Class 1 (R=0, M=1): not referenced, modified
  - Class 2 (R=1, M=0): referenced, not modified
  - Class 3 (R=1, M=1): referenced, modified
- NRU removes a page at random from the lowest-numbered nonempty class
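The class numbering above is just 2·R + M, which makes victim selection a one-liner. A minimal sketch (the tuple encoding of pages is my own):

```python
# Sketch of NRU victim selection: class = 2*R + M; pick a random page
# from the lowest-numbered nonempty class. Pages are (name, R, M) tuples.

import random

def nru_victim(pages, rng=random):
    lowest = min(2 * r + m for _, r, m in pages)
    candidates = [name for name, r, m in pages if 2 * r + m == lowest]
    return rng.choice(candidates)

pages = [('A', 1, 1), ('B', 1, 0), ('C', 0, 1), ('D', 1, 1)]
print(nru_victim(pages))  # C -- the only class-1 (R=0, M=1) page
```

Note that class 1 (R=0, M=1) looks paradoxical but arises naturally: the page was written earlier, and a later clock interrupt cleared its R bit.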