OS Question Solve Final Print

8/3/2019 Os Question Solve Final Print

http://slidepdf.com/reader/full/os-question-solve-final-print 1/2218

(Operating System Notes Developed by Prasun & Mainak )

What is an Operating System?
Ans:- The operating system controls and co-ordinates the use of the hardware among the various application programs for the various users. The operating system provides the means for the proper use of hardware, software and data resources in the operation of the computer system. It simply provides an environment within which other programs can do useful work.

Device management
A program, as it is running, may need additional resources to proceed. Additional resources may be more memory, tape drives, access to files, and so on. If the resources are available, they can be granted, and control can be returned to the user program; otherwise, the program will have to wait until sufficient resources are available. Files can be thought of as abstract or virtual devices. Thus, many of the system calls for files are also needed for devices. If there are multiple users of the system, however, we must first request the device, to ensure exclusive use of it. After we are finished with the device, we must release it. These functions are similar to the open and close system calls for files. Once the device has been requested (and allocated to us), we can read, write, and (possibly) reposition the device, just as we can with ordinary files. In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX and MS-DOS, merge the two into a combined file-device structure. In this case, I/O devices are identified by special file names.

What is Deadlock? What are the necessary conditions for deadlock?
Ans:- Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait: A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

What is a resource-allocation graph? How can deadlock be prevented?
Ans:- Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph.

A resource-allocation graph consists of a set of vertices V and a set of edges E. V is partitioned into two types:
✦ P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
✦ R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
*Request edge – directed edge Pi → Rj

*Assignment edge – directed edge Rj → Pi


Resource Allocation Graph:

Resources instances:

• One instance of resource type R1.

• Two instances of resource type R2.

• One instance of resource type R3.

• Three instances of resource type R4.
Process States:

• Process P1 is holding an instance of resource type R2, and is waiting for an instance of resource type R1.

• Process P2 is holding an instance of R1 and R2, and is waiting for an instance of resource type R3.

• Process P3 is holding an instance of R3.
Resource allocation graph with a deadlock:

Resource allocation Graph with a cycle but no deadlock:

 

Basic facts:-

• If graph contains no cycles→ no deadlock.

• If graph contains a cycle →
✦ if only one instance per resource type, then deadlock.
✦ if several instances per resource type, possibility of deadlock.
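The basic facts above can be checked mechanically: with one instance per resource type, a deadlock exists exactly when the resource-allocation graph contains a cycle. Below is a small illustrative sketch (the graph shape and node names are made up) using depth-first search to detect such a cycle.

```python
# Hypothetical sketch: cycle detection in a resource-allocation graph.
# Edges are request edges ("P1" -> "R1") and assignment edges ("R1" -> "P2");
# with single-instance resources, a cycle implies deadlock.

def has_cycle(edges):
    """DFS over a directed graph given as {node: [successor, ...]};
    returns True if any cycle exists."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}
    def dfs(node):
        color[node] = GRAY
        for nxt in edges.get(node, []):
            if color.get(nxt, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in list(edges))

# P1 holds R2 and requests R1; P2 holds R1 and requests R2 -> deadlock cycle.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# P1 requests R1, R1 is assigned to P2, P2 requests nothing -> no cycle.
safe = {"P1": ["R1"], "R1": ["P2"], "P2": []}
print(has_cycle(deadlocked))  # True
print(has_cycle(safe))        # False
```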

Deadlock Prevention: Restrain the ways requests can be made.
Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.

✦ Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none.
✦ Low resource utilization; starvation possible.

No Preemption –
✦ If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
✦ Preempted resources are added to the list of resources for which the process is waiting.
✦ The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.


Circular Wait – imposes a total ordering of all resource types, and requires that each process requests resources in an increasing order of enumeration.
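The circular-wait prevention rule can be illustrated with a tiny sketch (the resource numbering and function name are hypothetical): if every process acquires resources strictly in increasing order of their numbers, no cycle of waits can form.

```python
# Illustrative sketch: preventing circular wait by imposing a total order.
# A process that needs several resources always acquires them in ascending
# numeric order, regardless of the order it discovered it needed them.

def acquisition_order(held, requested):
    """Return the ascending order in which the resource numbers must be
    acquired so that the circular-wait condition can never arise."""
    return sorted(set(held) | set(requested))

# A process holding R3 that also needs R1 and R5 must take them as 1, 3, 5
# (i.e., it must release R3 and re-acquire in order, or request only 5).
print(acquisition_order(held=[3], requested=[1, 5]))  # [1, 3, 5]
```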

What are the advantages and disadvantages of an assembly language program?
Ans:- Advantages:-
i) The symbolic programming of assembly language is easier to understand and saves a lot of time and effort for the programmer.
ii) It is easier to correct errors and modify program instructions.
iii) Assembly language has the same efficiency of execution as machine-level language, because there is a one-to-one correspondence between an assembly language program and its corresponding machine language program.

Disadvantages:-
One of the major disadvantages is that assembly language is machine dependent. A program written for one computer might not run on other computers with a different hardware configuration.

What is a debugging system?
Ans:- A debugging system is a system program designed to aid the programmer in finding bugs and determining the cause of a problem. Under either normal or abnormal circumstances, the operating system must transfer control to the invoking command interpreter. The command interpreter then reads the next command. System calls are sometimes helpful in debugging a program.

What is starvation? How is the problem solved?
Ans:- Indefinite blocking or starvation is a situation related to deadlock where processes wait indefinitely within the semaphore. Indefinite blocking may occur if we add and remove processes from the list associated with a semaphore in LIFO order. Starvation may be prevented by adding the following requirements to the semaphore implementation:
i) A request to enter the critical section must be granted in finite time.
ii) Given the assumption that each process spends a finite time executing the critical section, this requirement can be met by using the FIFO discipline for choosing among the waiting processes.

State the difference between process and thread.

Process:
1. Processes cannot share the same memory area (address space).
2. It takes more time to create a process.
3. It takes more time to complete execution and terminate.
4. Execution is very slow.
5. It takes more time to switch between two processes.
6. Processes are loosely coupled.
7. Independent of each other.

Threads:
1. Threads can share memory and files.
2. Less time to create.
3. Less time to terminate.
4. Execution is fast.
5. Less time to switch between two threads.
6. Tightly coupled.
7. May be dependent on each other.

Short notes:
Monitors.
UNIX operating system.
Mutual exclusion in distributed operating systems.
Real-time operating systems.


Advantages of monitor.

What are the different states of a process? Explain the different states of a process using the process state diagram.
Ans:-
Process: A current-day computer system allows multiple programs to be loaded into memory and to be executed concurrently. This evolution requires firmer control and more compartmentalization of the various programs. These needs resulted in the notion of a process, which is a program in execution. A process is the unit of work in a modern time-sharing system.
Process State: As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.
These state names are arbitrary, and they vary across operating systems. The states that they represent are found on all systems; however, certain operating systems more finely delineate process states. Only one process can be running on any processor at any instant, although many processes may be ready and waiting.

Process control block:- Each process is represented in the operating system by a process control block (PCB) – also called a task control block. A PCB is shown in the figure. It contains many pieces of information associated with a specific process, including these:
• Process state: The state may be new, ready, running, waiting, halted, and so on.
• Program counter: The counter indicates the address of the next instruction to be executed for this process.
• CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.

[Figure: Process state diagram – New → (admitted) → Ready → (scheduler dispatch) → Running → (exit) → Terminated; Running → (interrupt) → Ready; Running → (I/O or event wait) → Waiting → (I/O or event completion) → Ready.]


Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
• CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information: This information may include such information as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
• Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information: This information includes the list of I/O devices allocated to this process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from process to process.

What is Context Switching? Sketch the steps for context switching.
Ans:- Context Switching: Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. The context of a process is represented in the PCB of the process; it includes the values of the CPU registers, the process state and the memory-management information. Context-switch time is pure overhead, because the system does no useful work while switching. Context-switch times are highly dependent on hardware support.
Steps of context switching: The steps of switching control from one running process to another are as follows:
1st Step: The values of all the registers must be saved in the present state of the process.
2nd Step: The states of all open files must be recorded, and the present position of the program must be recorded.
3rd Step: The contents of the memory-management unit must be switched over to the new process.
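The save/load steps above can be sketched in miniature (the register set and PCB layout here are hypothetical simplifications): the old process's register values are copied into its PCB, and the new process's saved values are copied onto the CPU.

```python
# Minimal sketch of a context switch: save the old process's registers into
# its PCB, then load the new process's saved registers onto the CPU.

cpu = {"pc": 0, "acc": 0}                 # pretend CPU register set

def context_switch(old_pcb, new_pcb):
    old_pcb["registers"] = dict(cpu)      # step 1: save state of old process
    old_pcb["state"] = "ready"
    cpu.update(new_pcb["registers"])      # load saved state of new process
    new_pcb["state"] = "running"

p1 = {"registers": {"pc": 100, "acc": 7}, "state": "running"}
p2 = {"registers": {"pc": 200, "acc": 9}, "state": "ready"}
cpu.update(p1["registers"])               # P1 is currently on the CPU
context_switch(p1, p2)
print(cpu["pc"], p1["state"], p2["state"])  # 200 ready running
```

Note that the switch itself does no useful work for either process, which is why context-switch time counts as pure overhead.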

What are buffering, spooling and caching?
Ans:- Buffering:- A buffer is a memory area that stores data while they are transferred between two devices or between a device and an application. Buffering is done for three reasons:
1) One reason is to cope with a speed mismatch between the producer and consumer of a data stream.
2) A second use of buffering is to adapt between devices that have different data-transfer sizes.
3) A third use of buffering is to support copy semantics for application I/O.
There are three types of buffering techniques, viz. single, double and circular. The basic idea behind buffering is to keep both I/O and CPU busy and to increase CPU utilization.


Spooling:- Spooling is a form of buffering that holds output for a device, such as a printer, that cannot accept interleaved data streams. Several applications may wish to write to a device concurrently, though the device can handle only one job at a time. In such a case, each application's output is spooled to a separate disk file. When an application completes its writing to the device, the spooling system queues the corresponding spool file for output to the device, one at a time.
Caching:- A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. Caching and buffering are distinct functions, but sometimes a region of memory can be used for both purposes. The difference between a buffer and a cache is that a buffer may hold the only existing copy of a data item, whereas a cache, by definition, just holds a copy on faster storage of an item that resides elsewhere.

What is paging? Or, explain briefly the memory-management scheme of paging.
Ans:- Paging is a memory-management scheme that allows the physical address space of a process to be non-contiguous. Here physical memory is broken into fixed-sized blocks called frames. Logical memory is divided into blocks of equal size, called pages. The pages belonging to a process are loaded into the available frames. Every address that is generated by the CPU has two parts: a page number and a page offset. The page number is used as an index into the page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to form the physical memory address. This is illustrated by the figure.

Advantages and disadvantages of paging?
Ans:- Advantages:-

1) It supports time-sharing systems.
2) It does not suffer from external fragmentation.
3) It supports virtual memory.
4) Sharing of common code is possible.
Disadvantages:-
1) This scheme may suffer from 'page breaks'. For example, suppose the logical address space is 17 KB and the page size is 4 KB. This job then requires 5 frames, but the fifth frame holds only 1 KB, so the remaining 3 KB is wasted. This is said to be a page break.
2) If the number of pages is high, it is difficult to maintain the page tables.
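The page-table lookup described above can be sketched as follows (a minimal example; the 4 KB page size and the page-to-frame mapping are assumed for illustration):

```python
# Hedged sketch: translating a logical address through a page table.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (made up)

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE       # index into the page table
    offset = logical_addr % PAGE_SIZE      # offset is unchanged by translation
    frame = page_table[page]               # base frame from the page table
    return frame * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4; page 1 lives in frame 2,
# so the physical address is 2*4096 + 4 = 8196.
print(translate(4100))  # 8196
```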

What is Swapping?
Ans:- For execution, a process is loaded into memory; it can, however, be swapped out of memory temporarily to a backing store and then brought back into memory for continued execution. This is particularly useful in a multiprogramming environment with a round-robin CPU-scheduling algorithm.


• Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images.

• Roll out, roll in – swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

• Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
• Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows.

What is demand paging? What is the advantage of demand paging over swapping?
Ans:- A demand-paging system is similar to a paging system with swapping. Processes reside on secondary memory (which is usually a disk). When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper. A lazy swapper never swaps a page into memory unless that page will be needed. Since we are now viewing a process as a sequence of pages, rather than as one large contiguous address space, the use of the term swapper is technically incorrect. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process. We shall thus use the term pager, rather than swapper, in connection with demand paging.
When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed. With this scheme, we need some form of hardware support to distinguish between those pages that are in memory and those pages that are on the disk.

What is segmentation? Give its advantages.
Ans:- Segmentation is a memory-management scheme that supports the user view of memory, where a logical address space is a collection of segments. Each segment has a name and a length. An address specifies both the segment name and the offset within the segment. The user therefore specifies each address by two quantities: a segment name and an offset.
For simplicity of implementation, segments are numbered and are referred to by a segment number, rather than by a segment name. Thus a logical address consists of a two-tuple: <segment-number, offset>.

Advantages of segmentation:-
1. Eliminates fragmentation: by moving segments around, fragmented memory space can be combined into a single free area.
2. Segmentation supports virtual memory.


3. Allows dynamically growing segments.
4. Facilitates shared segments.
5. Allows dynamic linking and loading of segments.
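The <segment-number, offset> translation can be sketched as below (a minimal example; the segment table's base/limit values are made up for illustration):

```python
# Hedged sketch: translating a <segment-number, offset> logical address
# through a segment table of (base, limit) pairs, trapping on overruns.

segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # trap: addressing beyond the segment
        raise MemoryError("segmentation fault")
    return base + offset

print(translate(2, 53))   # segment 2 starts at 4300, so 4300 + 53 = 4353
# translate(1, 852) would trap: offset 852 exceeds segment 1's limit of 400.
```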

What is the advantage of segmentation over paging?
Ans:- The user's view of memory is different from the actual physical memory. The user's view is mapped onto physical memory, and the mapping allows differentiation between logical and physical memory. In the paging scheme this user's view of memory is not considered, but segmentation supports the user's view of memory. Here a logical address space is a collection of segments; the user specifies each address by a segment name and an offset.

What are the advantages and disadvantages of using paging and segmentation?
Ans:- Paging and segmentation both have advantages and disadvantages; sometimes paging is useful and sometimes segmentation is useful. Consider the table below for a better comparison.

Paging:
1. The main memory is partitioned into frames or blocks.
2. The logical address space is divided into pages by the compiler or memory-management unit (MMU).
3. This scheme suffers from internal fragmentation or page breaks.
4. The operating system maintains a free-frames list; it need not search for a free frame.
5. The operating system maintains a page map table for mapping between frames and pages.
6. This scheme does not support the user's view of memory.
7. The processor uses the page number and displacement to calculate the absolute address (P, D).
8. Multilevel paging is possible.

Segmentation:
1. The main memory is partitioned into segments.
2. The logical address space is divided into segments, specified by the programmer.
3. This scheme suffers from external fragmentation.
4. The operating system maintains the particulars of available memory.
5. The operating system maintains a segment map table for mapping purposes.
6. It supports the user's view of memory.
7. The processor uses the segment number and displacement to calculate the absolute address (S, D).
8. Multilevel segmentation is also possible, but of little use.

Compare paging with segmentation for memory management.
Ans:- Paging: Paging is a memory-management technique which permits the user's or programmer's memory to be non-contiguous in physical memory, thus allowing a program to be allocated physical memory wherever possible.

 

[Figure: Paging address translation – a virtual address (page number 2, offset 1000) is looked up in the page table via the base register (starting address of the table); the page's base address 2000 is combined with the offset to give the physical address 2000 + 1000 in physical memory.]


 

In the other way, segmentation is also a memory-management technique.
Segmentation: This memory-management technique supports the programmer's view of memory. Programmers never think of their programs as a linear array of words; rather, they think of a program as a collection of logically related entities such as subroutines, procedures, functions, global or local data areas, stacks, etc.

What is virtual memory? Why is it needed? What is a page fault?
Ans:- Virtual memory:- Virtual memory is a technique that allows the execution of processes that may not be completely in memory. Virtual memory abstracts main memory into a very large array of storage, separating logical memory from physical memory. This technique frees programs from concern about the limited amount of primary memory available. The user is able to write programs for a very large virtual address space.
Need for virtual memory:- Virtual memory allows the execution of partially loaded processes. It is a way of making the physical memory of a computer system effectively larger than it really is. The operating system does this by determining which portions of a program are often sitting idle and deciding to move them onto disk, thereby freeing useful RAM. Not only this; by the use of virtual memory the user is able to write larger programs, and external fragmentation is also reduced.
Page fault:
A page fault is a hardware or software interrupt (depending on the implementation), which passes control to the operating system. When a program incurs a page fault, it must be suspended until the missing page is swapped into main memory.
The OS then proceeds to locate the missing page in the swap area and move it back into a free frame of physical memory.

Here is the list of steps the OS follows in handling a page fault:

[Figure: Programmer's view of a program – Seg-0: subroutine/procedure; Seg-1: global data area; Seg-2: functions; Seg-3: stack; Seg-4: local data area.]


1. We check an internal table (usually kept with the process control block) for this process, to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.

What are the advantages and disadvantages of SJF scheduling?
Ans:- Advantage:- The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. By moving a short process before a long one, the waiting time of the short process decreases more than it increases the waiting time of the long process. Consequently, the average waiting time decreases.
Disadvantage:- The real difficulty with the SJF algorithm is knowing the length of the next CPU request. It cannot be implemented at the level of short-term CPU scheduling. We may not know the length of the next CPU burst; we can only predict it.

In what way is shortest-job-first scheduling just a particular form of priority scheduling?
Ans:- The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
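A short worked example (the burst times are made up, and all jobs are assumed to arrive at time 0) shows how running shortest-first minimizes the average waiting time:

```python
# Illustrative sketch: non-preemptive SJF as priority scheduling where the
# priority is the (assumed-known) burst length.

def sjf_average_waiting_time(bursts):
    """Run the jobs shortest-first and return the average time each
    job spends waiting before it starts."""
    clock, total = 0, 0
    for burst in sorted(bursts):   # shortest job first = highest priority
        total += clock             # this job waited `clock` time units
        clock += burst
    return total / len(bursts)

# Bursts 6, 8, 7, 3 run as 3, 6, 7, 8 -> waits 0, 3, 9, 16 -> average 7.0.
print(sjf_average_waiting_time([6, 8, 7, 3]))  # 7.0
```

Running the same set in FCFS order 6, 8, 7, 3 would give waits 0, 6, 14, 21 (average 10.25), illustrating why SJF is optimal for this metric.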

On a system using round-robin scheduling, what would be the effect of including one process twice in the list of processes?
Ans:- The process would effectively receive twice as much CPU time as the other processes: each of its two entries in the ready queue is dispatched in turn, so in every round the process runs for two time quanta instead of one.

What do you mean by external fragmentation and internal fragmentation? What is fragmentation? How can you classify fragmentation? How does page size affect fragmentation?
Ans:- Fragmentation: Fragmentation refers to the inability of the OS to allocate portions of unused memory. It can lead to wasted resources.
Classification of fragmentation: Fragmentation is classified at two levels:
a) Internal fragmentation: Internal fragmentation is the space wasted (for example, by an allocator such as malloc) in trying to fit data into a fixed-size block of logical memory.


b) External fragmentation: External fragmentation is the space lying between segments in the physical memory.

Every object in physical memory is allocated in units of a page or frame, and every hole in memory must also be a multiple of the page size, so one is guaranteed to be able to fit a page block into a page hole. Choosing a small page size for the system means fewer bytes will be wasted per page, which minimizes internal fragmentation; however, the system overhead grows larger as the page size is reduced.

What is the Translation Lookaside Buffer (TLB)?
Ans:- In a cached system, the base addresses of the last few referenced pages are maintained in a set of registers called the TLB, which aids faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses – one to fetch the appropriate page-table entry, and one to fetch the desired data. Using a TLB in between, this is reduced to just one physical memory access in the case of a TLB hit. In a TLB the search is fast; the hardware, however, is expensive. Typically the number of entries in a TLB is small, often numbering between 64 and 1,024.
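The benefit of the TLB can be quantified with a standard effective-access-time calculation. The timings (20 ns TLB lookup, 100 ns memory access) and the 80% hit ratio below are assumed values for illustration:

```python
# Hedged worked example: effective access time (EAT) with a TLB.

def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    hit = tlb_ns + mem_ns            # TLB hit: one memory access for the data
    miss = tlb_ns + 2 * mem_ns       # miss: fetch page-table entry, then data
    return hit_ratio * hit + (1 - hit_ratio) * miss

# 0.8 * 120 + 0.2 * 220 = 96 + 44 = 140 ns on average,
# versus 220 ns if every reference walked the page table.
print(effective_access_time(0.8, 20, 100))  # 140.0
```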

Why is page size always a power of 2?
Ans:- Page sizes are always powers of 2. This makes the translation of a logical address into a page number and page offset very easy. If the size of the logical address space is 2^n and the page size is 2^p, then the logical address is split as follows:

Page number: the high n − p bits | Offset: the low p bits
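Because of this power-of-2 split, no division is needed: the page number and offset fall out of the address by a shift and a mask. A small sketch (the 4 KB page size, i.e. p = 12, and the sample address are assumed):

```python
# Sketch: extracting page number and offset by bit operations when the
# page size is 2**p.

P = 12                            # page size = 2**12 = 4096 bytes
addr = 0x2A7F                     # some logical address (made up)

page_number = addr >> P           # the high n-p bits
offset = addr & ((1 << P) - 1)    # the low p bits
print(page_number, hex(offset))   # 2 0xa7f

# The two fields recombine to the original address.
assert page_number * 4096 + offset == addr
```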

Explain the multilevel feedback queue.
Ans:- Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue to prevent starvation.
Example of a multilevel feedback queue with three queues:
• Q0 – time quantum 8 milliseconds
• Q1 – time quantum 16 milliseconds
• Q2 – FCFS
Scheduling:
• A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
• At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
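The demotion rule of the three-queue example can be sketched as below (a deliberate simplification that ignores interleaving between jobs and considers only where a single job of a given burst length would finish; the burst times are made up):

```python
# Hedged sketch of the three-queue example: 8 ms in Q0, 16 more in Q1,
# and the remainder in Q2 under FCFS.

def final_queue(burst_ms):
    """Return the queue in which a job with the given CPU burst completes."""
    if burst_ms <= 8:
        return "Q0"          # finished within the first 8 ms quantum
    if burst_ms <= 8 + 16:
        return "Q1"          # finished within the extra 16 ms
    return "Q2"              # long job, demoted to the FCFS queue

for burst in (5, 20, 100):
    print(burst, "->", final_queue(burst))
# 5 -> Q0, 20 -> Q1, 100 -> Q2
```

Short, interactive bursts thus stay in the high-priority queues while CPU-bound jobs sink to Q2, which is exactly the separation the scheme aims for.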

Define the turnaround time of a job.
Ans:- Throughput: One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, throughput might be 10 processes per second.


Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
Response time: Response time is the amount of time a process takes to start responding, not the time that it takes to output that response.
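These metrics are easiest to see with a small calculation (the burst times are made up; all jobs are assumed to arrive at time 0 and run FCFS in the order given, with no I/O):

```python
# Illustrative sketch: turnaround and waiting times under FCFS.

def fcfs_metrics(bursts):
    clock, turnaround, waiting = 0, [], []
    for burst in bursts:
        waiting.append(clock)            # time spent in the ready queue
        clock += burst
        turnaround.append(clock)         # from submission (t=0) to completion
    return turnaround, waiting

t, w = fcfs_metrics([24, 3, 3])
print(t)  # [24, 27, 30]
print(w)  # [0, 24, 27]
```

With no I/O and arrival at time 0, each job's turnaround time is simply its waiting time plus its burst, which the lists above confirm.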

State the advantages of variable partitioning over fixed partitioning in memory?
Ans:- Advantages of variable partitioning over fixed partitioning are as follows:
1) Fixed (static) partitioning generally implies that the division of memory is made at some time prior to the execution of user programs and that the partitions remain fixed thereafter, which results in internal fragmentation. In variable partitioning, on the other hand, starting from the initial state of the system, partitions are created dynamically to fit the needs of each requesting process, so there is no internal fragmentation.
2) Less wastage of memory in comparison to fixed partitioning.
3) Fixed partitioning imposes a restriction on program size. In variable partitioning, the memory manager may continue to create and allocate partitions to requesting processes until all physical memory is exhausted or the maximum allowable degree of multiprogramming is reached.

What is critical section? What are the conditions that must be satisfied by the solution to a critical-section problem?
Ans:- Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
-> Assume that each process executes at a nonzero speed.
-> No assumption is made concerning the relative speed of the n processes.

What is semaphore? How is it accessed?
Ans:- To overcome the critical-section problem we use a synchronization tool called a semaphore. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait) and V (for signal). The classical definition of wait in pseudocode is:

wait(S) {
    while (S <= 0)
        ;        // no operation (busy wait)
    S--;
}

The classical definition of signal in pseudocode is:

signal(S) {
    S++;
}
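A semaphore initialized to 1 (a binary semaphore) can protect a shared counter against the race condition described elsewhere in these notes. A minimal sketch using Python's standard library, where `acquire` plays the role of wait(S) and `release` of signal(S):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore guarding `counter`
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()          # wait(S): blocks while the semaphore is 0
        counter += 1           # critical section
        sem.release()          # signal(S)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000: no increments are lost
```

Without the acquire/release pair, concurrent `counter += 1` updates could interleave and lose increments; the semaphore serializes them.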

State and explain banker's algorithm and its application in operating system with a suitable example.
Ans:- The banker's algorithm is so named because it could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers.

Let n be the number of processes in the system and m be the number of resource types. We need the following data structures:
Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.
Max: An n x m matrix defines the maximum demand of each process. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
Need: An n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete its task. Note that Need[i,j] = Max[i,j] - Allocation[i,j].
Safety algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
   a. Finish[i] = false
   b. Needi ≤ Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi; Finish[i] := true; go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
Resource-request algorithm:
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:
   Available := Available - Requesti;
   Allocationi := Allocationi + Requesti;
   Needi := Needi - Requesti;
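The safety algorithm above can be sketched directly. The matrices below are the standard 5-process, 3-resource-type textbook example (the concrete numbers are illustrative, not from these notes):

```python
# Safety algorithm: repeatedly find a process whose Need fits in Work,
# pretend it runs to completion, and reclaim its Allocation.
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]

def is_safe(available, allocation, need):
    work = available[:]                 # Work := Available
    finish = [False] * len(allocation)  # Finish[i] := false
    order = []                          # a safe sequence, if one exists
    while True:
        found = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                found = True
        if not found:
            return all(finish), order   # safe iff every process could finish

safe, order = is_safe(available, allocation, need)
print(safe, order)   # True [1, 3, 4, 0, 2]
```

The state is safe because the sequence P1, P3, P4, P0, P2 lets every process obtain its maximum claim and terminate.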


Explain the concept of overlay management system with diagram.
Ans:- To enable a process to be larger than the amount of memory allocated to it, a technique called overlays is sometimes used. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space that was previously occupied by instructions that are no longer needed. Overlays are implemented by the user; no special support is needed from the operating system, but the programming design of an overlay structure is complex.
Fig: Overlays for a two-pass assembler

What are co-operative processes and race condition?
Ans:- Co-operative process: A process is co-operating if it can affect or be affected by the other processes executing in the system. In other words, any process that shares data with other processes is a co-operating process. Co-operation provides information sharing, computation speed-up, modularity and convenience.
Race condition: A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place is called a race condition. To guard against race conditions, we need to ensure that only one process at a time can be manipulating the shared variable.

Explain a deadlock situation with a suitable example. How can we prevent deadlock?
Ans:- Deadlock: In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock.
Consider the following:
i) Process P1 holds resource R1 and requests resource R3.
ii) Process P2 holds resource R2 and requests resource R1.
iii) Process P3 holds resource R3 and requests resource R2.
We can represent the dependency graphically. On the basis of these requests a circular wait is formed, which is a true condition for deadlock, until some resource is freed.

Fig: Resource-allocation graph with a deadlock (request edges P1→R3, P2→R1, P3→R2; assignment edges R1→P1, R2→P2, R3→P3 form a cycle).
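Collapsing each "Pi requests a resource held by Pj" pair from the example above gives a wait-for graph, and the deadlock shows up as a cycle. A minimal cycle-check sketch:

```python
# Wait-for graph from the example: P1 waits for P3 (which holds R3),
# P3 waits for P2 (which holds R2), P2 waits for P1 (which holds R1).
waits_for = {"P1": "P3", "P3": "P2", "P2": "P1"}

def has_cycle(graph):
    # Each process waits for at most one other, so just follow the chain
    # from every start node and see whether it revisits itself.
    for start in graph:
        seen, node = set(), start
        while node in graph and node not in seen:
            seen.add(node)
            node = graph[node]
        if node in seen:     # walked back onto the path: circular wait
            return True
    return False

print(has_cycle(waits_for))   # True
```

With single-instance resource types, a cycle in this graph is both necessary and sufficient for deadlock, which is exactly the circular-wait condition.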


Methods for handling deadlocks: Principally we can deal with the deadlock problem in one of three ways:
i) We can use a protocol to prevent or avoid deadlock, ensuring that the system never enters a deadlock state.
ii) We can allow the system to enter a deadlock state, detect it and recover.
iii) We can ignore the problem altogether and pretend that deadlocks never occur in the system.
Prevention: Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions (mutual exclusion, hold and wait, etc.) cannot hold.
Deadlock prevention can be done in the following ways:
1) Denying mutual exclusion.
2) Denying hold and wait.
3) Denying no preemption.
4) Denying circular wait.
Detection: If a system does not employ either deadlock prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment the system must provide:
a) An algorithm that examines the state to determine whether a deadlock has occurred.
b) An algorithm to recover from the deadlock.
Avoidance: Deadlock avoidance, on the other hand, requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, we can decide for each request whether or not the process should wait.

Which one is better of the following: a) Detection & recovery b) Avoidance c) Prevention?
Ans:-
a) Deadlock detection & recovery: If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock situation may occur. In this environment the system must provide:
i) An algorithm that examines the state of the system to determine whether a deadlock has occurred.
ii) An algorithm to recover from the deadlock.
When a detection algorithm determines that a deadlock exists, several alternatives exist. One possibility is to inform the operator that a deadlock has occurred and let the operator deal with it manually. The other possibility is to let the system recover from the deadlock automatically. There are two options for breaking a deadlock: one is simply to abort one or more processes to break the circular wait; the second is to preempt some resources from one or more of the deadlocked processes.
b) Avoidance: Deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional information, we can decide for each request whether or not the process should wait. Avoidance prevents deadlock by restraining how requests can be made; the restraining ensures that at least one of the necessary conditions for deadlock cannot hold.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The numbers of available and allocated resources and the maximum demands of the processes define the resource-allocation state.
c) Prevention: Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions for deadlock cannot occur. These methods prevent deadlock by constraining how requests for resources can be made.
Deadlock prevention can be done in the following ways:


i) Denying mutual exclusion: The mutual exclusion condition must hold for non-sharable resources. For example, several processes cannot share a printer simultaneously. Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource: if several processes attempt to open a read-only file at the same time, they can all be granted access, so a process never needs to wait for a sharable resource.
ii) Denying hold and wait: To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources.
iii) Denying no preemption: The third necessary condition is that there be no preemption of resources that have already been allocated. If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all resources currently being held are preempted. These resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources as well as the new ones that it is requesting.
iv) Denying circular wait: The fourth and final condition for deadlock is circular wait. To ensure that this condition never holds, we impose a total ordering of all resource types and require that each process requests resources in an increasing order of enumeration.

So, after discussing the three approaches, we can say that deadlock avoidance is the better among them because it ensures that the system never enters a deadlock state.

What are the necessary conditions to have a deadlock?
Ans:- Deadlock occurs when a number of processes are waiting for an event which can only be caused by another of the waiting processes. The necessary conditions for a deadlock are:
i) Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
ii) Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
iii) No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
iv) Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
There are three methods for handling deadlock situations: prevention, recovery, and the ostrich method.

How can you avoid deadlock?
Ans:- Several points about deadlock avoidance:
i) It tries to anticipate deadlock.
ii) It will deny resource requests if the algorithm decides that granting the request could lead to deadlock.
iii) Because of the non-determinacy of the OS, the algorithm is not always exact, and may deny requests that would not in fact have led to deadlock.
iv) Deadlock does not occur immediately after allocating a resource, so one cannot simply make a projection on the state graph and then run a deadlock-detection algorithm.

What is scheduling? How can you classify scheduling policies?
Ans:- On most multitasking systems, only one process can truly be active at a time; the system must therefore share its time between the executions of many processes. This sharing is called scheduling.


There are three types of scheduling policies:
i) FCFS/FIFO (First Come First Serve / First In First Out)
ii) SJF (Shortest Job First)
iii) Round Robin

What is thread? Why do you use thread?
Ans:- Thread: A thread, sometimes called a lightweight process, is a basic unit of CPU utilization. A process is divided into a number of lightweight processes; each lightweight process is said to be a thread.
Using threads, it is possible to organize the execution of a program in such a way that something is always being done whenever the scheduler gives the heavyweight process CPU time. We can summarize the usefulness of threads as follows:
i) Threads allow a program to switch between lightweight processes when it is best for the programmer.
ii) A process which uses threads does not get more CPU time than an ordinary process.
iii) Inside a heavyweight process, threads can be scheduled on an FCFS basis, unless the program decides to force certain threads to wait for other threads.
iv) Threads can context switch without any involvement of the kernel.

What are the levels of threads available in modern OSs? Point out some implementations of threads.
Ans:- There are two levels at which threads can operate: system (kernel) level threads and user level threads.
a) System level threads: A system level or kernel level thread behaves like a virtual CPU to which user processes can connect in order to get computing power.
b) User level threads: User level threads are managed in user space; the number of user level threads that can be active at any one time is equal to the number of system level threads available. The kernel has as many system level threads as it has CPUs, and each of these must be shared between all the user threads on the system.

Explain why a round robin scheduling policy would not be appropriate for managing a print queue?
Ans:- Round robin scheduling is primarily used in time-sharing and multi-user systems where the main requirement is to provide reasonably good response time and, in general, to share the system fairly among all the processes.
The success or failure of round robin scheduling depends on the time slice, or time quantum: when a process's time slice is over, the CPU is switched over to another process. In the case of a print queue, several processes are waiting for the printer, and using round robin would interleave their output and produce disordered printing, so a user could not get the desired output. For this reason round robin is not used for print queues.

Discuss the merits and demerits of a run-time linker.
Ans:- The merits and demerits of a run-time linker are given below:
i) Considerable savings in disk space are made, because the standard library code is never joined to the executable file which is stored on disk; thus there is only one copy of the shared library on the system.
ii) A saving in RAM: once loaded, the library can often be shared by several programs.
iii) A performance penalty is transferred from load time to run time: the first time a function is accessed, the library must be loaded from disk during the execution of the program. In the long run this can still be cheaper than the time it would otherwise have taken to load a private copy of the library for every program which now shares it. Also, the amount of RAM needed to support programs is now considerably less.


What do you mean by logical address and physical address? How is a logical address converted to a physical address?
Ans:- Keeping physical and logical addresses completely separate introduces a new level of abstraction to the memory concept. User programs know only logical addresses. Logical addresses are mapped onto real physical addresses, at some location which is completely transparent to the user, by means of a conversion table. The conversion can be assisted by hardware processors which are specially designed to deal with address mapping.
The part of the system which performs the conversion is called the memory management unit (MMU). The conversion table of addresses is kept for each process in its process control block (PCB) and must be downloaded into the MMU during context switching.
The conversion of logical addresses into physical addresses is familiar from many programming languages and is analogous to the use of pointers. Instead of referring to data directly, one uses a pointer variable which holds the true address at which the data are kept. In machine language, the same scheme is called "indirect addressing". The difference between logical addresses and pointers is that all pointers are user objects, and thus pointers only point from one place in logical memory to another place in logical memory. The mapping from logical to physical is only visible to the designer of the system.

What is thrashing? When does it occur?
Ans:- A sequence of paging operations could take of the order of milliseconds under favorable conditions. It is possible for the system to get into a state where there are so many processes competing for limited resources that it spends more time servicing page faults and swapping processes in and out than it does executing the processes. This sorry state is called thrashing.
Thrashing can occur when there are too many active processes for the available memory. It can be alleviated in certain cases by making the system page at an earlier threshold of memory usage than normal. In most cases, the best way to recover from thrashing is to suspend processes and forbid new ones, to try to clear some of the others by allowing them to execute. The interplay between swapping and paging is important here too, since swapping effectively suspends jobs.

Show the steps for creating a process. Explain process hierarchy.
Ans:- For creating a process the following steps are required:
a) Name: The name of the program which is to run as a new process must be known.
b) Process ID and process control block: The system creates a new process control block, or locates an unused block in an array. Keeping track of resources and priorities, it is used to follow the execution of the program. Each PCB is identified by its process identifier, or PID.
c) Locate the program: The program to be executed must be located on disk, and memory allocated for the code segment in RAM.
d) Load the program: The program is loaded into the code segment, and the registers of the PCB are initialized with the start address of the program and appropriate starting values for resources.
e) Priority: A priority must be computed for the process.
f) Scheduling: The process is scheduled for execution.

What is process scheduling? Discuss different types of scheduling with OS examples.


Ans:- Process scheduling: On most multitasking systems only one process can be active at a time. This means that among many processes a single process gets priority for a moment, for its execution. Thus time is shared among different processes for execution, which is known as "process scheduling".
There are mainly three types of process scheduling:
i) FCFS/FIFO: First come first serve is a process scheduling technique in which jobs that arrive first are executed first. It is a very simple technique and incurs almost no system overhead. It is appropriate for serial or batch jobs, like print spooling and requests from a server. It is also known as "FIFO", meaning first-in-first-out.
ii) SJF: Shortest job first is a technique in which priority is given to jobs according to their size, meaning that longer jobs have to wait until the execution of the shortest job is completed. Since each task in the queue must be evaluated to determine the shortest job, it costs quite a lot of system overhead.
iii) Round Robin: This is a time-sharing approach in which several tasks can co-exist. The success or failure of round robin scheduling depends on the time slice or time quantum. Generally a short time slice (1 sec, 2 sec, etc.) is given to each process in turn, so the scheduling of processes is fair; hence the name "round robin" scheduling.
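The FCFS vs. SJF trade-off above is easy to quantify: with all jobs present at time 0, SJF minimizes average waiting time. A sketch with hypothetical burst times:

```python
# Average waiting time for a given execution order (all jobs arrive at time 0).
def avg_wait(order):
    clock, waits = 0, []
    for burst in order:
        waits.append(clock)   # each job waits for everything scheduled before it
        clock += burst
    return sum(waits) / len(waits)

bursts = [6, 8, 7, 3]                 # hypothetical bursts, in arrival order
print(avg_wait(bursts))               # FCFS: 10.25
print(avg_wait(sorted(bursts)))       # SJF (run shortest first): 7.0
```

Running the 3 ms job first instead of last cuts the average waiting time from 10.25 to 7.0 ms, at the cost of having to know (or estimate) the burst lengths.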

What are short-, long- and medium-term scheduling?
Ans:- The long term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium term scheduling is part of the swapping function. It relates to processes that are in a blocked or suspended state. They are swapped out of real memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.
The short term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs. It may lead to interruption of one process by preemption.

What is domain of protection?
Ans:- A domain of protection can be defined as a collection of access rights, each of which is an ordered pair <object, rights>. One domain may share its access rights with other domains.
The association between a process and a domain may be either static or dynamic. A domain can be realized in a variety of ways:
• Each user may be a domain. In this case, the set of objects that can be accessed depends on the identity of the user. Domain switching occurs when the user is changed, generally when one user logs out and another user logs in.
• Each process may be a domain. In this case, the set of objects that can be accessed depends on the identity of the process. Domain switching corresponds to one process sending a message to another process and then waiting for a response.
• Each procedure may be a domain. In this case, the set of objects that can be accessed corresponds to the local variables defined within the procedure. Domain switching occurs when a procedure call is made.

What is access matrix?


Ans:- In an access matrix, protection is viewed as a matrix. Each row represents a domain and each column represents an object. Access(i, j) is the set of operations that a process executing in Domain i can invoke on Object j.
The model of protection can be represented with the help of a two-dimensional matrix, where both hardware and software objects are included. Blank entries indicate no access rights. For example, a file F3 may be shared (readable and writable) in domain D4 and also executable in domain D3.
Uses:
• Can be extended to dynamic protection, with operations to add and delete access rights, and special access rights.
• The access matrix design separates mechanism from policy.
Mechanism:
@ The operating system provides the access matrix plus rules.
@ It ensures that the matrix is manipulated only by authorized agents and that the rules are strictly enforced.
Policy:
@ The user dictates policy: who can access what object and in what mode.
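Since most entries in an access matrix are blank, it is naturally stored sparsely. A minimal sketch as a dictionary keyed by (domain, object); the domain, file and right names here are illustrative:

```python
# Sparse access matrix: only non-blank (domain, object) cells are stored.
access = {
    ("D1", "F1"): {"read"},
    ("D3", "F3"): {"execute"},
    ("D4", "F3"): {"read", "write"},   # F3 shared into D4 with extra rights
}

def allowed(domain, obj, op):
    # A blank cell means no rights at all for that domain on that object.
    return op in access.get((domain, obj), set())

print(allowed("D4", "F3", "write"))   # True
print(allowed("D2", "F1", "read"))    # False: blank entry
```

This is the mechanism half of the design: the table and the lookup rule. Which entries are populated, and by whom, is policy.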

Difference between logical and physical address space?
Ans:- An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory address register of the memory) is commonly referred to as a physical address.
The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address, and the two terms are used interchangeably.
The set of all logical addresses generated by a program is referred to as a logical address space; the set of all physical addresses corresponding to these logical addresses is referred to as a physical address space. Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ.
The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. For example, if the base is at 14,000, then an attempt by the user to address location 0 is dynamically relocated to location 14,000; an access to location 346 is mapped to location 14,346.
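The relocation-register example above (base 14,000) can be sketched directly. The limit check is an assumed addition here, modeling the usual base/limit protection pair rather than anything stated in the notes:

```python
RELOCATION = 14_000   # base value loaded into the relocation register

def to_physical(logical, limit=10_000):
    # The MMU adds the relocation register to every logical address.
    # The limit check (hypothetical value) models basic memory protection:
    # a logical address outside [0, limit) traps instead of being mapped.
    if not 0 <= logical < limit:
        raise ValueError("address outside the process's logical address space")
    return logical + RELOCATION

print(to_physical(0))     # 14000
print(to_physical(346))   # 14346
```

The user program only ever sees 0 and 346; the physical locations 14,000 and 14,346 are invisible to it, which is the point of the abstraction.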


Fig: Dynamic relocation using a relocation register
The user program supplies logical addresses; these logical addresses must be mapped to physical addresses before they are used.
The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.

Define compaction. What are the drawbacks of compaction?
Ans:- Compaction is a technique to reduce external fragmentation. The goal is to shuffle the memory contents so as to place all free memory together in one large block. The simplest compaction algorithm moves all processes toward one end of memory, so that all holes move in the other direction, producing one large hole of available memory. This scheme can be expensive.
Drawback of compaction: Compaction is not always possible. If relocation is static and is done at assembly or load time, compaction cannot be done; compaction is possible only if relocation is dynamic and is done at execution time.

Define the functions of assembler, loader and linker.
What do you mean by "RELOCATABLE LOADER"?
What is the advantage of a two-pass assembler over a single-pass assembler?
What is the purpose of "Pass 1" in a two-pass assembler?
What are the different tables used for designing a two-pass assembler?
Explain the functions of pass 1 and pass 2 of a two-pass assembler.
Why are both the mnemonic table and the symbol table required in the synthesis phase of a two-pass assembler?
What is the role of the compiler? Diagrammatically represent its different phases.
What is the role of the assembler?
Compare and contrast pass 1 and pass 2 of an assembler.
Short notes: (*) Dynamic partitioning. (*) Scanning and parsing (lexical and syntactic analysis).
Briefly discuss the remote procedure call (RPC) mechanism.


Explain how IPC takes place.
What is the dining philosophers' problem? Describe an algorithm to solve the problem using semaphores. Or: explain the dining philosophers' problem and give its solution using semaphores.
"The worst fit algorithm performs better when it is used for variable partitioning": justify or contradict.
Illustrate how the critical region method overcomes the limitations of using semaphores.
State the difference between a compiler and an interpreter.