
OPERATING SYSTEM

1. What is meant by a system call?
A system call instructs the kernel to perform various operations for the calling process and to exchange data between the kernel and the process. A process can access the kernel only through system calls. The set of system calls defines the programming interface offered by the kernel to user processes.
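As a small user-level illustration (the file path and message are just examples), a C program that asks the kernel to create and write a file through the open(), write() and close() system calls:

/* A user process cannot touch kernel data structures directly; it asks the
 * kernel to act on its behalf through system calls. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/tmp/example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;                    /* the kernel refused the request */
    write(fd, "hello\n", 6);         /* kernel copies data from user space */
    close(fd);                       /* kernel releases the descriptor */
    return 0;
}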

2. What is called bootstrap?
Bootstrap is the procedure of booting the system by placing a copy of the kernel in the system's main memory and starting its execution.

3. What are the events that cause the system to enter kernel mode?
Device interrupts
Software interrupts
Exceptions

4. Distinguish between interrupts and exceptions.

Interrupts:
Interrupts are asynchronous events caused by peripheral devices such as disks and terminals.
They require rapid servicing.
They must not be blocked, except while the process is accessing critical regions.

Exceptions:
Exceptions are synchronous events caused by the process itself, such as dividing by zero.
They do not require rapid servicing.
They can be blocked.

5. What are signals?
Signals are used to inform a process of asynchronous events and to handle exceptions.

6. What is meant by a scheduler?
The scheduler is the component of the operating system that determines which process to run at any given time and for how long it should run. Its primary functions are to decide when to perform a context switch and which process to run next.

7. What is meant by dispatch latency?
Dispatch latency is the delay between the time a process becomes runnable and the time it actually begins running.

8. What is meant by a preemption point?
A preemption point is a place in the kernel code where all kernel data structures are in a stable state and the kernel is about to begin a lengthy computation. A kernel may have several preemption points.

9. How are real-time processes different from time-sharing processes?

Real-time processes:
The dispatch latency should be as small as possible.
The response time should be good.
Exceptions should be handled cleanly.
They have a fixed priority and time quantum.

Time-sharing processes:
Dispatch latency is not a major concern.
The response time need not be tightly bounded.
Exceptions need not be handled with the same care.
They do not have a fixed priority or time quantum.

10. A true real-time system should have a ________ type of kernel, so that it can achieve __________.

Ans: Fully preemptible kernel, good response time.

11. What is meant by symmetric multiprocessing and asymmetric multiprocessing systems?

In a symmetric multiprocessing system, all the CPUs are treated equally. The kernel code is shared by all the CPUs, any CPU can access any part of the kernel code, and any CPU can run any kernel or user process at any given time.

On the other hand, in an asymmetric multiprocessing system, the CPUs are not treated equally. Individual subsystems are allocated to individual processors, and one processor cannot access a subsystem allocated to another processor.

12. What are the three types of multiprocessing systems?
Master-slave
Asymmetric
Symmetric

13. What is meant by a master-slave multiprocessing system?
In this system, a single processor (the master) is given more control and is responsible for controlling the other processors (the slaves). It is asymmetric in nature.

14. How are applications classified?
Interactive
Batch
Real-time applications

15. What is the need for classifying applications with respect to scheduling?

In the case of interactive applications, the scheduling goal is to reduce the average time, and the variance, between a user action and the application's response, e.g. editors and GUIs.

Background jobs are referred to as batch applications. Here the goal is to reduce the tasks' completion time and to maintain throughput in the presence of other activities.

Time-critical applications are referred to as real-time applications. Here the goal is to provide guaranteed response times.

16. What is a semaphore?
A semaphore is an integer-valued object that has two atomic operations defined on it, P() and V(). The P() operation decrements the value of the semaphore if its value is greater than zero, and the V() operation increments its value. If the resulting value is greater than or equal to zero, the V() operation wakes up a waiting process.
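A minimal user-level sketch of these semantics, built on a pthread mutex and condition variable rather than a real kernel primitive (the ksema_* names are invented for the example):

/* Counting semaphore sketch: P() blocks while the value is zero, V()
 * increments the value and wakes one waiter. */
#include <pthread.h>

typedef struct {
    int value;                    /* the semaphore count */
    pthread_mutex_t lock;         /* protects value */
    pthread_cond_t  nonzero;      /* signalled when value becomes positive */
} ksema_t;

void ksema_init(ksema_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void ksema_P(ksema_t *s) {            /* P(): wait, then decrement */
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void ksema_V(ksema_t *s) {            /* V(): increment and wake a waiter */
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}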

17. What is meant by inter-process communication?
In a complex programming environment, many processes communicate with each other to share resources and information, and the kernel provides the necessary mechanisms to achieve this. These mechanisms are collectively referred to as inter-process communication.

18. What are the distinct purposes of using inter-process communication?
Data transfer
Resource sharing
Data sharing
Event notification
Process control

19. _________ provides the fastest mechanism for processes to share data.
Ans: Shared memory.

20. What are the services offered by semaphores?
They can provide mutual exclusion on a resource.
They can be used to wait for an event.
They can be used to allocate a countable resource.

21. What is the advantage of using a semaphore over the sleep/wakeup technique?
In the sleep/wakeup technique, when a process releases the resource it holds, it wakes up all the processes waiting for the resource to become free, even though only one of them can acquire it; waking up all the processes is therefore unnecessary.

With a semaphore, when a process wakes up inside a P() operation, the woken-up process is guaranteed to have the resource. Ownership is transferred completely to the woken-up process, so if any other process tries to acquire the resource in the meantime, it cannot do so.

22. What is meant by a spin lock?
A spin lock is a scalar variable used as the simplest locking primitive. If a resource is protected by a spin lock, processes trying to acquire the resource busy-wait until the resource is unlocked. It is also known as a simple lock or a simple mutex.
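A minimal sketch of a spin lock using C11 atomics (a user-space illustration; a kernel implementation would typically also disable preemption and use architecture-specific pause instructions):

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;
#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

void spin_lock(spinlock_t *l) {
    /* Busy-wait until the flag was previously clear. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                             /* spin: the CPU does no useful work here */
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}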

23. Is it advisable to use spin locks in all situations?
No. It is advisable to use spin locks only when the process holding the lock executes for an extremely short duration, because a processor waiting for the lock is tied up busy-waiting until the lock is released.

24. What is meant by a condition variable?
A condition variable is a variable associated with a predicate based on some shared data. It allows processes to block on it and provides facilities to wake up one or more blocked processes when the result of the predicate changes. It is more useful for waiting on events than for resource locking.
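A small sketch of blocking on a predicate with a POSIX condition variable (the pending_work counter and the producer/consumer names are illustrative):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
static int pending_work = 0;               /* shared data behind the predicate */

void consumer_wait(void) {
    pthread_mutex_lock(&m);
    while (pending_work == 0)              /* re-check the predicate after wakeup */
        pthread_cond_wait(&work_ready, &m);
    pending_work--;                        /* consume one unit of work */
    pthread_mutex_unlock(&m);
}

void producer_post(void) {
    pthread_mutex_lock(&m);
    pending_work++;                        /* change the shared data ... */
    pthread_cond_signal(&work_ready);      /* ... then wake one blocked waiter */
    pthread_mutex_unlock(&m);
}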

25. What is meant by a read-write lock?
A read-write lock is a complex lock that permits both shared and exclusive access modes for a resource and is built on top of simple locks and condition variables. It permits either a single writer or multiple readers.
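For illustration, the POSIX read-write lock exposes exactly these two access modes (shared_counter here stands in for any shared data structure):

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static long shared_counter = 0;

long reader(void) {                  /* many readers may hold the lock at once */
    pthread_rwlock_rdlock(&rw);
    long v = shared_counter;
    pthread_rwlock_unlock(&rw);
    return v;
}

void writer(long delta) {            /* a writer gets exclusive access */
    pthread_rwlock_wrlock(&rw);
    shared_counter += delta;
    pthread_rwlock_unlock(&rw);
}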

26. What are the two distinct types of multitasking?
Process-based multitasking
Thread-based multitasking

27. Briefly explain the sleep/wakeup mechanism involved in synchronization.
The sleep/wakeup mechanism associates locked and wanted flags with a shared resource. When a process wants to access the resource, it first checks the locked flag. If the flag is clear, the process sets it and proceeds to use the resource. If another process wants to access the same resource while it is in use, it finds the locked flag set, sets the wanted flag associated with the resource, and goes to sleep. When the first process finishes using the resource, it clears the locked flag and, by checking the wanted flag, wakes up the processes waiting for the resource to become free.

28. Briefly explain the lost-wakeup problem.
In a multiprocessor system, suppose a process A has locked a resource. Another process B, running on another processor, tries to acquire the same resource and finds it locked, so it calls the sleep() function to wait for the resource to become free. Between the time B finds the resource locked and the time it calls sleep(), process A releases the resource and wakes up all the processes blocked on it. Since B has not yet been put on the sleep queue, it misses the wakeup, and even though the resource is no longer locked, if no other process tries to acquire it, process B could block indefinitely. This is known as the lost-wakeup problem.

29. What is meant by the thundering herd problem?
In a multiprocessor system, several processes may be blocked on a resource held by some process A. When process A finishes using the resource, it calls the wakeup() procedure to wake all the processes waiting for the resource to become free. Waking them all may cause them to be scheduled simultaneously on different processors, where they all fight for the same resource again. This is referred to as the thundering herd problem.


30. What are the two common techniques used to avoid deadlock?
Hierarchical locking
Stochastic locking

31. Explain hierarchical locking and stochastic locking.
In hierarchical locking, the locking mechanism imposes an order on related locks and requires that all threads take the locks in the same order. As long as the ordering is strictly followed, deadlock cannot occur.

Stochastic locking is used in situations where the ordering must be violated. When a process attempts to acquire a lock that would violate the hierarchy, it uses a try-lock() operation instead of lock(). This operation attempts to acquire the lock, but returns failure instead of blocking if the lock is already held.
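A sketch of this idea with pthread_mutex_trylock(); the two locks and the back-off-and-retry strategy shown here are illustrative, not taken from the text:

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* hierarchy: a before b */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Called while already holding lock_b but now needing lock_a,
 * which would violate the a-before-b ordering if we blocked. */
void take_a_out_of_order(void) {
    if (pthread_mutex_trylock(&lock_a) == 0)
        return;                       /* acquired without blocking: no deadlock risk */
    /* Would have had to block: back off by releasing lock_b,
     * then reacquire both locks in the proper hierarchy order. */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
}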

32. What is meant by advisory processor locks?
An advisory processor lock is basically a recursive lock that contains a hint for contending processes. The hint specifies whether contending threads should spin or sleep, and whether the hint is advisory or mandatory.

33. Distinguish between parallelism and concurrency.
The parallelism of a multiprocessor application is the actual degree of parallel execution achieved, and is therefore limited by the number of physical processors available to the application.

The application's concurrency is the maximum parallelism it could achieve with an unlimited number of processors. It depends on how the application is written and on how many threads of control can execute simultaneously, given the necessary resources.

34. Define thread.
A thread is a dynamic object that represents a control point in a process and executes a sequence of instructions. Resources such as the address space, open files and user credentials are shared by all threads in the process. In addition, each thread has private objects such as a program counter, a stack and a register context.

35. Briefly explain device drivers.
A device driver is a part of the kernel: a collection of data structures and functions that controls one or more devices and interacts with the rest of the kernel through a well-defined interface. It is the only module that may interact directly with the device. Device drivers are extremely hardware-dependent in nature.

36. What is meant by a stream?
A stream is a full-duplex processing and data transfer path between kernel space and a process in user space. It defines a framework for writing device drivers.

37. What is multi-processing?


It refers to the ability of an operating system to use more than one CPU in a single computer system. Symmetrical multiprocessing refers to the OS's ability to assign tasks dynamically to the next available processor, whereas asymmetrical multiprocessing requires that the original program designer choose the processor to use for a given task at the time of writing the program.

38. What is multitasking?
Multitasking is a logical extension of multiprogramming. It refers to the execution of more than one program at the same time on a single computer system, achieved by switching between them.

39. What is multithreading?
Multithreading refers to the concurrent processing of several tasks, or threads, inside the same program or process. Several tasks can be processed in parallel, and no task has to wait for another to finish executing.

40. Define compaction.
Compaction is the mechanism of shuffling memory contents so that all free portions of memory are merged into a single large block. The OS performs this mechanism periodically to overcome the problem of fragmentation, external fragmentation in particular. Compaction is possible only if relocation is dynamic and done at run time; if relocation is static and done at assembly or load time, compaction is not possible.

41. What do you mean by the FAT (File Allocation Table)?
It is a table that indicates the physical location on secondary storage of the space allocated to a file. The FAT allocates clusters (groups of sectors) to files and chains the clusters together to define the contents of a file.

42. What is a kernel?
The kernel is the nucleus or core of the operating system. It is the small, most intensively used part of the code that is often thought of as being the entire operating system. Generally, the kernel is kept permanently in main memory, while other portions of the OS are moved to and from secondary storage (usually the hard disk).

43. How does memory-mapped I/O work?
In memory-mapped I/O, communication between I/O devices and the processor takes place through physical memory locations in the address space. Each I/O device occupies some locations in the I/O address space, i.e. it responds when those addresses are placed on the bus. The processor can write to those locations to send commands and data to the I/O device, and read from them to obtain data and status information. Memory-mapped I/O makes it easy to write device drivers in a high-level language, as long as the language can load from and store to arbitrary addresses.
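A sketch of driver-style access to a memory-mapped device; the device, its base address and its register layout are entirely hypothetical, and on a hosted OS the physical range would first have to be mapped into the process's address space:

#include <stdint.h>

#define UART_BASE   0x10000000u                    /* hypothetical device base */
#define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x0))
#define UART_TXDATA (*(volatile uint32_t *)(UART_BASE + 0x4))
#define TX_READY    0x1u                           /* hypothetical status bit */

void uart_putc(char c) {
    while ((UART_STATUS & TX_READY) == 0)          /* poll device status ...        */
        ;                                          /* ... using ordinary loads      */
    UART_TXDATA = (uint32_t)c;                     /* an ordinary store sends the byte */
}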

44. What are the advantages of threads?


Threads provide parallel processing like processes, but they have one important advantage over processes: they are much more efficient.
Threads are cheaper to create and destroy, because they do not require the allocation and de-allocation of a new address space or other process resources.
It is faster to switch between threads, since the memory mapping does not have to be set up and the memory and address-translation caches do not have to be invalidated.
Threads are efficient because they share memory; they do not have to use system calls (which are slower because of context switches) to communicate.
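A minimal pthreads example showing how cheaply a thread is created and joined compared with spawning a whole process:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    long id = (long)arg;                  /* threads share the process address space */
    printf("hello from thread %ld\n", id);
    return NULL;
}

int main(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, worker, (void *)1L) != 0)
        return 1;                         /* no new address space was allocated */
    pthread_join(t, NULL);                /* wait for the thread to finish */
    return 0;
}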

45. What are kernel threads?
Processes that execute in kernel mode are called kernel threads.

46. What are the necessary conditions for deadlock to exist?
Mutual Exclusion: Only one process may use a critical resource at a time.
Hold & Wait: A process may be allocated some resources while waiting for others.
No Pre-emption: No resource can be forcibly removed from a process holding it.
Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.
These conditions are also referred to as Coffman's conditions.

47. What are the strategies for dealing with deadlock?
Prevention: Place restrictions on resource requests so that deadlock cannot occur.
Avoidance: Plan ahead so that the system never gets into a situation where deadlock is inevitable.
Recovery: When a deadlock is identified in the system, recover from it by removing some of its causes.
Detection: Determine whether a deadlock actually exists and identify the processes and resources involved in it.

48. What is a reentrant procedure?
A reentrant procedure is one in which multiple users can share a single copy of the program during the same period. It is a useful, memory-saving technique for multiprogrammed time-sharing systems. Reentrancy has two key aspects:
The program code cannot modify itself.
The local data for each user process must be stored separately.

49. What is a binary semaphore? What is its use?
A binary semaphore is one that takes only the values 0 and 1. Binary semaphores are used to implement mutual exclusion and to synchronize concurrent processes.
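A short sketch of mutual exclusion with a POSIX semaphore initialised to 1, i.e. used as a binary semaphore (the shared counter is illustrative):

#include <semaphore.h>

static sem_t mutex;
static int shared = 0;

void setup(void) { sem_init(&mutex, 0, 1); }   /* initial value 1 means "unlocked" */

void critical_section(void) {
    sem_wait(&mutex);       /* P(): 1 -> 0, or block if the value is already 0 */
    shared++;               /* at most one thread executes here at a time */
    sem_post(&mutex);       /* V(): 0 -> 1, waking one waiter if any */
}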

50. What is thrashing?


It is a phenomenon in virtual memory schemes in which the processor spends most of its time swapping pages rather than executing instructions, due to an inordinate number of page faults.

51. What are short-term, long-term and medium-term scheduling?
The long-term scheduler determines which programs are admitted to the system for processing and thereby controls the degree of multiprogramming. Once admitted, a job becomes a process.

Medium-term scheduling is part of the swapping function. It relates to processes that are in a blocked or suspended state; they are swapped out of main memory until they are ready to execute, and the swapping-in decision is based on memory-management criteria.

The short-term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. It is invoked whenever an event occurs and may lead to the preemption of the running process.

52. What are turnaround time and response time?
Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between the submission of a request and the first response to that request.

53. What are the typical elements of a process image?
User data: The modifiable part of the user space. It may include program data, a user stack area, and programs that may be modified.
User program: The instructions to be executed.
System stack: Each process has one or more LIFO stacks associated with it, used to store parameters and calling addresses for procedure and system calls.
Process Control Block (PCB): Information needed by the OS to control the process.

54. What is the Translation Lookaside Buffer (TLB)?
In a cached system, the base addresses of the most recently referenced pages are maintained in a set of registers called the TLB, which enables faster lookup. The TLB contains the page-table entries that have been used most recently. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry and one to fetch the desired data. With a TLB in between, this is reduced to a single physical memory access whenever there is a TLB hit.
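As an illustration of the saving, a small effective-access-time calculation with invented figures (100 ns per memory access, 20 ns per TLB lookup, 90% hit ratio):

#include <stdio.h>

int main(void) {
    double mem = 100.0;    /* ns per physical memory access (assumed) */
    double tlb = 20.0;     /* ns per TLB lookup (assumed)             */
    double hit = 0.90;     /* TLB hit ratio (assumed)                 */

    /* Hit: TLB lookup + one memory access.
     * Miss: TLB lookup + page-table fetch + data fetch (two accesses). */
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
    printf("effective access time = %.1f ns\n", eat);   /* prints 130.0 ns */
    return 0;
}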

55. What are the resident set and the working set of a process?
The resident set is the portion of the process image that is actually in main memory at a particular instant. The working set is the subset of the resident set that is actually needed for execution. (Relate this to the variable-window-size method in swapping techniques.)

56. When is a system in safe state?


The set of dispatchable processes is in a safe state, if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.

57. What is cycle stealing?
Cycle stealing is encountered in the context of Direct Memory Access (DMA). Either the DMA controller uses the data bus when the CPU does not need it, or it forces the CPU to temporarily suspend operation; the latter technique is called cycle stealing. Note that cycle stealing can occur only at specific break points in an instruction cycle.

58. What is meant by arm stickiness?
If one or a few processes have a high access rate to data on one track of a storage disk, they may monopolize the device by repeated requests to that track. This generally happens with the common device-scheduling algorithms (LIFO, SSTF, C-SCAN, etc.). High-density multi-surface disks are more likely to be affected by this than low-density ones.

59. What is busy waiting?
The repeated execution of a loop of code while waiting for an event to occur is called busy waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.

60. What are local and global page replacement?
Local replacement means that an incoming page replaces a page belonging to the same process's address space only. A global replacement policy allows any page frame from any process to be replaced. The latter is applicable to the variable-partitions model only.

61. Define latency, transfer time and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track. Rotational delay, or latency, is the time taken for the required sector to rotate under the disk head. The sum of the seek time (if any) and the latency is the access time for a particular sector on a particular track. The time taken to actually transfer a span of data is the transfer time.
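A small worked example with assumed figures (9 ms average seek, a 7200 RPM disk giving roughly 4.17 ms average rotational latency, and an assumed 0.1 ms transfer time for one block):

#include <stdio.h>

int main(void) {
    double seek_ms     = 9.0;                      /* average seek time (assumed)     */
    double rpm         = 7200.0;
    double latency_ms  = 0.5 * (60000.0 / rpm);    /* half a rotation, about 4.17 ms  */
    double transfer_ms = 0.1;                      /* one-block transfer (assumed)    */

    printf("access time = %.2f ms\n", seek_ms + latency_ms);
    printf("total I/O   = %.2f ms\n", seek_ms + latency_ms + transfer_ms);
    return 0;
}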

62. Describe the buddy system of memory allocation.
Free memory is maintained in linked lists, each of equal-sized blocks, where every block has a size of 2^k. When memory is required by a process, a free block of the next higher order is chosen and broken into two. Note that the two pieces differ in address only in their kth bit; such pieces are called buddies. When a used block is freed, the OS checks whether its buddy is also free; if so, the two are rejoined and put back onto the original free-block linked list.
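Because buddies differ only in bit k of their offset, a block's buddy can be found with a single XOR; the offsets used below are just examples:

#include <stdio.h>
#include <stdint.h>

/* For a block of size 2^k whose offset is a multiple of 2^k,
 * the buddy's offset differs only in bit k. */
static uint32_t buddy_of(uint32_t offset, unsigned k) {
    return offset ^ (1u << k);          /* flip bit k */
}

int main(void) {
    printf("buddy of 0x140 (64-byte block)  = 0x%x\n", buddy_of(0x140, 6)); /* 0x100 */
    printf("buddy of 0x100 (128-byte block) = 0x%x\n", buddy_of(0x100, 7)); /* 0x180 */
    return 0;
}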

63. How are the wait/signal operations for a monitor different from those for semaphores?
If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost; this allows easier program design. With semaphores, every operation affects the value of the semaphore, so the wait and signal operations must be perfectly balanced in the program.

64. In the context of memory management, what are placement and replacement algorithms?

Placement algorithms determine where in the available main memory to load an incoming process; common methods are first-fit, next-fit and best-fit. Replacement algorithms are used when memory is full and one process (or part of a process) needs to be swapped out to accommodate a new incoming process; the replacement algorithm determines which partitions (memory portions occupied by processes) are to be swapped out.

65. In loading processes into memory, what is the difference between load-time dynamic linking and run-time dynamic linking?

With load-time dynamic linking, the load module to be loaded is read into memory, and any reference to a target external module causes that module to be loaded as well; the references are updated to addresses relative to the base address of the application module.

With run-time dynamic linking, some of the linking is postponed until a reference is actually made during execution; the correct module is then loaded and linked.

66. What are demand paging and pre-paging?
With demand paging, a page is brought into main memory only when a location on that page is actually referenced during execution. With pre-paging, pages other than the one demanded by a page fault are also brought in; the selection of such pages is based on common access patterns, particularly those of secondary memory devices.

67. What is mounting?
Mounting is the mechanism by which two different file systems can be combined. It is one of the services provided by the operating system, allowing the user to work with two different file systems and with some of the secondary devices.

68. What is process spawning?
When the OS creates a process at the explicit request of another process, the action is called process spawning.

69. List some reasons for process termination.
Normal completion
Time limit exceeded
Memory unavailable
Bounds violation
Protection error
Arithmetic error
Time overrun
I/O failure
Invalid instruction
Privileged instruction
Data misuse
Operator or OS intervention
Parent termination

70. What are the reasons for process suspension?
Swapping
Interactive user request
Timing
Parent process request

71. What are the possible states a thread can have?
Ready
Standby
Running
Waiting
Transition
Terminated

72. Define process.
A process is the execution of a program and consists of a pattern of bytes that the CPU interprets as machine instructions, data and stack.

73. What is called the program counter?
The control point of a process tracks the sequence of instructions using a hardware register, typically called the program counter.

74. What is known as time-slicing?
In a uniprocessor system, several processes can be active in memory, but only one process can run on the CPU at a time. The kernel provides an illusion of concurrency by allowing one process to use the CPU for a brief period of time called a quantum, and then switching to another. In this way, each process receives some amount of CPU time and makes progress. This is known as time-slicing.

75. What happens when a process makes a system call?
When a process makes a system call, it executes a special set of instructions to put the system into kernel mode and transfer control to the kernel, which handles the operation on behalf of the process. After the system call is complete, the kernel executes another set of instructions that returns the system to user mode and transfers control back to the process.

76. Kernel functions may execute either in ___________ or in ___________.
Ans: process context, system context (interrupt context).

77. What is meant by process context?
When a running process enters the waiting state, another ready-to-run process is eventually allowed to take over the CPU. In order to rerun the original process later, the kernel must store the state of that process at the point where it was about to be swapped out in favour of another process. This stored state is the process context of the process.

79. The information about the system state, such as current and previous execution modes, current and previous interrupt priority levels, overflow and carry bits gets stored in ___________.

Ans: Processor Status Word (PSW).

80. Does a process respond to a signal instantaneously? If not, how is the situation handled?
A process does not respond to a signal instantaneously. When the signal is generated, the kernel notifies the process by setting a bit in its pending-signals mask. The process must become aware of the signal and respond to it, and that can happen only when it is scheduled to run. When it runs, all pending signals are handled by the process before it returns to its normal user-level processing.

81. Suppose a signal is generated for a sleeping process. Should the signal be kept pending, or should the sleep be interrupted?
The answer depends on why the process is sleeping. If the process is sleeping for an event that is certain to occur soon, there is no need to wake it up. On the other hand, if the process is waiting for an event and it is not known how long the wait might last, the sleep should be interrupted.

82. What are the different components of a process address space?
Text
Initialized data
Uninitialized data
Shared memory
Shared libraries
Heap
User stack

83. What are the main goals of memory management?
Address space management
Address translation
Physical memory management
Memory protection
Memory sharing
Monitoring system load

84. What are the main advantages of using virtual memory?
Run programs larger than physical memory.
Reduce program startup time by partially loading programs.
Increase CPU utilization by allowing more than one program to reside in memory at any time.
Allow sharing of resources.
Allow relocatable programs, which may be placed anywhere in memory and moved around during execution.

85. What are the main disadvantages of using virtual memory?


Memory-management activities take up a significant amount of CPU time.
Usable memory is further reduced by fragmentation.
The data structures used for memory management themselves reduce the available physical memory.
Address translation adds to the execution time of each instruction.
When a page fault occurs, time-consuming disk I/O is required to bring the page into memory.

86. What is called a page frame?
In a demand-paged system, memory is divided into fixed-size pages, which are brought into and out of memory as required. A page-sized unit of physical memory is called a page frame.

87. What is meant by pure demand paging?
In pure demand paging, pages are brought into memory only when they are needed.

88. What is meant by anticipatory paging?
In anticipatory paging, pages are brought into memory when the system predicts that they will be needed soon.

89. When a program is executed, what information about the program is stored in its pages?
Text
Initialized data
Uninitialized data
Modified data
Stack
Heap
Shared memory
Shared libraries

90. What is meant by a page replacement policy?
In order to make room for a new page, the kernel must reclaim a page that is currently in memory. The page replacement policy determines how the kernel decides which page to reclaim.

91. What is meant by local and global page replacement policies?
Under a local replacement policy, a process that needs a new page must replace one of its own pages. Under a global replacement policy, a process that needs a new page may also steal a page frame from any other process.

92. Under what circumstances should processes in main memory be swapped out?
When a parent process wants to allocate space for its child process.
When the size of a process increases.
When an already swapped-out process has to be brought back into memory.


93. What is meant by disk mirroring?
Disk mirroring maintains a redundant copy of all data, thus increasing the reliability of the file system.

94. What is meant by the write-through and write-behind cache techniques?
A write-through cache writes modified data blocks to the backing store immediately, whereas with the write-behind technique the modified blocks are simply marked as dirty and written to the disk at a later time.

Exercises:

1. Explain why a context switch is an expensive operation.
2. What are preemptible and non-preemptible processes?
3. What are the situations that lead to a context switch?
4. State the advantages of a multiprocessor system over a uniprocessor system.
5. What are the advantages and disadvantages of symmetric multiprocessing over asymmetric multiprocessing?
6. Is it necessary that all processors in a system be equal?
7. What is meant by a distributed operating system? List its advantages and disadvantages.
8. What is meant by a shared memory region? Explain its advantages.
9. What is meant by synchronization?
10. Give a situation where a condition variable can be employed.
11. What are the limitations involved in the process model?
12. What is meant by DMA and DVMA?