FUNDAMENTAL OF OPERATING SYSTEM (F2032) - NOTES, TOPIC 1

A reference for students and lecturers of the Fundamentals of Operating System course, Diploma in Information Technology (Programming) and (Networking). Siti Khadijah Mohd Salleh / 2010 / JTMK



TOPIC 1: INTRODUCTION TO OPERATING SYSTEM

Definition:

An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. An operating system can also be defined as a program that acts as an intermediary between a user of a computer and the computer hardware.

An operating system performs these services for applications:

o In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.
o It manages the sharing of internal memory among multiple applications.
o It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
o It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
o It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.
o On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.

All major computer platforms (hardware and software) require and sometimes include an operating system. Linux, Windows 2000, VMS, OS/400, AIX, and z/OS are all examples of operating systems.

At the simplest level, an operating system does two things:

1. It manages the hardware and software resources of the system. In a desktop computer, these resources include such things as the processor, memory, disk space and more (On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection).

2. It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.


A. Evolution of Operating Systems

Early computers had no operating systems. By the early 1960s, commercial computer services and vendors had begun supplying extensive tools to streamline the development, execution, and scheduling of jobs on batch processing systems.

With the advancement of commercial computing, a great deal of operating system software has appeared. Starting with DOS, many operating systems have been developed over the years, such as UNIX, depending on requirements.

The most commonly used operating systems for laptops and modern desktops are the Microsoft Windows family. More powerful servers, though, make heavy use of FreeBSD, Linux, and other Unix-like systems, although Unix-like operating systems, particularly Mac OS X, are also installed on personal computers.

B. Types of Operating Systems

A. Batch System

Stacked Job Batch Systems (mid 1950s - mid 1960s)

A batch system is one in which jobs are bundled together with the instructions necessary to allow them to be processed without intervention. Often jobs of a similar nature can be bundled together to further increase economy. The basic physical layout of the memory of a batch job computer is shown below:

-------------------------------------------------------
|                                                     |
|           Monitor (permanently resident)            |
|                                                     |
-------------------------------------------------------
|                                                     |
|                     User Space                      |
|          (compilers, programs, data, etc.)          |
|                                                     |
-------------------------------------------------------

The monitor is system software that is responsible for interpreting and carrying out the instructions in the batch jobs. When the monitor started a job, it handed over control of the entire computer to the job, which then controlled the computer until it finished. Often magnetic tapes and drums were used to store intermediate data and compiled programs.

1. Advantages of batch systems
o move much of the work of the operator to the computer
o increased performance, since it was possible for a job to start as soon as the previous job finished


2. Disadvantages
o turn-around time can be large from the user's standpoint
o more difficult to debug programs
o due to the lack of a protection scheme, one batch job can affect pending jobs (reading too many cards, etc.)
o a job could corrupt the monitor, thus affecting pending jobs
o a job could enter an infinite loop

As mentioned above, one of the major shortcomings of early batch systems was that there was no protection scheme to prevent one job from adversely affecting other jobs.

The solution to this was a simple protection scheme, where certain memory regions (e.g. where the monitor resides) were made off-limits to user programs. This prevented user programs from corrupting the monitor.

To keep user programs from reading too many (or not enough) cards, the hardware was changed to allow the computer to operate in one of two modes: one for the monitor and one for the user programs. IO could only be performed in monitor mode, so IO requests from the user programs were passed to the monitor. In this way, the monitor could keep a job from reading past its own $EOJ card.

To prevent an infinite loop, a timer was added to the system and the $JOB card was modified so that a maximum execution time for the job was passed to the monitor. The computer would interrupt the job and return control to the monitor when this time was exceeded.
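The same idea can be sketched on a modern Unix-like system (this is an illustrative analogue, not the original batch hardware): the program asks the kernel for an alarm signal after a time limit, and the signal handler plays the role of the monitor regaining control from a runaway job.

/* Sketch only: a user-space analogue of the batch-monitor timer,
 * assuming a POSIX system. alarm() asks the kernel to deliver
 * SIGALRM after the given number of seconds, interrupting the
 * "job" much as the monitor regained control from a runaway job. */
#include <signal.h>
#include <unistd.h>

static void time_limit_exceeded(int sig)
{
    (void)sig;
    /* The "monitor" regains control here and abandons the job. */
    write(STDOUT_FILENO, "time limit exceeded, job aborted\n", 33);
    _exit(1);
}

int main(void)
{
    signal(SIGALRM, time_limit_exceeded);
    alarm(2);              /* maximum execution time: 2 seconds */

    for (;;)               /* the job is stuck in an infinite loop */
        ;
    /* not reached */
}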

Spooling Batch Systems (mid 1960s - late 1970s)

One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job. This means that the CPU is idle (or nearly so) during these relatively slow operations.

Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer. The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs. This tape was then loaded on the main computer and the jobs on the tape were executed. The output from the jobs would be written to another tape, which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.

It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them so that the monitor could start an IO operation. Since the IO operation could proceed while the CPU was crunching on a user program, little degradation in performance was noticed.


Since the computer can now perform IO in parallel with computation, it became possible to have the computer read a deck of cards to a tape, drum or disk and to write printer output to a tape while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operation OnLine. Spooling batch systems were the first and are the simplest of the multiprogramming systems. One advantage of spooling batch systems was that the output from jobs was available as soon as the job completed, rather than only after all jobs in the current cycle were finished.

B. Multiprogramming System

As machines with more and more memory became available, it was possible to extend the idea of multiprogramming (or multiprocessing) as used in spooling batch systems to create systems that would load several jobs into memory at once and cycle through them in some order, working on each one for a specified period of time.

--------------------------------------
| Monitor                            |
| (more like an operating system)    |
--------------------------------------
| User program 1                     |
--------------------------------------
| User program 2                     |
--------------------------------------
| User program 3                     |
--------------------------------------
| User program 4                     |
--------------------------------------

At this point the monitor is growing to the point where it begins to resemble a modern operating system. It is responsible for:

o starting user jobs
o spooling operations
o IO for user jobs
o switching between user jobs
o ensuring proper protection while doing the above

As a simple, yet common example, consider a machine that can run two jobs at once. Further, suppose that one job is IO intensive and that the other is CPU intensive. One way for the monitor to allocate CPU time between these jobs would be to divide time equally between them. However, the CPU would be idle much of the time the IO bound process was executing.

A good solution in this case is to allow the CPU bound process (the background job) to execute until the IO bound process (the foreground job) needs some CPU time, at which point the monitor permits it to run. Presumably it will soon need to do some IO and the monitor can return the CPU to the background job.


C. Distributed System

A distributed system is an application that executes a collection of protocols to coordinate the actions of multiple processes on a network, such that all components work together to perform a single task or a small set of related tasks. A distributed system must have the following characteristics:

Fault-Tolerant: It can recover from component failures without performing incorrect actions.

Highly Available: It can restore operations, permitting it to resume providing services even when some components have failed.

Recoverable: Failed components can restart themselves and rejoin the system, after the cause of failure has been repaired.

Consistent: The system can coordinate actions by multiple components often in the presence of concurrency and failure. This underlies the ability of a distributed system to act like a non-distributed system.

Scalable: It can operate correctly even as some aspect of the system is scaled to a larger size. For example, we might increase the size of the network on which the system is running. This increases the frequency of network outages and could degrade a "non-scalable" system. Similarly, we might increase the number of users or servers, or overall load on the system. In a scalable system, this should not have a significant effect.

Predictable Performance: The ability to provide desired responsiveness in a timely manner.

Secure: The system authenticates access to data and services.

C. Microsoft's Contributions to the Evolution of Operating Systems

Microsoft has designed and marketed the Windows operating systems as a family of several operating systems.

Microsoft introduced an operating environment named Windows in November 1985, as an add-on to MS-DOS, in response to the growing interest in graphical user interfaces (GUIs).

Microsoft Windows eventually came to dominate the world personal computer market, overtaking Mac OS, which had been dominant before it.

At the time of writing, the latest desktop version of Windows on the market is Windows Vista, while the latest server version is Windows Server 2003.

The successor to Windows Server 2003 will be Windows Server 2008, which is still in beta and currently under test.

D. Facilities of Computer Operating Systems:

1. Memory Management

Modern computer architectures arrange the computer's memory in a hierarchy, from the fastest CPU registers and cache, through random access memory, down to disk storage.

An operating system's memory manager coordinates the use of these kinds of memory by tracking which is available, which is to be allocated or de-allocated, and how to move data between them.


Generally this activity is termed virtual memory management, as it increases the amount of memory available to each process by making disk storage appear to be main memory.

There is a speed penalty associated with using disks or other slower storage as memory. The memory manager also administers virtual addresses. The procedure is known as "paging" or "swapping", and the terminology varies between different operating systems.

2. Process Management

Every program running on a computer, whether it is a service or an application, is generally a process.

Most operating systems allow many processes and programs to execute at once through multitasking, even with only one CPU.

In its most elementary form, multitasking is done by simply switching between processes rapidly. Most operating systems permit a process to be allocated a priority, which affects its share of CPU time.

3. Disk and File Systems

Generally, computer operating systems also include support for file systems. Modern file systems comprise a hierarchy of directories. While the idea is conceptually similar across all general-purpose file systems, some differences in implementation remain.

E. Evolution of Operating Systems Example

Two obvious examples of such implementation differences are case sensitivity and the character used to separate directory names.

1. Security

Computer operating systems also provide some standard of security. Security is based on two concepts. The operating system offers access to a number of resources, directly or indirectly, such as files on a local disk, personal information about users, privileged system calls, and the services provided by the programs running on the system. The operating system is capable of distinguishing between requesters who are authorized to access a resource and those who are forbidden.

Internal security concerns programs that are already running. On some systems, a program, once it is running, has no limitations, but frequently the program has an identity which it keeps and which is used to check all of its requests for resources. To establish identity there may be a process of authentication: often a username must be given, and each username must have a password. Other methods of authentication, such as magnetic cards or biometric data, may be used instead. In some cases, especially for connections from the network, resources may be accessed with no authentication at all.


2. Examples of Computer Operating Systems:

Windows XP Operating System
Microsoft Operating Systems
64 Bit Operating Systems
Linux Operating Systems
Mac Operating Systems
Network Operating Systems
MSDN Operating Systems
Windows 2000 Operating Systems
UNIX Operating Systems
PDA Operating Systems
Server Operating Systems
Virtual Operating Systems
Windows Operating Systems
Windows Vista Operating Systems

3. UNIX Operating Systems

The Unix-like family is a diverse group of operating systems, with several major sub-categories including BSD, System V, and Linux. UNIX systems run on a wide variety of machine architectures. They are used heavily as server systems in business, as well as workstations in educational and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.

4. Microsoft Windows Operating Systems

The Microsoft Windows family of operating systems originated as an add-on to the older MS-DOS environment for the IBM PC.

Contemporary versions are based on the newer Windows NT core, which was initially intended for OS/2 and borrows ideas from VMS. Windows runs on x86, x86-64, and Itanium processors.

Earlier versions also ran on the MIPS, DEC Alpha, Fairchild Clipper, and PowerPC architectures.

5. Sun Solaris Operating Systems

Solaris is the UNIX operating system developed by Sun Microsystems, based on UNIX System V, and used widely on SPARC servers and workstations as well as on x86 systems.


6. Linux Operating Systems

Linux is a freely distributed operating system that behaves like the Unix operating system. Linux was designed specifically for the PC platform and takes advantage of its design to give users comparable performance to high-end UNIX workstations.

Many big-name companies have joined the Linux bandwagon, such as IBM and Compaq, offering systems pre-installed with Linux. Also, many companies have built businesses around Linux packages, such as Red Hat, Corel, and Samba.

However, they can only charge for the services and documentation packaged with the Linux software. More and more businesses are using Linux as an efficient and more economical way to run their networks.

F. Terminology in Relation to Operating Systems

A. Multitasking

Multitasking, in an operating system, allows a user to perform more than one computer task (such as the operation of an application program) at a time. The operating system is able to keep track of where you are in each of these tasks and go from one to the other without losing information. Microsoft Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can do multitasking (almost all of today's operating systems can). When you open your Web browser and then open Word at the same time, you are causing the operating system to do multitasking. Being able to do multitasking doesn't mean that an unlimited number of tasks can be juggled at the same time. Each task consumes system storage and other resources. As more tasks are started, the system may slow down or begin to run out of shared storage.

B. Cooperative multitasking

Some early operating systems ran on processors without interrupting clocks, meaning that each process had to voluntarily yield the processor on which it was running before another process could execute. However, this technique is rarely used in today's systems, because it allows processes to accidentally or maliciously monopolize the processor.

C. Preemptive multitasking

Preemptive multitasking is multitasking in which the operating system uses some criterion to decide how long to allocate the processor to any one task before giving another task a turn. The act of taking control away from one task and giving it to another is called preempting. A common criterion for preempting is simply elapsed time (this kind of system is sometimes called time sharing or time slicing). In some operating systems, some applications can be given higher priority than other applications, giving the higher-priority programs control as soon as they are initiated and perhaps longer time slices.
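To illustrate priorities, here is a minimal sketch assuming a POSIX system (the busywork loop is invented for the example): the process asks the scheduler to lower its own priority with nice(), so that higher-priority programs are favoured for CPU time.

/* Minimal sketch, assuming a POSIX system: a CPU-bound task lowers
 * its own priority so that the scheduler favours other programs.
 * nice(10) increases the process's "niceness", i.e. lowers its
 * priority. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long i;
    double sum = 0.0;

    errno = 0;
    if (nice(10) == -1 && errno != 0)   /* ask for a lower priority */
        perror("nice");

    for (i = 0; i < 50000000L; i++)     /* CPU-bound busywork */
        sum += (double)i;

    printf("done, sum = %.0f\n", sum);
    return 0;
}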

D. Non-preemptive multitasking

This is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process. Such systems are either statically scheduled, most often periodic systems, or exhibit some form of cooperative multitasking, in which case the computational tasks can self-interrupt and voluntarily give control to other tasks. When non-preemptive multitasking is used, a process that receives such resources cannot be interrupted until it is finished.

E. Multithreading

Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to run multiple copies of the program in the computer. Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity. As programs work on behalf of the initial request for that thread and are interrupted by other requests, the status of work on behalf of that thread is kept track of until the work is completed.
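A minimal sketch of multithreading, assuming POSIX threads (the "requests" are invented for the example): one program serves several requests at once, each tracked as a separate thread.

/* Minimal sketch, assuming POSIX threads: one program handles
 * several requests at once, each tracked as a separate thread.
 * Compile with: cc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_REQUESTS 3

static void *handle_request(void *arg)
{
    int id = *(int *)arg;
    /* Work on behalf of this request would go here. */
    printf("thread serving request %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_REQUESTS];
    int ids[NUM_REQUESTS];
    int i;

    for (i = 0; i < NUM_REQUESTS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, handle_request, &ids[i]);
    }
    for (i = 0; i < NUM_REQUESTS; i++)
        pthread_join(threads[i], NULL);   /* wait until each request is done */

    return 0;
}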

G. Major Subsystems in an Operating System

A. Process Management

A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The operating system is responsible for the following activities in connection with process management:

Process creation and deletion.
Process suspension and resumption.
Provision of mechanisms for:
1. process synchronization
2. process communication
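A minimal sketch of process creation and deletion as seen from user space, assuming a POSIX system: fork() asks the operating system to create a process, exit() ends it, and wait() suspends the parent until the child has finished.

/* Minimal sketch, assuming a POSIX system: the operating system's
 * process-management services seen from user space. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* process creation */

    if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        exit(0);                     /* process deletion */
    } else if (pid > 0) {
        int status;
        wait(&status);               /* parent suspended until the child ends */
        printf("parent %d: child finished\n", (int)getpid());
    } else {
        perror("fork");
    }
    return 0;
}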

B. Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices. Main memory is a volatile storage device: it loses its contents in the case of system failure. The operating system is responsible for the following activities in connection with memory management:

Keep track of which parts of memory are currently being used and by whom.
Decide which processes to load when memory space becomes available.
Allocate and de-allocate memory space as needed.
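A minimal sketch of allocating and de-allocating memory through the operating system, assuming a POSIX system (the size of one typical page is used for the example):

/* Minimal sketch, assuming a POSIX system: mmap() asks the OS for
 * a region of address space backed by memory; munmap() returns it
 * when it is no longer needed. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 4096;                       /* one typical page */
    void *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(region, 0, size);                  /* use the memory */
    printf("got %zu bytes at %p\n", size, region);

    munmap(region, size);                     /* give it back to the OS */
    return 0;
}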

C. File System Management

Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure. Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.
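A minimal sketch of these file-system services, assuming a POSIX system (the directory and file names are invented for the example): the program creates a directory in the tree, then creates, writes, and closes a file inside it.

/* Minimal sketch, assuming a POSIX system: using the operating
 * system's file-system services. The directory and file names are
 * made up for the example. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello, file system\n";
    int fd;

    mkdir("demo_dir", 0755);                      /* a directory in the tree */

    fd = open("demo_dir/notes.txt",               /* create a file inside it */
              O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    write(fd, msg, sizeof msg - 1);               /* hand data to the OS */
    close(fd);                                    /* release the file */
    return 0;
}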


H. SYSTEM CALLS

One of the most renowned features of Unix is the clear distinction between "kernel space" and "user space". System calls have always been the means through which user space programs can access kernel services. The Linux kernel implementation allows this clean distinction to be broken by allowing kernel code to invoke some of the system calls. This extends the kernel's capabilities to include some of the tasks that have traditionally been reserved to user space.

Please note that invoking system calls from kernel space is not in general a good thing. For the sake of maintaining, debugging, and porting the code, what has always been performed in user space should not be converted to run in kernel space, unless that is absolutely necessary to meet performance or size requirements.

The gain in performance comes from avoidance of costly user-space/kernel-space transitions and the associated data passing; the gain in size comes from avoidance of a separate executable with its libc and associated material.

SYSTEM CALLS: the Mechanism

In order to understand the speed benefits achieved by invoking system calls from kernel space, we should first analyze the exact steps performed by a normal system call, like read. The function's role is to copy data from a source (usually a device, either a mass-storage device or a communication medium) to buffers held in the application.


Figure 1 shows the steps involved in performing a call to read from a user space function, like the main procedure of a C program. You can verify the exact steps by running objdump on the compiled code for the user-space part and by browsing the kernel source files for the kernel-space part.
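As a point of reference, here is a minimal user-space program, assuming a POSIX system, that performs the read system call discussed above; it reads from standard input, which could be redirected from a file or device.

/* Minimal sketch, assuming a POSIX system: the user-space side of
 * the read system call. The C library wrapper traps into the
 * kernel, which copies data from the source into the buffer held
 * by the application. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    ssize_t n;

    n = read(STDIN_FILENO, buf, sizeof buf);   /* system call: kernel fills buf */
    if (n < 0) {
        perror("read");
        return 1;
    }
    printf("read %zd bytes from the kernel\n", n);
    return 0;
}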


Figure 1: Steps involved in performing a call to read


I. Open vs. Closed Source Software

Closed source software (e.g. Microsoft Windows and Office) is developed by a single person or company. Only the final product that runs on your computer is made available, while the all-important source code, the recipe for making the software, is kept secret. This software is normally copyrighted or patented and is legally protected as intellectual property. The owner of the software distributes the software directly or via vendors to you, the end user. You cannot legally give it away, copy it or modify it in any way unless you have a special licence or permission to do so.

Open source software (e.g. Red Hat Linux, OpenOffice) is almost the opposite, and is free to use and distribute provided that certain conditions are met.

Why do people create open and closed source software?

To gain a better understanding of some of the strengths and weaknesses of the different types of software, it helps to understand why people or organizations spend time and money creating the software in the first place.

The incentives for producing closed source software are fairly straightforward. The producer creates a product that can be sold. The buyers are not allowed to distribute it further and the inner workings are kept secret. If someone does anything they are not supposed to, the producer can take legal action against them. Software is intangible, and once you have made your program, you can replicate it as many times as you want. This is a huge leap over building, say, a car, where you need more materials for each car you churn out.

The incentives for open source software are not as straightforward. What you have are developers writing commercial-level software and effectively giving it away. The reasons for writing open source software range from a passion for computing and a desire to contribute and make a difference, to not wanting to rely on any single company to produce what is needed. There have been a few cases where open source software has been sponsored to act as competition where another company has been seen to abuse its monopoly position.

Open source software and its authors are legally protected by the GPL (GNU General Public License). When you use software published under the GPL you can use it for free and give it to as many people as you want, provided that you do not pretend that you wrote it; this stops someone from hijacking your work and benefiting as a result. You can make changes and then even sell the software, provided you make the modified source code available and specify which bits you changed; in practice you are only likely to sell one copy, since the person you sell it to can then redistribute it freely. The software is provided without warranty, so a user cannot sue anyone if it breaks.

Which one should I choose?

Unfortunately the decision is not clear cut and comes down to what you, the end user, need. Below is a quick comparison of the strengths and weaknesses of both open and closed source software.


Closed source software is created to satisfy a need in the market. In paying for the software you get some definite perks. You can expect documentation to be provided with whatever you purchase and you can expect the application to perform in the way it was advertised. If the software does not work, you have the option of legal action or some other recourse against the company that sold you the software. As it is in the best interests of the company making the software, you can normally count on being able to obtain help and support for the software that you have paid money for.

On the down side, software companies are under a great amount of pressure to continually upgrade what they are selling. In most cases software is rushed out the door before it is ready. This means that the software may not function correctly in some cases and, in the worst case, can compromise the security of your computer. Most companies deal with this by producing patches that fix problems as they are discovered; however, users have a poor record of applying these patches, resulting in thousands of computers around the world being left vulnerable every time a flaw is discovered.

As mentioned above, closed source software companies are the only ones allowed to build the products that they sell, so there are relatively few versions of popular software in use, considering the millions of computers being used today. For example, a security flaw affecting the latest version of Windows, and in turn millions of computers, was discovered in June last year. Most users failed to apply the patch that was issued, and within a few weeks a virus was written to exploit this vulnerability. The result was many networks around the world being brought to a crawl, clogged by the traffic produced by this program spreading freely.

Open source software, on the other hand, is normally created for use by those who want to use it. Many potentially useful programs are aimed at the proficient user, making them too complicated and inaccessible for the average end user. This situation has been remedied by organizations that tailor once unfriendly software to suit end users. The big examples are the Linux operating system (an alternative to Windows) and OpenOffice (an alternative to Microsoft Office). Unlike closed source software, the software is normally provided without warranty and you have no recourse should the software malfunction or not perform; there is also no guarantee of good documentation or support.

On the positive side, the source code of open source software is available for all to read. The code for the bigger projects is therefore scrutinised by more people than even the biggest software companies can hire, and software flaws are discovered rather than stumbled across. Most open source projects allow anyone to contribute, and problems are normally resolved quickly and cleanly.

Open source software packages have had a better security record than closed source software. As open source programs normally originate for use by the experienced people who write them, security takes precedence over convenience. A good example is the way Microsoft's Outlook Express deals with email compared to Linux or Unix equivalents. In Linux, you first have to save an attachment to disk, mark it as executable (i.e. a program you want to run) and then run it. Outlook Express has a preview pane running by default, so when you click on a message, Outlook automatically goes sniffing around whatever's in the message - if there's anything malicious in the message, you're probably going to get infected.

The bottom line

Both open source and closed source software are far from perfect. If you are new to computers, then closed source software is probably for you, as the cost of training and getting yourself competent will exceed the cost of buying easier-to-use software. The support offered by closed source companies in Africa tends to be better than that of their open source competitors. There are companies that offer paid support for open source software, but again this is still relatively small in Africa.

On the other hand, open source software is catching up quickly with its closed source counterparts. Some versions or distributions of Linux can be installed completely without having to touch a keyboard, and projects are currently under way to improve the documentation available for open source software. Also, as overall computer literacy improves and computers become more pervasive, open source software will become more appealing.

In the author's opinion, the abilities and friendliness of open and closed source software are converging, and the real showdown will happen in five to ten years, when the only real difference between the two classes will be cost. Hints of this can already be seen in the South African and Nigerian governments' consideration of open source products.

http://www.scienceinafrica.co.za/2004/january/software.html

J. NETWORK OPERATING SYSTEM

A network operating system (NOS) is a computer operating system that is designed primarily to support workstations, personal computers, and, in some instances, older terminals that are connected on a local area network (LAN). Artisoft's LANtastic, Banyan VINES, Novell's NetWare, and Microsoft's LAN Manager are examples of network operating systems. In addition, some multi-purpose operating systems, such as Windows NT and Digital's OpenVMS, come with capabilities that enable them to be described as network operating systems.

A network operating system provides printer sharing, common file system and database sharing, application sharing, and the ability to manage a network name directory, security, and other housekeeping aspects of a network.