Report on Types of Computers


INTRODUCTION

Q:- What is a Computer?

Ans:- A computer is a machine that manipulates data according to a set of instructions. Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into a wristwatch and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards, though, the word began to take on its more familiar meaning, describing a machine that carries out computations.

The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theatre which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway that caused automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.

TYPES OF COMPUTERS

There are mainly three types of computers, and digital computers are further classified into sub-types. The types of computers are:

1. Analog computers

2. Hybrid computers

3. Digital computers

(a) Mini computers

(b) Micro computers

(c) Mainframe computers

(d) Super computers


Analog Computer

An analog computer (spelled analogue in British English) is a form of computer that uses the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities incrementally, as their numerical values change. Mechanical analog computers were very important in gun fire control in World War II and the Korean War; they were made in significant numbers. The development of transistors made electronic analog computers practical, and before digital computers had developed sufficiently, they were commonly used in science and industry. Analog computers can have a very wide range of complexity. Slide rules and nomographs are the simplest, while naval gun fire control computers and large hybrid digital/analog computers were among the most complicated. Digital computers have a certain minimum (and relatively great) degree of complexity that is far greater than that of the simpler analog computers. This complexity is required to execute their stored programs and, in many instances, to create output that is directly suited to human use.

Setting up an analog computer required scale factors to be chosen, along with initial conditions – that is, starting values. Another essential was creating the required network of interconnections between computing elements. Sometimes it was necessary to re-think the structure of the problem so that the computer would function satisfactorily. No variables could be allowed to exceed the computer's limits, and differentiation was to be avoided, typically by rearranging the "network" of interconnects, using integrators in a different sense. Running an electronic analog computer, assuming a satisfactory setup, started with the computer held with some variables fixed at their initial values. Moving a switch released the holds and permitted the problem to run. In some instances, the computer could, after a certain running-time interval, repeatedly return to the initial-conditions state to reset the problem and run it again.

The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors, is striking in terms of mathematics: they can be modeled using equations of essentially the same form. However, the difference between these systems is what makes analog computing useful. If one considers a simple mass-spring system, constructing the physical system would require making or modifying the springs and masses. This would be followed by attaching them to each other and an appropriate anchor, collecting test equipment with the appropriate input range, and finally, taking measurements. In more complicated cases, such as suspensions for racing cars, experimental construction, modification, and testing is neither simple nor inexpensive.
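The mass-spring analogy can be made concrete with a minimal numerical sketch in Python. It solves the damped mass-spring equation m·x'' + c·x' + k·x = 0 the way an analog computer would, by chaining two integrators rather than differentiating; the parameter values, step size, and variable names are arbitrary choices made only for illustration.

# Minimal sketch: solving m*x'' + c*x' + k*x = 0 by repeated integration,
# mirroring how an analog computer chains integrators and avoids differentiation.
# The parameter values and the step size are arbitrary choices for illustration.

m, c, k = 1.0, 0.5, 4.0        # mass, damping and spring constants (assumed values)
x, v = 1.0, 0.0                # initial conditions: displacement and velocity
dt = 0.001                     # integration time step

for step in range(5001):
    a = -(c * v + k * x) / m   # summing junction: acceleration from the current state
    v += a * dt                # first integrator: acceleration -> velocity
    x += v * dt                # second integrator: velocity -> displacement
    if step % 1000 == 0:
        print(f"t = {step * dt:4.1f}  x = {x:+.4f}")

On a real electronic analog computer the two integration steps would be two integrator amplifiers, and the first line of the loop would be a summing junction whose coefficients are set by the chosen scale factors.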

Hybrid Computers

Hybrid computers are computers that exhibit features of both analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. In general, analog computers are extraordinarily fast, since they can solve most complex equations at the rate at which a signal traverses the circuit, which is generally an appreciable fraction of the speed of light. On the other hand, the precision of analog computers is not good; they are limited to three, or at most four, digits of precision. Digital computers can be built to take the solution of equations to almost unlimited precision, but quite slowly compared to analog computers. Generally, complex equations are approximated using iterative numerical methods, which take huge numbers of iterations depending on how good the initial "guess" at the final value is and how much precision is desired. (This initial guess is known as the numerical seed for the iterative process.) For many real-time operations, the speed of such digital calculations is too slow to be of much use (e.g., for very-high-frequency phased-array radars or for weather calculations), yet the precision of an analog computer is insufficient.

Hybrid computers can be used to obtain a very good but relatively imprecise 'seed' value, using an analog computer front-end, which is then fed into a digital computer iterative process to achieve the final desired degree of precision. With a three- or four-digit, highly accurate numerical seed, the total digital computation time necessary to reach the desired precision is dramatically reduced, since many fewer iterations are required.

Consider that the nervous system in animals is a form of hybrid computer. Signals pass across the synapses from one nerve cell to the next as discrete (digital) packets of chemicals, which are then summed within the nerve cell in an analog fashion by building an electro-chemical potential until its threshold is reached, whereupon it discharges and sends out a series of digital packets to the next nerve cell. The advantages are at least threefold: noise within the system is minimized (and tends not to be additive), no common grounding system is required, and there is minimal degradation of the signal even if there are substantial differences in activity of the cells along a path (only the signal delays tend to vary). The individual nerve cells are analogous to analog computers; the synapses are analogous to digital computers.

Note that hybrid computers should be distinguished from hybrid systems. The latter may be no more than a digital computer equipped with an analog-to-digital converter at the input and/or a digital-to-analog converter at the output, to convert analog signals for ordinary digital signal processing, and conversely, e.g., for driving physical control systems such as servomechanisms. Another usage of the term "hybrid computer" is also sometimes seen, meaning a mix of different digital technologies to achieve overall accelerated processing, often application-specific, using different processor technologies.
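The effect of a good seed can be shown with a small sketch in Python. It refines the square root of 2 with Newton's method and counts the iterations needed, so that a rough three-digit seed (the kind an analog front-end might supply) can be compared with a poor guess; the function name, target equation, tolerance and seed values are assumptions made purely for illustration.

# Minimal sketch: a low-precision but accurate seed (as an analog front-end might
# supply) cuts the number of iterations a digital computer needs for high precision.
# The equation x*x = 2, the tolerance and the seed values are illustrative choices.

def newton_sqrt2(seed, tol=1e-12):
    """Refine an estimate of sqrt(2) by Newton's method; return (value, iterations)."""
    x, iterations = float(seed), 0
    while abs(x * x - 2.0) > tol:
        x = 0.5 * (x + 2.0 / x)   # Newton step for f(x) = x*x - 2
        iterations += 1
    return x, iterations

print(newton_sqrt2(100.0))   # poor starting guess: roughly a dozen iterations
print(newton_sqrt2(1.41))    # three-digit "analog" seed: only a few iterations

The contrast, a handful of iterations instead of a dozen or more, is the saving described above, scaled down to a toy problem.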

Digital Computer

Blaise Pascal of France and Gottfried Wilhelm Leibniz of Germany invented mechanical digital calculating machines during the 17th century. The English inventor Charles Babbage, however, is generally credited with having conceived the first automatic digital computer. During the 1830s Babbage devised his so-called Analytical Engine, a mechanical device designed to combine basic arithmetic operations with decisions based on its own computations. Babbage's plans embodied most of the fundamental elements of the modern digital computer. For example, they called for sequential control—i.e., program control that included branching, looping, and both arithmetic and storage units with automatic printout. Babbage's device, however, was never completed and was forgotten until his writings were rediscovered over a century later.

Of great importance in the evolution of the digital computer was the work of the English mathematician and logician George Boole, who explored the analogy between the symbols of algebra and those of logic as used to represent logical forms and syllogisms. His formalism, operating on only 0 and 1, became the basis of what is now called Boolean algebra, on which computer switching theory and procedures are grounded.

John V. Atanasoff, an American mathematician and physicist, is credited with building the first electronic digital computer, which he constructed from 1939 to 1942 with the assistance of his graduate student Clifford E. Berry. Konrad Zuse, a German engineer working in virtual isolation from developments elsewhere, completed construction in 1941 of the first operational program-controlled calculating machine (the Z3). In 1944 Howard Aiken and a group of engineers at International Business Machines Corporation completed work on the Harvard Mark I, a machine whose data-processing operations were controlled primarily by electric relays (switching devices). Since the development of the Harvard Mark I, the digital computer has evolved at a rapid pace. The succession of advances in computer equipment, principally in logic circuitry, is often divided into generations, with each generation comprising a group of machines that share a common technology. In 1946 J. Presper Eckert and John W. Mauchly, both of the University of Pennsylvania, constructed ENIAC (an acronym for electronic numerical integrator and computer).

A typical digital computer system has four basic functional elements: (1) input-output equipment, (2) main memory, (3) control unit, and (4) arithmetic-logic unit. Any of a number of devices is used to enter data and program instructions into a computer and to gain access to the results of the processing operation. Common input devices include keyboards and optical scanners; output devices include printers and cathode-ray tube and liquid-crystal display monitors. The information received by a computer from its input unit is stored in the main memory or, if not for immediate use, in an auxiliary storage device. The control unit selects and calls up instructions from the memory in appropriate sequence and relays the proper commands to the appropriate unit. It also synchronizes the varied operating speeds of the input and output devices to that of the arithmetic-logic unit (ALU) so as to ensure the proper movement of data through the entire computer system.
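The four functional elements can be illustrated with a toy sketch in Python. The three-instruction machine below (LOAD, ADD, PRINT) is invented purely for illustration and does not correspond to any historical computer; it only shows the division of labour between main memory, a control unit running a fetch-decode-execute loop, an ALU, and an output step.

# Toy sketch of the four functional elements: main memory, control unit, ALU, output.
# The instruction set (LOAD, ADD, PRINT) is a made-up example, not a real machine.

def alu(op, a, b):
    """Arithmetic-logic unit: carries out the arithmetic the control unit requests."""
    if op == "ADD":
        return a + b
    raise ValueError("unknown ALU operation: " + op)

def run(program):
    memory = list(program)            # main memory holding the stored program
    accumulator = 0                   # a single working register
    pc = 0                            # program counter kept by the control unit
    while pc < len(memory):           # control unit: fetch the next instruction ...
        instruction, operand = memory[pc]
        if instruction == "LOAD":     # ... decode it ...
            accumulator = operand
        elif instruction == "ADD":
            accumulator = alu("ADD", accumulator, operand)   # ... dispatch to the ALU
        elif instruction == "PRINT":
            print(accumulator)        # output unit
        pc += 1                       # sequential control; a real machine can also branch

run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])   # prints 5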

(a) Mini Computers:- A minicomputer (colloquially, a mini) is a class of multi-user computers that lies in the middle range of the computing spectrum, between the largest multi-user systems (mainframe computers) and the smallest single-user systems (microcomputers or personal computers). The class at one time formed a distinct group with its own hardware and operating systems, but the contemporary term for this class of system is midrange computer, such as the higher-end SPARC-, POWER- and Itanium-based systems from Sun Microsystems, IBM and Hewlett-Packard.

(b) Micro Computers:- A microcomputer is a computer with a microprocessor as its central processing unit. Another general characteristic of these computers is that they occupy physically small amounts of space when compared to mainframe and minicomputers. Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense). The abbreviation "micro" was common during the 1970s and 1980s, but has now fallen out of common usage.


(c) Mainframe Computers:- Mainframes (often colloquially referred to as Big Iron) are computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, ERP, and financial transaction processing. The term probably originated from the early mainframes, which were housed in enormous, room-sized metal boxes or frames. Later the term was used to distinguish high-end commercial machines from less powerful units. Today, in practice, the term usually refers to computers compatible with the IBM System/360 line, first introduced in 1965 (the IBM System z10 is a later incarnation). Large systems that are not based on the System/360 but are used for similar tasks are usually referred to as servers or even supercomputers. However, "server", "supercomputer" and "mainframe" are not synonymous (see client-server).

(d) Super Computers:- A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. The supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC) and led the market into the 1970s, until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel with the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".


Table of contents

Sr. no  Topics

1. Introduction to Computer
2. Types of Computer
3. Analog Computer
4. Hybrid Computer
5. Digital Computer
6. Mini Computer
7. Micro Computer
8. Mainframe Computer
9. Super Computer