ATM Final Prac

CT INSTITUTE OF TECHNOLOGY

1. Study of ATX-based motherboard.

ATX :-

ATX (Advanced Technology Extended) is a motherboard form factor specification developed by Intel in 1995 to improve on previous de facto standards like the AT form factor. It was the first big change in computer case, motherboard, and power supply design in many years, improving standardization and interchangeability of parts. The specification defines the key mechanical dimensions, mounting points, I/O panel, and power and connector interfaces between a computer case, a motherboard, and a power supply. With the improvements it offered, including lower costs, ATX overtook AT completely as the default form factor for new systems within a few years. ATX addressed many of the AT form factor's annoyances that had frustrated system builders. Other standards for smaller boards (including microATX, FlexATX and Mini-ITX) usually keep the basic rear layout but reduce the size of the board and the number of expansion slot positions. In 2003, Intel announced the BTX standard, intended as a replacement for ATX. As of 2009, the ATX form factor remains a standard for do-it-yourselfers, though BTX has made inroads into pre-built systems. ATX was designed to solve the problems of the Baby-AT and LPX motherboards.

The official specifications were released by Intel in 1995, and have been revised numerous times since, the most recent being version 2.3 released in 2007.

A full-size ATX board is 12 × 9.6 in (305 × 244 mm). This allows many ATX form factor chassis to accept micro ATX boards as well.


ATX I/O plates

On the back of the system, some major changes were made. The AT standard had only a keyboard connector and expansion slots for add-on card backplates. Any other onboard interfaces (such as serial and parallel ports) had to be connected via flying leads to connectors mounted either on spaces provided by the case or on brackets placed in unused expansion slot positions. ATX allowed each motherboard manufacturer to put these ports in a rectangular area on the back of the system, with an arrangement they could define themselves (though most manufacturers follow a number of general patterns depending on which ports the motherboard offers). Generally the case comes with a snap-out panel, also known as an I/O plate or I/O shield, reflecting one of the common arrangements. If necessary, I/O plates can be replaced to suit the arrangement of the motherboard being fitted; I/O plates are usually included when purchasing a motherboard. It is also possible to operate without a plate, at the cost of an empty space at the rear of the case, although such operation may violate radio frequency interference rules, since the metal plate helps contain emissions. Panels were also made that allowed fitting an AT motherboard in an ATX case.

ATX also made the PS/2-style mini-DIN keyboard and mouse connectors ubiquitous. AT systems used a 5-pin DIN connector for the keyboard and were generally used with serial-port mice (although PS/2 mouse ports were also found on some systems). Many modern motherboards are phasing out the PS/2-style keyboard and mouse connectors in favor of the more modern Universal Serial Bus. Other legacy connectors that are slowly being phased out of modern ATX motherboards include 25-pin parallel ports and 9-pin RS-232 serial ports. In their place are onboard peripheral ports such as Ethernet, FireWire, eSATA, audio ports (both analog and S/PDIF), video (analog D-sub, DVI, or HDMI), and extra USB ports.

Ultra ATX, XL-ATX

In 2008, Foxconn unveiled the Foxconn F1 motherboard prototype, which has the same width as a standard ATX motherboard but an extended 14.4" length to accommodate 10 slots. The firm called the new "form factor" for this motherboard "Ultra ATX" in its CES 2008 showing. Also unveiled during the January 2008 CES was the Lian Li Armorsuit PC-P80 case, with 10 slots, designed for the motherboard.

Unlike Ultra ATX, which was defined by one company, XL-ATX is not yet an established standard. In April 2010, Gigabyte Technology announced its 12.8"-long by 9.6"-wide GA-890FXA-UD7 motherboard, on which all seven slots are moved downward by one slot position. The added length could have allowed placement of up to eight expansion slots, but the top slot position is vacant on this particular model. Meanwhile, EVGA had already released a 13.5"-long by 10.3"-wide "XL-ATX" motherboard as its EVGA X58 Classified 4-Way SLI. EVGA's version of XL-ATX has room for up to nine expansion slots, but the top two positions are vacant. Note that even though both of these boards have room for extra expansion slots, neither makes use of that extra room for card placement. In Q2 2010 Gigabyte launched another XL-ATX motherboard, the GA-X58A-UD9, but it too implements only seven PCI Express x16 slots (the extra space of the XL-ATX form factor appears to be needed for chipset cooling).


HPTX

In 2010, EVGA Corporation revealed plans for a new motherboard, the "Super Record 2", or SR-2, claimed to surpass the "EVGA X58 Classified 4-Way SLI" in size. The new board is designed to accommodate two dual-QPI LGA1366 CPUs (e.g. Intel Xeon), similar to the Intel "SkullTrail" motherboard that could accommodate two Intel Core 2 Quad processors, and appears to have a total of seven PCI-E slots and 12 DDR3 RAM slots. The new form factor is dubbed "HPTX" and is 13.6 by 15 inches (34.5 cm by 38.1 cm).

Power supply

The ATX specification requires the power supply to produce three main outputs: +3.3 V, +5 V and +12 V. Low-power −12 V and +5 VSB (standby) supplies are also required. A −5 V output was originally required because it was supplied on the ISA bus; it became obsolete with the removal of the ISA bus in modern PCs and has been removed in later versions of the ATX standard.

Originally the motherboard was powered by one 20-pin connector. An ATX power supply provides a number of peripheral power connectors, and (in modern systems) two connectors for the motherboard: a 4-pin auxiliary connector providing additional power to the CPU, and a main 24-pin power supply connector, an extension of the original 20-pin version.


ATX 2.0 Connector

24-pin ATX12V 2.x power supply connector
(the 20-pin version omits the last four pins: 11, 12, 23 and 24)

Pin  Signal         Color    |  Pin  Signal            Color
 1   +3.3 V         Orange   |  13   +3.3 V (+ sense)  Orange (sense: Brown)
 2   +3.3 V         Orange   |  14   −12 V             Blue
 3   Ground         Black    |  15   Ground            Black
 4   +5 V           Red      |  16   Power on          Green
 5   Ground         Black    |  17   Ground            Black
 6   +5 V           Red      |  18   Ground            Black
 7   Ground         Black    |  19   Ground            Black
 8   Power good     Grey     |  20   Reserved          N/C
 9   +5 V standby   Purple   |  21   +5 V              Red
10   +12 V          Yellow   |  22   +5 V              Red
11   +12 V          Yellow   |  23   +5 V              Red
12   +3.3 V         Orange   |  24   Ground            Black

Pins 8, 13, and 16 carry control signals rather than power:

"Power On" is pulled up to +5V by the PSU, and must be driven low to turn on the PSU.

"Power good" is low when other outputs have not yet reached, or are about to leave, correct voltages.

The "+3.3 V sense" line is for remote sensing.

Pin 20 (formerly −5V, white wire) is absent in current power supplies; it was optional in ATX and ATX12V ver. 1.2, and deleted as of ver. 1.3.

The right-hand pins are numbered 11–20 in the 20-pin version.

Four wires have special functions:

PS_ON# or "Power On" is a signal from the motherboard to the power supply. When the line is connected to GND (by the motherboard), the power supply turns on. It is internally pulled up to +5 V inside the power supply.

PWR_OK or "Power Good" is an output from the power supply that indicates that its output has stabilized and is ready for use. It remains low for a brief time (100–500 ms) after the PS_ON# signal is pulled low.

+5 VSB or "+5 V standby" supplies power even when the rest of the supply lines are off. This can be used to power the circuitry that controls the Power On signal.

+3.3 V sense should be connected to the +3.3 V on the motherboard or its power connector. This connection allows for remote sensing of the voltage drop in the power supply wiring.
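As a summary, the 24-pin main-connector pinout described above can be captured as a simple lookup table. This Python sketch abbreviates the signal names (e.g. "PS_ON#" for "Power on", "GND" for ground) and is illustrative only:

```python
# 24-pin ATX12V 2.x main connector, pin number -> signal.
# The 20-pin version omits pins 11, 12, 23 and 24.
ATX24_PINOUT = {
    1: "+3.3V", 2: "+3.3V", 3: "GND", 4: "+5V", 5: "GND", 6: "+5V",
    7: "GND", 8: "PWR_OK", 9: "+5VSB", 10: "+12V", 11: "+12V", 12: "+3.3V",
    13: "+3.3V / +3.3V sense", 14: "-12V", 15: "GND", 16: "PS_ON#",
    17: "GND", 18: "GND", 19: "GND", 20: "Reserved (N/C)",
    21: "+5V", 22: "+5V", 23: "+5V", 24: "GND",
}

print(ATX24_PINOUT[16])                                     # the soft-power-on control pin
print(sum(1 for s in ATX24_PINOUT.values() if s == "GND"))  # number of ground returns
```

A lookup like this makes it easy to verify, for instance, that a third of the connector is ground returns, which is what lets the cable carry high currents on the power rails.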


Generally, supply voltages must be within ±5% of their nominal values at all times. The little-used negative supply voltages, however, have a ±10% tolerance. There is a specification for ripple in a 10 Hz–20 MHz bandwidth.

Supply      Tolerance         Range (min. to max.)    Ripple (p-p max.)
+5 VDC      ±5% (±0.25 V)     +4.75 V to +5.25 V      50 mV
−5 VDC      ±10% (±0.50 V)    −4.50 V to −5.50 V      50 mV
+12 VDC     ±5% (±0.60 V)     +11.40 V to +12.60 V    120 mV
−12 VDC     ±10% (±1.2 V)     −10.8 V to −13.2 V      120 mV
+3.3 VDC    ±5% (±0.165 V)    +3.135 V to +3.465 V    50 mV
+5 VSB      ±5% (±0.25 V)     +4.75 V to +5.25 V      50 mV
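A quick way to apply these bands when probing a PSU with a multimeter is to compare the measurement against nominal ± tolerance. A small Python sketch using the figures from the table above (the rail names are this sketch's own labels):

```python
# Nominal voltage and fractional tolerance for each ATX rail.
TOLERANCES = {
    "+5V":   (5.0,   0.05),
    "-5V":   (-5.0,  0.10),
    "+12V":  (12.0,  0.05),
    "-12V":  (-12.0, 0.10),
    "+3.3V": (3.3,   0.05),
    "+5VSB": (5.0,   0.05),
}

def in_tolerance(rail: str, measured: float) -> bool:
    """True if a measured voltage lies within the rail's ATX tolerance band."""
    nominal, tol = TOLERANCES[rail]
    return abs(measured - nominal) <= abs(nominal) * tol

print(in_tolerance("+12V", 11.5))   # within the ±0.6 V band
print(in_tolerance("+3.3V", 3.5))   # outside +3.135 V to +3.465 V
```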

Physical Characteristics

ATX power supplies generally have dimensions of 6 × 3.5 × 5.5 inches and share a common mounting layout of four screws arranged on the back side of the unit.

Main changes from AT design

Power switch

AT-style computer cases had a power button directly connected to the system's power supply unit (PSU). The general configuration was a double-pole latching mains-voltage switch with its four pins connected to wires from a four-core cable. The wires were either soldered to the power button (making it difficult to replace the power supply if it failed) or attached with blade receptacles.

Original ATX

ATX, introduced in late 1995, defined three types of power connectors:

4-pin "Molex connector" — transferred directly from AT standard: +5 V and +12 V for P-ATA hard disks, CD-ROMs, 5.25 inch floppy drives and other peripherals.[11]

4-pin Berg floppy connector — transferred directly from AT standard: +5 V and +12 V for 3.5 inch floppy drives and other peripherals.

20-pin Molex Mini-fit Jr. main motherboard connector — new to the ATX standard.

A supplemental 6-pin AUX connector provided additional 3.3 V and 5 V supplies to the motherboard, if needed. It powered the CPU in motherboards whose CPU voltage regulator modules took the 3.3 V and/or 5 V rails as their input and could not get enough power through the regular 20-pin header.

The power distribution specification defined that most of the PSU's power should be provided on the 5 V and 3.3 V rails, because most electronic components (CPU, RAM, chipset, PCI, AGP and ISA cards) used 5 V or 3.3 V supplies. The 12 V rail was used only by fans and the motors of peripheral devices (HDD, FDD, CD-ROM, etc.).

The original ATX power supply specification remained mostly unrevised until 2000.

ATX12V 1.x

While designing the Pentium 4 platform in 1999–2000, the standard 20-pin ATX power connector was deemed inadequate for the increasing electrical load requirements, so ATX was significantly revised into the ATX12V 1.0 standard (which is why ATX12V 1.x is sometimes inaccurately called ATX-P4). ATX12V 1.x was also adopted by Athlon XP and Athlon 64 systems.

ATX12V 1.0


The main changes and additions in ATX12V 1.0 (released in February 2000) were:

Increased the power on the 12 V rail (power on 5 V and 3.3 V rails remained mostly the same).

An extra 4-pin Mini-Fit Jr. (Molex 39-01-2040) 12-volt connector to power the CPU. Formally called the +12 V Power Connector, it is commonly referred to as the P4 connector because it was first needed to support the Pentium 4 processor.

Before the Pentium 4, processors were generally powered from the 5V rail. Modern processors operate at much lower voltages, typically around 1 V, and some draw over 100 A. It is infeasible to provide such low voltages and high currents from the system power supply, so the Pentium 4 established the practice of generating them with a DC-DC converter on the motherboard next to the processor. The 4-pin 12V connector supplies this converter.
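The numbers make the motivation clear. A hypothetical 1 V, 100 A processor dissipates 100 W; the sketch below estimates the current the on-board converter must pull from the supply rail (the 90% converter efficiency is an assumed round figure, not from the specification):

```python
def rail_current(cpu_voltage, cpu_current, rail_voltage=12.0, efficiency=0.9):
    """Current drawn from the supply rail to feed the CPU's DC-DC converter."""
    cpu_power = cpu_voltage * cpu_current   # watts delivered to the CPU
    input_power = cpu_power / efficiency    # watts drawn from the rail
    return input_power / rail_voltage       # amps on the rail

# A 1 V, 100 A CPU (100 W) needs only about 9 A from the 12 V rail,
# but would need over 22 A if fed from the old 5 V rail.
print(round(rail_current(1.0, 100.0, rail_voltage=12.0), 1))  # → 9.3
print(round(rail_current(1.0, 100.0, rail_voltage=5.0), 1))   # → 22.2
```

Delivering the same power at a higher voltage means less current in the wiring, which is exactly why the dedicated 12 V connector was introduced.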

ATX12V 1.1

This is a minor revision from August 2000. The power on the 3.3 V rail was slightly increased, among other lesser changes.

ATX12V 1.2

A relatively minor revision from January 2002. The only significant change was that the −5 V rail was no longer required (it became optional). This voltage was very rarely used, only on some older systems with certain ISA add-on cards.

ATX12V 1.3

Introduced in April 2003 (shortly after 2.0). This standard introduced a number of mostly minor changes. Some of them are:

Slightly increased the power on the 12 V rail.

Defined minimal required PSU efficiencies for light and normal load.

Defined acoustic levels.

Introduction of Serial ATA power connector (but defined as optional).

The −5 V rail is prohibited.

ATX12V 2.x

ATX12V 2.x brought a very significant design change in power distribution. Analysis of the power demands of then-current PC architectures determined that it would be much easier (from both economic and engineering perspectives) to power most PC components from 12 V rails instead of from the 3.3 V and 5 V rails.

ATX12V 2.0


This conclusion was incorporated in ATX12V 2.0 (introduced in February 2003), which defined a quite different power distribution from ATX12V 1.x:

The main ATX power connector was extended to 24 pins. The extra four pins provide one additional +3.3 V, +5 V and +12 V circuit, plus a ground.

The 6-pin AUX connector from ATX12V 1.x was removed because the extra 3.3 V and 5 V circuits which it provided are now incorporated in the 24-pin main connector.

Most power is now provided on 12 V rails. The standard specifies two independent 12 V rails (12V2 for the 4-pin CPU connector and 12V1 for everything else) with independent overcurrent protection, needed to meet the power requirements safely (some very high-power PSUs have more than two rails; the standard gives no recommendations for such large PSUs).

The power on 3.3 V and 5 V rails was significantly reduced.

The power supply is required to include a Serial ATA power cable.

Many other specification changes and additions.

ATX12V v2.01

This is a minor revision from June 2004. An errant reference to the −5 V rail was removed, and other minor changes were introduced.

ATX12V v2.1

This is a minor revision from March 2005. Power was slightly increased on all rails and the efficiency requirements changed. It added the 6-pin connector for PCIe graphics cards, which supplements the motherboard's PCIe slot by delivering another 75 watts.

ATX12V v2.2

Another minor revision. It added the 8-pin connector for PCIe graphics cards, which delivers another 150 watts.

ATX12V v2.3

The most recent revision, effective March 2007. Efficiency recommendations were increased to 80% (with at least 70% efficiency required), and the 12 V minimum load requirement was lowered. Higher efficiency generally results in less power consumption (and less waste heat), and the 80% recommendation brings supplies in line with new Energy Star 4.0 mandates.[14]

The reduced load requirement allows compatibility with processors that draw very little power during startup.[15] The absolute overcurrent limit (240 VA per rail) is no longer present, enabling a 12 V line to provide more than 20 A per rail.
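The efficiency figures translate directly into waste heat. A small sketch comparing the 70% floor with the 80% recommendation, for an assumed 300 W DC load (the load figure is an example, not from the specification):

```python
def waste_heat(dc_load_watts, efficiency):
    """AC input power and waste heat for a PSU at a given efficiency."""
    ac_input = dc_load_watts / efficiency
    return round(ac_input, 1), round(ac_input - dc_load_watts, 1)

# 300 W of DC load: input power and heat at 70% vs. 80% efficiency.
print(waste_heat(300, 0.70))  # → (428.6, 128.6)
print(waste_heat(300, 0.80))  # → (375.0, 75.0)
```

Raising efficiency from 70% to 80% cuts the dissipated heat by more than 50 W at this load, which is why the Energy Star mandates target it.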

2. Study of ATA (PATA & SATA) & SCSI Interface.


1- ATA:-

Parallel ATA (PATA), originally ATA, is an interface standard for the connection of storage devices such as hard disks, solid-state drives, floppy drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards.

The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use. After the introduction of Serial ATA in 2003, the original ATA was retroactively renamed Parallel ATA.

Parallel ATA cables have a maximum allowable length of only 18 in (457 mm). Because of this length limit the technology normally appears as an internal computer storage interface. For many years ATA provided the most common and the least expensive interface for this application. It has largely been replaced by Serial ATA (SATA) in newer systems.

IDE and ATA-1

The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Control Data Corporation (who manufactured the hard drive part) and Compaq Computer (into whose systems these drives would initially go), they developed the connector, the signaling protocols, and so on, with the goal of remaining software-compatible with the existing ST-506 hard drive interface.[2] The first such drives appeared in Compaq PCs in 1986.

The term Integrated Drive Electronics refers not just to the connector and interface definition, but also to the fact that the drive controller is integrated into the drive, as opposed to a separate controller on or connected to the motherboard. The interface cards used to connect a parallel ATA drive to, for example, a PCI slot are not drive controllers; they are merely bridges between the host bus and the ATA interface. Since the original ATA interface is essentially just a 16-bit ISA slot in disguise, the bridge was especially simple in the case of an ATA connector located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique to the drive. The host need only ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send data to it.
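The "array of 512-byte blocks" model is simple enough to sketch. The block numbers below are illustrative; the 512-byte sector size is the one the ATA standard assumed:

```python
# The host's view of an IDE/ATA drive: a flat array of fixed-size sectors,
# addressed by block number, with no knowledge of heads or cylinders.
SECTOR_SIZE = 512  # bytes per block in classic ATA

def byte_offset(block_number):
    """Byte offset on the medium where a given block starts."""
    return block_number * SECTOR_SIZE

def capacity_bytes(total_blocks):
    """Drive capacity implied by its reported block count."""
    return total_blocks * SECTOR_SIZE

print(byte_offset(100))           # → 51200
print(capacity_bytes(2_097_152))  # 2**21 blocks → 1073741824 bytes (1 GiB)
```

Everything about head movement and sector layout hides behind this flat address space, which is the point of integrating the controller into the drive.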


The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1".

Second ATA interface

When PC motherboard makers started to include onboard ATA interfaces in place of the earlier ISA plug-in cards, there was usually only one ATA connector on the board, which could support up to two hard drives. At the time, in combination with the floppy drive, this was sufficient for most people, and eventually it became common to have two hard drives installed. When the CD-ROM was developed, many computers would have been unable to accept these drives if they had been ATA devices, because they already had two hard drives installed; adding the CD-ROM drive would have required removal of one of them.

SCSI was available as a CD-ROM expansion option at the time, but devices with SCSI were more expensive than ATA devices due to the need for a smart interface capable of bus arbitration. SCSI typically added US$100–300 to the cost of a storage device, in addition to the cost of a SCSI host adapter.

The less-expensive solution was the addition of a dedicated CD-ROM interface, typically included as an expansion option on a sound card. It was included on the sound card because early business PCs did not include support for more than simple beeps from the internal speaker, and tuneful sound playback was considered unnecessary for early business software. When the CD-ROM was introduced, it was logical to also add digital audio to the computer at the same time (for the same reason, sound cards tended to include a gameport interface for joysticks). An older business PC could be upgraded in this manner to meet the Multimedia PC standard for early software packages that used sound (which required the sound card) and colorful video animation (which required the CD-ROM as floppy disks simply did not have the necessary data capacity).

This second ATA interface on the sound card eventually evolved into the second motherboard ATA interface which was long included as a standard component in all PCs. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems.

EIDE and ATA-2

In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1, such as "Fast ATA" and "Fast ATA-2". The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants.

ATA-2 also was the first to note that devices other than hard drives could be attached to the interface:

ATAPI

As mentioned in the previous sections, ATA was originally designed for, and worked only with, hard disks and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disks. For example, any removable media device needs a "media eject" command and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol.

The Small Form Factor committee approached this problem by defining ATAPI, the "ATA Packet Interface". ATAPI is actually a protocol allowing the ATA interface to carry SCSI commands and responses; therefore all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. In fact, some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI.

ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI.

ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive.

The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.) are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview.

ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4).

UDMA and ATA-4

ATA/ATAPI-4 also introduced several "Ultra DMA" transfer modes, initially supporting speeds from 16 MB/s to 33 MB/s. Later versions added faster Ultra DMA modes, requiring a new 80-wire cable to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MB/s.
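The quoted rates follow directly from the 16-bit width of the parallel bus: every transfer moves two bytes, so the transfer rate in transfers per second is the byte rate halved. A quick sketch (assuming decimal megabytes, as the ATA mode names do):

```python
def transfers_per_second(mbyte_per_s, bus_width_bits=16):
    """Bus transfer rate implied by a quoted MB/s figure on a 16-bit bus."""
    bytes_per_transfer = bus_width_bits // 8  # 2 bytes per strobe on PATA
    return mbyte_per_s * 1_000_000 // bytes_per_transfer

print(transfers_per_second(33))   # UDMA/33  → 16500000 transfers/s
print(transfers_per_second(133))  # UDMA/133 → 66500000 transfers/s
```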

Parallel ATA interface

Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin connectors attached to a ribbon cable. Each cable has two or three connectors, one of which plugs into an adapter interfacing with the rest of the computer system. The remaining connector(s) plug into drives.

ATA's cables have had 40 wires for most of its history (44 conductors for the smaller form-factor version used for 2.5" drives, the extra four carrying power), but an 80-wire version appeared with the introduction of the Ultra DMA/33 (UDMA) mode. All of the additional wires in the new cable are ground wires, interleaved with the previously defined wires to reduce the effects of capacitive coupling between neighboring signal wires, thereby reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables.

Though the number of wires doubled, the number of connector pins and the pinout remain the same as on 40-conductor cables, and the external appearance of the connectors is identical. Internally the connectors are different: the connectors for the 80-wire cable connect a larger number of ground wires to a smaller number of ground pins, while the connectors for the 40-wire cable connect ground wires to ground pins one-for-one. 80-wire cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive, respectively), as opposed to the uniformly colored connectors of 40-wire cables (commonly all gray). The gray connector on 80-conductor cables has pin 28 (CSEL) not connected, making it the slave position for drives configured for cable select.

Round parallel ATA cables (as opposed to ribbon cables) were eventually made available as they were believed to have less effect on computer cooling and were easier to handle; however, only ribbon cables are supported by the ATA specifications.

Pin 20

In the ATA standard pin 20 is defined as a (mechanical) key and is not used. This socket on the female connector is often obstructed, requiring pin 20 to be omitted from the male cable or drive connector, making it impossible to plug the cable in the wrong way round; a male connector with pin 20 present cannot be used. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20.

Pin 28

Pin 28 of the gray (slave/middle) connector of an 80 conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors.

Pin 34

Pin 34 is connected to ground inside the blue connector of an 80 conductor cable but not attached to any conductor of the cable. It is attached normally on the gray and black connectors.

2- SATA:-

Serial ATA (SATA or Serial Advanced Technology Attachment)


It is a computer bus interface for connecting host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA was designed to replace the older ATA (AT Attachment) standard (also known as EIDE), offering several advantages over the older parallel ATA (PATA) interface: reduced cable-bulk and cost (7 conductors versus 40), native hot swapping, faster data transfer through higher signalling rates, and more efficient transfer through an (optional) I/O queuing protocol.

SATA host-adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) used a 16-bit wide data bus with many additional support and control signals, all operating at much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command-set as legacy ATA devices.

As of 2009, SATA has replaced parallel ATA in most shipping consumer desktop and laptop computers, and is expected to eventually replace PATA in embedded applications where space and cost are important factors. SATA’s market share in the desktop PC market was 99% in 2008.[2] PATA remains widely used in industrial and embedded applications that use CompactFlash storage, though even here, the next CFast storage standard will be based on SATA.

SATA revision 1.0 (SATA 1.5 Gbit/s)

First-generation SATA interfaces, now known as SATA 1.5 Gbit/s, communicate at a rate of 1.5 Gbit/s. Taking 8b/10b encoding overhead into account, they have an actual uncoded transfer rate of 1.2 Gbit/s (150 MB/s). The theoretical burst throughput of SATA 1.5 Gbit/s is similar to that of PATA/133, but newer SATA devices offer enhancements such as NCQ, which improve performance in a multitasking environment.
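The 8b/10b arithmetic generalizes across SATA generations: ten bits go on the wire for every eight data bits, so the usable byte rate is the line rate times 8/10, divided by 8 bits per byte. A short sketch:

```python
def sata_mb_per_s(line_rate_gbit):
    """Usable data rate of a SATA link after 8b/10b coding overhead."""
    data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10  # strip the coding overhead
    return data_bits_per_s / 8 / 1e6                 # bits/s → MB/s (decimal)

print(sata_mb_per_s(1.5))  # SATA 1.5 Gbit/s → 150.0 MB/s
print(sata_mb_per_s(3.0))  # SATA 3 Gbit/s   → 300.0 MB/s
print(sata_mb_per_s(6.0))  # SATA 6 Gbit/s   → 600.0 MB/s
```

The convenient result is that each generation's usable rate in MB/s is exactly one tenth of its line rate in Mbit/s.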

During the initial period after SATA 1.5 Gbit/s finalization, adapter and drive manufacturers used a "bridge chip" to convert existing PATA designs for use with the SATA interface. Bridged drives have a SATA connector, may include either or both kinds of power connectors, and, in general, perform identically to their PATA equivalents. Most lack support for some SATA-specific features such as NCQ. Native SATA products quickly eclipsed bridged products with the introduction of the second generation of SATA drives.

As of April 2010 mechanical hard disk drives can transfer data at up to 157 MB/s, which is beyond the capabilities of the older PATA/133 specification and also exceeds a SATA 1.5 Gbit/s link.


SATA revision 2.0 (SATA 3 Gbit/s)

Second-generation SATA interfaces running at 3.0 Gbit/s are shipping in high volume as of 2010, and are prevalent in SATA disk drives and the majority of PC and server chipsets. With a native transfer rate of 3.0 Gbit/s, and taking 8b/10b encoding into account, the maximum uncoded transfer rate is 2.4 Gbit/s (300 MB/s). The theoretical burst throughput of SATA 3.0 Gbit/s is roughly double that of PATA/133. In addition, SATA devices offer enhancements such as native command queuing (NCQ) that improve performance in a multitasking environment.

All SATA data cables meeting the SATA spec are rated for 3.0 Gbit/s and will handle current mechanical drives without any loss of sustained or burst data transfer performance. However, high-performance flash drives are approaching the SATA 3 Gbit/s transfer rate, and this is being addressed with the SATA 6 Gbit/s interoperability standard.

SATA revision 3.0 (SATA 6 Gbit/s)
The Serial ATA International Organization presented the draft specification of the SATA 6 Gbit/s physical layer in July 2008 and ratified the physical-layer specification on August 18, 2008. The full 3.0 standard was released on May 27, 2009. It provides a peak uncoded throughput of about 600 MB/s (megabytes per second) after accounting for 8b/10b encoding overhead (10 line bits per 8 data bits). While even the fastest conventional hard disk drives can barely saturate the original SATA 1.5 Gbit/s bandwidth, solid-state drives have already saturated SATA 3 Gbit/s, reaching 285/275 MB/s maximum read/write speeds and 250 MB/s sustained with the SandForce 1200 and 1500 controllers. SandForce SSD controllers scheduled for release in 2011, however, have delivered 500 MB/s read/write rates, and ten channels of fast flash can reach well over 500 MB/s with new ONFI drives; a move from SATA 3 Gbit/s to SATA 6 Gbit/s allows such devices to work at their full speed. Full performance from Crucial's C300 SSD similarly requires SATA 6 Gbit/s. As for standard hard disks, reads from their built-in DRAM cache end up faster across the new interface. SATA 6 Gbit/s hard drives and motherboards are now shipping from several suppliers, and Intel's current Sandy Bridge platform offers 6 Gbit/s SATA ports as standard.

The new specification contains the following changes:

6 Gbit/s for scalable performance when used with SSDs

Continued compatibility with SAS, including SAS 6 Gbit/s. "A SAS domain may support attachment to and control of unmodified SATA devices connected directly into the SAS domain using the Serial ATA Tunneled Protocol (STP)" from the SATA_Revision_3_0_Gold specification.

Isochronous Native Command Queuing (NCQ) streaming command to enable isochronous quality of service data transfers for streaming digital content applications.

An NCQ Management feature that helps optimize performance by enabling host processing and management of outstanding NCQ commands.

Improved power management capabilities.

A small low insertion force (LIF) connector for more compact 1.8-inch storage devices.

A connector designed to accommodate 7 mm optical disk drives for thinner and lighter notebooks.


Alignment with the INCITS ATA8-ACS standard.

In general, the enhancements are aimed at improving quality of service for video streaming and high-priority interrupts. In addition, the standard continues to support distances up to a meter. The new speeds may require higher power consumption for supporting chips, factors that new process technologies and power management techniques are expected to mitigate. The new specification can use existing SATA cables and connectors, although some OEMs are expected to upgrade host connectors for the higher speeds. Also, the new standard is backwards compatible with SATA 3 Gbit/s.

Terminology
The name SATA II has become synonymous with the 3 Gbit/s standard. To provide the industry with consistent terminology, SATA-IO has compiled a set of marketing guidelines for the third revision of the specification.

The SATA 6 Gbit/s specification should be called Serial ATA International Organization: Serial ATA Revision 3.0.

The technology itself is to be referred to as SATA 6 Gb/s.

A product using this standard should be called the SATA 6 Gb/s [product name].

Using the terms SATA III or SATA 3.0 to refer to a SATA 6 Gbit/s product is ambiguous and not preferred; SATA-IO has provided these guidelines to foster consistent marketing terminology across the industry.

Cables, connectors, and ports
Connectors and cables present the most visible differences between SATA and parallel ATA drives. Unlike PATA, the same connectors are used on 3.5-inch (89 mm) SATA hard disks for desktop and server computers and on 2.5-inch (64 mm) disks for portable or small computers; this allows 2.5-inch (64 mm) drives to be used in desktop computers with only a mounting bracket and no wiring adapter. Smaller disks may use the mini-SATA specification, suitable for small-form-factor Serial ATA drives and mini SSDs.

There is a special connector (eSATA) specified for external devices, and an optionally implemented provision for clips to hold internal connectors firmly in place. SATA drives may be plugged into SAS controllers and communicate on the same physical cable as native SAS disks, but SATA controllers cannot handle SAS disks.

Female SATA ports (on motherboards, for example) are designed for use with SATA data cables that have locks or clips, reducing the chance of accidental unplugging while the machine is powered on, as are the SATA power/data connectors on optical and high-density devices. Moreover, some SATA cables have right-angle heads in the shape of an 'L', which ease the connection of devices to circuit boards.

Data connector

Pin # | Function
1     | Ground
2     | A+ (transmit)
3     | A− (transmit)
4     | Ground
5     | B− (receive)
6     | B+ (receive)
7     | Ground
8     | Coding notch

A 7-pin Serial ATA right-angle data cable.

The SATA standard defines a data cable with seven conductors (3 grounds and 4 active data lines in two pairs) and 8 mm wide wafer connectors on each end. SATA cables can have lengths up to 1 metre (3.3 ft), and connect one motherboard socket to one hard drive. PATA ribbon cables, in comparison, connect one motherboard socket to one or two hard drives, carry either 40 or 80 wires, and are limited to 45 centimetres (18 in) in length by the PATA specification (however, cables up to 90 centimetres (35 in) are readily available). Thus, SATA connectors and cables are easier to fit in closed spaces, and reduce obstructions to air cooling. They are more susceptible to accidental unplugging and breakage than PATA, but cables can be purchased that have a locking feature, whereby a small (usually metal) spring holds the plug in the socket.

One of the problems associated with the transmission of data at high speed over electrical connections is described as noise, which is due to electrical coupling between data circuits and other circuits. As a result, the data circuits can both affect other circuits, and be affected by them. Designers use a number of techniques to reduce the undesirable effects of such unintentional coupling. One such technique used in SATA links is differential signaling.
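The noise-cancelling property of differential signaling mentioned above can be illustrated numerically. This toy demonstration is ours; the values and function name are illustrative and do not come from the SATA electrical specification.

```python
# Toy illustration of differential signaling: the same common-mode noise
# couples onto both wires of a pair, and cancels when the receiver
# subtracts one line from the other.

def receive_differential(a_plus, a_minus):
    """Receiver recovers the signal as the difference of the two lines."""
    return [p - m for p, m in zip(a_plus, a_minus)]

signal = [1, -1, 1, 1, -1]            # data driven as +s on A+ and -s on A-
noise  = [0.3, -0.2, 0.5, 0.1, -0.4]  # noise couples equally onto both wires

a_plus  = [s + n for s, n in zip(signal, noise)]
a_minus = [-s + n for s, n in zip(signal, noise)]

recovered = receive_differential(a_plus, a_minus)
print(recovered)  # each sample is 2*s: the common-mode noise term cancels
```

A single-ended receiver reading only A+ would see the noise added directly to the data; the differential receiver does not.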

Comparison to other buses

Name                             | Raw bandwidth (Mbit/s) | Transfer speed (MB/s) | Max. cable length (m)                     | Power provided | Devices per channel
eSATA                            | 3,000                  | 375                   | 2 with eSATA HBA (1 with passive adapter) | No             | 1 (15 with port multiplier)
eSATAp                           | 3,000                  | 375                   | 2 with eSATA HBA (1 with passive adapter) | 5 V / 12 V     | 1 (15 with port multiplier)
SATA revision 3.0                | 6,000                  | 750                   | 1                                         | No             | 1 per line
SATA revision 2.0                | 3,000                  | 375                   | 1                                         | No             | 1 per line
SATA revision 1.0                | 1,500                  | 187.5                 | 1                                         | No             | 1 per line
PATA 133                         | 1,064                  | 133.5                 | 0.46 (18 in)                              | No             | 2
SAS 600                          | 6,000                  | 750                   | 10                                        | No             | 1 (>65k with expanders)
SAS 300                          | 3,000                  | 375                   | 10                                        | No             | 1 (>65k with expanders)
SAS 150                          | 1,500                  | 187.5                 | 10                                        | No             | 1 (>65k with expanders)
FireWire 3200                    | 3,144                  | 393                   | 100 (more with special cables)            | 15 W, 12–25 V  | 63 (with hub)
FireWire 800                     | 786                    | 98.25                 | 100                                       | 15 W, 12–25 V  | 63 (with hub)
FireWire 400                     | 393                    | 49.13                 | 4.5                                       | 15 W, 12–25 V  | 63 (with hub)
USB 3.0                          | 4,800                  | 400                   | 3                                         | 4.5 W, 5 V     | 127 (with hub)
USB 2.0                          | 480                    | 60                    | 5                                         | 2.5 W, 5 V     | 127 (with hub)
USB 1.0                          | 12                     | 1.5                   | 3                                         | Yes            | 127 (with hub)
SCSI Ultra-640                   | 5,120                  | 640                   | 12                                        | No             | 15 (plus the HBA)
SCSI Ultra-320                   | 2,560                  | 320                   | 12                                        | No             | 15 (plus the HBA)
Fibre Channel over optic fibre   | 10,520                 | 2,000                 | 2–50,000                                  | No             | 126 (16,777,216 with switches)
Fibre Channel over copper cable  | 4,000                  | 400                   | 12                                        | No             | 126 (16,777,216 with switches)
InfiniBand Quad Rate             | 10,000                 | 1,000                 | 5 (copper); <10,000 (fiber)               | No             | 1 point-to-point; many with switched fabric

Unlike PATA, both SATA and eSATA support hot-swapping by design. However, this feature requires proper support at the host, device (drive), and operating-system levels. In general, all SATA devices support hot-swapping (due to the requirements on the device side), and most SATA host adapters support it as well.

SCSI-3 devices with SCA-2 connectors are designed for hot-swapping. Many server and RAID systems provide hardware support for transparent hot-swapping. The designers of the SCSI standard prior to SCA-2 connectors did not target hot-swapping, but, in practice, most RAID implementations support hot-swapping of hard disks.


3. SCSI :-

Small Computer System Interface (SCSI, /ˈskʌzi/ SKUZ-ee) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements.

SCSI is an intelligent, peripheral, buffered, peer-to-peer interface that hides the complexity of the physical format. Every device attaches to the SCSI bus in a similar manner. Up to 8 or 16 devices can be attached to a single bus. There can be any number of hosts and peripheral devices, but there must be at least one host. SCSI uses handshake signals between devices; SCSI-1 and SCSI-2 have the option of parity error checking. Starting with Ultra-160 SCSI (part of SCSI-3), all commands and data are error-checked by a CRC32 checksum. The SCSI protocol defines communication from host to host, host to peripheral device, and peripheral device to peripheral device. However, most peripheral devices are exclusively SCSI targets, incapable of acting as SCSI initiators, i.e. unable to initiate SCSI transactions themselves. Therefore, peripheral-to-peripheral communication is uncommon, though possible in most SCSI implementations. The Symbios Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target.


Interfaces

Two SCSI connectors.


SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI (now also called SPI), which uses a parallel electrical bus design. As of 2008, SPI is being replaced by Serial Attached SCSI (SAS), which uses a serial design but retains other aspects of the technology. iSCSI drops physical implementation entirely, and instead uses TCP/IP as a transport mechanism. Many other interfaces which do not rely on complete SCSI standards still implement the SCSI command protocol.

SCSI interfaces have often been included on computers from various manufacturers for use under Microsoft Windows, Mac OS, Unix, and Linux operating systems, either implemented on the motherboard or by means of plug-in adapters. With the advent of SAS and SATA drives, provision for SCSI on motherboards is being discontinued. A few companies still market SCSI interfaces for motherboards supporting PCIe and PCI-X.

SCSI command protocol
In addition to many different hardware implementations, the SCSI standards also include a complex set of command protocol definitions. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI. Other technologies that use the SCSI command set include the ATA Packet Interface, the USB Mass Storage class, and FireWire SBP-2.

In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one byte operation code followed by five or more bytes containing command-specific parameters.

At the end of the command sequence the target returns a Status Code byte which is usually 00h for success, 02h for an error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a Key Code Qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition.
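The CDB layout described above (a one-byte operation code followed by command-specific parameters) can be sketched for the INQUIRY command. The field layout follows the standard 6-byte INQUIRY CDB; the helper name is ours.

```python
import struct

INQUIRY = 0x12  # operation code for the INQUIRY command

def build_inquiry_cdb(allocation_length: int, evpd: bool = False,
                      page_code: int = 0) -> bytes:
    """Build a 6-byte INQUIRY CDB: opcode, then command-specific parameters."""
    return struct.pack(">BBBHB",
                       INQUIRY,               # byte 0: operation code
                       0x01 if evpd else 0,   # byte 1: EVPD bit (vital product data)
                       page_code,             # byte 2: VPD page when EVPD is set
                       allocation_length,     # bytes 3-4: max bytes target may return
                       0)                     # byte 5: control byte

cdb = build_inquiry_cdb(96)
print(cdb.hex())  # 120000006000
```

A target receiving this CDB would answer with up to 96 bytes of device information, then return a status byte (00h on success) as described above.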

There are four categories of SCSI commands: N (non-data), W (writing data from initiator to target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in total, the most common being:


Test unit ready : Queries device to see if it is ready for data transfers (disk spun up, media loaded, etc.).

Inquiry : Returns basic device information, also used to "ping" the device since it does not modify sense data.

Request sense : Returns any error codes from the previous command that returned an error status.

Send diagnostic and Receive diagnostic results: runs a simple self-test, or a specialised test defined in a diagnostic page.

Start/Stop unit : Spins disks up and down, load/unload media.

Read capacity : Returns storage capacity.

Format unit : Sets all sectors to all zeroes, also allocates logical blocks avoiding defective sectors.

SCSI Read format capacities : Retrieve the data capacity of the device.

Read (four variants): Reads data from a device.

Write (four variants): Writes data to a device.

Log sense : Returns current information from log pages.

Mode sense : Returns current device parameters from mode pages.

Mode select : Sets device parameters in a mode page.

Each device on the SCSI bus is assigned a unique SCSI identification number (ID). Devices may encompass multiple logical units, which are addressed by logical unit number (LUN). Simple devices have just one LUN; more complex devices may have multiple LUNs.

A "direct access" (i.e. disk-type) storage device consists of a number of logical blocks, usually referred to by the term Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of LBAs has evolved over time, and so four different command variants are provided for reading and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus various other parameter options.
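The difference between the 21-bit and 32-bit LBA variants can be made concrete by packing the two READ CDBs. The field layouts follow the standard Read(6)/Read(10) formats; the helper names are ours.

```python
import struct

def read6_cdb(lba: int, blocks: int) -> bytes:
    """6-byte READ: the top 5 LBA bits share byte 1, giving only 21 LBA bits."""
    assert lba < (1 << 21), "Read(6) carries only a 21-bit LBA"
    return struct.pack(">BBHBB",
                       0x08,                 # READ(6) opcode
                       (lba >> 16) & 0x1F,   # top 5 LBA bits
                       lba & 0xFFFF,         # low 16 LBA bits
                       blocks,               # transfer length
                       0)                    # control byte

def read10_cdb(lba: int, blocks: int) -> bytes:
    """10-byte READ: a full 32-bit LBA plus extra parameter bytes."""
    return struct.pack(">BBIBHB",
                       0x28,   # READ(10) opcode
                       0,      # flags
                       lba,    # 32-bit LBA
                       0,      # group number
                       blocks, # 16-bit transfer length
                       0)      # control byte

print(read6_cdb(0x12345, 8).hex())      # 080123450800
print(read10_cdb(0x12345678, 8).hex())  # 28001234567800000800
```

Read(6) therefore tops out at 2^21 blocks (1 GiB with 512-byte blocks), which is why the larger variants were introduced.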

A "sequential access" (i.e. tape-type) device does not have a specific capacity because it typically depends on the length of the tape, which is not known exactly. Reads and writes on a sequential access device happen at the current position, not at a specific LBA. The block size on sequential access devices can either be fixed or variable, depending on the specific device. Tape devices such as half-inch 9-track tape, DDS (4 mm tapes physically similar to DAT), Exabyte, etc., support variable block sizes.

How Parallel SCSI works

SCSI uses a protocol method to transfer data between devices on the bus. It is a circular process that starts and ends in the same layer. Starting from the first layer, all additional layers of protocol must be executed before any data is transferred to or from another device, and the remaining layers must be completed after the data has been transferred, ending the process. The protocol layers are referred to as "SCSI bus phases". These phases are:


BUS FREE

ARBITRATION

SELECTION

MESSAGE OUT

COMMAND OUT

DATA OUT/IN

STATUS IN

MESSAGE IN

RESELECTION

The SCSI bus can be in only one phase at a given time.

Not all controllers use all the phases, and a well-written driver will not assume that phases will occur; rather, it will command an operation and then read status to determine the phase that the device wants to enter next. This technique allows a single driver to work with a variety of controllers that may vary in whether or not they drop unneeded phases. In early implementations of SCSI, writing a single driver to work with Xebec and DTC controllers required this approach, developed by Douglas Goodall while adding SCSI support to the Ampro Little Board Z80.
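The phase-driven driver style described above can be sketched as a small dispatch loop: instead of assuming a fixed phase order, the driver asks the controller which phase the target is in and reacts. Everything here is illustrative; the controller object is a hypothetical stand-in for real hardware.

```python
# Handlers are registered per phase name; the loop never assumes an order.
HANDLERS = {}

def on_phase(name):
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@on_phase("COMMAND OUT")
def send_command(ctrl):
    ctrl.write(ctrl.pending_cdb)        # send the queued CDB to the target

@on_phase("DATA IN")
def read_data(ctrl):
    ctrl.buffer += ctrl.read()          # collect data the target wants to send

@on_phase("STATUS IN")
def read_status(ctrl):
    ctrl.status = ctrl.read()           # status byte ends the command

def run(ctrl):
    """Dispatch on whatever phase the device reports until BUS FREE."""
    while True:
        phase = ctrl.current_phase()
        if phase == "BUS FREE":
            return ctrl.status
        HANDLERS[phase](ctrl)

class FakeController:
    """Hypothetical stand-in for real hardware, for demonstration only."""
    def __init__(self, phases, data):
        self._phases = iter(phases)
        self._data = iter(data)
        self.pending_cdb = b"\x12\x00\x00\x00\x60\x00"  # an INQUIRY CDB
        self.buffer = b""
        self.status = None
    def current_phase(self):
        return next(self._phases)
    def write(self, b):
        pass
    def read(self):
        return next(self._data)

ctrl = FakeController(
    ["COMMAND OUT", "DATA IN", "STATUS IN", "BUS FREE"],
    [b"inquiry-data", b"\x00"])
print(run(ctrl))  # b'\x00' -> GOOD status
```

Because the loop only reacts to reported phases, the same driver logic works with controllers that skip unneeded phases.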

Data signals in parallel SCSI: DB(0–7), DB(P), TERMPWR, ATN, BSY, ACK, RST, MSG, SEL, C/D, REQ, I/O


4. Study of Bluetooth interface & devices.

Bluetooth:

Bluetooth is a proprietary open wireless technology standard for exchanging data over short distances (using short-wavelength radio transmissions) between fixed and mobile devices, creating personal area networks (PANs) with high levels of security. Created by telecoms vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization.

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 14,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. To be marketed as a Bluetooth device, a product must be qualified to standards defined by the SIG. A network of patents is required to implement the technology, and these are licensed only to qualifying devices; thus the protocol, whilst open, may be regarded as proprietary.

Implementation
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands (1 MHz each; centered from 2402 to 2480 MHz) in the range 2,400–2,483.5 MHz (allowing for guard bands). This range is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
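The channel arrangement above follows a simple pattern: channel k sits at 2402 + k MHz for k = 0..78. A quick sanity check (a sketch, nothing more) confirms all 79 hop channels fit inside the 2,400–2,483.5 MHz ISM band:

```python
# 79 hop channels, 1 MHz apart, centered from 2402 to 2480 MHz.
channels = [2402 + k for k in range(79)]

print(channels[0], channels[-1], len(channels))  # 2402 2480 79

# Every channel center, plus half its 1 MHz width, stays inside the band,
# leaving guard bands at both edges of 2400-2483.5 MHz.
assert all(2400 < f - 0.5 and f + 0.5 < 2483.5 for f in channels)
```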

Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available; subsequently, since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous data rate of 1 Mbit/s is possible. The term enhanced data rate (EDR) is used to describe the π/4-DQPSK and 8DPSK schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a "BR/EDR radio".

Bluetooth is a packet-based protocol with a master-slave structure. One master may communicate with up to 7 slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs; two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long but in all cases the master transmit will begin in even slots and the slave transmit in odd slots.
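The slot arithmetic in the paragraph above is easy to verify: two 312.5 µs clock ticks make a 625 µs slot, and two slots make a 1250 µs slot pair, with the master owning even slots for single-slot packets. A brief sketch (the helper name is ours):

```python
# Baseband timing from the text: the master's clock ticks every 312.5 us.
TICK_US = 312.5
SLOT_US = 2 * TICK_US        # two ticks per slot
SLOT_PAIR_US = 2 * SLOT_US   # two slots per slot pair

def slot_owner(slot_index: int) -> str:
    """For single-slot packets, the master transmits in even slots, the slave in odd."""
    return "master" if slot_index % 2 == 0 else "slave"

print(SLOT_US, SLOT_PAIR_US)              # 625.0 1250.0
print([slot_owner(i) for i in range(4)])  # ['master', 'slave', 'master', 'slave']
```

Multi-slot (3- or 5-slot) packets keep the same rule: the master's transmission still begins in an even slot, the slave's in an odd one.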


Bluetooth provides a secure way to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.

Communication and connection
A master Bluetooth device can communicate with up to seven devices in a piconet (an ad hoc computer network using Bluetooth technology). The devices can switch roles by agreement, and the slave can become the master at any time.

At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion.

The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices serve as bridges, simultaneously playing the master role in one piconet and the slave role in another.

Many USB Bluetooth adapters or "dongles" are available, some of which also include an IrDA adapter. Older (pre-2003) Bluetooth dongles, however, have limited capabilities, offering only the Bluetooth Enumerator and a less powerful Bluetooth radio incarnation. Such devices can link computers over a distance of up to 100 meters, but they do not offer as many services as modern adapters do.

Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range (power-class-dependent, though effective ranges vary in practice) based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable.

While the Bluetooth Core Specification does mandate minimums for range, the range of the technology is application specific and is not limited. Manufacturers may tune their implementations to the range needed to support individual use cases.

Bluetooth profiles

To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles, which are definitions of possible applications that specify the general behaviors Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings that parameterize and control the communication from the start. Adherence to profiles saves the time of transmitting the parameters anew before the bi-directional link becomes effective. There is a wide range of Bluetooth profiles describing many different types of applications or use cases for devices.

List of Devices


A typical Bluetooth mobile phone headset.

Wireless control of and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.

Wireless networking between PCs in a confined space and where little bandwidth is required.

Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.

Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX.

Replacement of traditional wired serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.

For controls where infrared was traditionally used.

For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.

Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.

Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.

Three seventh-generation game consoles, Nintendo's Wii and Sony's PlayStation 3 and PSP Go, use Bluetooth for their respective wireless controllers.

Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.

Short range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.

Allowing a DECT phone to ring and answer calls on behalf of a nearby cell phone

Real-time location systems (RTLS), which track and identify the location of objects in real time using "nodes" or "tags" attached to or embedded in the tracked objects, and "readers" that receive and process the wireless signals from these tags to determine their locations.

Tracking livestock and detainees. According to a leaked diplomatic cable, King Abdullah of Saudi Arabia suggested "implanting detainees with an electronic chip containing information about them and allowing their movements to be tracked with Bluetooth. This was done with horses and falcons, the King said."

Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g. a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm. A product using this technology has been available since 2009.

Bluetooth devices


A Bluetooth USB dongle with a 100 m range. The MacBook Pro, shown, also has a built-in Bluetooth adapter.

Bluetooth exists in many products, such as the iPod Touch, Lego Mindstorms NXT, PlayStation 3, PSP Go, telephones, the Nintendo Wii, and some high-definition headsets, modems, and watches. The technology is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).

Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address, and permission configuration can be automated than with many other network types.

Computer requirements

A typical Bluetooth USB dongle.

An internal notebook Bluetooth card (14×36×4 mm).

A personal computer that does not have embedded Bluetooth can be used with a Bluetooth adapter that enables the PC to communicate with other Bluetooth devices (such as mobile phones, mice, and keyboards). While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter in the form of a dongle.

Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth allows multiple devices to communicate with a computer over a single adapter.

Operating system support

Apple has supported Bluetooth since Mac OS X v10.2, which was released in 2002.

For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases have native support for Bluetooth 1.1, 2.0 and 2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 support Bluetooth 2.1+EDR. Windows 7 supports Bluetooth 2.1+EDR and Extended Inquiry Response (EIR).

The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack which may support more profiles or newer versions of Bluetooth. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring the Microsoft stack to be replaced.

Linux has two popular Bluetooth stacks, BlueZ and Affix. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. The Affix stack was developed by Nokia. FreeBSD has included Bluetooth support since its 5.0 release, and NetBSD since its 4.0 release; the NetBSD Bluetooth stack has been ported to OpenBSD as well.

Mobile phone requirements
A Bluetooth-enabled mobile phone is able to pair with many devices. To ensure the broadest support of feature functionality together with legacy device support, the Open Mobile Terminal Platform (OMTP) forum has published a recommendations paper entitled "Bluetooth Local Connectivity".

Bluetooth v4.0

The Bluetooth SIG completed the Bluetooth Core Specification version 4.0, which includes Classic Bluetooth, Bluetooth high speed, and Bluetooth low energy protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols. The version was adopted on June 30, 2010.

Cost-reduced single-mode chips, which will enable highly integrated and compact devices, will feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost. The Link Layer in these controllers will enable Internet connected sensors to schedule Bluetooth low energy traffic between Bluetooth transmissions.

As of March 2011, the definitions of the respective application profiles were still unfinished work items of the standardisation bodies.


Bluetooth v1.0 and v1.0B
Versions 1.0 and 1.0B had many problems, and manufacturers had difficulty making their products interoperable. Versions 1.0 and 1.0B also included mandatory Bluetooth hardware device address (BD_ADDR) transmission in the connecting process (rendering anonymity impossible at the protocol level), which was a major setback for certain services planned for use in Bluetooth environments.

Bluetooth v1.1
Ratified as IEEE Standard 802.15.1-2002.

Many errors found in the 1.0B specifications were fixed.

Added support for non-encrypted channels.

Received Signal Strength Indicator (RSSI).

Bluetooth v1.2
This version is backward compatible with 1.1, and the major enhancements include the following:

Faster Connection and Discovery

Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence.

Higher transmission speeds in practice (up to 721 kbit/s) than in v1.1.

Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better support for concurrent data transfer.

Host Controller Interface (HCI) support for three-wire UART.

Ratified as IEEE Standard 802.15.1-2005.

Introduced Flow Control and Retransmission Modes for L2CAP.

Bluetooth v2.0 + EDR
This version of the Bluetooth Core Specification was released in 2004 and is backward compatible with the previous version 1.2. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The nominal rate of EDR is about 3 Mbit/s, although the practical data transfer rate is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying (PSK) modulation with two variants, π/4-DQPSK and 8DPSK. EDR can provide lower power consumption through a reduced duty cycle.

The specification is published as "Bluetooth v2.0 + EDR" which implies that EDR is an optional feature. Aside from EDR, there are other minor improvements to the 2.0 specification, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.

Bluetooth v2.1 + EDR


Bluetooth Core Specification Version 2.1 + EDR is fully backward compatible with 1.2, and was adopted by the Bluetooth SIG on July 26, 2007.

The headline feature of 2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security. See the section on Pairing below for more details.

2.1 also brought various other improvements, including Extended Inquiry Response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection, and sniff subrating, which reduces power consumption in low-power mode.

Bluetooth v3.0 + HS

Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on April 21, 2009. Bluetooth 3.0+HS supports theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high-data-rate traffic is carried over a colocated 802.11 link. Its main new feature is AMP (Alternate MAC/PHY), the addition of 802.11 as a high-speed transport. Two technologies had been anticipated for AMP: 802.11 and UWB, but UWB is missing from the specification.

The High-Speed part of the specification is not mandatory, and hence only devices sporting the "+HS" suffix will actually support the Bluetooth over Wi-Fi high-speed data transfer. A Bluetooth 3.0 device without the HS suffix will not support High Speed, and needs only to support Unicast Connectionless Data (UCD), as shown in the Bluetooth 3.0+HS specification, Vol 0, section 4.1 Specification Naming Conventions.

Alternate MAC/PHY

Enables the use of alternative MACs and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration; however, when large quantities of data need to be sent, the high-speed alternate MAC/PHY 802.11 (typically associated with Wi-Fi) is used to transport the data. This means that the proven low-power connection models of Bluetooth are used when the system is idle, and the faster radio is used when large quantities of data need to be sent.

Unicast connectionless data

Permits service data to be sent without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.

Enhanced Power Control

Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behaviour that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.

4:- Study of twisted pair cable connection using the IEEE colour coding scheme.

1- Cross cable
2- Straight cable

Twisted pair cabling is a type of wiring in which two conductors (the forward and return conductors of a single circuit) are twisted together for the purpose of canceling out electromagnetic interference (EMI) from external sources; for instance, electromagnetic radiation from unshielded twisted pair (UTP) cables, and crosstalk between neighboring pairs. It was invented by Alexander Graham Bell.

Explanation

In balanced pair operation, the two wires carry equal and opposite signals and the destination detects the difference between the two. This is known as differential mode transmission. Noise sources introduce signals into the wires by coupling of electric or magnetic fields and tend to couple to both wires equally. The noise thus produces a common-mode signal which is cancelled at the receiver when the difference signal is taken. This method starts to fail when the noise source is close to the signal wires; the closer wire will couple with the noise more strongly and the common-mode rejection of the receiver will fail to eliminate it. This problem is especially apparent in telecommunication cables where pairs in the same cable lie next to each other for many miles. One pair can induce crosstalk in another and it is additive along the length of the cable. Twisting the pairs counters this effect, as on each half twist the wire nearest to the noise source is exchanged. Provided the interfering source remains uniform, or nearly so, over the distance of a single twist, the induced noise will remain common-mode. Differential signaling also reduces electromagnetic radiation from the cable, along with the associated attenuation, allowing for greater distance between exchanges.
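The cancellation described above can be sketched numerically. A minimal toy model (voltage values invented for illustration; a real receiver is analog circuitry, not a list comprehension):

```python
# Toy model of differential (balanced-pair) transmission.
# One wire carries +signal, the other -signal; external noise
# couples (ideally) equally onto both wires as a common-mode term.
signal = [1.0, -0.5, 0.25, 0.8]          # transmitted symbol voltages
noise = [0.3, 0.3, -0.2, 0.1]            # common-mode noise samples

wire_a = [s + n for s, n in zip(signal, noise)]    # +signal + noise
wire_b = [-s + n for s, n in zip(signal, noise)]   # -signal + noise

# The receiver takes the difference between the wires:
# (s + n) - (-s + n) = 2s, so the noise cancels and the signal doubles.
received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(received)
```

When the coupling is unequal (the failure mode described above for a nearby noise source), the two noise terms no longer match and a residue survives the subtraction.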

The twist rate (also called pitch of the twist, usually defined in twists per meter) makes up part of the specification for a given type of cable. Where nearby pairs have equal twist rates, the same conductors of the different pairs may repeatedly lie next to each other, partially undoing the benefits of differential mode. For this reason it is commonly specified that, at least for cables containing small numbers of pairs, the twist rates must differ.

In contrast to FTP (foiled twisted pair) and STP (shielded twisted pair) cabling, UTP (unshielded twisted pair) cable is not surrounded by any shielding. It is the primary wire type for telephone usage and is very common for computer networking, especially as patch cables or temporary network connections due to the high flexibility of the cables.

Unshielded twisted pair (UTP)

Unshielded twisted pair

UTP cables are found in many Ethernet networks and telephone systems. For indoor telephone applications, UTP is often grouped into sets of 25 pairs according to a standard 25-pair color code originally developed by AT&T. A typical subset of these colors (white/blue, blue/white, white/orange, orange/white) shows up in most UTP cables.

For urban outdoor telephone cables containing hundreds or thousands of pairs, the cable is divided into smaller but identical bundles. Each bundle consists of twisted pairs that have different twist rates. The bundles are in turn twisted together to make up the cable. Pairs having the same twist rate within the cable can still experience some degree of crosstalk. Wire pairs are selected carefully to minimize crosstalk within a large cable.

Unshielded twisted pair cable with different twist rates

UTP cable is also the most common cable used in computer networking. Modern Ethernet, the most common data networking standard, utilizes UTP cables. Twisted pair cabling is often used in data networks for short and medium length connections because of its relatively lower costs compared to optical fiber and coaxial cable.

UTP is also finding increasing use in video applications, primarily in security cameras. Many middle to high-end cameras include a UTP output with setscrew terminals. This is made possible by the fact that UTP cable bandwidth has improved to match the baseband of television signals. While the video recorder most likely still has unbalanced BNC connectors for standard coaxial cable, a balun is used to convert from 100-ohm balanced UTP to 75-ohm unbalanced. A balun can also be used at the camera end for ones without a UTP output. Only one pair is necessary for each video signal.

STP cable format

S/STP, also known as S/FTP.

S/UTP cable format

S/STP cable format

Twisted pair cables are often shielded in an attempt to prevent electromagnetic interference. Because the shielding is made of metal, it may also serve as a ground. However, usually a shielded or a screened twisted pair cable has a special grounding wire added called a drain wire. This shielding can be applied to individual pairs, or to the collection of pairs. When shielding is applied to the collection of pairs, this is referred to as screening. The shielding must be grounded for the shielding to work.

Shielded twisted pair (STP or STP-A) STP cabling includes metal shielding over each individual pair of copper wires. This type of shielding protects cable from external EMI. e.g. the 150 ohm shielded twisted pair cables defined by the IBM Cabling System specifications and used with token ring networks.

Screened unshielded twisted pair (S/UTP) Also known as Foiled Twisted Pair (FTP), is a screened UTP cable (ScTP).

Screened shielded twisted pair (S/STP or S/FTP) S/STP cabling, also known as Screened Fully shielded Twisted Pair (S/FTP), is both individually shielded (like STP cabling) and also has an outer metal shielding covering the entire group of shielded copper pairs (like S/UTP). This type of cabling offers the best protection from interference from external sources, and also eliminates alien crosstalk.

Note that different vendors and authors use different terminology (e.g. STP has been used to denote STP-A, S/STP, and S/UTP).

Comparison of some old and new abbreviations, according to ISO/IEC 11801:

Old name   New name   Cable screening   Pair shielding
UTP        U/UTP      none              none
FTP        F/UTP      foil              none
STP        U/FTP      none              foil
S-FTP      SF/UTP     foil, braiding    none
S-STP      S/FTP      braiding          foil

The code before the slash designates the shielding for the cable itself, while the code after the slash determines the shielding for the individual pairs:

TP = twisted pair
U = unshielded
F = foil shielding
S = braided shielding
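The two-part designations can be decoded mechanically. A small sketch (the decode helper below is made up for this example, not part of any standard library):

```python
# Decode ISO/IEC 11801-style cable designations such as "S/FTP".
# The letters before the slash describe the overall cable screen;
# the letters after the slash describe the per-pair shielding, and
# the trailing "TP" simply means twisted pair.
SCREEN = {
    "U": "unshielded",
    "F": "foil shielding",
    "S": "braided shielding",
    "SF": "foil and braided shielding",
}

def decode(designation):
    cable, pairs = designation.upper().split("/")
    pairs = pairs[:-2]                  # drop the "TP" suffix
    return f"cable: {SCREEN[cable]}; pairs: {SCREEN[pairs]}"

print(decode("S/FTP"))    # cable: braided shielding; pairs: foil shielding
print(decode("SF/UTP"))   # cable: foil and braided shielding; pairs: unshielded
```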

Most common cable categories

Cat1 (0.4 MHz): telephone and modem lines. Not described in EIA/TIA recommendations; unsuitable for modern systems.

Cat2 (? MHz): older terminal systems, e.g. IBM 3270. Not described in EIA/TIA recommendations; unsuitable for modern systems.

Cat3 (16 MHz): 10BASE-T and 100BASE-T4 Ethernet. Described in EIA/TIA-568; unsuitable for speeds above 16 Mbit/s.

Cat4 (20 MHz): 16 Mbit/s Token Ring.

Cat5 (100 MHz): 100BASE-TX and 1000BASE-T Ethernet.

Cat5e (100 MHz): 100BASE-TX and 1000BASE-T Ethernet. Enhanced Cat5; practically the same as Cat5, but with better testing standards so Gigabit Ethernet works reliably.

Cat6 (250 MHz): 1000BASE-T Ethernet. The most commonly installed cable in Finland according to the 2002 standard SFS-EN 50173-1.

Cat6e (250 MHz, 500 MHz according to some): 10GBASE-T (under development) Ethernet. Not a standard; a cable maker's own label.

Cat6a (500 MHz): 10GBASE-T (under development) Ethernet. Standard under development (ISO/IEC 11801:2002 Amendment 2).

Cat7 (600 MHz): no applications yet. Four pairs, U/FTP (shielded pairs). Standard under development.

Cat7a (1000 MHz): telephone, CATV and 1000BASE-T in the same cable. Four pairs, S/FTP (shielded pairs, braid-screened cable). Standard under development.

Cat8 (1200 MHz): under development, no applications yet. Four pairs, S/FTP (shielded pairs, braid-screened cable). Standard under development.

Advantages

It is a thin, flexible cable that is easy to string between walls. More lines can be run through the same wiring ducts.

UTP costs less per meter/foot than any other type of LAN cable.

Disadvantages

Twisted pair’s susceptibility to electromagnetic interference greatly depends on the pair twisting schemes (usually patented by the manufacturers) staying intact during the installation. As a result, twisted pair cables usually have stringent requirements for maximum pulling tension as well as minimum bend radius. This relative fragility of twisted pair cables makes the installation practices an important part of ensuring the cable’s performance.

In video applications that send information across multiple parallel signal wires, twisted pair cabling can introduce signaling delays known as skew which results in subtle color defects and ghosting due to the image components not aligning correctly when recombined in the display device. The skew occurs because twisted pairs within the same cable often use a different number of twists per meter so as to prevent common-mode crosstalk between pairs with identical numbers of twists. The skew can be compensated by varying the length of pairs in the termination box, so as to introduce delay lines that take up the slack between shorter and longer pairs, though the precise lengths required are difficult to calculate and vary depending on the overall cable length.
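The skew effect can be estimated with a back-of-the-envelope calculation. A sketch (the cable length, wound-length factors and velocity factor below are all assumed values; real cables vary):

```python
# Back-of-the-envelope skew estimate between two pairs of one cable.
# A tighter twist makes the conductor longer per metre of cable run,
# so its signal arrives later. All figures below are assumed values.
cable_len_m = 50.0
wound_factor = {"pair_a": 1.02, "pair_b": 1.05}   # conductor length per cable metre
v = 0.64 * 3e8                                     # signal velocity, ~64% of c

delay_ns = {p: cable_len_m * k / v * 1e9 for p, k in wound_factor.items()}
skew_ns = abs(delay_ns["pair_a"] - delay_ns["pair_b"])
print(f"skew over {cable_len_m:.0f} m: {skew_ns:.1f} ns")
```

Even a few nanoseconds of skew is enough to misalign colour components in analog video, which is why the termination-box delay lines mentioned above are needed.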

COLOR-CODE STANDARD

Again, please bear with me... Let's start with simple pin-out diagrams of the two types of UTP Ethernet cables and watch how committees can make a can of worms out of them. Here are the diagrams:

Note that the TX (transmitter) pins are connected to corresponding RX (receiver) pins, plus to plus and minus to minus, and that you must use a crossover cable to connect units with identical interfaces. If you use a straight-through cable, one of the two units must, in effect, perform the cross-over function.

Two wire color-code standards apply: EIA/TIA 568A and EIA/TIA 568B. The codes are commonly depicted with RJ-45 jacks as follows (the view is from the front of the jacks):

If we apply the 568A color code and show all eight wires, our pin-out looks like this:

Note that pins 4, 5, 7, and 8 and the blue and brown pairs are not used in either standard.   Quite contrary to what you may read elsewhere, these pins and wires are not used or required to implement 100BASE-TX duplexing--they are just plain wasted.

However, the actual cables are not physically that simple.  In the diagrams, the orange pair of wires are not adjacent.  The blue pair is upside-down.  The right ends match RJ-45 jacks and the left ends do not.  If, for example, we invert the left side of the 568A "straight"-thru cable to match a 568A jack--put one 180° twist in the entire cable from end-to-end--and twist together and rearrange the appropriate pairs, we get the following can-of-worms:

This further emphasizes, I hope, the importance of the word "twist" in making network cables which will work. You cannot use a flat untwisted telephone cable for a network cable. Furthermore, you must use a pair of twisted wires to connect a set of transmitter pins to their corresponding receiver pins. You cannot use a wire from one pair and another wire from a different pair.

Keeping the above principles in mind, we can simplify the diagram for a 568A straight-thru cable by untwisting  the wires, except the 180° twist in the entire cable, and bending the ends upward.  Likewise, if we exchange the green and orange pairs in the 568A diagram we will get a simplified diagram for a 568B straight-thru cable.  If we cross the green and orange pairs in the 568A diagram we will arrive at a simplified diagram for a crossover cable.  All three are shown below.
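Since the pin-out diagrams did not survive reproduction here, the two codes can be written out pin by pin. A small sketch (pin colours as defined by EIA/TIA 568A/568B) confirming that swapping the green and orange pairs of 568A yields 568B:

```python
# TIA/EIA-568A and 568B wire colors by RJ-45 pin (pins 1 through 8).
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

# Exchanging the green and orange pairs converts one code into the other.
swap = {"green": "orange", "orange": "green",
        "white/green": "white/orange", "white/orange": "white/green"}
assert [swap.get(color, color) for color in T568A] == T568B

# A straight-through cable uses the same code on both ends;
# a crossover cable uses 568A on one end and 568B on the other.
straight = list(zip(T568A, T568A))
crossover = list(zip(T568A, T568B))
print(crossover[0])   # ('white/green', 'white/orange')
```

Pins 4, 5, 7 and 8 (the blue and brown pairs) are identical in both codes, matching the note above that they are unused in either standard.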

5:- Study of installation of Novell NetWare.

Novell NetWare is a network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, with network protocols based on the archetypal Xerox Network Systems stack.

Novell NetWare

The NetWare console screen (August 22, 2006)

Company / developer Novell, Inc.

Working state Current

Source model Closed source

Initial release 1983

Latest stable release 6.5 SP8 / May 6, 2009

Available language(s) English

Kernel type Hybrid kernel

Default user interface Command line interface

License Proprietary

Official website www.novell.com

NetWare 286 2.x

NetWare version 2 had a reputation as notoriously difficult to configure, since the operating system was provided as a set of compiled object modules that required configuration and linking. Compounding this inconvenience was that the process was designed to run from multiple diskettes, which was slow and unreliable. Any change to the operating system required a re-linking of the kernel and a reboot of the system, requiring at least 20 diskette swaps. An additional complication in early versions was that the installation included COMPSURF, a proprietary low-level format program for MFM hard drives, which ran automatically before the software could be loaded.

NetWare was administered using text-based utilities such as SYSCON. The file system used by NetWare 2 was NetWare File System 286, or NWFS 286, supporting volumes of up to 256 MB. NetWare 286 recognized 80286 protected mode, extending NetWare's support of RAM from 1 MB to the full 16 MB addressable by the 80286. A minimum of 2 MB was required to start up the operating system; any additional RAM was used for FAT, DET and file caching. Since 16-bit protected mode was implemented in the i80286 and every subsequent Intel x86 processor, NetWare 286 version 2.x would run on any 80286 or later compatible processor.

NetWare 2 implemented a number of features inspired by mainframe and minicomputer systems that were not available in other operating systems of the day. The System Fault Tolerance (SFT) features included standard read-after-write verification (SFT-I) with on-the-fly bad block re-mapping (at the time, disks did not have that feature built in) and software RAID1 (disk mirroring, SFT-II). The Transaction Tracking System (TTS) optionally protected files against incomplete updates. For single files, this required only a file attribute to be set. Transactions over multiple files and controlled roll-backs were possible by programming to the TTS API.

NetWare 286 2.x supported two modes of operation: dedicated and non-dedicated. In dedicated mode, the server used DOS only as a boot loader to execute the operating system file net$os.exe. All memory was allocated to NetWare; no DOS ran on the server. For non-dedicated operation, DOS 3.3 or higher would remain in memory, and the processor would time-slice between the DOS and NetWare programs, allowing the server computer to be used simultaneously as a network file-server and as a user workstation. All extended memory (RAM above 1 MB) was allocated to NetWare, so DOS was limited to only 640kB; expanded memory managers that used the MMU of 80386 and higher processors, such as EMM386, would not work either, because NetWare 286 had control of protected mode and the upper RAM, both of which were required for DOS to use this approach to expanded memory; 8086-style expanded memory on dedicated plug-in cards was possible however. Time slicing was accomplished using the keyboard interrupt. This feature required strict compliance with the IBM PC design model, otherwise performance was affected. Non-dedicated NetWare was popular on small networks, although it was more susceptible to lockups due to DOS program problems. In some implementations, users would experience significant network slowdown when someone was using the console as a workstation. NetWare 386 3.x and later supported only dedicated operation.

Early versions of NetWare 2.x enforced licensing with a serialised key card installed in the server. To broaden the hardware base, particularly to machines using the IBM MCA bus, later versions of NetWare 2.x did not require the key card; serialised license floppy disks were used in place of the key cards.

NetWare 3.x

Starting with NetWare 3.x, support for 32-bit protected mode was added, eliminating the 16 MB memory limit of NetWare 286. This allowed larger hard drives to be supported, since NetWare 3.x cached (copied) the entire file allocation table (FAT) and directory entry table (DET) into memory for improved performance.

By accident or design, the initial releases of the client TSR programs modified the high 16 bits of the 32-bit 80386 registers, making them unusable by any other program until this was fixed. Phil Katz noticed the problem and added a switch to his PKZIP suite of programs to enable 32-bit register use only when the Netware TSRs were not present.

NetWare version 3 eased development and administration by modularization. Each functionality was controlled by a software module called a NetWare Loadable Module (NLM) loaded either at startup or when it was needed. It was then possible to add functionality such as anti-virus software, backup software, database and web servers, long name support (standard filenames were limited to 8 characters plus a three letter extension, matching MS-DOS) or Macintosh style files.

NetWare continued to be administered using console-based utilities. The file system introduced by NetWare 3.x and used by default until NetWare 5.x was NetWare File System 386, or NWFS 386, which significantly extended volume capacity (1 TB, 4 GB files) and could handle up to 16 volume segments spanning multiple physical disk drives. Volume segments could be added while the server was in use and the volume was mounted, allowing a server to be expanded without interruption.

Initially, NetWare used Bindery services for authentication. This was a stand-alone database system where all user access and security data resided individually on each server. When an infrastructure contained more than one server, users had to log-in to each of them individually, and each server had to be configured with the list of all allowed users.

The "NetWare Name Services" product allowed user data to be extended across multiple servers, and the Windows "Domain" concept is functionally equivalent to NetWare v3.x Bindery services with NetWare Name Services added on (e.g. a 2-dimensional database, with a flat namespace and a static schema).

For a while, Novell also marketed an OEM version of NetWare 3, called Portable NetWare, together with OEMs such as Hewlett-Packard, DEC and Data General, who ported Novell source code to run on top of their Unix operating systems. Portable NetWare did not sell well.

NetWare 4.x

Version 4 in 1993 also introduced NetWare Directory Services, later re-branded as Novell Directory Services (NDS), based on X.500, which replaced the Bindery with a global directory service in which the infrastructure was described and managed in a single place. Additionally, NDS provided an extensible schema, allowing the introduction of new object types. This allowed a single user authentication to NDS to govern access to any server in the directory tree structure. Users could therefore access network resources no matter on which server they resided, although user license counts were still tied to individual servers. (Large enterprises could opt for a license model giving them essentially unlimited per-server users if they let Novell audit their total user count.)

Version 4 also introduced a number of useful tools and features, such as transparent compression at file system level and RSA public/private encryption.

Another new feature was the NetWare Asynchronous Services Interface (NASI). It allowed network sharing of multiple serial devices, such as modems. Client port redirection occurred via an MS-DOS or Microsoft Windows driver allowing companies to consolidate modems and analog phone lines.

NetWare 5.x

With the release of NetWare 5 in October 1998, Novell finally acknowledged the prominence of the Internet by switching its primary NCP interface from the IPX/SPX network protocol to TCP/IP. IPX/SPX was still supported, but the emphasis shifted to TCP/IP. Novell also added a GUI to NetWare. Other new features were:

Novell Storage Services (NSS), a new file system to replace the traditional NetWare File System - which was still supported

Java virtual machine for NetWare

Novell Distributed Print Services (NDPS)

ConsoleOne, a new Java-based GUI administration console

directory-enabled Public key infrastructure services (PKIS)

directory-enabled DNS and DHCP servers

support for Storage Area Networks (SANs)

Novell Cluster Services (NCS)

Oracle 8i with a 5-user license

The Cluster Services greatly improved on SFT-III, as NCS does not require specialized hardware or identical server configurations.

NetWare 5 was released during a time when NetWare market share dropped precipitously; many companies and organizations were replacing their NetWare servers with servers running Microsoft's Windows NT operating system. Novell also released their last upgrade to the NetWare 4 operating system, NetWare 4.2.

Netware 5 and above supported Novell NetStorage for Internet-based access to files stored within Netware.

Novell released NetWare 5.1 in January 2000, shortly after its predecessor. It introduced a number of useful tools, such as:

IBM WebSphere Application Server

NetWare Management Portal (later renamed Novell Remote Manager), web-based management of the operating system

FTP, NNTP and streaming media servers

NetWare Web Search Server

WebDAV support

NetWare 6.0

NetWare 6 was released in October 2001. This version has a simplified licensing scheme based on users, not servers. This allows unlimited connections per user. Novell Cluster Services was also improved to support 32 node clusters. NetWare 6.0 included a 2 node clustering license.

NetWare 6.5

NetWare 6.5 was released in August 2003. Some of the new features in this version included:

more open-source products such as PHP, MySQL and OpenSSH

a port of the Bash shell and many traditional Unix utilities such as wget, grep, awk and sed, to provide additional capabilities for scripting

iSCSI support (both target and initiator)

Virtual Office - an "out of the box" web portal for end users providing access to e-mail, personal file storage, company address book, etc.

Domain controller functionality

Universal password

DirXML Starter Pack - synchronization of user accounts with another eDirectory tree, a Windows NT domain or Active Directory.

exteNd Application Server - a Java EE 1.3-compatible application server

support for customized printer driver profiles and printer usage auditing

NX bit support

support for USB storage devices

support for encrypted volumes

The latest - and apparently last - Service Pack for NetWare 6.5 is SP8, released in October 2008.

Current NetWare situation

As of 2010 some organizations still use Novell NetWare, but its ongoing decline in popularity began in the mid-1990s, when NetWare was the de facto standard for file and print software for the Intel x86 server platform. Modern (2009) NetWare and OES installations are used by larger organizations that may need the added flexibility they provide.

Microsoft successfully shifted market share away from NetWare products toward their own in the late-1990s. Microsoft's more aggressive marketing was aimed directly to management through major magazines; Novell NetWare's was through IT specialist magazines with distribution limited to select IT personnel.

Novell did not adapt their pricing structure accordingly and NetWare sales suffered at the hands of those corporate decision makers whose valuation was based on initial licensing fees. As a result, organizations that still use NetWare, eDirectory, and Novell software often have a hybrid infrastructure of NetWare, Linux, and Windows servers.

Netware Lite / Personal Netware

In 1991 Novell introduced a radically different and cheaper product - Netware Lite in answer to Artisoft's similar LANtastic. Both were peer to peer systems, where no specialist server was required, but instead all PCs on the network could share their resources.

The product line became Personal Netware in 1993.

Performance

NetWare dominated the network operating system (NOS) market from the mid-80s through the mid- to late-90s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and a SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware.

The reasons for NetWare's performance advantage are given below.

File service instead of disk service

At the time NetWare was first developed, nearly all LAN storage was based on the disk server model. This meant that if a client computer wanted to read a particular block from a particular file it would have to issue the following requests across the relatively slow LAN:

1. Read the first block of the directory
2. Continue reading subsequent directory blocks until the directory block containing the information on the desired file is found (this could be many directory blocks)
3. Read through multiple file entry blocks until the block containing the location of the desired file block is found (again, possibly many blocks)
4. Read the desired data block

NetWare, since it was based on a file service model, interacted with the client at the file API level:

1. Send a file open request (if this had not already been done)
2. Send a request for the desired data from the file

All of the work of searching the directory to figure out where the desired data was physically located on the disk was performed at high speed locally on the server. By the mid-1980s, most NOS products had shifted from the disk service to the file service model. Today, the disk service model is making a comeback; see SAN.

Aggressive caching

From the start, the NetWare design focused on servers with copious amounts of RAM. The entire file allocation table (FAT) was read into RAM when a volume was mounted, thereby requiring a minimum amount of RAM proportional to online disk space; adding a disk to a server would often require a RAM upgrade as well. Unlike most competing network operating systems prior to Windows NT, NetWare automatically used all otherwise unused RAM for caching active files, employing delayed write-backs to facilitate re-ordering of disk requests (elevator seeks). An unexpected shutdown could therefore corrupt data, making an uninterruptible power supply practically a mandatory part of a server installation.

The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled the amount of time the server would cache changed ("dirty") data before saving (flushing) the data to a hard drive. The default setting of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds. The option to increase the cache delay to 10 seconds provided a significant performance boost. Windows 2000 and 2003 server do not allow adjustment to the cache delay time. Instead, they use an algorithm that adjusts cache delay.
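The write re-ordering mentioned above ("elevator seeks") can be caricatured in a few lines. This is a sketch of the general idea only, not NetWare's actual flushing algorithm:

```python
# Caricature of elevator-style write-back ordering: flush dirty blocks
# in ascending block-number order from the current head position, then
# wrap around, instead of flushing in first-come-first-served order.
def elevator_order(dirty_blocks, head):
    ahead = sorted(b for b in dirty_blocks if b >= head)
    behind = sorted(b for b in dirty_blocks if b < head)
    return ahead + behind

pending = [90, 12, 55, 70, 8]               # arrival order of dirty blocks
print(elevator_order(pending, head=50))     # [55, 70, 90, 8, 12]
```

Delaying the flush (the dirty cache delay) gives the server a larger batch of pending writes to re-order, which is why raising the delay to 10 seconds could boost performance, at the cost of more data at risk on an unexpected shutdown.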

Efficiency of NetWare Core Protocol (NCP)

Most network protocols in use at the time NetWare was developed didn't trust the network to deliver messages. A typical client file read would work something like this:

1. Client sends read request to server
2. Server acknowledges request

3. Client acknowledges acknowledgement

4. Server sends requested data to client

5. Client acknowledges data

6. Server acknowledges acknowledgement

In contrast, NCP was based on the idea that networks worked perfectly most of the time, so the reply to a request served as the acknowledgement. Here is an example of a client read request using this model:

1. Client sends read request to server
2. Server sends requested data to client

All requests contained a sequence number, so if the client didn't receive a response within an appropriate amount of time it would re-send the request with the same sequence number. If the server had already processed the request it would resend the cached response, if it had not yet had time to process the request it would only send a "positive acknowledgement". The bottom line to this 'trust the network' approach was a 2/3 reduction in network transactions and the associated latency.
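A toy model of this "the reply is the acknowledgement" scheme (class and field names invented for the example; real NCP packets carry far more state):

```python
# Toy model of NCP-style implicit acknowledgement: the reply itself is
# the ack. Requests carry a sequence number, and the server caches its
# last reply so a retransmitted request receives the cached answer.
class ToyServer:
    def __init__(self, data):
        self.data = data
        self.last_seq = None
        self.last_reply = None

    def handle(self, seq, offset, length):
        if seq == self.last_seq:       # duplicate: the client never saw the reply
            return self.last_reply     # resend the cached response
        self.last_seq = seq
        self.last_reply = self.data[offset:offset + length]
        return self.last_reply

server = ToyServer(b"hello, netware")
print(server.handle(1, 0, 5))          # b'hello'
print(server.handle(1, 0, 5))          # retransmission: cached b'hello'
print(server.handle(2, 7, 7))          # b'netware'
```

The "positive acknowledgement" case (request received but not yet processed) is omitted here for brevity.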


6. Study of file systems.

1- FAT :-

File Allocation Table

FAT

Developer: Microsoft
Full name: File Allocation Table
Variants: FAT12 (12-bit version); FAT16/FAT16B (16-bit versions); FAT32 (32-bit version with 28 bits used)
Introduced: August 1980 (QDOS, FAT12); August 1984 (DOS 3.0, FAT16); November 1987 (Compaq DOS 3.31, FAT16B); August 1996 (Windows 95 OSR2, FAT32)
MBR partition type: 0x01 (FAT12); 0x04, 0x06, 0x0E (FAT16); 0x0B, 0x0C (FAT32)
GPT partition type: Basic data partition, EBD0A0A2-B9E5-4433-87C0-68B6B72699C7

Structures
Directory contents: Table
File allocation: Linked list
Bad blocks: Cluster tagging

Limits
Max file size: 4 GiB minus 1 byte (or block size if smaller)
Max cluster count: 4,077 (2^12 − 19) for FAT12; 65,517 (2^16 − 19) for FAT16; 268,435,437 (2^28 − 19) for FAT32
Max filename size: 8.3 filename, or 255 UTF-16 characters when using LFN
Max volume size: FAT12: 32 MB; FAT16: 2 GB, or 4 GB with 64 kB clusters (not widely supported); FAT32: 2 TB, or 8 TB with 32 kB clusters, or 16 TB with 64 kB clusters (not widely supported)

Features
Dates recorded: Creation, modified, access (accuracy to day only; creation time and access date are only available when ACCDATE support is enabled)
Date range: January 1, 1980 - December 31, 2107
Date resolution: 2 s
Forks: Not natively
Attributes: Read-only, hidden, system, volume label, subdirectory, archive, executable (NetWare only)
Permissions: Global/directory/file-based only with DR-DOS and Multiuser DOS; world/group/owner file permissions only with multiuser security; otherwise none
Transparent compression: Per-volume with Stacker, DoubleSpace, DriveSpace; otherwise none
Transparent encryption: Per-volume only with DR-DOS; otherwise none

File Allocation Table (FAT) is a computer file system architecture now widely used on many computer systems and most memory cards, such as those used with digital cameras. FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. For floppy disks, FAT has been standardized as ECMA-107 and ISO/IEC 9293. Those standards cover only FAT12 and FAT16 without long filename support; the long filename extension to FAT is partially patented.[4]

The FAT file system is relatively straightforward technically and is supported by virtually all existing operating systems for personal computers. This makes it a useful format for solid-state memory cards and a convenient way to share data between operating systems.

FAT12

The initial version of FAT is now referred to as FAT12. Designed as a file system for floppy disks, it limited cluster addresses to 12-bit values, which not only limited the cluster count to 4078,[6] but made FAT manipulation tricky with the PC's 8-bit and 16-bit registers. (Under Linux, FAT12 is limited to 4084 clusters.[7]) The disk's size is stored as a 16-bit count of sectors, which limited the size to 32 MB.[8] FAT12 was used by several manufacturers with different physical formats, but a typical floppy disk at the time was 5.25-inch, single-sided, with 40 tracks and 8 sectors per track, for a capacity of 160 kB covering both the system areas and files. The FAT12 limits exceeded this capacity by a factor of ten or more.
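The awkwardness of 12-bit entries on 8/16-bit registers is easy to see in code: two FAT12 entries share every three bytes, so reading one takes a 16-bit load plus a shift or mask. A minimal sketch (the function name is illustrative):

```python
def fat12_entry(fat: bytes, cluster: int) -> int:
    """Read the 12-bit FAT entry for `cluster` from a raw FAT12 table.

    Two 12-bit entries are packed into every three bytes, which is why
    FAT12 manipulation was tricky with the PC's 8- and 16-bit registers.
    """
    offset = cluster * 3 // 2                      # byte offset of the entry
    value = fat[offset] | (fat[offset + 1] << 8)   # little-endian 16-bit read
    if cluster % 2:
        return value >> 4                          # odd cluster: high 12 bits
    return value & 0x0FFF                          # even cluster: low 12 bits
```

For example, the three bytes 12 34 56 hold entry 0 = 0x412 and entry 1 = 0x563.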

By convention, all the control structures were organized to fit inside the first track, thus avoiding head movement during read and write operations, although this varied depending on the manufacturer and physical format of the disk. At the time FAT12 was introduced, DOS did not support hierarchical directories, and the maximum number of files was typically limited to a few dozen. Hierarchical directories were introduced in MS-DOS version 2.0.

A limitation which was not addressed until much later was that any bad sector in the control structures area, track 0, could prevent the disk from being usable. The DOS formatting tool rejected such disks completely. Bad sectors were allowed only in the file area, where they made the entire holding cluster unusable as well. FAT12 remains in use on all common floppy disks, including 1.44 MB ones.

Initial FAT16

In 1984, IBM released the PC AT, which featured a 20 MB hard disk. Microsoft introduced MS-DOS 3.0 in parallel. (The earlier PC XT was the first PC with a hard drive from IBM, and MS-DOS 2.0 supported that hard drive with FAT12.) Cluster addresses were increased to 16-bit, allowing for up to 65,517 clusters per volume, and consequently much greater file system sizes, at least in theory. However, the maximum possible number of sectors and the maximum (partition, rather than disk) size of 32 MB did not change. Therefore, although technically already "FAT16", this format was not what today is commonly understood as FAT16. With the initial implementation of FAT16 not actually providing for larger partition sizes than FAT12, the early benefit of FAT16 was to enable the use of smaller clusters, making disk usage more efficient, particularly for files several hundred bytes in size, which


were far more common at the time. MS-DOS 2.x hard disks larger than 15 MB are incompatible with later versions of MS-DOS.

A 20 MB hard disk formatted under MS-DOS 3.0 was not accessible by the older MS-DOS 2.0 because MS-DOS 2.0 did not support version 3.0's FAT-16. MS-DOS 3.0 could still access MS-DOS 2.0 style 8 kB-cluster partitions under 15 MB.

MS-DOS 3.0 also introduced support for high-density 1.2 MB 5.25" diskettes, which notably had 15 sectors per track, hence more space for the FATs. This probably prompted a dubious optimization of the cluster size, which went down from 2 sectors to just 1. The net effect was that high density diskettes were significantly slower than older double density ones.

Final FAT16

Finally, in November 1987, Compaq DOS 3.31 (an OEM version of MS-DOS 3.3 released by Compaq with their machines) introduced what is today called the FAT16 format, with the expansion of the 16-bit disk sector count to 32 bits. The result was initially called the DOS 3.31 Large File System. Although the on-disk changes were minor, the entire DOS disk driver had to be converted to use 32-bit sector numbers, a task complicated by the fact that it was written in 16-bit assembly language.

In 1988 this improvement became more generally available through MS-DOS 4.0 and OS/2 1.1. The limit on partition size was dictated by the 8-bit signed count of sectors per cluster, which had a maximum power-of-two value of 64. With the standard hard disk sector size of 512 bytes, this gives a maximum of 32 kB clusters, thereby fixing the "definitive" limit for the FAT16 partition size at 2 gigabytes. On magneto-optical media, which can have 1 or 2 kB sectors instead of 0.5 kB, this size limit is proportionally larger.

Much later, Windows NT increased the maximum cluster size to 64 kB by considering the sectors-per-cluster count as unsigned. However, the resulting format was not compatible with any other FAT implementation of the time, and it generated greater internal fragmentation. Windows 98 also supported reading and writing this variant, but its disk utilities did not work with it. This contributes to a confusing compatibility situation.

The number of root directory entries available is determined when the volume is formatted, and is stored in a 16-bit signed field, defining an absolute limit of 32767 entries (32736, a multiple of 32, in practice). For historical reasons, FAT12 and FAT16 media generally use 512 root directory entries on non-floppy media. Other sizes may be incompatible with some software or devices (entries being file and/or folder names in the original 8.3 format). [11]

Some third party tools like mkdosfs allow the user to set this parameter.

FAT32

In order to overcome the volume size limit of FAT16, while still allowing DOS (disk operating system) real-mode code to handle the format without unnecessarily reducing available conventional memory, Microsoft widened the cluster addresses, calling the new revision FAT32. Cluster values are represented by 32-bit numbers, of which 28 bits are used to hold the cluster number, for a maximum of approximately 268 million (2^28) clusters. This allows for drive sizes of up to 8 TiB with 32 KiB clusters, but the boot sector uses a 32-bit field for the sector count, limiting volume size to 2 TiB on a hard disk with 512-byte sectors.
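Both FAT32 limits quoted above fall out of simple arithmetic:

```python
# FAT32 limits, recomputed from the text: 28-bit cluster numbers,
# a 32-bit sector count in the boot sector, 512-byte sectors.
clusters = 2 ** 28                       # ~268 million cluster numbers
max_size_32k = clusters * 32 * 1024      # 8 TiB with 32 KiB clusters
boot_limit = (2 ** 32) * 512             # 2 TiB from the 32-bit sector count

print(max_size_32k // 2**40, boot_limit // 2**40)   # in TiB
```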


On Windows 95/98, because the version of Microsoft's DOS-mode SCANDISK utility included with those operating systems is a 16-bit application, the FAT structure is not allowed to grow beyond 4,177,920 (< 2^22) clusters, placing the volume limit at 127.5 GiB (≈137 GB).[13][14] The FAT32 drive formatting tools provided by Microsoft (fdisk and format) are therefore designed to scale the cluster size upwards as the volume size increases, preventing the cluster count from exceeding 4.177 million, and only reaching that number on 128 GB volumes with a 32 kB cluster size. Most appraisals of the efficiency of the FAT32 file system are rooted in this behavior: because FAT32 increases the cluster size with the volume size, small-file storage can become very inefficient when larger cluster sizes (e.g., 32 kB) are used. This behavior, however, is unnecessary in many cases; large FAT32 volumes can in fact be created with smaller cluster sizes (e.g., 4 kB) using alternate drive preparation tools.

Additionally, there are numerous reports that the DOS scandisk utility provided with Windows 98 (scandisk.exe) and even Windows 98 itself can in fact operate on FAT32 volumes with cluster counts much higher than the 4.177 million claimed by Microsoft, some of which have been in the 40 to 120 million range, thereby indicating that the scandisk utility can likely operate on volumes up to the upper limit of the FAT32 specifications. [15] A limitation in original versions of Windows 98/98SE's Fdisk utility causes it to incorrectly report disk sizes over 64 GiB. A corrected version is available from Microsoft, but it cannot partition drives larger than 512 GiB (≈550 GB).

Windows 98 (and presumably Windows ME) has been shown to be able to run from and correctly operate with volumes exceeding 128 GB (up to 1.5 TB in some reports) as well as with FAT32 volumes with 40 to 120 million clusters, although the native drive maintenance tools (scandskw.exe, defrag) have upper limits that are not yet known - but are reported to exceed 20 million clusters when using Windows ME versions of these tools. The Windows 2000/XP installation program and filesystem creation tool imposes a limitation of 32 GiB. However, both systems can read and write to FAT32 file systems of any size. This limitation is by design and according to Microsoft was imposed because many tasks on a very large FAT32 file system become slow and inefficient. This limitation can be bypassed by using third-party formatting utilities.

All versions of Windows prior to XP SP1 and 2000 SP4 lacked inherent support for 48-bit LBA drive access because of a limitation in their 32-bit protected-mode IDE driver, meaning that the maximum disk size for (parallel) ATA disks is 128 GiB (≈137 GB) without alternate drivers. Windows XP became fully 48-bit LBA capable with SP1 in 2002, but Microsoft did not release a patch for the 32-bit protected-mode driver for Windows 98/ME (ESDI_506.PDR) even though those OSes were still fully supported by Microsoft at the time. Intel did release an alternate IDE driver for Windows 9x/ME (Intel Application Accelerator) that provides full 48-bit LBA operability for the 800-series chipsets, and several individuals and user groups have modified Microsoft's ESDI_506.PDR driver to make it 48-bit LBA compliant for Windows 98. Most or all third-party drivers for SATA controllers are 48-bit LBA compliant, even when used under Windows 98/ME. All versions of DOS that are FAT32-aware are also 48-bit LBA capable, so long as 48-bit LBA is supported by the underlying hardware (i.e., motherboard/BIOS).

FAT32 was introduced with Windows 95 OSR2, although reformatting was needed to use it, and DriveSpace 3 (the version that came with Windows 95 OSR2 and Windows 98) never supported it. Windows 98 introduced a utility to convert existing hard disks from FAT16 to FAT32 without loss of data. In the NT line, native support for FAT32 arrived in Windows 2000. A free FAT32 driver for Windows NT 4.0 was available from Winternals, a company later acquired by Microsoft. Since the acquisition the driver is no longer officially available.


The maximum possible size for a file on a FAT32 volume is 4 GiB minus 1 byte, or 4,294,967,295 (2^32 − 1) bytes. Video applications, large databases, and some other software easily exceed this limit; larger files require another file system, such as NTFS.

Third party support

Further information: FAT filesystem and Linux

Other IBM PC operating systems—such as Linux, FreeBSD, BeOS and JNode—have all supported FAT, and most added support for VFAT and FAT32 shortly after the corresponding Windows versions were released. Early Linux distributions also supported a format known as UMSDOS, which was FAT with Unix file attributes (such as long file name and access permissions) stored in a separate file called “--linux-.---”. UMSDOS fell into disuse after VFAT was released and is not enabled by default in Linux kernels from version 2.5.7 onwards.[23] The Mac OS X operating system also supports the FAT file systems on volumes other than the boot disk. The Amiga supports FAT through the CrossDOS file system.

A free Windows-based FAT32 formatter is available that overcomes many of the arbitrary limitations imposed by Microsoft.

FATX

FATX is a slightly modified version of the FAT filesystem, designed for the hard disk drive and memory cards of Microsoft's Xbox video game console. FATX is not to be confused with exFAT, described below.

exFAT

exFAT (also sometimes incorrectly and inappropriately known as FAT64) is an incompatible replacement for FAT file systems that was introduced with Windows Embedded CE 6.0. Its MBR partition type is 0x7 (the same as NTFS). exFAT is intended to be used on SDXC cards and flash drives, where FAT is used today. Microsoft has provided a hotfix to add support for exFAT to Windows XP,[26] while Windows Vista Service Pack 1 added exFAT support to Windows Vista.

exFAT introduces a free space bitmap allowing faster space allocation and faster deletes, support for files up to 18,446,744,073,709,551,615 (2^64 − 1) bytes, larger cluster sizes (up to 32 MB in the first implementation), an extensible directory structure, and name hashes for faster filename comparisons. No short 8.3 filenames are stored. It does not have security ACLs or file system journaling like NTFS, though device manufacturers can choose to implement simplified support for transactions (a backup file allocation table used for write operations, with the primary FAT storing the last known good allocation table, which is essential for writeable removable media to mitigate corruption).

TFAT/TexFAT Transaction-Safe FAT File System

TFAT and TexFAT are layers over the FAT and exFAT file systems respectively that provide a level of transaction safety to reduce the risk of data loss in the event of a power outage or unexpected removal of the drive.

Design

Overview

The following is an overview of the order of structures in a FAT partition or disk:

Contents, in order:

1. Boot Sector
2. FS Information Sector (FAT32 only)
3. More reserved sectors (optional)
4. File Allocation Table #1
5. File Allocation Table #2
6. Root Directory (FAT12/FAT16 only)
7. Data Region (for files and directories) ... (to end of partition or disk)

Size in sectors:

Reserved region (1-3): number of reserved sectors
FAT region (4-5): (number of FATs) × (sectors per FAT)
Root Directory region (6): (number of root entries × 32) / (bytes per sector)
Data Region (7): (number of clusters) × (sectors per cluster)

A FAT file system is composed of four different sections.

The Reserved sectors, located at the very beginning. The first reserved sector (sector 0) is the Boot Sector (aka Partition Boot Record). It includes an area called the BIOS Parameter Block (with some basic file system information, in particular its type, and pointers to the location of the other sections) and usually contains the operating system's boot loader code. The total count of reserved sectors is indicated by a field inside the Boot Sector. Important information from the Boot Sector is accessible through an operating system structure called the Drive Parameter Block in DOS and OS/2. For FAT32 file systems, the reserved sectors include a File System Information Sector at sector 1 and a Backup Boot Sector at Sector 6.
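The BIOS Parameter Block fields mentioned above sit at fixed offsets in the boot sector. A minimal sketch of pulling out the basic geometry (the field offsets follow the classic little-endian DOS BPB layout, which is standard but not spelled out in this text; the function name is illustrative):

```python
import struct

def parse_bpb(boot_sector: bytes) -> dict:
    """Extract basic geometry fields from a FAT BIOS Parameter Block.

    Classic DOS BPB layout: bytes/sector at offset 11, sectors/cluster
    at 13, reserved sector count at 14, FAT count at 16, root entries
    at 17 -- all little-endian.
    """
    (bytes_per_sector, sectors_per_cluster, reserved_sectors,
     num_fats, root_entries) = struct.unpack_from("<HBHBH", boot_sector, 11)
    return {
        "bytes_per_sector": bytes_per_sector,
        "sectors_per_cluster": sectors_per_cluster,
        "reserved_sectors": reserved_sectors,
        "num_fats": num_fats,
        "root_entries": root_entries,
        # Root directory size in sectors, per the layout overview above:
        "root_dir_sectors": root_entries * 32 // bytes_per_sector,
    }
```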

The FAT Region. This typically contains two copies (may vary) of the File Allocation Table for the sake of redundancy checking, although the extra copy is rarely used, even by disk repair utilities. These are maps of the Data Region, indicating which clusters are used by files and directories. In FAT16 and FAT12 they immediately follow the reserved sectors.

The Root Directory Region. This is a Directory Table that stores information about the files and directories located in the root directory. It is only used with FAT12 and FAT16, and imposes on the root directory a fixed maximum size which is pre-allocated at creation of this volume. FAT32 stores the root directory in the Data Region, along with files and other directories, allowing it to grow without such a constraint. Thus, for FAT32, the Data Region starts here.

The Data Region. This is where the actual file and directory data is stored and takes up most of the partition. The size of files and subdirectories can be increased arbitrarily (as long as there are free clusters) by simply adding more links to the file's chain in the FAT. Note however, that files are allocated in units of clusters, so if a 1 kB file resides in a 32 kB cluster, 31 kB are wasted. FAT32 typically commences the Root Directory Table in cluster number 2: the first cluster of the Data Region.

FAT uses little endian format for entries in the header and the FAT(s).
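Extending a file "by simply adding more links to the file's chain in the FAT", as described above, implies that reading a file means walking that linked list. A minimal FAT16 sketch (the end-of-chain threshold 0xFFF8 is the conventional FAT16 value; the function name is illustrative):

```python
def cluster_chain(fat: bytes, first: int) -> list:
    """Follow a file's linked list of clusters through a raw FAT16 table.

    Each entry is 16-bit little-endian; values >= 0xFFF8 mark end-of-chain.
    """
    chain, cluster = [], first
    while cluster < 0xFFF8:
        chain.append(cluster)
        entry = cluster * 2                          # 2 bytes per FAT16 entry
        cluster = fat[entry] | (fat[entry + 1] << 8) # little-endian read
    return chain
```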

2- NTFS:-

NTFS

Developer: Microsoft
Full name: New Technology File System
Introduced: July 1993 (Windows NT 3.1)
Partition identifier: 0x07 (MBR); EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (GPT)

Structures
Directory contents: B+ tree
File allocation: Bitmap
Bad blocks: $BadClus

Limits
Max file size: 16 EiB − 1 KiB (format); 16 TiB − 64 KiB (implementation)
Max number of files: 4,294,967,295 (2^32 − 1)
Max filename length: 255 UTF-16 code units
Max volume size: 2^64 clusters − 1 cluster (format); 256 TiB − 64 KiB (implementation)
Allowed characters in filenames: in the POSIX namespace, any UTF-16 code unit (case sensitive) except U+0000 (NUL) and / (slash); in the Win32 namespace, any UTF-16 code unit (case insensitive) except U+0000 (NUL), / (slash), \ (backslash), : (colon), * (asterisk), ? (question mark), " (quote), < (less than), > (greater than) and | (pipe)

Features
Dates recorded: Creation, modification, POSIX change, access
Date range: 1 January 1601 - 28 May 60056 (file times are 64-bit counts of 100-nanosecond intervals, ten million per second, since 1601, spanning 58,000+ years)
Date resolution: 100 ns
Forks: Yes (see Alternate data streams below)
Attributes: Read-only, hidden, system, archive, not content indexed, off-line, temporary, compressed
File system permissions: ACLs
Transparent compression: Per-file, LZ77 (Windows NT 3.51 onward)
Transparent encryption: Per-file; DESX (Windows 2000 onward), Triple DES (Windows XP onward), AES (Windows XP SP1 and Windows Server 2003 onward)
Data deduplication: Yes[citation needed]
Supported operating systems: Windows NT family (Windows NT 3.1 to NT 4.0, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2); Mac OS X; GNU/Linux
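The 64-bit timestamp format (100-nanosecond intervals since 1 January 1601) converts to Unix time with a single constant offset: the well-known 11,644,473,600 seconds between the two epochs.

```python
from datetime import datetime, timezone

# 100-ns ticks from 1601-01-01 to 1970-01-01 (11,644,473,600 s x 10^7).
EPOCH_DELTA = 116444736000000000

def filetime_to_unix(ft: int) -> float:
    """Convert an NTFS timestamp (100-ns intervals since 1601) to Unix time."""
    return (ft - EPOCH_DELTA) / 10_000_000

print(datetime.fromtimestamp(filetime_to_unix(EPOCH_DELTA), tz=timezone.utc))
# 1970-01-01 00:00:00+00:00
```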

NTFS (New Technology File System) is the standard file system of Windows NT, including its later versions Windows 2000, Windows XP, Windows Server 2003, Windows Server 2008, Windows Vista, and Windows 7.

NTFS supersedes the FAT file system as the preferred file system for Microsoft’s Windows operating systems. NTFS has several improvements over FAT and HPFS (High Performance File System) such as improved support for metadata and the use of advanced data structures to improve performance, reliability, and disk space utilization, plus additional extensions such as security access control lists (ACL) and file system journaling.

Versions

The NTFS on-disk format has five released versions:

v1.0 with NT 3.1,[citation needed] released mid-1993

v1.1 with NT 3.5,[citation needed] released fall 1994

v1.2 with NT 3.51 (mid-1995) and NT 4 (mid-1996) (occasionally referred to as "NTFS 4.0", because the OS version is 4.0)

v3.0 from Windows 2000 ("NTFS V5.0" or "NTFS5")

v3.1 from Windows XP (autumn 2001; "NTFS V5.1"[citation needed]), also used by Windows Server 2003 (spring 2003; occasionally "NTFS V5.2"), Windows Vista and Windows Server 2008 (occasionally "NTFS V6.0"), and Windows 7 and Windows Server 2008 R2 (occasionally "NTFS V6.1")

V1.0 and V1.1 are incompatible: volumes written by NT 3.5x cannot be read by NT 3.1 until an update on the NT 3.5x CD is applied to NT 3.1, which also adds FAT long file name support. V1.2 supports compressed files, named streams, ACL-based security, and more. V3.0 added disk quotas, encryption, sparse files, reparse points, update sequence number (USN) journaling, and the $Extend folder and its files, and reorganized security descriptors so that multiple files using the same security setting can share the same descriptor. V3.1 expanded the Master File Table (MFT) entries with a redundant MFT record number (useful for recovering damaged MFT files).

Windows Vista introduced Transactional NTFS, NTFS symbolic links, partition shrinking, and self-healing functionality, though these features owe more to additional functionality of the operating system than to the file system itself.

The NTFS.sys version (i.e. NTFS v5.0 introduced with Windows 2000) should not be confused with the on-disk NTFS format version (v3.1 since Windows XP). The NTFS v3.1 on-disk format is unchanged from the introduction of Windows XP and is used in Windows Server 2003, Windows Server 2008, Windows Vista, and Windows 7. The confusion arises when no differentiation is made when features are implemented into the NTFS.sys driver within the Windows OS rather than in the NTFS on-disk format. An incident of this was when Microsoft detailed new features within NTFS in Windows 2000 and they called it


NTFS v5.0, yet it is the NTFS.sys driver that is at that version and the on-disk format is only at v3.0.

Features

NTFS v3.0 includes several new features over its predecessors: sparse file support, disk usage quotas, reparse points, distributed link tracking, and file-level encryption, also known as the Encrypting File System (EFS).

NTFS Log

NTFS is a journaling file system and uses the NTFS Log ($LogFile) to record metadata changes to the volume.

This is a critical piece of NTFS functionality (one that FAT/FAT32 does not provide): it ensures that the file system's complex internal data structures (notably the volume allocation bitmap, data moves performed by the defragmentation API, and modifications to MFT records such as moves of variable-length attributes and attribute lists) and its indices (for directories and security descriptors) remain consistent in case of a system crash, and it allows easy rollback of uncommitted changes to these critical data structures when the volume is remounted.

Alternate data streams (ADS)

Alternate data streams allow more than one data stream to be associated with a filename, using the filename format "filename:streamname" (e.g., "text.txt:extrastream"). Alternate streams are not listed in Windows Explorer, and their size is not included in the file's size. Only the main stream of a file is preserved when it is copied to a FAT-formatted USB drive, attached to an e-mail, or uploaded to a website. As a result, using alternate streams for critical data may cause problems. NTFS streams were introduced in Windows NT 3.1 to enable Services for Macintosh (SFM) to store Macintosh resource forks. Although current versions of Windows Server no longer include SFM, third-party Apple Filing Protocol (AFP) products (such as Group Logic's ExtremeZ-IP) still use this feature of the file system.

Malware has used alternate data streams to hide its code; some malware scanners and other special tools now check for data in alternate streams. Microsoft provides a tool called Streams to allow users to view streams on a selected volume.

Very small ADS are also added by Internet Explorer (and now other browsers) to mark files that have been downloaded from external sites: they may be unsafe to run locally, so the local shell requires confirmation from the user before opening them. When the user indicates that this confirmation is no longer wanted, the ADS is simply dropped from the MFT entry for the downloaded file.

Some media players have also tried to use ADS to store custom metadata in media files, in order to organize collections without modifying the effective data content of the files themselves (as embedded tags do when supported by media formats such as MPEG and OGG containers). This metadata can be displayed in Windows Explorer as extra information columns with the help of a registered Windows Shell extension that can parse it, but most media players prefer their own separate database over ADS for storing such information, notably because ADS are visible to all users of the files, rather than being managed with distinct per-user security settings and values set according to user preferences.

Transactional NTFS


As of Windows Vista, applications can use Transactional NTFS to group changes to files together into a transaction. The transaction will guarantee that all changes happen, or none of them do, and it will guarantee that applications outside the transaction will not see the changes until they are committed.

It uses similar techniques as those used for Volume Shadow Copies (i.e. copy-on-write) to ensure that overwritten data can be safely rolled back, and a CLFS log to mark the transactions that have still not been committed, or those that have been committed but still not fully applied (in case of system crash during a commit by one of the participants).

Transactional NTFS does not restrict transactions to the local NTFS volume: a transaction can also include other transactional data or operations in other locations, such as data stored on separate volumes, in the local registry, or in SQL databases, or the current states of local or remote system services. These transactions are coordinated network-wide with all participants through a dedicated service, the DTC, which ensures that all participants receive the same commit state and transports the changes validated by any participant (so that the others can invalidate their local caches of old data or roll back their ongoing uncommitted changes). Transactional NTFS allows, for example, the creation of network-wide consistent distributed file systems, including their local live or offline caches.

Encrypting File System (EFS)

EFS provides strong and user-transparent encryption of any file or folder on an NTFS volume. EFS works in conjunction with the EFS service, Microsoft's CryptoAPI, and the EFS File System Run-Time Library (FSRTL). EFS encrypts a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), because encrypting and decrypting large amounts of data with a symmetric cipher takes far less time than with an asymmetric cipher. The symmetric key used to encrypt the file is then encrypted with a public key associated with the user who encrypted the file, and this encrypted data is stored in an alternate data stream of the encrypted file. To decrypt the file, the file system uses the user's private key to decrypt the symmetric key stored in the file header, then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. In case a user loses access to their key, support for additional decryption keys has been built into the EFS system, so that a recovery agent can still access the files if needed. NTFS-provided encryption and compression are mutually exclusive; NTFS can be used for one and a third-party tool for the other.
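The hybrid-key pattern just described can be illustrated with a toy sketch. All names here are invented, and the SHA-256-keystream "cipher" is a stand-in for illustration only, not real cryptography; real EFS also wraps the FEK with the user's asymmetric public key, whereas this sketch uses a symmetric stand-in for both steps.

```python
import os, hashlib

# Toy illustration of EFS's hybrid scheme: a random per-file key (FEK)
# encrypts the bulk data; only the small FEK is then protected with the
# user's key. NOT cryptographically sound -- structure only.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256-derived keystream (involutive toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "little")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt_file(data: bytes, user_key: bytes):
    fek = os.urandom(32)                         # per-file symmetric key
    ciphertext = keystream_xor(fek, data)        # fast bulk encryption
    wrapped_fek = keystream_xor(user_key, fek)   # protect only the small FEK
    return ciphertext, wrapped_fek

def decrypt_file(ciphertext: bytes, wrapped_fek: bytes, user_key: bytes):
    fek = keystream_xor(user_key, wrapped_fek)   # recover the FEK first
    return keystream_xor(fek, ciphertext)        # then decrypt the data
```

The design point this captures: the expensive (in EFS, asymmetric) operation touches only the 32-byte FEK, never the bulk file data.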

The support of EFS is not available in Basic, Home and MediaCenter versions of Windows, and must be activated after installation of Professional, Ultimate and Server versions of Windows or by using enterprise deployment tools within Windows domains.

Quotas

Disk quotas were introduced in NTFS v3. They allow the administrator of a computer running a version of Windows that supports NTFS to set a threshold of disk space that users may use, and to keep track of how much disk space each user is using. An administrator may specify a level of disk space at which a user receives a warning, and then deny access once the user hits their upper limit. Disk quotas do not take into account NTFS's transparent file compression, should it be enabled. Applications that query the amount of free space will also see only the amount of free space left to a user who has a quota applied to them.


The support of disk quotas is not available in Basic, Home and MediaCenter versions of Windows, and must be activated after installation of Professional, Ultimate and Server versions of Windows or by using enterprise deployment tools within Windows domains.

Internals

In NTFS, all file data (file name, creation date, access permissions, and contents) are stored as metadata in the Master File Table. This abstract approach allowed easy addition of file system features during Windows NT's development; an interesting example is the addition of fields for indexing used by the Active Directory software.

NTFS allows any sequence of 16-bit values for name encoding (file names, stream names, index names, etc.). This means UTF-16 codepoints are supported, but the file system does not check whether a sequence is valid UTF-16 (it allows any sequence of short values, not restricted to those in the Unicode standard).

Internally, NTFS uses B+ trees to index file system data. Although complex to implement, this allows faster file lookup times in most cases. A file system journal is used to guarantee the integrity of the file system metadata, but not of individual files' content. Systems using NTFS are known to have improved reliability compared to FAT file systems.

The Master File Table (MFT) contains metadata about every file, directory, and metafile on an NTFS volume. It includes filenames, locations, sizes, and permissions. Its structure supports algorithms which minimize disk fragmentation. A directory entry consists of a filename and a "file ID", which is the record number representing the file in the Master File Table. The file ID also contains a reuse count to detect stale references. While this strongly resembles the W_FID of Files-11, other NTFS structures radically differ.

Metafiles

NTFS contains several files which define and organize the file system. In most respects these files are structured like any other user file ($Volume being the most peculiar), but they are not of direct interest to file system clients. These metafiles define files, back up critical file system data, buffer file system changes, manage free space allocation, satisfy BIOS expectations, track bad allocation units, and store security and disk space usage information. All content is in an unnamed data stream, unless otherwise indicated.

Segment Number | File Name | Purpose

0 | $MFT | Describes all files on the volume, including file names, timestamps, stream names, lists of cluster numbers where data streams reside, indexes, security identifiers, and file attributes like "read only", "compressed", "encrypted", etc.

1 | $MFTMirr | Duplicate of the first vital entries of $MFT, usually 4 entries (4 KiB).

2 | $LogFile | Contains the transaction log of file system metadata changes.

3 | $Volume | Contains information about the volume, namely the volume object identifier, volume label, file system version, and volume flags (mounted, chkdsk requested, requested $LogFile resize, mounted on NT 4, volume serial number updating, structure upgrade request). This data is not stored in a data stream but in special MFT attributes: if present, a volume object ID is stored in an $OBJECT_ID record; the volume label is stored in a $VOLUME_NAME record; and the remaining volume data is in a $VOLUME_INFORMATION record. Note: the volume serial number is stored in the file $Boot (below).

4 | $AttrDef | A table of MFT attributes which associates numeric identifiers with names.

5 | . | Root directory. Directory data is stored in $INDEX_ROOT and $INDEX_ALLOCATION attributes, both named $I30.

6 | $Bitmap | An array of bit entries: each bit indicates whether its corresponding cluster is used (allocated) or free (available for allocation).

7 | $Boot | Volume boot record. This file is always located at the first clusters on the volume. It contains bootstrap code (see NTLDR/BOOTMGR) and a BIOS parameter block including a volume serial number and the cluster numbers of $MFT and $MFTMirr. $Boot is usually 8192 bytes long.

8 | $BadClus | A file which contains all the clusters marked as having bad sectors. This file simplifies cluster management by the chkdsk utility, both as a place to put newly discovered bad sectors and for identifying unreferenced clusters. This file contains two data streams, even on volumes with no bad sectors: an unnamed stream contains bad sectors (it is zero length for perfect volumes); the second stream is named $Bad and contains all clusters on the volume not in the first stream.

9 | $Secure | Access control list database which reduces the overhead of storing many identical ACLs with each file, by storing these ACLs uniquely in this database only. It contains two indices, $SII (Standard_Information ID) and $SDH (Security Descriptor Hash), which index the stream named $SDS containing the actual ACL table.

10 | $UpCase | A table of Unicode uppercase characters for ensuring case insensitivity in the Win32 and DOS namespaces.

11 | $Extend | A filesystem directory containing various optional extensions, such as $Quota, $ObjId, $Reparse or $UsnJrnl.

12 ... 23 | | Reserved for $MFT extension entries.

usually 24 | $Extend\$Quota | Holds disk quota information. Contains two index roots, named $O and $Q.

usually 25 | $Extend\$ObjId | Holds distributed link tracking information. Contains an index root and allocation named $O.

usually 26 | $Extend\$Reparse | Holds reparse point data (such as symbolic links). Contains an index root and allocation named $R.

27 ... | file.ext | Beginning of regular file entries.

These metafiles are treated specially by Windows and are difficult to view directly: special purpose-built tools are needed. One such tool is nfi.exe, the "NTFS File Sector Information Utility", which is freely distributed as part of the Microsoft "OEM Support Tools" package.
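The cluster bitmap kept in $Bitmap lends itself to a short sketch: each bit maps to one cluster, and an allocator scans for a clear bit. (The bit ordering within a byte and the function name are illustrative assumptions, not the on-disk specification.)

```python
def find_free_cluster(bitmap: bytes) -> int:
    """Scan a $Bitmap-style allocation bitmap: each bit marks one
    cluster, 1 = allocated, 0 = free. Returns the first free cluster
    index, or -1 if no cluster is free. Bits are read LSB-first
    within each byte (an illustrative choice)."""
    for byte_index, byte in enumerate(bitmap):
        if byte != 0xFF:                       # at least one free bit here
            for bit in range(8):
                if not (byte >> bit) & 1:
                    return byte_index * 8 + bit
    return -1
```

Skipping fully-set (0xFF) bytes before testing individual bits is the simple optimization that makes whole-volume scans tolerable.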

Limitations

The following are a few limitations of NTFS:

File Names

File names are limited to 255 UTF-16 code points. Certain names are reserved in the volume root directory and cannot be used for files. These are: $MFT, $MFTMirr, $LogFile, $Volume, $AttrDef, . (dot), $Bitmap, $Boot, $BadClus, $Secure, $UpCase, and $Extend. Of these, . (dot) and $Extend are directories; the others are files. The NT kernel limits full paths to 32,767 UTF-16 code points.
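A minimal validity check for a root-directory file name, based on the two limits above (the function name is illustrative; real NTFS enforces further character restrictions not shown here):

```python
# Reserved names in the NTFS volume root, as listed above.
RESERVED_ROOT_NAMES = {
    "$MFT", "$MFTMirr", "$LogFile", "$Volume", "$AttrDef", ".",
    "$Bitmap", "$Boot", "$BadClus", "$Secure", "$UpCase", "$Extend",
}

def valid_root_filename(name: str) -> bool:
    """Check the two limits described above: at most 255 UTF-16
    code units per name, and no clash with a reserved root name."""
    utf16_units = len(name.encode("utf-16-le")) // 2
    return utf16_units <= 255 and name not in RESERVED_ROOT_NAMES
```
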

Maximum Volume Size

In theory, the maximum NTFS volume size is 2^64 − 1 clusters. However, the maximum NTFS volume size as implemented in Windows XP Professional is 2^32 − 1 clusters. For example, using 64 KiB clusters, the maximum Windows XP NTFS volume size is 256 TiB minus 64 KiB. Using the default cluster size of 4 KiB, the maximum NTFS volume size is 16 TiB minus 4 KiB. (Both of these are vastly higher than the 128 GiB limit lifted in Windows XP SP1.) Because partition tables on master boot record (MBR) disks support partition sizes only up to 2 TiB, dynamic or GPT volumes must be used to create NTFS volumes over 2 TiB. Booting from a GPT volume to a Windows environment requires a system with EFI and 64-bit support.
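The arithmetic behind these implementation limits can be checked directly: 2^32 − 1 addressable clusters multiplied by the cluster size.

```python
KIB = 1024
TIB = 1024 ** 4

# Windows XP's implementation limit: 2^32 - 1 addressable clusters.
max_clusters = 2 ** 32 - 1

# With 64 KiB clusters: 256 TiB minus 64 KiB.
assert max_clusters * 64 * KIB == 256 * TIB - 64 * KIB

# With the default 4 KiB clusters: 16 TiB minus 4 KiB.
assert max_clusters * 4 * KIB == 16 * TIB - 4 * KIB
```
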

Maximum File Size

As designed, the maximum NTFS file size is 16 EiB (16 × 1024^6 bytes) minus 1 KiB, or 18,446,744,073,709,550,592 bytes. As implemented, the maximum NTFS file size is 16 TiB minus 64 KiB, or 17,592,185,978,880 bytes.
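Both figures follow from the stated powers of 1024:

```python
KIB = 1024
TIB = 1024 ** 4
EIB = 1024 ** 6

# Design limit: 16 EiB minus 1 KiB.
assert 16 * EIB - 1 * KIB == 18_446_744_073_709_550_592

# Implementation limit: 16 TiB minus 64 KiB.
assert 16 * TIB - 64 * KIB == 17_592_185_978_880
```
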

Alternate Data Streams

Windows system calls may handle alternate data streams. However, depending on the operating system, utility and remote file system, a file transfer might silently strip data streams. A safe way of copying or moving files is to use the BackupRead and BackupWrite system calls, which allow programs to enumerate streams, to verify whether each stream should be written to the destination volume, and to knowingly skip unwanted streams.

3. Ext2/3

EXT2 FILE SYSTEM

The ext2 or second extended filesystem is a file system for the Linux kernel. It was initially designed by Rémy Card as a replacement for the extended file system (ext).

The canonical implementation of ext2 is the ext2fs filesystem driver in the Linux kernel. Other implementations (of varying quality and completeness) exist in GNU Hurd, MINIX 3, Mac OS X (third-party), Darwin (same third-party as Mac OS X but untested), some BSD kernels, in Atari MiNT, and as third-party Microsoft Windows drivers.

ext2 was the default filesystem in several Linux distributions, including Debian and Red Hat Linux, until supplanted more recently by ext3, which is almost completely compatible with ext2 and is a journaling file system. ext2 is still a filesystem of choice for flash-based storage media (such as SD cards and USB flash drives), since its lack of a journal minimizes the number of writes, and flash devices have only a limited number of write cycles. Recent kernels, however, support a journal-less mode of ext4, which offers the same benefit along with a number of ext4-specific benefits.

Ext2 data structures

The space in ext2 is split up into blocks. These blocks are divided into block groups, analogous to cylinder groups in the Unix File System. There are typically thousands of blocks on a large file system. Data for any given file is typically contained within a single block group where possible. This is done to reduce external fragmentation and minimize the number of disk seeks when reading a large amount of consecutive data.

Each block group contains a copy of the superblock and block group descriptor table, and all block groups contain a block bitmap, an inode bitmap, an inode table and, finally, the actual data blocks. The group descriptor stores the location of the block bitmap, inode bitmap and the start of the inode table for every block group, and these, in turn, are stored in a group descriptor table.

Inodes

Every file or directory is represented by an inode. The inode includes data about the size, permissions, ownership, and on-disk location of the file or directory.

Example of ext2 inode structure:

Quoting the Linux kernel documentation for ext2:

"There are pointers to the first 12 blocks which contain the file's data in the inode. There is a pointer to an indirect block (which contains pointers to the next set of blocks), a pointer to a doubly-indirect block (which contains pointers to indirect blocks) and a pointer to a trebly-indirect block (which contains pointers to doubly-indirect blocks)."

So, there is a structure in ext2 that has 15 pointers: the first 12 are for direct blocks, pointer number 13 points to an indirect block, number 14 to a doubly-indirect block, and number 15 to a trebly-indirect block.
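From these 15 pointers, the maximum number of addressable data blocks follows directly. The sketch below assumes 4-byte block pointers, the value used by ext2:

```python
def ext2_max_blocks(block_size: int, ptr_size: int = 4) -> int:
    """Data blocks addressable by the 15-pointer ext2 inode:
    12 direct, plus one singly-, one doubly- and one trebly-indirect
    block, each level multiplying by the pointers-per-block count."""
    ptrs = block_size // ptr_size      # pointers held by one indirect block
    return 12 + ptrs + ptrs ** 2 + ptrs ** 3

# With 1 KiB blocks, each indirect block holds 256 pointers, so the
# inode can address 12 + 256 + 256^2 + 256^3 blocks in total.
```
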

Directories

Each directory is a list of directory entries. Each directory entry associates one file name with one inode number, and consists of the inode number, the length of the file name, and the actual text of the file name. To find a file, the directory is searched front-to-back for the associated filename. For reasonable directory sizes this is fine, but for very large directories it is inefficient, and ext3 offers a second way of storing directories that is more efficient than a plain list of filenames.

The special directories "." and ".." are implemented by storing the names "." and ".." in the directory, and the inode number of the current and parent directories in the inode field. The only special treatment these two entries receive is that they are automatically created when any new directory is made, and they cannot be deleted.
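The front-to-back scan described above amounts to a linear search over (inode number, name) pairs; the data layout here is a simplification of the on-disk entry format:

```python
def lookup(directory, name):
    """Front-to-back linear scan of ext2-style directory entries.
    Each entry is an (inode_number, filename) pair; returns the inode
    number, or None if the name is absent. This O(n) scan is what
    makes very large ext2 directories slow, motivating ext3's HTree
    index."""
    for inode, entry_name in directory:
        if entry_name == name:
            return inode
    return None

# Example directory: "." and ".." are ordinary entries pointing at the
# current and parent directories, exactly as described above.
home = [(2, "."), (1, ".."), (12, "notes.txt"), (13, "src")]
```
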

Allocating Data

When a new file or directory is created, the ext2 file system must decide where to store the data. If the disk is mostly empty, then data can be stored almost anywhere. The ext2 file system attempts to allocate each new directory in the group containing its parent directory, on the theory that accesses to parent and child directories are likely to be closely related. The ext2 file system also attempts to place files in the same group as their directory entries, because directory accesses often lead to file accesses.
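The placement heuristic can be sketched as follows; the function name and the fall-back rule (pick the group with the most free blocks) are illustrative simplifications of the real allocator:

```python
def pick_block_group(parent_group, free_blocks):
    """Sketch of the ext2 placement heuristic described above: prefer
    the parent directory's block group if it still has free blocks,
    keeping related data together; otherwise fall back to the group
    with the most free space (an illustrative fall-back rule)."""
    if free_blocks[parent_group] > 0:
        return parent_group
    return max(range(len(free_blocks)), key=lambda g: free_blocks[g])
```
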

EXT3 FILE SYSTEM

The ext3 or third extended filesystem is a journaled file system commonly used by the Linux kernel. It is the default file system for many popular Linux distributions. Stephen Tweedie first revealed that he was working on extending ext2 in a 1998 paper, Journaling the Linux ext2fs Filesystem, and later in a February 1999 kernel mailing list posting; the filesystem was merged into the mainline Linux kernel in November 2001, from 2.4.15 onward. Its main advantage over ext2 is journaling, which improves reliability and eliminates the need to check the file system after an unclean shutdown. Its successor is ext4.

Advantages

Although its performance (speed) is less attractive than competing Linux filesystems such as ext4, JFS, ReiserFS and XFS, it has a significant advantage in that it allows in-place upgrades from the ext2 file system without having to back up and restore data. Benchmarks suggest that ext3 also uses less CPU power than ReiserFS and XFS. It is also considered safer than the other Linux file systems, due to its relative simplicity and wider testing base.

The ext3 file system adds, over its predecessor:

- A journaling file system
- Online file system growth
- HTree indexing for larger directories. An HTree is a specialized version of a B-tree (not to be confused with the H tree fractal).


Without these features, any ext3 file system is also a valid ext2 file system. This has allowed well-tested and mature ext2 maintenance and repair utilities to be used with ext3 without major changes. The ext2 and ext3 file systems share the same standard set of utilities, e2fsprogs, which includes an fsck tool. The close relationship also makes conversion between the two file systems (both forward to ext3 and backward to ext2) straightforward.
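The conversion mentioned above is performed with the shared e2fsprogs toolset. A hedged sketch, assuming an unmounted filesystem on the illustrative device /dev/sdb1:

```shell
# Forward: add a journal to an existing ext2 filesystem, making it ext3.
tune2fs -j /dev/sdb1

# Then mount it (and update /etc/fstab) with the ext3 type:
mount -t ext3 /dev/sdb1 /mnt/data

# Backward: an ext3 volume can simply be mounted as ext2, which ignores
# the journal; to strip the journal feature permanently instead:
tune2fs -O ^has_journal /dev/sdb1
e2fsck -f /dev/sdb1
```

Because the on-disk layout is shared, no data needs to be copied in either direction; only the journal and the feature flag change.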

While in some contexts the lack of "modern" filesystem features such as dynamic inode allocation and extents could be considered a disadvantage, in terms of recoverability this gives ext3 a significant advantage over file systems with those features. The file system metadata is all in fixed, well-known locations, and there is some redundancy inherent in the data structures that may allow ext2 and ext3 to be recoverable in the face of significant data corruption, where tree-based file systems may not be recoverable.
