
INTRODUCTION TO WEB TECHNOLOGY (TIT-503)

UNIT I: Introduction and Web Development Strategies

History of Web, Protocols governing Web, Creating Websites for Individual and Corporate World, Cyber Laws, Web Applications, Writing Web Projects, Identification of Objects, Target Users, Web Team, Planning and Process Development.

INTRODUCTION AND WEB DEVELOPMENT STRATEGIES / WEB TEAM

The Internet, commonly known as the WEB, is defined as a network of networks. The phrase 'network of networks' carries a definition in itself. In the early stages of networking, only homogeneous systems were able to communicate. As the technology grew, new devices and software emerged that allow heterogeneous networks to behave as a single group, and the Internet is a collection of such heterogeneous and homogeneous networks. The technologies underlying the Internet allow one network to communicate with another transparently. Today the Internet touches almost every aspect of daily life, so well-defined strategies are required both to develop and to use this emerging technology. The emergence of e-commerce and its vast use by banks and other corporations has forced serious thinking about these development strategies. Such development and use fall under a body of law commonly known as CYBER LAW (dealt with in detail later on), and organizations and individuals are bound to follow these rules and regulations.

Prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe method. In the 1960s, computer researchers such as J. C. R. Licklider and Robert W. Taylor pioneered calls for a joined-up global network to address interoperability problems. Concurrently, several research programs began to research principles of networking between separate physical networks, and this led to the development of packet switching. These included the research programs of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and UCLA.

This led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and X.25. Additionally, public access and hobbyist networking systems grew in popularity, including UUCP. They were, however, still disjointed, separate networks, served only by limited gateways between networks. This led to the application of packet switching to develop a protocol for inter-networking, where multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. This spread of inter-networking began to form into the idea of a global inter-network that would be called 'The Internet', and this began to spread quickly as existing networks were converted to become compatible with it. It spread quickly across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.

Following commercialization and the introduction of privately run Internet Service Providers in the 1980s, and its expansion into popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by e-mail, text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations also led to the inflation and collapse of the dot-com bubble, a major market collapse. Despite this, growth of the Internet continued, and still does.

HISTORY OF WEB AND WEB GOVERNING PROTOCOLS

In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Sullivan, Illinois, on automated theorem proving and artificial intelligence.

In October 1962, Licklider was appointed head of the information processing office at the United States Department of Defense's Advanced Research Projects Agency, now known as DARPA. There he formed an informal group within DARPA to further computer research. As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). Licklider's identified need for inter-networking was made evident by the problems this arrangement caused.

At the tip of the inter-networking problem lay the issue of connecting separate physical networks to form one logical network, with much wasted capacity inside the assorted separate networks. During the 1960s, Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock (MIT) developed and implemented packet switching. The notion that the Internet was developed to survive a nuclear attack has its roots in the early theories developed by RAND, but is an urban legend, not supported by any Internet Engineering Task Force or other document. Early networks used for the command and control of nuclear forces were message switched, not packet-switched, although current strategic military networks are, indeed, packet-switching and connectionless. Baran's research had approached packet switching from studies of decentralisation to avoid combat damage compromising the entire network.

Promoted to head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969. By 5 December 1969, a four-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.

ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and University College London.

X.25 AND PUBLIC ACCESS


Following on from ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976. This standard was based on the concept of virtual circuits.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[7]

Unlike ARPANET, X.25 was also commonly available for business use. Telenet offered its Telemail electronic mail service, but this was oriented to enterprise use rather than the general email of ARPANET.

The first dial-in public networks used asynchronous TTY terminal protocols to reach a concentrator operated by the public network. Some public networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. There were also the America Online (AOL) and Prodigy dial-in networks and many bulletin board system (BBS) networks such as FidoNet. FidoNet in particular was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages over a serial line with the nearby University of North Carolina at Chapel Hill. Following the public release of the software, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.

Merging the networks and creating the Internet (TCP/IP)

INTERNET PROTOCOL SUITE

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.[8]

At this time, the earliest known use of the term Internet was by Vinton Cerf, who wrote the "Specification of Internet Transmission Control Program". With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem.

DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977 a three-network demonstration was conducted, including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network, all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-to-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On 1 January 1983, TCP/IP protocols became the only approved protocols on the ARPANET, replacing the earlier NCP protocol.

ARPANET to Several Federal Wide Area Networks: MILNET, NSI, and NSFNet

ARPANET and NSFNet

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based around the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

More explicitly, NASA developed a TCP/IP-based Wide Area Network, the NASA Science Network (NSN), in the mid 1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center, creating the first multiprotocol wide area network, called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1984 NSF developed CSNET exclusively based on TCP/IP. CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. This grew into the NSFNet backbone, established in 1986 and intended to connect and provide access to a number of supercomputing centers established by the NSF.[12]

TRANSITION TOWARD AN INTERNET

The term "Internet" was adopted in the first RFC published on the

TCP protocol (RFC 675: Internet Transmission Control Program,

December 1974). It was around the time when ARPANET was

interlinked with NSFNet, that the term Internet came into more

general use,[14] with "an internet" meaning any network using

TCP/IP. "The Internet" came to mean a global and large network

using TCP/IP. Previously "internet" and "internetwork" had been

used interchangeably, and "internet protocol" had been used to

refer to other networking systems such as Xerox Network Services.

As interest in widespread networking grew and new applications for it arrived, the Internet's technologies spread throughout the rest of the world. TCP/IP's network-agnostic approach meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.

Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple e-mail peering, such as allowing access to FTP sites via UUCP or e-mail.

TCP/IP BECOMES WORLDWIDE

The first ARPANET connection outside the US was established to NORSAR in Norway in 1973, just ahead of the connection to Great Britain. These links were all converted to TCP/IP in 1982, at the same time as the rest of the ARPANET.

CERN, the European Internet, the link to the Pacific and beyond

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system, CERNET, internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989.

In 1988 Daniel Karrenberg, from CWI in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.

The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNet in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

DIGITAL DIVIDE

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience as more and more transmission facilities go into place.

AFRICA

At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400-baud modem UUCP links for international and internetwork computer communications. In 1996 a USAID-funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.

There is a wide range of programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (EASSy) has broken off and may become two efforts.

ASIA AND OCEANIA

The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1995, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.

LATIN AMERICA

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates a DNS root, reverse DNS, and other key services.

OPENING THE NETWORK TO COMMERCE

The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.

During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first dial-up ISP on the West Coast was Best Internet[22] (now Verio), opened in 1986. The first dial-up ISP in the East was world.std.com, opened in 1989.

This caused controversy amongst university users, who were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.

By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. In 1994, the NSFNet, now renamed ANSNET (Advanced Networks and Services) and allowing non-profit corporations access, lost its standing as the backbone of the Internet. Both government institutions and competing commercial providers created their own backbones and interconnections. Regional network access points (NAPs) became the primary interconnections between the many networks, and the final commercial restrictions ended.

IETF AND A STANDARD FOR STANDARDS

The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for the system to function.

The liberal Request for Comments (RFC) publication procedure engendered confusion about the Internet standardization process, and led to more formalization of officially accepted standards. The IETF started in January 1986 as a quarterly meeting of U.S. government-funded researchers. Representatives from non-government vendors were invited starting with the fourth IETF meeting in October of that year.

Acceptance of an RFC by the RFC Editor for publication does not automatically make the RFC into a standard. It may be recognized as such by the IETF only after experimentation, use, and acceptance have proved it to be worthy of that designation. Official standards are numbered with the prefix "STD" and a number, similar to the RFC naming style. However, even after becoming a standard, most are still commonly referred to by their RFC number.

In 1992, the Internet Society, a professional membership society, was formed, and the IETF was transferred to operation under it as an independent international standards body.


NIC, InterNIC, IANA and ICANN

The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network - Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments, under a United States Department of Defense contract.[23] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocation of addresses and management of the address databases, and awarded the contract to three organizations. Registration services would be provided by Network Solutions; directory and database services would be provided by AT&T; and information services would be provided by General Atomics.

In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the US Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.

USE AND CULTURE

E-mail and Usenet

E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.

The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.

A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNet, as well as to hosts connected directly to other sites via UUCP.

In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNet similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).

From Gopher to the WWW

As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.

One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard. Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.

In 1989, whilst working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread. One early popular web browser, modeled after HyperCard, was ViolaWWW.

Scholars generally agree, however, that the turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991, also known as the Gore Bill. Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign. See the full article Al Gore and information technology.)

Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event, held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."

24 Hours in Cyberspace, the "largest one-day online event" up to that date (February 8, 1996), took place on the then-active website cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on 23 January 1997, featuring 70 photos from the project.[40]

Search engines

Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web, but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.

As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular; Yahoo! (founded 1995) and Altavista (founded 1995) were the respective industry leaders.

By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.

Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as the use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.

Dot-com bubble

The suddenly low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app: it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models, and ran to their nearest venture capitalist. Of course some of the new entrepreneurs were truly talented at business administration, sales, and growth; but the majority were just people with ideas, and didn't manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.

The dot-com bubble burst on March 10, 2000, when the technology-heavy NASDAQ Composite index peaked at 5048.62 (intra-day peak 5132.52), more than double its value just a year before. By 2001, the bubble's deflation was running at full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit.


Worldwide Online Population Forecast

In its "Worldwide Online Population Forecast, 2006 to 2011,"

JupiterResearch anticipates that a 38 percent increase in the

number of people with online access will mean that, by 2011, 22

percent of the Earth's population will surf the Internet regularly.

JupiterResearch says the worldwide online population will increase

at a compound annual growth rate of 6.6 percent during the next

five years, far outpacing the 1.1 percent compound annual growth

rate for the planet's population as a whole. The report says 1.1

billion people currently enjoy regular access to the Web.

North America will remain on top in terms of the number of people with online access. According to JupiterResearch, online penetration rates on the continent will increase from the current 70 percent of the overall North American population to 76 percent by 2011. However, Internet adoption has "matured," and its adoption pace has slowed, in more developed countries including the United States, Canada, Japan and much of Western Europe, notes the report.

As the online population of the United States and Canada grows by only about 3 percent, explosive adoption rates in China and India will take place, says JupiterResearch. The report says China should reach an online penetration rate of 17 percent by 2011 and India should hit 7 percent during the same time frame. This growth is directly related to infrastructure development and increased consumer purchasing power, notes JupiterResearch.

By 2011, Asians will make up about 42 percent of the world's population with regular Internet access, 5 percent more than today, says the study.

Penetration levels similar to North America's are found in Scandinavia and bigger Western European nations such as the United Kingdom and Germany, but JupiterResearch says that a number of Central European countries "are relative Internet laggards."

Brazil "with its soaring economy," is predicted by JupiterResearch

to experience a 9 percent compound annual growth rate, the fastest

Page 39: INTRODUCTION TO WEB TECHNOLOGY

in Latin America, but China and India are likely to do the most to

boost the world's online penetration in the near future.

For the study, JupiterResearch defined "online users" as people who regularly access the Internet by "dedicated Internet access" devices. Those devices do not include cell phones.[41]

Historiography

Some concerns have been raised over the historiography of the Internet's development. This is due to the lack of centralised documentation for much of the early developments that led to the Internet.

"The Arpanet period is somewhat well documented because the

corporation in charge - BBN - left a physical record. Moving into

the NSFNET era, it became an extraordinarily decentralised

process. The record exists in people's basements, in closets. [...] So

much of what happened was done verbally and on the basis of

individual trust." —Doug Gale


Cyberlaws

Why Cyberlaws in India

India became independent on 15th August, 1947. In the 49th year of Indian independence, the Internet was commercially introduced in our country. The beginnings of the Internet were extremely small and the growth of subscribers painfully slow. However, as the Internet has grown in our country, the need has been felt to enact the relevant cyberlaws which are necessary to regulate the Internet in India. This need for cyberlaws was propelled by numerous factors.

Firstly, India has an extremely detailed and well-defined legal system in place. Numerous laws have been enacted and implemented, foremost amongst them The Constitution of India. We have, inter alia, the Indian Penal Code, the Indian Evidence Act 1872, the Banker's Book Evidence Act 1891, the Reserve Bank of India Act 1934, the Companies Act, and so on. However, the arrival of the Internet signalled the beginning of the rise of new and complex legal issues. It may be pertinent to mention that all the existing laws in India were enacted long ago, keeping in mind the political, social, economic and cultural scenario of that time. Nobody then could really visualize the Internet. Despite the brilliant acumen of our master draftsmen, the requirements of cyberspace could hardly ever be anticipated. As such, the coming of the Internet led to the emergence of numerous ticklish legal issues and problems which necessitated the enactment of cyberlaws.

Secondly, the existing laws of India, even with the most benevolent and liberal interpretation, could not be interpreted in the light of the emerging cyberspace to include all aspects relating to different activities in cyberspace. In fact, practical experience and the wisdom of judgment showed that it would not be without major perils and pitfalls if the existing laws were interpreted in the scenario of emerging cyberspace without enacting new cyberlaws. Hence the need for the enactment of relevant cyberlaws.

Thirdly, none of the existing laws gave any legal validity or sanction to activities in cyberspace. For example, the Net is used by a large majority of users for email, yet to this day email is not "legal" in our country. There is no law in the country which gives legal validity and sanction to email. Courts and the judiciary in our country have been reluctant to grant judicial recognition to the legality of email in the absence of any specific law having been enacted by Parliament. As such, the need has arisen for cyberlaw.

Fourthly, the Internet requires an enabling and supportive legal infrastructure in tune with the times. This legal infrastructure can only be given by the enactment of the relevant cyberlaws, as the traditional laws have failed to grant the same. E-commerce, the biggest future of the Internet, can only be possible if the necessary legal infrastructure complements it and enables its vibrant growth.

All these and other varied considerations created a conducive atmosphere for enacting relevant cyberlaws in India. The Government of India responded by coming up with the draft of the first cyberlaw of India, the Information Technology Bill, 1999. One question that is often asked is why we should have cyberlaw in India when a large chunk of the Indian population is below the poverty line and resides in rural areas. More than anything else, India, by its sheer numbers, as also by virtue of its extremely talented and ever-growing IT population, is likely to become a very important Internet market in the future, and it is important that we legislate cyberlaws in India to provide a sound legal and technical framework which, in turn, could be a catalyst for the growth and success of the Internet revolution in India.

SUPPORTIVE CYBER LAW

• Existing Statutes

1. Communications and Multimedia Act 1998 (CMA)

2. Malaysian Communications and Multimedia Commission Act 1998

3. Digital Signature Act 1997

4. Computer Crimes Act 1997

5. Copyright Act (Amendment) Act 1997

6. Telemedicine Act 1997

7. Optical Discs Act 2000


• Amendments of Statutes

1. Communications and Multimedia (Amendment) Bill 2004

2. Communications and Multimedia Commission (Amendment) Bill 2004

• Proposed Statutes

1. Personal Data Protection Act

2. Electronic Transactions Act (ETA)

3. E-Government Activities Act (EGA)

4. New Subsidiary Legislations


REGULATORY FRAMEWORK (NEW LICENSING STRUCTURE)

Licences are issued either as INDIVIDUAL or CLASS licences, across four categories:

- Network Facilities
- Network Services
- Content Application Services
- Application Services

LICENSES ISSUED UNDER ACT 588

License                                        Individual   Class
Network Facilities Provider (NFP)                  31         24
Network Service Provider (NSP)                     30         24
Application Service Provider (ASP)                 80         95
Content Application Service Provider (CASP)        20          -
TOTAL                                             161        143

MAIN FEATURES: COMMUNICATIONS AND MULTIMEDIA ACT 1998

The "mother" cyber law that provides the legislative, regulatory and institutional framework to cater for the convergence of the telecommunications, broadcasting and computing industries. Its main features: pro-competition, transparent, less regulation, flexible and generic, an emphasis on process rather than content, industry self-discipline, and regulatory forbearance.

NEW AND MIGRATION LICENSES UNDER ACT 588 (INDIVIDUAL LICENSES)

License                                        Migration   New   Total
Network Facilities Provider (NFP)                  20       11     31
Network Service Provider (NSP)                     19       11     30
Application Service Provider (ASP)                 16       64     80
Content Application Service Provider (CASP)        19        1     20
Total                                              74       87    161


DEVELOPMENT SINCE ACT 588

1. VISIBLE INCREASE IN CELLULAR PENETRATION
   - From 12% or 2.7 million subscribers in 1999 to 43.6% or 11 million subscribers in 2003*

2. INCREASE IN INTERNET USERS
   - From 2.0 million in 1999 to 8.7 million in 2003*

3. MORE CHOICES FOR CONSUMERS AND LOWER COSTS OF SERVICES
   - Streamyx service reduced by 30%
   - Lower charges for mobile services
   - More "free to air" TV stations - Channel 9, 8TV

INSTITUTIONAL FRAMEWORK

- MINISTER
- MCMC
- INDUSTRY FORUMS
- MECM
- TRIBUNAL

COMMUNICATIONS AND MULTIMEDIA COMMISSION ACT 1998

Power to establish an independent body to:

1. Enforce legislation (CMA 1998)
2. Regulate industry
3. Promote Industry Development
4. Promote Industry Self-Regulation

DIGITAL SIGNATURE ACT 1997

- An Act to legalise digital signatures;
- Facilitates e-commerce and secure on-line transactions through the use of digital signatures;
- Establishes the Certification Authority as the body responsible for issuing PKI, private keys, warranties and liabilities.

COMPUTER CRIME ACT 1997

The Act provides for:

- protection to companies, government and individuals from computer crimes in the digital era;
- clear definitions of criminal activities related to the use of computers, such as cyber fraud, illegal access, interception, and illegal use of computers.

Under-reporting of cyber crimes is common, for reasons such as:

- maintaining business and making profit;
- unwillingness to go through the legal process;
- fear of exposing confidential business information;
- no provision for the victim to receive restitution for the damage suffered.


COPYRIGHT ACT (AMENDMENT) 1997

- Provides protection for multimedia works.
- Reflects up-to-date developments in copyright issues.
- Clarifies legal issues in digital transmission and the use of multimedia and its components.

TELEMEDICINE ACT 1997

Provisions to regulate telemedicine activities:

- registration of practitioners;
- telemedicine practice by foreign practitioners; and
- medical data management and electronic prescription.

CELLULAR PENETRATION AMONG SELECTED COUNTRIES, 2002 (Source: ITU, 2003)

Country         Penetration (%)
UK                   84.49
Singapore            79.14
Korea, Rep.          67.95
Japan                62.11
USA                  48.81
Malaysia             37.3
Thailand             26.04
China                16.09
India                 1.22

COMPUTER AND INTERNET PENETRATION (Source: MCMC)

Computer ownership: 4.2 million (16.7%); Internet penetration: 2.9 million (11.4%)

Year    PCs (%)   Internet subscribers (%)
1998      6.1             1.8
1999      7.9             2.9
2000      9.4             7.1
2001     12.5             8.8
2002     14.5            10.5
2003     16.7            11.4

INTERNET PENETRATION AMONG SELECTED COUNTRIES, 2002 (Source: ITU, 2003)

Country         Penetration (%)
Korea, Rep.          55.2
Singapore            54.0
USA                  53.8
Japan                44.9
UK                   40.6
Malaysia             31.6
Thailand              7.8
China                 4.6
India                 1.6

AMENDMENT OF STATUTES

THE COMMUNICATIONS AND MULTIMEDIA (AMENDMENT) BILL 2003

Purpose of the amendments:

- To insert the necessary substantive provisions for the establishment of an independent Appeal Tribunal; and
- To strengthen the current regulatory and licensing regime.

BROADBAND PENETRATION RATES (%) AMONG SELECTED ASIAN COUNTRIES IN 2002 (Source: Frost & Sullivan)

Country         Penetration (%)
South Korea          19.29
Hong Kong            13.3
Taiwan                9.15
Singapore             6.13
China                 0.12
Malaysia              0.08
Thailand              0.05
India                 0.02

AMENDMENTS - SUBSIDIARY LEGISLATION UNDER THE COMMUNICATIONS AND MULTIMEDIA ACT 1998

- Communications and Multimedia (Licensing) Regulations 2000;
- Communications and Multimedia (Spectrum) Regulations 2000;
- Communications and Multimedia (Technical Standards) Regulations 2000;
- Communications and Multimedia (Spectrum) (Exemption) Order 2000;
- Communications and Multimedia (Licensing) (Exemption) Order 2000;
- Communications and Multimedia (USP) Regulations 2002;
- Communications and Multimedia (Rates) Rules 2002;
- Notification of Issuance of Class Assignments.

ENACTMENT OF NEW LAWS

To provide legal certainty for e-transactions undertaken by businesses or Government, two new pieces of legislation will be introduced:

- Electronic Transactions Bill: to address electronic transactions and communications.
- E-Government Activities Bill: to support and promote electronic government.

ENSURING ON-LINE TRUST AND CONFIDENCE

Two aspects relate to on-line trust and confidence:

- Privacy and personal data protection (PDP); and
- Security of electronic transactions.

PRIVACY AND INFORMATION SECURITY

PERSONAL DATA PROTECTION BILL

- Protect personal data;
- Promote a secure electronic environment;
- Encourage electronic transactions;
- Enhance consumer trust and confidence.

Privacy is a Shared Responsibility

There are roles to play for individuals, industry, and government:

- Individuals should be able to make informed choices and be protected from harm and fraud.
- Industry should ensure fair information practices.
- In some areas, governments must choose whether to limit individual control over data to achieve larger societal benefits (e.g. security, health).
- A balanced approach enables individuals to benefit from responsible commercial uses of personal information.

WEB APPLICATIONS / WRITING WEB PROJECTS / WEB OBJECTS / WEB USERS

In software engineering, a Web application is an application that is accessed via Web browser over a network such as the Internet or an intranet. It is also a computer software application that is coded in a browser-supported language (such as HTML, JavaScript, Java, etc.) and reliant on a common web browser to render the application executable.

Web applications are popular due to the ubiquity of a client, sometimes called a thin client. The ability to update and maintain Web applications without distributing and installing software on potentially thousands of client computers is a key reason for their popularity. Common Web applications include Webmail, online retail sales, online auctions, wikis, discussion boards, Weblogs, MMORPGs and many other functions.

History

In earlier types of client-server computing, each application had its own client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server part of the application would typically require an upgrade to the clients installed on each user workstation, adding to the support cost and decreasing productivity.

In contrast, Web applications dynamically generate a series of Web documents in a standard format supported by common browsers such as HTML/XHTML. Client-side scripting in a standard language such as JavaScript is commonly included to add dynamic elements to the user interface. Generally, each individual Web page is delivered to the client as a static document, but the sequence of pages can provide an interactive experience, as user input is returned through Web form elements embedded in the page markup. During the session, the Web browser interprets and displays the pages, and acts as the universal client for any Web application.
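
To make this concrete, here is a minimal, hypothetical sketch (not taken from the original text) of such a page: the document itself is static HTML, user input travels back to the server through an ordinary form element, and a small piece of client-side JavaScript adds a dynamic touch in the browser. The /sign action URL and the status element are assumptions made for the example.

    <!DOCTYPE html>
    <html>
      <body>
        <!-- User input is returned to the server through ordinary form elements -->
        <form action="/sign" method="post"
              onsubmit="document.getElementById('status').textContent = 'Submitting...';">
          <input type="text" name="visitor">
          <input type="submit" value="Sign guestbook">
        </form>
        <!-- Client-side scripting updates this element without any server involvement -->
        <p id="status"></p>
      </body>
    </html>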

Interface


The Web interface places very few limits on client functionality. Through Java, JavaScript, DHTML, Flash and other technologies, application-specific methods such as drawing on the screen, playing audio, and access to the keyboard and mouse are all possible. Many services have worked to combine all of these into a more familiar interface that adopts the appearance of an operating system. General purpose techniques such as drag and drop are also supported by these technologies. Web developers often use client-side scripting to add functionality, especially to create an interactive experience that does not require page reloading (which many users find disruptive). Recently, technologies have been developed to coordinate client-side scripting with server-side technologies such as PHP. Ajax, a web development technique using a combination of various technologies, is an example of technology which creates a more interactive experience.
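
As an illustration of the Ajax idea mentioned above, the following is a minimal sketch (hypothetical, not from the original text) of client-side JavaScript that fetches new content from the server and updates one element in place, without reloading the page. The /api/greeting URL and the greeting element are assumptions made for the example.

    // Fetch fresh content from the server and update part of the page in place.
    function loadGreeting() {
      var request = new XMLHttpRequest();           // core object behind Ajax
      request.open("GET", "/api/greeting", true);   // hypothetical server endpoint
      request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
          // Only this element changes; the rest of the page is untouched.
          document.getElementById("greeting").textContent = request.responseText;
        }
      };
      request.send(null);
    }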

Technical considerations

A significant advantage of building Web applications to support standard browser features is that they should perform as specified regardless of the operating system or OS version installed on a given client. Rather than creating clients for MS Windows, Mac OS X, GNU/Linux, and other operating systems, the application can be written once and deployed almost anywhere. However, inconsistent implementations of the HTML, CSS, DOM and other browser specifications can cause problems in web application development and support. Additionally, the ability of users to customize many of the display settings of their browser (such as selecting different font sizes, colors, and typefaces, or disabling scripting support) can interfere with consistent implementation of a Web application.

Another approach is to use Adobe Flash or Java applets to provide some or all of the user interface. Since most Web browsers include support for these technologies (usually through plug-ins), Flash- or Java-based applications can be implemented with much of the same ease of deployment. Because they allow the programmer greater control over the interface, they bypass many browser-configuration issues, although incompatibilities between Java or Flash implementations on the client can introduce different complications. Because of their architectural similarities to traditional client-server applications, with a somewhat "thick" client, there is some dispute over whether to call systems of this sort "Web applications"; an alternative term is "Rich Internet Application" (RIA).

Structure

Though many variations are possible, a Web application is commonly structured as a three-tiered application. In its most common form, a Web browser is the first tier; an engine using some dynamic Web content technology (such as ASP, ASP.NET, CGI, ColdFusion, JSP/Java, PHP, embPerl, Python, or Ruby on Rails) is the middle tier; and a database is the third tier. The Web browser sends requests to the middle tier, which services them by making queries and updates against the database and generating a user interface.

Some, however, view a Web application as a two-tier architecture, in which the browser is one tier and the server, combining the application logic and the data store, is the other.
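As a rough illustration of the three-tier arrangement (a sketch, not a prescription), the function below plays the role of the middle tier: it accepts a parameter that would arrive from the browser (the first tier), queries a SQLite database standing in for the third tier, and generates the HTML user interface. The products table and its columns are assumptions invented for this example.

    # Middle-tier sketch: query the data tier and generate the user interface.
    import sqlite3
    import html

    def handle_request(db_path: str, category: str) -> str:
        """Service a browser request by querying the database and building HTML."""
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(
                "SELECT name, price FROM products WHERE category = ?",
                (category,),
            ).fetchall()
        finally:
            conn.close()
        items = "".join(
            f"<li>{html.escape(name)} - {price:.2f}</li>" for name, price in rows
        )
        return f"<html><body><ul>{items}</ul></body></html>"

In a real deployment this function would sit behind a Web server and use whichever middle-tier technology the team has chosen.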

Business use

An emerging strategy for application software companies is to provide Web access to software previously distributed as local applications. Depending on the type of application, it may require the development of an entirely different browser-based interface, or merely adapting an existing application to use different presentation technology. These programs allow the user to pay a monthly or yearly fee for use of a software application without having to install it on a local hard drive. A company which follows this strategy is known as an application service provider (ASP), and ASPs are currently receiving much attention in the software industry.

Writing Web applications

There are many Web application frameworks which facilitate rapid application development by allowing the programmer to define a high-level description of the program. In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model.

The use of Web application frameworks can often reduce the number of errors in a program, both by making the code simpler and by allowing one team to concentrate on the framework alone. In applications which are exposed to constant hacking attempts on the Internet, security-related problems caused by errors in the program are a big issue. Frameworks may also promote the use of best practices such as GET after POST (the Post/Redirect/Get pattern, sketched below).
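The Post/Redirect/Get idea can be sketched as follows, again in Python's standard library purely for illustration; the /orders path, the item field and the in-memory ORDERS list are invented for the example. After handling the POST, the handler answers with a 303 redirect, so refreshing the resulting page re-issues a harmless GET instead of resubmitting the form.

    # Sketch of the "GET after POST" (Post/Redirect/Get) practice.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs
    import html

    ORDERS = []  # stands in for the data tier in this sketch

    class PrgHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            form = parse_qs(self.rfile.read(length).decode())
            ORDERS.append(form.get("item", ["?"])[0])
            self.send_response(303)                 # "See Other": GET after POST
            self.send_header("Location", "/orders")
            self.end_headers()

        def do_GET(self):
            listing = html.escape(", ".join(ORDERS)) or "none yet"
            body = (f"<html><body><h1>Orders</h1><p>{listing}</p>"
                    f'<form method="post" action="/orders">'
                    f'<input name="item"><input type="submit"></form>'
                    f"</body></html>").encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), PrgHandler).serve_forever()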

Web Application Security

The Web Application Security Consortium (WASC) and OWASP are projects developed with the intention of documenting how to avoid security problems in Web applications. A Web application security scanner is specialized software for detecting security problems in Web applications.

Applications

A familiar example is the Wikipedia application running in a Web browser such as Mozilla Firefox.

Browser applications typically include simple office software (word processors, spreadsheets, and presentation tools) and can also include more advanced applications such as project management software, CAD design software, and point-of-sale applications.

Examples

Word processor and Spreadsheet: Google Docs & Spreadsheets

CRM Software: SalesForce.com

Benefits

Browser applications typically require little or no disk space, upgrade automatically with new features, and integrate easily with other Web services, such as email and search. They also provide cross-platform compatibility (e.g. Mac or Windows) because they operate within a Web browser window.

Disadvantages

Standards compliance is an issue with any non-typical office document creator, and it causes problems when file sharing and collaboration become critical. Also, browser applications rely on application files accessed on remote servers through the Internet; when the connection is interrupted, the application is no longer usable. Google Gears is a beta platform intended to combat this issue and improve the usability of browser applications.

As the Internet grew into a major player on the global economic front, so did the number of investors who were interested in its development. So, you may wonder, how does the Internet continue to play a major role in communications, media and news? The key words are: Web Application Projects.

Web applications are business strategies and policies implemented on the Web through the use of User, Business and Data services. These tools are where the future lies. In this section, I'll take you through the essential phases in the life cycle of a Web application project, explain what options you have, and help you formulate a plan for successful Web application endeavors of your own. First, though, let's take a brief overview of Web applications.

Who Needs Web Applications and Why?

There are many entities that require applications for the Web - one example would be Business-to-Business interaction. Many companies in the world today demand to do business with each other over secure and private networks. This process is becoming increasingly popular with a lot of overseas companies who outsource projects to each other. From the simple process of transferring funds into a bank account, to deploying a large-scale Web services network that updates pricing information globally, the adoption of a Web applications infrastructure is vital for many businesses.

The Web Application Model

The Web application model, like many software development models, is constructed upon 3 tiers: User Services, Business Services and Data Services. This model breaks an application into a network of consumers and suppliers of services.

The User Service tier creates a visual gateway for the consumer to interact with the application. This can range from basic HTML and DHTML to complex COM components and Java applets. The user services then grab business logic and procedures from the Business Services. This tier can range from Web scripting in ASP/PHP/JSP to server-side programming such as TCL, CORBA and PERL, and it allows the user to perform complex actions through a Web interface.

The final tier is the Data Service layer. Data services store, retrieve and update information at a high level. Databases, file systems, and writeable media are all examples of Data storage and retrieval devices. For Web applications, however, databases are most practical. Databases allow developers to store, retrieve, add to, and update categorical information in a systematic and organized fashion.
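A compact way to picture this consumer/supplier chain is to separate the three service tiers in code. The sketch below does so in Python purely for illustration; the sales table, the totalling rule and the function names are assumptions for the example, not part of any fixed framework.

    # Each tier consumes the one below it: user -> business -> data.
    import sqlite3

    def data_service(db_path: str) -> list[tuple[str, float]]:
        """Data Services: store and retrieve raw records."""
        with sqlite3.connect(db_path) as conn:
            return conn.execute("SELECT agent, amount FROM sales").fetchall()

    def business_service(db_path: str) -> dict[str, float]:
        """Business Services: apply business logic (total sales per agent)."""
        totals: dict[str, float] = {}
        for agent, amount in data_service(db_path):
            totals[agent] = totals.get(agent, 0.0) + amount
        return totals

    def user_service(db_path: str) -> str:
        """User Services: present the result as HTML for the browser."""
        rows = "".join(f"<tr><td>{a}</td><td>{t:.2f}</td></tr>"
                       for a, t in sorted(business_service(db_path).items()))
        return f"<table>{rows}</table>"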

Choosing the Right Project

Choosing the right types of projects to work on is an extremely important part of the Web application development plan. Assessing your resources, technical skills, and publishing capabilities should be your first goal. Taking the 3 tiers into consideration, devise a list of all available resources that can be categorically assigned to each tier.

The next consideration should be the cost. Do you have a budget with which to complete this project? How much will it cost you to design, develop and deliver a complete project with a fair amount of success? These are questions that should be answered before you sign any deals or contracts.

Let's look at an example. A company called ABC needs to develop a Web application that will display sales information created by different sales agents. The data is updated daily through a completely automated process from all 3 service tiers. The client tells you that this entire project must be done in ASP/SQL Server and that you should host the application as well.

After assessing all your resources, you and your team come to the conclusion that the company is unable to do data backups on a daily basis. After further discussion, you realize that this is a very important part of the setup for your client, and you should not risk taking a chance with the project. It's very likely that you will be more prepared next time around, when a similar project lands on your desk, so you decline the job and recommend someone else who has the capabilities to do it right now.

The Phases in a Web Application Project

The Web application development process has 4 phases:

1. Envisioning the nature and direction of the project

2. Devising the plan

3. Development

4. Testing, support and stability

Let's look at each of these in more detail.

1. Envisioning the nature and direction of the project

In this phase, the management and developers assigned to the project come together and establish the goals that the solution must achieve. This includes recognizing the limitations that are placed on the project, scheduling, and versioning of the application. By the end of this phase, there should be clear documentation on what the application will achieve.

2. Devising the plan

In this phase, you and your team must determine the "hows" of the application: what scripting language is most appropriate, which features must be included, and how long will it take? These are some of the questions that must be answered through this planning phase. The main deliverables at this point are the project plan and the functional specification. The project plan determines a timeframe of events and tasks, while the functional specification outlines in detail how the application will function and flow.

3. Development

Once the project plan and functional specification are ready, a baseline is set for the development work to begin. The programmer(s) or Web developer(s) begin coding, testing and publishing data. This phase establishes the data variables, entities and coding procedures that will be used throughout the remainder of the project. A milestone document is prepared by the development team, which is then handed to management for review.

4. Testing, support and stability

The stability phase of the application project mainly focuses on testing and the removal of bugs, discrepancies and network issues that may otherwise cause the application to fail. It is here that policies and procedures are established for a successful support system.

Planning for a Successful Web Development Project

In order to drastically minimize the risk of project failure, I've always approached my application development projects in the following sequence.

1. Identify business logic and entities

Start by gathering information on everything you have. If you are going to be working with databases, begin by enumerating how many entities will be used in the business logic. For example, if your program implements sales data, a sales ticket would be an entity.

Once you've identified all your entities, establish a clear guideline for their relationships. This can be done via presentations, flowcharts or even reports.
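For instance, the sales scenario above might map onto entities like the ones sketched below; the field names and the one-to-many link between agent and ticket are assumptions for the example, not a fixed schema. Writing them down this way makes the relationships explicit before any tables are created.

    # Entity sketch for the sales example; purely illustrative.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SalesAgent:
        agent_id: int
        name: str

    @dataclass
    class SalesTicket:
        ticket_id: int
        agent_id: int          # relationship: each ticket belongs to one agent
        amount: float
        issued_on: date

    @dataclass
    class SalesReport:
        agent: SalesAgent
        tickets: list[SalesTicket] = field(default_factory=list)

        def total(self) -> float:
            return sum(t.amount for t in self.tickets)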

2. Create a functional specification and project plan

This part, in my opinion, is the most important part of the project. Functional specifications (or functional specs) are a map, or blueprint, for how you want a particular Web application to look and work. The spec details what the finished product will do, user interaction, and its look and feel.

An advantage of writing a functional spec is that it streamlines the development process. It takes discrepancies and guesswork out of the programming process, because the level of detail that goes into the plan makes it possible to minimize the misunderstanding that's usually associated with project mishaps. See examples of well-written functional specs at RayComm.com.

Once the functional spec is finished, a project plan must be devised. A project plan is a timeline of tasks and events that will take place during the project. The project or program manager is normally the person who creates a project plan, and their primary focus is to detail task notes while being able to accommodate scheduling and resource information. You can download a sample Excel file for a project plan at Method123.com.

3. Bring the application model into play

As discussed earlier, the application model consists of 3 tiers - the User, Business and Data service tiers, each of which serves a substantial purpose.

Practically speaking, it's always best to start with the data tier, because you've already identified your entities and understand their relationships. The data tier can be an SQL Server database, a text file, or even the powerful and robust Oracle. Create tables, relationships, jobs, and procedures depending on what platform you have chosen. If the data is a warehouse (i.e. the data already exists and does not depend on real-time interaction), then make sure that new and additional data can be added securely and in a scalable fashion.

A quick tip: using views in SQL Server/Oracle can dramatically improve the productivity of your application, and often its performance as well. A view is a "stored query" with no physical existence of its own: it keeps a complex query defined in one place, so application code stays simpler and the database can optimize and reuse that single definition (indexed or materialized views go further and do store results).

The Business Services tier, in my opinion, is the heart of the application. It involves the implementation of the business logic in the scripting or programming language.

At this stage, make sure you've already set up your environment for testing and debugging. Always test on at least 2 instances of your application; after all, what may work perfectly for you may not do so well on other platforms or machines. ASP, PHP, JSP and CGI are some examples of server-side technologies used at the business service level (often exchanging data as XML). Whichever language you choose, make sure that it's capable of handling all the business logic presented in the functional specification.

The last is the user tier, which is absolutely vital for the interactive and strategic elements in the application. It provides the user with a visual gateway to the business service by placing images, icons, graphics and layout elements in strategic areas of interest, most commonly based on management research. If you'll be developing the user tier yourself, be sure to have studied your competition. The last thing you need is for your application to look exactly the same as someone else's.

4. Develop a support scheme

Being able to support and stabilize your application is very important. Define a procedure to follow in cases of failure, mishaps or even downtime. Give your customers the ability to contact you in the case of an emergency relating to the program.

A good example of a support scheme is a ticket-tracking system. Such a system allows users to file cases pertaining to a support request, and the support team then makes each case trackable, meaning that the request is identifiable by a unique code or number. Although ticket-tracking systems are normally used by hosting companies or large-scale ASPs (Application Service Providers), they still serve a valuable purpose in helping keep the application stable.
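A bare-bones sketch of the ticket-tracking idea follows; the fields, statuses and in-memory store are invented for the example (a real system would keep tickets in the data tier), but it shows the essential point: every support request receives a unique, trackable number.

    # Illustrative ticket tracker: each case gets a unique, trackable id.
    import itertools
    from dataclasses import dataclass

    _ticket_ids = itertools.count(1)

    @dataclass
    class Ticket:
        ticket_id: int
        customer: str
        summary: str
        status: str = "open"

    TICKETS: dict[int, Ticket] = {}

    def file_ticket(customer: str, summary: str) -> int:
        """Create a case and return the unique number the customer can track."""
        ticket = Ticket(next(_ticket_ids), customer, summary)
        TICKETS[ticket.ticket_id] = ticket
        return ticket.ticket_id

    def track(ticket_id: int) -> str:
        """Look up a case by its unique code."""
        t = TICKETS[ticket_id]
        return f"#{t.ticket_id} [{t.status}] {t.summary}"

    # Example: a customer files a case, then checks on it later.
    case = file_ticket("ABC Corp", "Sales dashboard not updating")
    print(track(case))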