
PROJECT REPORT

CUSTOMER TO CUSTOMER INTERACTIVE WEB APPLICATION

Submitted to the Department of Computer Science & IT, Bhaderwah Campus

University of Jammu

In partial fulfillment for the award of the degree of

Master of Computer Application (MCA)

Report Submitted by:

Dalinder Singh
Roll No: 98-MCA-08
Session: 2008-2011

Under the guidance of

External Project Guide: Mr. Rahul Pandit, Project Supervisor
Internal Project Guide: Mr. Jatinder Manhas (Asst. Prof.)

Department of Computer Science & IT, Bhaderwah Campus, University of Jammu

Carried out at:

Ideogram Technology Solutions [P] Ltd.

568-A, Gandhi Nagar, Jammu - 180004


Office of the Head, PG Department of Computer Science, Bhaderwah Campus, University of Jammu

CERTIFICATE

This is to certify that the work presented in the project report entitled Customer To Customer Interactive Web Application has been carried out by Mr. Dalinder Singh, Roll No. 98-MCA-08, at Ideogram Technology Solutions [P] Ltd., 568-A, Gandhi Nagar, Jammu, under my guidance and supervision in partial fulfillment of the requirements for the award of the degree of MCA from Bhaderwah Campus, University of Jammu. To the best of my knowledge, no part of this work has been submitted by the candidate to any other university or institution for the award of any degree. The work and conduct of the student remained satisfactory during his stay in the department.

Dated:

Head of the Department

PROJECT REPORT

CUSTOMER TO CUSTOMER INTERACTIVE

WEB APPLICATION

TO WHOM IT MAY CONCERN

This is to certify that Mr. Dalinder Singh, a student of M.C.A. (Masters in Computer Applications), Bhaderwah Campus, University of Jammu, bearing Roll No. 98-MCA-08, was a software trainee at IDEOGRAM TECHNOLOGY SOLUTIONS [P] LTD. This project was a part of the M.C.A. curriculum for the VI semester.

During his training, he designed and developed the CUSTOMER TO CUSTOMER INTERACTIVE WEB APPLICATION software in partial fulfillment of the requirements for the award of the degree of Masters of Computer Applications, under the guidance of Mr. Rahul Pandit. It is further certified that the design and development of the CUSTOMER TO CUSTOMER INTERACTIVE WEB APPLICATION is original work carried out during his training period.

Mr. Jatinder Manhas (Asst. Prof.)

Deptt. of M.C.A., Bhaderwah Campus, University of Jammu

A

PROJECT REPORT ON

CUSTOMER TO CUSTOMER INTERACTIVE WEB APPLICATION

DEVELOPED BY:

Dalinder Singh

CHAPTER 1

PROJECT PROFILE

Project Title : Customer To Customer Interactive Web Application

For Ideogram Technology Solutions [P] Ltd.

Supervisor : Mr. Rahul Pandit

Organization : Ideogram Technology Solutions [P] Ltd.

568-A, Gandhi Nagar, Jammu - 180004

www.ideogram.co.in

Project duration : Six months

Group Members : Gourav Sharma (99-MCA-08)

HARDWARE ENVIRONMENT

HP dx7200 PC with the following configuration:

Processor - 1 GHz & above
RAM - 3 GB
Hard Disk - 320 GB
FDD - 1.44 MB

SOFTWARE ENVIRONMENT

Operating System - Microsoft Windows XP/7/8
Backend - Microsoft Access
Frontend - VB.NET

Case Tool - MS Visual Studio .NET

Submitted to : Department of Computer Science & IT, Bhaderwah Campus

University Of Jammu

CHAPTER - 2

ORGANIZATION'S PROFILE

Ideogram Technology Solutions [P] Ltd. is a team of professionals with experience in high-end project planning and development. We aim at being total solution providers at the most competitive rates, with quality, to our clients. Our areas of expertise are Web Design, Software Development, Web Development, Networking, Turnkey Projects, Consultancy, Offshore Development, VoIP Solutions, SMS Based Solutions, Search Engine Optimization and much more. Our professionals employ industry best practices to drive project success and are uniquely positioned to deliver significant cost and time savings.

Ideogram Technology Solutions [P] Ltd. specializes in:

1. Website Design
2. Graphics Design
3. Search Engine Optimization
4. Domain Registration & Web Hosting
5. Application Development
6. Newspaper & Media Solutions
7. IT Consulting
8. SMS Based Solutions

We strongly believe that each IT solution is an automation of business processes and needs. We employ our knowledge and expertise to provide high-impact yet cost-effective web solutions to our clients. We work with long-term relationships in mind; that is the reason our success graph is always on the rise.

Our philosophy is very simple: make every solution a rewarding experience for everyone involved. Our strength is our constantly growing team of project managers, programmers, web designers and technical experts who possess strong problem-solving and multi-tasking abilities. Our people are always ready to impart their knowledge and skills, even when dealing with uncertain and unrealistic expectations. We constantly improve our processes and level of expertise, which enables us to provide cutting-edge, latest-in-technology solutions.

Ideogram Technology Solutions [P] Ltd. offers a full range of IT consulting services: Web Design & Development, Business Process Mapping & Automation, System Design, Internet & Custom Application Development, eCommerce Solutions, System Implementation and System Maintenance services. We aspire to become an ultimate solution provider.

Ideogram Technology Solutions [P] Ltd. has expertise in developing custom software applications to meet customers' needs. We employ the best skills and knowledge to supply our clients with state-of-the-art technology solutions.

At Ideogram, our mission is to deliver powerful modular applications and Business Intelligence solutions that are quick to implement and provide quantifiable results and a fast return on investment for our customers. It is our belief that we can fulfill this mission through a unique combination of industry vision and innovative technology. We firmly believe that there is always something beyond a total solution, which keeps us on our toes. In our opinion, one who rests on achieving an objective becomes static and soon gets thrown out by a centrifugal force arising out of the technological developments occurring in the world.

Introduction

Our project is Advocate Office, a software package for advocates to manage their office. This software is very easy to operate, light on system resources, and maintains information about clients, cases, hearings, rulings, accounts, books, periodicals, etc. It manages all of an advocate's vital practice information - calendar, files, contacts, communications, time, and much more - in a single integrated database. It is a complete legal case management solution that is powerful, flexible and scalable for firms of all sizes. Client profiles manage day-to-day activity and build a comprehensive client/case/matter database and history that can help improve every aspect of the advocate's practice. The advocate has immediate access to tools for case status and information, document management and assembly, calendaring and docketing, and contact management. The system also comes with a number of standard reports that can be accessed easily by users. It tracks time and manages receivables through an accounting program developed specifically for law firms. For quick reference, provision for many reports is also made, and the software provides information about daily scheduled tasks. The software is dedicated to advocates/lawyers to help them maintain their offices.

CHAPTER 1

SYSTEM DEVELOPMENT LIFE CYCLE

1. System Development Life Cycle

The basic idea of the software development life cycle (SDLC) is that there is a well-defined process by which an application is conceived, developed and implemented. The phases in the SDLC provide a basis for management and control because they define segments of the flow of work, which can be identified for managerial purposes, and specify the documents or other deliverables to be produced in each phase.

System development revolves around a life cycle that begins with the recognition of user needs. In order to develop good software, it has to go through different phases. There are various phases of the System Development Life Cycle for the project, and different models for software development depict these phases. We decided to use the waterfall model, the oldest and most widely used paradigm for software engineering. The various relevant stages of the system life cycle of this application are depicted in the following flow diagram.

Let us look at each of the above activities:

1. System Analysis

System Analysis is the process of diagnosing situations, done with a definite aim, with the boundaries of the system kept in mind, to produce a report based on the findings. Analysis uses fact-finding techniques in which problem definition, objectives, system requirement specifications, feasibility analysis and cost-benefit analysis are carried out. The requirements of both the system and the software are documented and reviewed with the user.

2. System Design

System Design is actually a multistep process that focuses on four distinct attributes of a program: data structures, software architecture, interface representations, and procedural (algorithmic) detail. System design is concerned with identifying the software components (functions, data streams, and data stores), specifying relationships among components, specifying software structure, maintaining a record of design decisions and providing a blueprint for the implementation phase.

3. Coding

The coding step performs the translation of the design representations into an artificial language, resulting in instructions that can be executed by the computer. It thus involves developing computer programs that meet the system specifications of the design stage.

4. System Testing

The system testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals, that is, conducting tests using various test data to uncover errors and to ensure that defined input will produce actual results that agree with the required results.

5. System Implementation

System Implementation is a process that includes all those activities that take place to convert an old system into a new one. The new system may be a totally new system replacing the existing one, or it may be a major modification of the existing system. System implementation involves the translation of the design specifications into source code, followed by debugging, documentation and unit testing of the source code.

6. System Maintenance

Maintenance is the modification of a software product after delivery to correct faults, to improve performance, or to adapt the product to a new operating environment. Software maintenance cannot be avoided due to the wear and tear caused by use. Some of the reasons for maintaining the software are:

1. Over a period of time, the software's original requirements may change.

2. Errors undetected during software development may be found during use and require correction.

3. With time, new technologies are introduced, such as new hardware or operating systems. The software therefore must be modified to adapt to the new operating environment.

Types of Software Maintenance

Corrective Maintenance: This type of maintenance is also called bug fixing; it corrects errors reported while the system is in use.

Adaptive Maintenance: This type of maintenance is concerned with the modifications required due to changes in the environment (i.e., external changes such as use on a different hardware platform or a different operating system).

Perfective Maintenance: Perfective maintenance refers to enhancements to the software product, adding support for new features or changing functionality according to customer demands, making the product better and faster, with more functions or reports.

Preventive Maintenance: This type of maintenance is done to anticipate future problems and to improve maintainability, providing a better basis for future enhancements or business changes.

SYSTEM ANALYSIS

1.1.1 Problem Definition

Our project is Advocate Office, a software package for advocates to manage their office. This software is very easy to operate, light on system resources, and maintains information about clients, cases, hearings, rulings, accounts, books, periodicals, etc. It manages all of an advocate's vital practice information - calendar, files, contacts, communications, time, and much more - in a single integrated database. It is a complete legal case management solution that is powerful, flexible and scalable for firms of all sizes. Client profiles manage day-to-day activity and build a comprehensive client/case/matter database and history that can help improve every aspect of the advocate's practice. The advocate has immediate access to tools for case status and information, document management and assembly, calendaring and docketing, and contact management. The system also comes with a number of standard reports that can be accessed easily by users. It tracks time and manages receivables through an accounting program developed specifically for law firms. For quick reference, provision for many reports is also made, and the software provides information about daily scheduled tasks. The software is dedicated to advocates/lawyers to help them maintain their offices.

1.1.2 Proposed System

Our project aims to develop a complete virtual desktop for the advocate's office. The different modules that we are going to develop are as follows:

Client Profiling: This module enables the advocate to store all the client information in the database, which can be accessed at any moment for future reference.

Case Management: This module provides the advocate with the facility of registering fresh, disposed and pending cases, along with their details (case number, the client related to the case, the beginning date of the case, the witness details, and the case description in detail along with legal documents).

Address Book: This module enables the advocate to store important contacts related to the cases being handled.

Schedule Manager: This module enables the advocate to store various upcoming meeting details, daily appointments, etc.

Hearing Management: This module provides the advocate with the reference details put forth on a particular hearing date of a case, the next hearing date details, etc.

Judgment Module: This module allows the advocate to store the judgment verdict given for a particular case, which will prove useful to him/her in further upcoming cases.

Login Protection: This module provides authentication for the entire system using a username and password, thereby preventing any unauthorized access.

Accounting Module: This module provides the advocate with the facility of keeping track of the income details due for each case.

SMS Alerts: There will be a provision for SMS alerts in the system, wherein the advocate can send SMS alerts automatically or manually to his clients.

Document Management Module: This module will help the user scan their documents directly from the software without using the scanner's own software. It will also feature a mechanism for converting the images of scanned documents directly to text and storing it in the database using an OCR (Optical Character Recognition) technique.

1.1.3 Significance of Project

The advocate office software provides the advocate with the advantage of reduced manual work. This includes the elimination of the huge files maintained by individuals till date to store every piece of information, from a particular client's details to a case's final judgment details. This facility reduces the amount of time required by manual entry and is less prone to errors.

The schedule manager module enables the advocate to receive automatic updates of meeting and appointment details, daily court tasks, etc., thus helping him/her to prepare accordingly.
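To make the schedule-manager idea concrete, here is a minimal illustrative sketch in Python (the report's actual implementation is in VB.NET; the function and variable names here are invented for this example):

```python
from datetime import date, timedelta

# Illustrative sketch only: store dated schedule entries and list those
# falling within the next N days, soonest first. `upcoming` and `schedule`
# are hypothetical names, not taken from the report's code.
def upcoming(entries, today, days=7):
    """Return (when, what) pairs due within `days` of `today`, soonest first."""
    horizon = today + timedelta(days=days)
    due = [(when, what) for when, what in entries if today <= when <= horizon]
    return sorted(due)

schedule = [
    (date(2011, 3, 14), "Hearing: case 42, District Court"),
    (date(2011, 3, 2),  "Client meeting: Mr. Sharma"),
    (date(2011, 4, 1),  "File appeal paperwork"),
]
print(upcoming(schedule, today=date(2011, 3, 1)))
```

A real implementation would read the entries from the database and could feed the same list to the SMS-alert module described earlier.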

The accounting module keeps track of case income and the consultation fee for a particular case.
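A minimal sketch of such per-case fee tracking is shown below (illustrative Python with invented names; the actual system uses VB.NET with a Microsoft Access backend):

```python
# Hypothetical illustration of the accounting module's core idea:
# record payments against an agreed fee and report the balance due.
class CaseAccount:
    def __init__(self, case_no, agreed_fee):
        self.case_no = case_no
        self.agreed_fee = agreed_fee
        self.received = 0.0

    def record_payment(self, amount):
        # Each payment reduces the outstanding balance.
        self.received += amount

    def balance_due(self):
        return self.agreed_fee - self.received

acct = CaseAccount("98/2010", agreed_fee=15000.0)
acct.record_payment(5000.0)
acct.record_payment(2500.0)
print(acct.balance_due())  # prints 7500.0
```

In the real system each payment row would also carry a date and description so that the standard reports mentioned above can be generated.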

1.1.4 Advantages of the Proposed System

The various advantages of the proposed System are:

Easy-to-use interface. This software enables the advocate to interactively add contacts, clients, case details and judgment details.

Easy search facility. Client and case details can be easily searched using various available search criteria, thereby saving time.

Complete management of documents. All documents related to a particular case are stored in the database, thereby facilitating their proper management.

Accounting facility. Easy maintenance of all case dues, including income and consultancy, is made available to the advocate as and when required.

1.1.5 REQUIREMENT ANALYSIS

Software requirement analysis is a software-engineering task that bridges the gap between system-level software allocation and software design. For developing our application, an in-depth analysis was done. The analysis was divided into the following three parts:

Problem Recognition

Evaluation and Synthesis

Specification & Review

Problem Recognition

The aim of the project was understood, and thorough research was done on the internet to get a deep insight into how the proposed system would work; we visited different related sites and understood their working. We recorded all the features that would be required when building our application, e.g., the need to keep a database of records, and the need for users to be able to register and post their data online. All these features were noted down so that they could be incorporated in our application.

Evaluation and Synthesis

Problem evaluation and solution synthesis was the next major area of effort. It was in this step that all externally observable data objects were defined and the flow and content of information were evaluated. In this phase it was decided how our application would look and work, what parameters it would take and what it would return.

Specification & Review

The main objective is to improve the quality of the software, which can be done by inspection or walkthrough in formal technical reviews. The objectives are:

To uncover errors in function, logic or implementation.

To verify that the software under review meets the requirement specification.

To ensure that the software has been represented according to predefined standards.

To achieve software development in a uniform manner.

To make the project more meaningful.

1.1.6 FEASIBILITY STUDY

The feasibility study is carried out to test whether the proposed system is worth being implemented. Given unlimited resources and infinite time, all projects are feasible. Unfortunately, such resources and time are not available in real-life situations. Hence it becomes both necessary and prudent to evaluate the feasibility of the project at the earliest possible time, in order to avoid unnecessary wastage of time and effort and professional embarrassment over an ill-conceived system. A feasibility study is a test of the proposed system regarding its workability, its impact on the organization, its ability to meet user needs and its effective use of resources.

The main objective of the feasibility study is to test the technical, operational and economic feasibility of developing the application.

The following feasibility studies were carried out for the proposed system:

Economic Feasibility: An evaluation of development cost weighed against the income or benefit derived from the developed system. Here the development cost is evaluated by weighing it against the ultimate benefits derived from the new system. The proposed system is economically feasible if the benefits obtained in the long run outweigh the cost incurred in designing and implementing it. In this case the benefits outweigh the costs, which makes the system economically feasible.

Technical Feasibility: A study of function, performance and constraints that may affect the ability to achieve an acceptable system. A system is technically feasible if it can be designed and implemented within the limitations of available resources such as funds, hardware and software. The considerations normally associated with technical feasibility include development risk, resource availability and technology. Management provides the latest hardware and software facilities for the successful completion of the project.

The proposed system is technically feasible, as the technology we are using to implement the project (i.e. ASP.NET) is fully capable of implementing the project requirements identified in the analysis section.

Operational Feasibility: The project is operationally feasible, as it can be implemented easily in the college computer lab.

Schedule Feasibility: Evaluates the time taken in the development of the project. The system had schedule feasibility.

1.2 SYSTEM DESIGN

1.2.1 DESIGN CONCEPTS

The design of an information system produces the details that state how the system will meet the requirements identified during system analysis. System specialists often refer to this stage as Logical Design, in contrast to the process of developing the program software, which is referred to as Physical Design.

The designer begins the process by identifying the reports and other outputs the system will produce. Then the specifics of each are pinpointed. Usually, designers sketch the form or display as they expect it to appear when the system is complete. This may be done on paper or on a computer display, using one of the automated system tools available. The system design also describes the data to be input, calculated or stored. Individual data items and calculation procedures are written in detail. The procedures tell how to process the data and produce the output.

1.2.2 DESIGN OBJECTIVES

The following goals were kept in mind while designing the system:

To reduce the manual work required in the existing system.

To avoid errors inherent in manual working and hence make the outputs consistent and correct.

To improve the management of the permanent information of the computer center by keeping it in properly structured tables, and to provide facilities to update this information as efficiently as possible.

To make the system completely menu-driven and hence user friendly; this was necessary so that even non-programmers could use the system efficiently.

To make the system completely compatible, i.e., it should fit into the total integrated system.

To design the system in such a way as to reduce future maintenance and enhancement time and effort.

To make the system reliable, understandable and cost-effective.

1.2.3 DESIGN MODULES

Our project aims to develop a complete virtual desktop for the advocate's office. The different modules that we are going to develop are as follows:

Client Profiling: This module enables the advocate to store all the client information in the database, which can be accessed at any moment for future reference.

Case Management: This module provides the advocate with the facility of registering fresh, disposed and pending cases, along with their details (case number, the client related to the case, the beginning date of the case, the witness details, and the case description in detail along with legal documents).

Address Book: This module enables the advocate to store important contacts related to the cases being handled.

Schedule Manager: This module enables the advocate to store various upcoming meeting details, daily appointments, etc.

Hearing Management: This module provides the advocate with the reference details put forth on a particular hearing date of a case, the next hearing date details, etc.

Judgment Module: This module allows the advocate to store the judgment verdict given for a particular case, which will prove useful to him/her in further upcoming cases.

Login Protection: This module provides authentication for the entire system using a username and password, thereby preventing any unauthorized access.

Accounting Module: This module provides the advocate with the facility of keeping track of the income details due for each case.

SMS Alerts: There will be a provision for SMS alerts in the system, wherein the advocate can send SMS alerts automatically or manually to his clients.

Document Management Module: This module will help the user scan their documents directly from the software without using the scanner's own software. It will also feature a mechanism for converting the images of scanned documents directly to text and storing it in the database using an OCR (Optical Character Recognition) technique.

SYSTEM DESIGN

The design stage takes the final specification of the system from the analysis stage and finds the best way of fulfilling it, given the technical environment and previous decisions on the required level of automation.

The system design is carried out in two phases:

i) Architectural Design (High Level Design)

ii) Detailed Design (Low Level Design)

1.2.4 ARCHITECTURAL DESIGN

The high-level design maps the given system to a logical data structure. Architectural design involves identifying the software components, decoupling and decomposing the system into processing modules and conceptual data structures, and specifying the interconnections among components. Good notation can clarify the interrelationships and interactions of interest, while poor notation can complicate and interfere with good design practice. A data-flow-oriented approach was used to design the project. This includes the Entity Relationship Diagram (ERD) and Data Flow Diagrams (DFDs).

1.2.4.1 Entity Relationship Diagram

One of the best design approaches is Entity Relationship Method. This design approach is widely followed in designing projects normally known as Entity Relationship Diagram (ERD).

ERD helps in capturing the business rules governing the data relationships of the system and is a conventional aid for communicating with the end users in the conceptual design phase. ERD consists of:

Entity: The term used to describe any object, place, person, concept or activity that the enterprise recognizes in the area under investigation and about which it wishes to collect and store data. Entities are diagrammatically represented as boxes.

Attribute: The data elements used to describe the properties that distinguish the entities.

Relationship: An association or connection between two or more entities, diagrammatically represented as diamonds connected to the entities by lines.

A Unary relationship is a relationship between instances of the same entity.

A Binary relationship is a relationship between two entities.

An N-ary relationship is a relationship among N entities. It is defined only when the relationship does not have a meaning without the participation of all the N entities.

Degree of Relationship: An important aspect of a relationship between two or more entities is its degree. The different relationships recognized among the various data stores in the database are:

One-to-One (1:1)

It is an association between two entities. For example, each student can have only one Roll No.

One-to-Many (1:M)

It describes entities that may have one or more entities related to it. For example, a father may have one or many children.

Many-to-Many (M:M)

It describes entities that may have relationships in both directions. This relationship can be explained by considering items sold by Vendors. A vendor can sell many items and many vendors can sell each item.
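The one-to-many case (one client, many cases, as in this project) can be sketched with two tables. The following is an illustrative example using SQLite in Python; the table and column names are invented here, and the report's actual backend is Microsoft Access:

```python
import sqlite3

# Minimal 1:M sketch: each row in legal_case references one client,
# while one client may own many cases. Names are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE client (client_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE legal_case (
        case_no   TEXT PRIMARY KEY,
        client_id INTEGER NOT NULL REFERENCES client(client_id),
        status    TEXT
    );
""")
con.execute("INSERT INTO client VALUES (1, 'Mr. Sharma')")
con.executemany("INSERT INTO legal_case VALUES (?, ?, ?)",
                [("42/2010", 1, "pending"), ("7/2011", 1, "fresh")])
# Count cases per client across the 1:M link.
rows = con.execute("""
    SELECT c.name, COUNT(*) FROM client c
    JOIN legal_case k ON k.client_id = c.client_id
    GROUP BY c.client_id
""").fetchall()
print(rows)  # prints [('Mr. Sharma', 2)]
```

The same shape (a foreign key on the "many" side) applies to hearings per case and payments per case in this project's data model.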

The ERD representation of the project is given below. It follows Chen's convention, in which entities are represented as rectangles and relationships as diamonds.

1.2.4.2 Context Analysis Diagram

The Context Analysis Diagram (CAD) is the top-level data flow diagram, which depicts an overview of the entire system. The major external entities, a single process and the output data stores constitute the CAD. Though this diagram does not depict the system in detail, it presents the overall inputs, processes and outputs of the entire system at a very high level. The Context Analysis Diagram of the project is given ahead.

Context Level Data Flow Diagram

1.2.4.3 Data Flow Diagrams

A Data Flow Diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system, manual or automated, including the processes, stores of data and delays in the system. DFDs are central tools and the basis from which other components are developed. A DFD depicts the transformation of data from input to output through processes and the interaction between processes.

The transformation of data from input to output through processes, described logically and independently of physical components, is called the logical DFD. The physical DFD shows the actual implementation and movement of data between people, departments and workstations.

DFDs are an excellent mechanism for communicating with customers during requirement analysis and are widely used for representing external and top-level internal design specifications. In the latter situations, DFDs are quite valuable for establishing naming conventions and the names of system components such as subsystems, files and data links.

In a DFD there are four components:

1. Sources or destinations of data, such as humans or entities that interact with the system. They lie outside the system boundary, form the source and the recipient of information, and are depicted as a closed rectangle.

2. A data flow is a packet of data. It is a pipeline through which information flows, depicted in the DFD as an arrow pointing in the direction of flow. This connecting symbol connects an entity, a process and data stores, and also specifies the sender and the receiver.

3. A process depicts a procedure, function or module that transforms input data into output data. It is represented as a circle or a bubble with the procedure name and a unique number inside the circle.

4. Data stores are the physical areas on the computer's hard disk where a group of related data is stored in the form of files. They are depicted as an open-ended rectangle. A data store is used either for storing data in files or for reference purposes.
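As a toy illustration of these four components (entity, process, data store, data flow), a DFD fragment can be written down as plain data and checked for consistency. The names below are invented for the example and are not taken from the report's actual diagrams:

```python
# Hypothetical DFD fragment for the case-registration flow: external
# entities, processes, data stores, and flows as (source, data, sink).
external_entities = {"Advocate"}
processes = {"1.0 Register Case"}
data_stores = {"D1 Case File"}
flows = [
    ("Advocate", "case details", "1.0 Register Case"),
    ("1.0 Register Case", "case record", "D1 Case File"),
]

def nodes_are_known(flows):
    """Check that every flow endpoint is a declared entity, process, or store."""
    known = external_entities | processes | data_stores
    return all(src in known and dst in known for src, _, dst in flows)

print(nodes_are_known(flows))  # prints True
```

A check like this catches a common drafting error in DFDs: a flow arrow whose source or sink was never declared as an entity, process, or store.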

1.2.5 DETAILED DESIGN

The low-level design maps the logical model of the system to a physical database design. The entities and attributes identified for the system were mapped into physical tables, with the name of each entity taken as the table name.

During the detailed design phase, the database, if any, and the programming modules are designed, and detailed user procedures are documented. The interfaces between the system users and computers are also defined.

1.2.5.1 APPLICATION DESIGN

After the detailed problem definition and system analysis, the design of the application was taken up. Simplicity is hard to design. It is difficult to design something that is technically sophisticated but appears simple to use. Any software product must be efficient, fast and functional, but more importantly it must be user friendly and easy to learn and use. For designing a good interface the following principles should be used:

i) Clarity and consistency

ii) Visual feedback.

iii) Understanding the people.

iv) Good response.

MODULES

The aim of our project is to develop a complete virtual desktop for an online technical community to share their ideas. The different modules of the portal that we are going to develop are as follows:

Client Profiling: This module enables the Advocate to store all the client information into the database which can be accessed at any moment of time for any future reference.

Case Management: This module provides the advocate with the facility of registering fresh, disposed and pending cases, along with their details (case number, the client related to the case, the beginning date of the case, the witness details, and the case description in detail along with legal documents).

Address Book: This module enables the advocate to store important contacts related to the cases being handled.

Schedule Manager: This module enables the advocate to store upcoming meeting details, daily appointments, etc.

Hearing Management: This module provides the advocate with the reference details put forth on a particular hearing date of a case, the next hearing date details, etc.

Judgment Module: This module allows the advocate to store the judgment verdict given for a particular case, which will prove useful to him/her in upcoming cases.

Login Protection: This module provides authentication to the entire system using a username and password, thereby preventing unauthorized access.
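The report does not include the actual authentication code, but a minimal sketch of such a username/password check against the Microsoft Access backend might look like the following (the file name advocate.mdb, the table tblUsers and its columns are assumptions for illustration only):

```vb
Imports System.Data.OleDb

Public Class LoginService
    ' Assumed connection string for the Access (Jet) backend.
    Private Const ConnStr As String = _
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=advocate.mdb"

    ' Returns True when a matching user record exists.
    Public Function Authenticate(ByVal user As String, ByVal pwd As String) As Boolean
        Using conn As New OleDbConnection(ConnStr)
            ' Jet uses positional (?) parameters; [Password] is bracketed
            ' because it is a reserved word in Jet SQL.
            Dim cmd As New OleDbCommand( _
                "SELECT COUNT(*) FROM tblUsers WHERE UserName = ? AND [Password] = ?", conn)
            cmd.Parameters.AddWithValue("@p1", user)
            cmd.Parameters.AddWithValue("@p2", pwd)
            conn.Open()
            Return CInt(cmd.ExecuteScalar()) > 0
        End Using
    End Function
End Class
```

Parameterized queries are used here so that the login check is not vulnerable to SQL injection through the username or password fields.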

Accounting Module: This module provides the advocate with the facility of keeping track of the income due for each case.

SMS Alerts: The system will provide SMS alerts, which the advocate can send automatically or manually to his clients.

Document Management Module: This module will help the user scan documents directly from the software without using the scanner's own software. It will also convert the images of scanned documents directly to text, using an OCR (Optical Character Recognition) technique, and store the text in the database.

WORKING ENVIRONMENT

2.1 Technical Specifications

HARDWARE ENVIRONMENT

PC with the following Configuration

Processor - Pentium-IV 3.0 GHz

RAM - 256 MB DDR2

HARD DISK - 80 GB

SOFTWARE ENVIRONMENT

Operating System - Microsoft Windows XP.

Backend - Microsoft Access

Frontend - VB.NET

Case Tools - Microsoft Word 2003, MS FrontPage

Technology Used: VB.NET

VB.NET

VB.NET introduces many exciting new features to the VB developer, though these enhancements do cause some minor compatibility issues with legacy code. The new Integrated Development Environment (IDE) incorporates some of the best ideas of VB 6.0 and InterDev to make it easier and more intuitive to quickly create applications using a wider variety of development resources. The code developed in the IDE can then be compiled to work with the new .NET Framework, which is Microsoft's new technology designed to better leverage internal and external Internet resources. The compiler writes the code for the Common Language Runtime (CLR), making it easier to interact with applications not written in VB.NET. It is now possible to use true inheritance with VB, which means that a developer can more efficiently leverage code and reduce application maintenance. Not only is the CLR used for stand-alone VB applications, it is also used for Web applications, which makes it easier to exploit the full feature set of VB from a scripted Web application. Security is also enhanced through enforcement of data-type compatibility, which reduces the number of crashes due to poorly designed code. Exploiting the new features of VB.NET is not a trivial task, and many syntax changes were introduced that will cause incompatibilities with legacy code. But many of these are identified, emphasized, and in some cases automatically updated by the IDE when a VB 6.0 project is imported into VB.NET.

.NET Architecture

The .NET Framework consists of three parts: the Common Language Runtime, the Framework classes, and ASP.NET, which are covered in the following sections. The components of .NET tend to cause some confusion.

ASP.NET

One major headache that Visual Basic developers have had in the past is trying to reconcile the differences between compiled VB applications and applications built in the lightweight interpreted subset of VB known as VBScript. Unfortunately, when Active Server Pages were introduced, the language supported for server-side scripting was VBScript, not VB. (Technically, other languages could be used for server-side scripting, but VBScript has been the most commonly used.) Now, with ASP.NET, developers have a choice. Files with the ASP extension are still supported for backwards compatibility, but ASPX files have been introduced as well. ASPX files are compiled when first run, and they use the same syntax that is used in stand-alone VB.NET applications. Previously, many developers have gone through the extra step of writing a simple ASP page that merely executed a compiled method, but now it is possible to run compiled code directly from an Active Server Page.

Framework Classes

Ironically, one of the reasons that VB.NET is now so much more powerful is because it does so much less. Up through VB 6.0, the Visual Basic compiler had to do much more work than a comparable compiler for a language like C++. This is because much of the functionality that was built into VB was provided in C++ through external classes. This made it much easier to update and add features to the language and to increase compatibility among applications that shared the same libraries.

Now, in VB.NET, the compiler adopts this model. Many features that were formerly in Visual Basic directly are now implemented through Framework classes. For example, if you want to take a square root, instead of using the VB operator, you use a method in the System.Math class. This approach makes the language much more lightweight and scalable.
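For instance, where classic VB offered the Sqr function, the Framework-class approach looks like this (a trivial but runnable sketch):

```vb
' Square root via the System.Math Framework class rather than a language operator.
Module SqrtDemo
    Sub Main()
        Dim root As Double = Math.Sqrt(2.0)
        ' Prints 1.4142 under an English locale; formatting is culture-dependent.
        Console.WriteLine(root.ToString("F4"))
    End Sub
End Module
```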

.NET Servers

We mention this here only to distinguish .NET servers from the .NET Framework. These servers support Web communication but are not necessarily themselves written in the .NET Framework.

Common Language Runtime

The CLR provides the interface between your code and the operating system, providing such features as memory management, a Common Type System, and garbage collection. It reflects Microsoft's efforts to provide a unified and safe framework for all Microsoft-generated code, regardless of the language used to create it.

What Is the .NET Framework?

The .NET Framework is Microsoft's latest offering in the world of cross-development (developing both desktop and Web-usable applications), interoperability, and, soon, cross-platform development. As you go through this chapter, you'll see just how .NET meets these developmental requirements.

However, Microsoft's developers did not stop there; they wanted to completely revamp the way we program. In addition to the more technical changes, .NET strives to be as simple as possible. .NET contains functionality that a developer can easily access. This same functionality operates within the confines of standardized data types and naming conventions. This internal functionality also encompasses the creation of special data within an assembly file that is vital for interoperability, .NET's built-in security, and .NET's automatic resource management.

Another part of the "keep it simple" philosophy is that .NET applications are geared to be copy-only installations; in other words, a special installation package for your application is no longer a requirement. The majority of .NET applications work if you simply copy them into a directory. This feature substantially eases the burden on the programmer.

The CLR changes the way that programs are written, because VB developers won't be limited to the Windows platform. Just as with ISO C/C++, VB programmers are now capable of seeing their programs run on any platform with the .NET runtime installed. Furthermore, if you delegate a C programmer to oversee future development of your VB.NET program, the normal learning curve for this programmer will be dramatically reduced by .NET's multilanguage capabilities.

Introduction to the Common Language Runtime

The CLR controls .NET code execution. The CLR is the step above COM, MTS, and COM+ and will, in due time, replace them as the Visual Basic runtime layer. To developers, this means that our VB.NET code will execute on par with other languages, while maintaining the same small file size.

The CLR is the runtime environment for .NET. It manages code execution as well as the services that .NET provides. The CLR knows what to do through special data (referred to as metadata) contained within the applications. This metadata stores a map of where to find classes and when to load them, when to set up runtime context boundaries, how to generate native code, how to enforce security, and which classes use which methods. Since the CLR is privy to this information, it can also determine when an object is used and when it is released. This is known as managed code.

Managed code allows us to create fully CLR-compliant code. Code that is compiled with COM and Win32 API declarations is called unmanaged code, which is what you got with previous versions of Visual Basic. Managed code keeps us from depending on obstinate dynamic link library (DLL) files (discussed in the "Ending DLL Hell" section later in this chapter). In fact, thanks to the CLR, we don't have to deal with the registry, globally unique identifiers (GUIDs), AddRef, HRESULTs, and all the macros and application programming interfaces (APIs) we depended on in the past. They aren't even available options in .NET. Removing all the excess also provides a more consistent programming model. Since the CLR encapsulates all the functions that we had with unmanaged code, we won't have to depend on any pre-existing DLL files residing on the hard drive. This does not mean that we have seen the last of DLLs; it simply means that the .NET Framework contains a system within it that can map out the location of all the resources we are using. We are no longer dependent upon VB runtime files being installed, or on certain pre-existing components.

Because CLR-compliant code is also Common Language Specification (CLS) compliant, it can execute properly under the CLR. The CLS is a subset of the CLR types defined in the Common Type System (CTS), which is also discussed later in the chapter. CLS features are instrumental in the interoperability process, because they contain the basic types required for CLR operability. These combined features allow .NET to handle multiple programming languages. The CLR manages the mapping; all that you need is a compiler that can generate the code and the special data needed within the application for the CLR to operate. This ensures that any dependencies your application might have are always met and never broken.

When you set your compiler to generate .NET code, it runs through the CTS and inserts the appropriate data within the application for the CLR to read. Once the CLR finds the data, it proceeds to run through it and lay out everything it needs within memory, declaring any objects when they are called (but not before). Any application interaction, such as passing values between classes, is also mapped within the special data and handled by the CLR.

Using .NET-Compliant Programming Languages

.NET isn't just a single, solitary programming language taking advantage of a multiplatform system. A runtime that allows portability but requires you to use a single programming model would not truly be delivering on its perceived value. If this were the case, your reliance on that language would become a liability when the language does not meet the requirements for a particular task. All of a sudden, portability takes a back seat to necessity: for something to be truly portable, you require not only a portable runtime but also the ability to code in what you need, when you need it. .NET solves that problem by allowing any .NET-compliant programming language to run. Can't get that bug in your class worked out in VB, but you know that you can work around it in C#? Use C# to create a class that can be easily used from your VB application. Third-party programming language users don't need to fret for long, either; several companies plan to create .NET-compliant versions of their languages.

Currently, the only .NET-compliant languages are Microsoft's own; for more information, check these out at http://msdn.microsoft.com/net:

- C#

- C++ with Managed Extensions

- VB.NET

- ASP.NET (although this one is more a subset of VB.NET)

- JScript.NET

Visual Basic for Windows is a little over ten years old. It debuted on March 20, 1991, at a show called Windows World, although its roots go back to a tool called Ruby that Alan Cooper developed in 1988.

Origin of .NET Technology

1. OLE Technology: Object Linking and Embedding technology was developed by Microsoft in the early 1990s to enable easy interprocess communication and to embed documents from one application into another. This enabled users to develop applications which required interoperability between products such as MS Word and MS Excel.

2. COM Technology:

Microsoft introduced a component-based model for developing software programs. In the component-based approach, a program is broken into a number of independent components, each of which offers a particular service. This reduces the overall complexity of software, enables distributed development across multiple organizations or departments, and enhances software maintainability.

3. .NET Technology:

.NET technology is a third-generation component model. It provides a new level of interoperability compared to COM technology. COM provides a standard binary mechanism for inter-module communication; this mechanism is replaced by an intermediate language called Microsoft Intermediate Language (MSIL), or simply IL.

Introduction to the .NET Framework and Visual Studio .NET

The .NET Framework (pronounced "dot net framework") defines the environment that you use to execute Visual Basic .NET applications and the services you can use within those applications. One of the main goals of this framework is to make it easier to develop applications that run over the Internet. However, this framework can also be used to develop traditional business applications that run on the Windows desktop. To develop a Visual Basic .NET application, you use a product called Visual Studio .NET (pronounced "Visual Studio dot net"). This is actually a suite of products that includes the three programming languages described below, among them Visual Basic .NET, which is designed for rapid application development. Visual Studio also includes several other components that make it an outstanding development product. One of these is the Microsoft Development Environment, which you'll be introduced to in a moment. Another is the Microsoft SQL Server 2000 Desktop Engine (or MSDE). MSDE is a database engine that runs on your own PC so that you can use Visual Studio for developing database applications that are compatible with Microsoft SQL Server. SQL Server, in turn, is a database management system that can be used to provide the data for large networks of users or for Internet applications.

The two other languages that come with Visual Studio .NET are C# and C++. C# .NET (pronounced "C sharp dot net") is a new language that has been developed by Microsoft especially for the .NET Framework. Visual C++ .NET is Microsoft's version of the C++ language, which is used on many platforms besides Windows PCs.

Programming languages supported by Visual Studio .NET:

Visual Basic .NET - Designed for rapid application development.

Visual C# .NET - A new language that combines the features of Java and C++ and is suitable for rapid application development.

Visual C++ .NET - Microsoft's version of C++ that can be used for developing high-performance applications.

Two other components of Visual Studio .NET:

Microsoft Development Environment - The Integrated Development Environment (IDE) that you use for developing applications in any of the three languages.

Microsoft SQL Server 2000 Desktop Engine - A database engine that runs on your own PC so you can use Visual Studio for developing database applications that are compatible with Microsoft SQL Server.

Platforms that can run Visual Studio .NET

Windows 2000 and later releases of Windows

Platforms that can run Visual Studio .NET applications

Windows 98 and later releases of Windows, depending on which .NET components the application uses.

Visual Basic .NET Standard Edition

An inexpensive alternative to the complete Visual Studio .NET package that supports a limited version of Visual Basic .NET as its only programming language.

Description

The .NET Framework defines the environment that you use for executing Visual Basic .NET applications.

Visual Studio .NET is a suite of products that includes all three of the programming languages listed above. These languages run within the .NET Framework.

You can develop business applications using either Visual Basic .NET or Visual C# .NET.

Both are integrated with the design environment, so the development techniques are similar although the language details vary.

Besides the programming languages listed above, third-party vendors can develop languages for the .NET Framework. However, programs written in these languages can't be developed from within Visual Studio .NET.

The components of the .NET Framework

The .NET Framework provides a common set of services that application programs written in a .NET language such as Visual Basic .NET can use to run on various operating systems and hardware platforms. The .NET Framework is divided into two main components: the .NET Framework Class Library and the Common Language Runtime. The .NET Framework Class Library consists of segments of pre-written code, called classes, that provide many of the functions that you need for developing .NET applications. For instance, the Windows Forms classes are used for developing Windows Forms applications, and the ASP.NET classes are used for developing Web Forms applications. Other classes let you work with databases, manage security, access files, and perform many other functions. The classes in the .NET Framework Class Library are organized in a hierarchical structure. Within this structure, related classes are organized into groups called namespaces. Each namespace contains the classes used to support a particular function. For example, the System.Windows.Forms namespace contains the classes used to create forms, and the System.Data namespace contains the classes you use to access data. The Common Language Runtime, or CLR, provides the services that are needed for executing any application that is developed with one of the .NET languages. This is possible because all of the .NET languages compile to a common intermediate language. The CLR also provides the Common Type System, which defines the data types used by all the .NET languages. That way, you can use more than one of the .NET languages to develop a single application without worrying about incompatible data types.
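As a brief sketch of how namespaces are used in practice (these are standard Framework namespaces; the project must reference System.Data.dll and System.Windows.Forms.dll):

```vb
' Importing namespaces makes their classes usable without full qualification.
Imports System.Data           ' data-access classes
Imports System.Windows.Forms  ' form and control classes

Module NamespaceDemo
    Sub Main()
        Dim clients As New DataTable("Clients")  ' from System.Data
        Dim saveButton As New Button()           ' from System.Windows.Forms
        saveButton.Text = "Save"
        Console.WriteLine(clients.TableName & " / " & saveButton.Text)
    End Sub
End Module
```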

Description

.NET applications do not access the operating system or computer hardware directly. Instead, they use services of the .NET Framework, which in turn access the operating system and hardware.

The .NET Framework consists of two main components: the .NET Framework Class Library and the Common Language Runtime.

The .NET Framework Class Library provides pre-written code in the form of classes that are available to all of the .NET programming languages. This class library consists of hundreds of classes, but you can create simple .NET applications once you learn how to use just a few of them.

The Common Language Runtime, or CLR, is the foundation of the .NET Framework. It manages the execution of .NET programs by coordinating essential functions such as memory management, code execution, security, and other services. Because .NET applications are managed by the CLR, they are called managed applications.

The Common Type System is a component of the CLR that ensures that all .NET applications use the same basic data types regardless of what programming languages were used to develop the applications.

The Common Language Runtime

Visual Basic has always used a runtime, so it may seem strange to say that the biggest change to VB that comes with .NET is the change to a Common Language Runtime (CLR) shared by all .NET languages. The reason is that while on the surface the CLR is a runtime library just like the C runtime library, MSVCRTXX.DLL, or the VB runtime library, MSVBVMXX.DLL, it is much larger and has greater functionality. Because of its richness, writing programs that take full advantage of the CLR often feels like writing for a whole new operating system API. Since all languages that are .NET-compliant use the same CLR, there is no need for a language-specific runtime. What is more, CLR-compliant code can be written in any language and still be used equally well by all .NET languages.

Your VB code can be used by C# programmers and vice versa with no extra work. Next, there is a common file format for .NET executable code, called Microsoft Intermediate Language (MSIL, or just IL). MSIL is a semi-compiled language that gets compiled into native code by the .NET runtime at execution time. This is a vast extension of what existed in all versions of VB prior to version 5. VB apps used to be compiled to p-code (or pseudo-code, a machine language for a hypothetical machine), which was an intermediate representation of the final executable code.

The various VB runtime engines interpreted the p-code when a user ran the program. People always complained that VB was too slow because of this, and therefore constantly begged Microsoft to add native compilation to VB. This happened starting in version 5, when you had a choice of p-code (small) or native code (bigger but presumably faster). The key point is that .NET languages combine the best features of a p-code language with the best features of compiled languages. By having all languages write to MSIL, a kind of p-code, and then compiling the resulting MSIL to native code, it is relatively easy to achieve cross-language compatibility. But by ultimately generating native code you still get good performance.

Completely Object Oriented

The object-oriented features in VB5 and VB6 were (to be polite) somewhat limited. One key issue was that these versions of VB could not automatically initialize the data inside a class when creating an instance of the class. This led to classes being created in an indeterminate (potentially buggy) state and required the programmer to exercise extra care when using objects. To resolve this, VB .NET adds an important feature called parameterized constructors.

Another problem was the lack of true inheritance. Inheritance is a form of code reuse where you use certain objects that are really more specialized versions of existing objects. Inheritance is thus the perfect tool when building something like a better textbox based on an existing textbox. In VB5 and 6 you did not have inheritance, so you had to rely on a fairly cumbersome wizard to help make the process of building a better textbox tolerable.

Automatic Garbage Collection: Fewer Memory Leaks

Programmers who used Visual Basic always had a problem with memory leaks from what are called circular references. (A circular reference is when you have object A referring to object B and object B referring to object A.) Assuming this kind of code was not there for a reason, there was no way for the VB compiler to realize that this circularity was not significant. This meant that the memory for these two objects was never reclaimed. The garbage collection feature built into the .NET CLR eliminates this problem of circular references using much smarter algorithms to determine when circular references can be cut and the memory reclaimed. Of course, this extra power comes at a cost.
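The following sketch shows the kind of circular reference that leaked memory under reference counting but is reclaimed by the CLR's garbage collector:

```vb
Public Class Node
    Public Partner As Node
End Class

Module GcDemo
    Sub Main()
        Dim a As New Node()
        Dim b As New Node()
        a.Partner = b
        b.Partner = a     ' circular reference: a <-> b
        a = Nothing
        b = Nothing       ' the cycle is now unreachable
        GC.Collect()      ' normally unnecessary; shown only to make the point
        Console.WriteLine("cycle collected without a leak")
    End Sub
End Module
```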

Structured Exception Handling

All versions of Visual Basic use a form of error handling that dates back to the first Basic, written almost 40 years ago. To be charitable, it had problems. To be uncharitable (but, we feel, realistic), it is absurd to use On Error GoTo, with all the spaghetti-code problems that ensue, in a modern programming language. Visual Basic .NET adds structured exception handling, the most modern and most powerful means of handling errors.
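A short sketch of the structured form:

```vb
' Try...Catch...Finally replaces On Error GoTo.
Module ErrorDemo
    Sub Main()
        Try
            Dim values() As Integer = {1, 2, 3}
            Console.WriteLine(values(10))             ' index out of range
        Catch ex As IndexOutOfRangeException
            Console.WriteLine("Bad index: " & ex.Message)
        Finally
            Console.WriteLine("Cleanup always runs")  ' executes either way
        End Try
    End Sub
End Module
```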

True Multithreading

Multithreaded programs seem to do two things at once. E-mail programs that let you read old e-mail while downloading new e-mail are good examples. Users expect such applications, but you could not write them easily in earlier versions of VB.
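The e-mail scenario can be sketched with the System.Threading namespace (the Download routine merely sleeps to stand in for real work):

```vb
Imports System.Threading

Module ThreadDemo
    Sub Main()
        ' The worker thread runs Download while Main carries on.
        Dim worker As New Thread(AddressOf Download)
        worker.Start()
        Console.WriteLine("Reading old e-mail on the main thread...")
        worker.Join()       ' wait for the background work to finish
    End Sub

    Sub Download()
        Thread.Sleep(100)   ' stands in for downloading new e-mail
        Console.WriteLine("Download finished")
    End Sub
End Module
```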

How a Visual Basic application is compiled and run

When using Visual Basic .NET, an application is compiled and run as follows. To start, you use Visual Studio .NET to create a project, which is made up of one or more source files that contain Visual Basic statements. Most simple projects consist of just one source file, but more complicated projects can have several. A project may also contain other types of files, such as sound files, image files, or simple text files. A solution is a container for projects, which you'll learn more about in a moment. You use the Visual Basic compiler, which is built into Visual Studio, to compile your Visual Basic source code into Microsoft Intermediate Language (MSIL), or Intermediate Language (IL) for short. At this point, the Intermediate Language is stored on disk in a file that is called an assembly. In addition to the IL, the assembly includes references to the classes that the application requires. The assembly can then be run on any PC that has the Common Language Runtime installed on it. When the assembly is run, the CLR converts the Intermediate Language to native code that can be run by the Windows operating system. Although the CLR is only available for Windows systems right now, it is possible that the CLR will eventually be available for other operating systems as well. In other words, the Common Language Runtime makes platform independence possible. If, for example, a CLR is developed for the Unix and Linux operating systems, Visual Basic applications will be able to run on those operating systems as well as on Windows.

Description

1. The programmer uses Visual Studio's Integrated Development Environment to create a project, which includes one or more Visual Basic source files. In some cases, a project may contain other types of files, such as graphic image files or sound files. A solution is a container that holds projects. Although a solution can contain more than one project, the solution for most simple applications contains just one project, so you can think of the solution and the project as essentially the same thing.

2. The Visual Basic compiler translates or builds the source code into Microsoft Intermediate Language (MSIL), or just Intermediate Language (IL). This language is stored on disk in an assembly that also contains references to the classes that the application requires. An assembly is simply an executable file that has an .exe or .dll extension.

3. The assembly is then run by the .NET Framework's Common Language Runtime. The CLR manages all aspects of how the assembly is run, including converting the Intermediate Language to native code that can be run by the operating system, managing memory for the assembly, enforcing security, and so on.

The VB .NET IDE: Visual Studio .NET

The concept of a rapid application development (RAD) tool with controls that you drag onto forms is certainly still there, and pressing F5 will still run your program, but much has changed, and mostly for the better. For example, the horrid Menu Editor that essentially had been unchanged since VB1 has been replaced by an in-place menu editing system that is a dream to use.

Also, VB .NET, unlike earlier versions of VB, can build many kinds of applications other than just GUI-intensive ones. For example, you can build Web-based applications, server-side applications, and even console-based (in what looks like an old-fashioned DOS window) applications. Moreover, there is finally a unified development environment for all of the Visual languages from Microsoft. The days when there were different IDEs for VC++, VJ++, Visual InterDev, Visual Basic, and DevStudio are gone. Another nice feature of the new IDE is the customization possible via an enhanced extensibility model. VS .NET can be set up to look much like the IDE from VB6, or any of the other IDEs, if you like those better.

VB .NET is the first fully object-oriented version of VB.

Introduction to OOP

OOP is a vast extension of the event-driven, control-based model of programming used in early versions of VB. With VB .NET, your entire program will be made up of self-contained objects that interact. These objects are stamped out from factories called classes. These objects will:

Have certain properties and certain operations they can perform.

Not interact with each other in ways not provided by your code's public interface.

Only change their current state over time, and only in response to a specific request. (In VB .NET this request is made through a property change or a method call.)

The point is as long as the objects satisfy their specifications as to what they can do (their public interface) and thus how they respond to outside stimuli, the user does not have to be interested in how that functionality is implemented. In OOP-speak, you only care about what objects expose.

Classes As User-Defined Types

Another way to approach classes is to think of them as an extension of user-defined types where, for example, the data stored inside can be validated before any changes take place. Similarly, a class is able to validate a request to return data before doing so. Finally, imagine a type that has methods to return data in a special form rather than simply spewing out the internal representation. From this point of view, an object is simply a generalization of a specific (data-filled) user-defined type with functions attached to it for data access and validation. The key point you need to keep in mind is that:

You are replacing direct access to data by various kinds of function calls that do the work.

For example, in a user-defined type such as this:

Type EmployeeInfo
    Name As String
    SocialSecurityNumber As String
    Address As String
End Type

the pseudocode that makes this user-defined type "smart" would hide the actual data and have functions instead to return the values. The pseudocode might look like this:

Class EmployeeInfo
    (hidden) Name As String - functions instead validate, return, and change the name
    (hidden) SocialSecurityNumber As String - functions instead validate, return, and change the Social Security number
    (hidden) Address As String - functions instead validate, return, and change the address, and also return it in a useful form
End Class
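The pseudocode above translates into a real VB.NET class along these lines (the validation rules are illustrative assumptions):

```vb
Public Class EmployeeInfo
    ' Hidden instance fields: all access goes through validating properties.
    Private _name As String
    Private _ssn As String

    Public Property Name() As String
        Get
            Return _name
        End Get
        Set(ByVal value As String)
            If value Is Nothing OrElse value.Trim().Length = 0 Then
                Throw New ArgumentException("Name is required")
            End If
            _name = value
        End Set
    End Property

    Public Property SocialSecurityNumber() As String
        Get
            Return _ssn
        End Get
        Set(ByVal value As String)
            ' Illustrative check: expects the 123-45-6789 form.
            If value Is Nothing OrElse value.Length <> 11 Then
                Throw New ArgumentException("Use the form 123-45-6789")
            End If
            _ssn = value
        End Set
    End Property
End Class
```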

How Should Objects Interact

One key practice in OOP is making each class (an object factory) responsible for carrying out only a small set of related tasks. You spend less time designing and debugging a class when it builds small objects that perform relatively few tasks, rather than one architected with complex internal data and many properties and methods to manipulate that data. If an object needs to do something that is not its responsibility, make a new class whose objects are optimized for that task instead of adding the code to the first object and thus complicating it. If you give the first object access to the second type of object, the first object can ask the second to carry out the required task.

Abstraction

Abstraction is a fancy term for building a model of an object in code. In other words, it is the process of taking concrete day-to-day objects and producing a model of the object in code that simulates how the object interacts in the real world. For example, the first object-oriented language was called Simula, because it was invented to make simulations easier. Of course, the more modern ideas of virtual reality carry abstraction to an extreme.

Abstraction is necessary because:

You cannot use OOP successfully if you cannot step back and abstract the key issues from your problem.

Always ask yourself: What properties and methods will I need to mirror in the object's code so that my code models the situation well enough to solve the problem?

Encapsulation

Encapsulation is the formal term for what we used to call data hiding. It means: hide the data, but define properties and methods that let people access it. Remember that OOP succeeds only if you manipulate data inside objects, only sending requests to the object. The data in an object is stored in its instance fields. Other terms you will see for the variables that store the data are member variables and instance variables. All three terms are used interchangeably, and which you choose is a matter of taste; we usually use instance fields. The current values of these instance fields for a specific object define the object's current state. Keep in mind that you should:

Never ever give anyone direct access to the instance fields.
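A minimal Python sketch of this rule (the `Account` class and its rules are invented for illustration): the instance field sits behind a read-only property, and the only way to change the state is through a validating method.

```python
class Account:
    """The instance field _balance defines this object's state; it is hidden."""

    def __init__(self, balance=0.0):
        self._balance = balance          # instance field, never exposed directly

    @property
    def balance(self):                   # read access is allowed through a property...
        return self._balance

    def deposit(self, amount):           # ...but changes must go through a method
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self._balance += amount
```

Because `balance` has no setter, an attempt to assign to it fails, so callers cannot bypass the validation in `deposit`.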

Inheritance

As an example of inheritance, imagine specializing the Employee class to get a Programmer class, a Manager class, and so on. Classes such as Manager would inherit from the Employee class. The Employee class is called the base (or parent) class, and the Manager class is called the child class. Child classes are:

Always more specialized than their base (parent) classes.

Guaranteed to have at least as many members as their parent classes (although the behavior of an individual member may be very different).
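A brief Python sketch of this parent/child relationship (the names and fields are invented): `Manager` inherits everything `Employee` has, adds a member of its own, and redefines the behavior of an inherited member.

```python
class Employee:
    """Base (parent) class."""

    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

    def describe(self):
        return f"{self.name}, employee"


class Manager(Employee):                     # child class: more specialized
    def __init__(self, name, salary, reports):
        super().__init__(name, salary)       # keeps every member of the parent
        self.reports = reports               # plus an extra member the parent lacks

    def describe(self):                      # same member, different behavior
        return f"{self.name}, manager of {len(self.reports)}"
```

A `Manager` still has `name` and `salary`, so any code written for `Employee` objects also works on `Manager` objects.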

Polymorphism

Traditionally, polymorphism (from the Greek for "many forms") means that inherited objects know which methods they should use, depending on where they are in the inheritance chain. For example, as we noted before, an Employee parent class and, therefore, the inherited Manager class both have a method for changing the salary of their object instances. However, the RaiseSalary method probably works differently for individual Manager objects than for plain old Employee objects. The way polymorphism works in the classic situation where a Manager class inherits from an Employee class is that an Employee object would know whether it were a plain old employee or really a manager. When it got the word to use the RaiseSalary method, then:

If it were a Manager object, it would call the RaiseSalary method in the Manager class rather than the one in the Employee class.

Otherwise, it would use the usual RaiseSalary method.
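The dispatch described above can be sketched in Python (the raise amounts are invented): each object in the list runs the version of the method that matches its actual class, not the declared parent type.

```python
class Employee:
    def __init__(self, salary):
        self.salary = salary

    def raise_salary(self):
        self.salary += 100        # flat raise for plain employees (illustrative figure)


class Manager(Employee):
    def raise_salary(self):       # overrides the inherited method
        self.salary += 500        # managers get a bigger raise (illustrative figure)


staff = [Employee(1000), Manager(1000)]
for person in staff:
    person.raise_salary()         # each object "knows" which method to run
```

The loop calls the same method name on both objects, yet the `Manager` object picks its own `raise_salary`; that is polymorphism at work.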

Advantages to OOP

At first glance, the OOP approach that leads to classes and their associated methods and properties is much like the structured approach that leads to modules. But the key difference is that:

Classes are factories for making objects whose states can diverge over time.

Sound too abstract? Sound as though it has nothing to do with VB programming?

Well, this is exactly what the Toolbox is! Each control on the Toolbox in earlier versions of VB was a little factory for making objects that are instances of that control's class. Suppose the Toolbox were not a bunch of little class factories waiting to churn out new textboxes and command buttons in response to your requests. Can you imagine how convoluted your VB code would have been if you needed a separate code module for each textbox? After all, the same code module cannot be linked into your code twice, so you would have to do some fairly complicated coding to build a form with two identical textboxes whose states can diverge over time.

Windows Forms, Drawing, and Printing

EVERYTHING YOU HEAR ABOUT .NET development in the magazines or online seems to focus on features such as Web Services, using the browser as the delivery platform, ASP .NET, and other Web-based topics. The many, many improvements made to client-side Windows GUI development under .NET using the Visual Studio IDE are barely mentioned. This may sound strange to say of a Microsoft product, but GUI development in Visual Studio is under-hyped; there are, in fact, many improvements that VB programmers have long awaited! Although we agree that using the browser as a delivery platform is clearly becoming more and more important, we also feel pretty strongly that the traditional Windows-based client is not going away. In this chapter, we hope to counterbalance this general trend by showing you the fundamentals of the programming needed to build GUIs in VB .NET. We will not spend a lot of time on how to use the RAD (Rapid Application Development) features of the IDE, or the properties, methods, and events for the various controls in the Toolbox; doing this justice would take a book at least as long as this one. Instead, by concentrating on the programming issues involved, we hope to show you how GUI development in .NET works. At that point, you can look at the documentation as needed or wait for a complete book on GUI development to learn more. After discussing how to program with forms and controls, we take up the basics of graphics programming in VB .NET, which is quite a bit different than it was.

Form Designer Basics

For VB6 programmers, adjusting to how the VS .NET IDE handles forms and controls is pretty simple. You have a couple of new (and very cool) tools that we briefly describe later, but the basic idea of how to work with the Toolbox has not changed very much. (See the sections in this chapter on the Menu Editor and on how to change the tab order, for our two favorite additions.) For those who have never used an older version of the VB IDE, here is what you need to do to add a control to the Form window:

1. Double-click on a control or drag it from the Toolbox to the form in the default size.

2. Position it by clicking inside it and then dragging it to the correct location.

3. Resize it by dragging one of the small square sizing boxes that the cursor points to.

You can also add controls to a form by following these steps:

1. In the Toolbox, click on the control you want to add to your form.

2. Move the cursor to the form. (Unlike earlier versions of VB, the cursor now gives you a clue about which control you are working with.)

3. Click where you want to position the top left corner of the control and then drag to the lower right corner position. (You can then use Shift+ an Arrow key to resize the control as needed.)

For controls without a user interface, such as timers, simply double-click on them. They end up in a tray beneath the form, thus reducing clutter. You can use the Format menu to reposition and resize controls once they are on the form. Of course, many of the items on the Format menu, such as the ones on the Align submenu, make sense only for a group of controls. One way to select a group of controls is to click the first control in the group and then hold down the Control key while clicking the other members you want in the group. At this point they will all show sizing handles but only one control will have dark sizing handles.

MDI Forms

In earlier versions of VB, Multiple Document Interface (MDI) applications required you to decide which form was the MDI parent form at design time. In .NET, you need only set the IsMdiContainer property of the form to True. You create the child forms at design time or at run time via code, and then set their MdiParent properties to reference a form whose IsMdiContainer property is True. This lets you do something that was essentially impossible in earlier versions of VB: change an MDI parent/child relationship at run time. It also allows an application to contain multiple MDI parent forms.

Database Access with VB .NET: ADO .NET

With each version of VB came a different model for accessing a database. VB .NET follows in this tradition with a whole new way of accessing data: ADO .NET. This means ADO .NET is horribly misnamed. Why? Because it is hardly the next generation of ADO! In fact, it is a completely different model for accessing data than classic ADO. In particular, you must learn a new object model based on a DataSet object for your results. (Because they are not tied to a single table, ADO .NET DataSet objects are far more capable than ADO RecordSet objects, for example.) In addition, ADO .NET:

Is designed as a completely disconnected architecture (although the DataAdapter, Connection, Command, and DataReader classes are still connection-oriented).

Does not support server-side cursors. ADO's dynamic cursors are no longer available.

Is XML-based (which lets you work over the Internet, even if the client sits behind a firewall).

Is part of the .NET System.Data.DLL assembly, rather than being language-based.

Is unlikely to support legacy Windows 95 clients.

The other interesting point is that in order to have essential features such as two-phase commit, you need to use Enterprise Services (which is basically COM+/MTS with a .NET wrapper).

In VB6, a typical database application opened a connection to the database and then used that connection for all queries for the life of the program. In VB .NET, database access through ADO .NET usually depends on disconnected (detached) data access. This is a fancy way of saying that you most often ask for the data from a database and then, after your program retrieves the data, the connection is dropped. With ADO .NET, you are very unlikely to have a persistent connection to a data source. (You can continue to use persistent connections through classic ADO using the COM/Interop facilities of .NET with the attendant scalability problems that classic ADO always had.)

Because data is usually disconnected, a typical .NET database application has to reconnect to the database for each query it executes. At first, this seems like a big step backward, but it really is not. The old way of maintaining a connection is not really practical for a distributed world: if your application opens a connection to a database and then leaves it open, the server has to maintain that connection until the client closes it. With heavily loaded servers pushing googols of bits of data, maintaining all those per-client connections is very costly in terms of bandwidth.
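The disconnected pattern can be sketched in Python using the standard sqlite3 module as a stand-in database (the table and column names are invented): connect, pull the rows into memory, and drop the connection immediately.

```python
import sqlite3

def fetch_employees(db_path):
    """Connect, copy the result rows into memory, and disconnect at once.
    The returned list plays the role of ADO .NET's disconnected DataSet."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT name, salary FROM employees").fetchall()
    finally:
        conn.close()          # connection dropped as soon as the data is local
    return rows               # caller works with the data; no connection is open
```

The server holds the connection only for the duration of the query, not for the life of the program.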

System.Data.SqlClient

Retrieving data from a SQL Server database is similar: the syntax for the OleDb and SqlClient namespaces is almost identical. The key difference (aside from the different class names) is the form of the connection string; the example here assumes a test account with a password of apress on a server named Apress. The SQL Server connection string requires the user ID, password, server, and database name. We pass the connection string to get a connection object. Finally, as you can imagine, more complicated SQL queries are easy to construct: just build up the query string one piece at a time.

Software Engineering Process

The attributes of Web-based systems and applications have a profound influence on the Web engineering process that is chosen. If immediacy and continuous evolution are the primary attributes of a Web application, a Web engineering team might choose an agile process model that produces Web application releases in rapid-fire sequence. On the other hand, if the application is to be developed over a longer time period (e.g., a major e-commerce application), an incremental process model can be chosen.

The network-intensive nature of applications in this domain suggests a diverse user population (thereby making special demands on requirements elicitation and modeling) and an application architecture that can be highly specialized. Because Web applications are often content-driven with an emphasis on aesthetics, it is likely that parallel development activities will be scheduled within the Web application process, involving a team of both technical and nontechnical people (e.g., copywriters, graphic designers).

Defining the Framework

Any one of the agile process models (e.g., Extreme Programming, Adaptive Software Development, Scrum) can serve as the basis for such a framework.

To be effective, any engineering process must be adaptive. That is, the organization of the project team, the modes of communication among team members, the engineering activities and tasks to be performed, the information that is collected and created, and the methods used to produce a high-quality product must all be adapted to the people doing the work, the project timeline and constraints, and the problem to be solved. Before we define a process framework for Web engineering, we must recognize that:

1. WebApps are often delivered incrementally. That is, framework activities will occur repeatedly as each increment is engineered and delivered.

2. Changes will occur frequently. These changes may occur as a result of the evaluation of a delivered increment or as a consequence of changing business conditions.

3. Timelines are short. This militates against the creation and review of voluminous engineering documentation, but it does not preclude the simple reality that critical analysis, design, and testing must be recorded in some manner.

Software Model of the Project

The software model used in our project is the incremental model. We used the incremental model because the project was done in increments, or parts, and these parts were tested individually. For example, the candidate registration and music-uploading page was developed first and tested thoroughly; then the registration module was developed and tested individually.

The incremental model combines elements of the linear sequential model with the iterative philosophy of prototyping. The incremental model applies linear sequences in a staggered fashion as time progresses, and each linear sequence produces a deliverable increment of the software. For example, word-processing software may deliver basic file management, editing, and document-production functions in the first increment; more sophisticated editing and document-production capabilities in the second; spelling and grammar checking in the third; advanced page layout in the fourth; and so on. The process flow for any increment can incorporate the prototyping model.

When an incremental model is used, the first increment is often a core product. Hence, basic requirements are met, but supplementary features remain undelivered. The client uses the core product. As a result of this evaluation, a plan is developed for the next increment. The plan addresses improvement of the core features and the addition of supplementary features. This process is repeated following delivery of each increment, until the complete product is produced.

As opposed to prototyping, incremental models focus on the delivery of an operational product after every iteration.

Advantages:

Particularly useful when staffing is inadequate for a complete implementation by the business deadline.

Early increments can be implemented with fewer people. If the core product is well received, additional staff can be added to implement the next increment.

Increments can be planned to manage technical risks. For example, the system may require availability of some hardware that is under development. It may be possible to plan early increments without the use of this hardware, thus enabling partial functionality and avoiding unnecessary delay.

Time Scheduling

Scheduling of a software project does not differ greatly from scheduling of any multitask development effort. Therefore, generalized project scheduling tools and techniques can be applied to software with little modification.

The program evaluation and review technique (PERT) and the critical path method (CPM) are two project scheduling methods that can be applied to software development. Both techniques use a task network description of a project, that is, a pictorial or tabular representation of the tasks that must be accomplished from the beginning to the end of the project. The network is defined by developing a list of all tasks (sometimes called the project work breakdown structure, or WBS) associated with a specific project, and a list of orderings (sometimes called a restriction list) that indicates the order in which tasks must be accomplished.

Both PERT and CPM provide quantitative tools that allow the software planner to:

i) Determine the critical path: the chain of tasks that determines the duration of the project.

ii) Establish most-likely time estimates for individual tasks by applying statistical models.

iii) Calculate boundary times that define a time window for a particular task.

Boundary time calculations can be very useful in software project scheduling. Riggs describes important boundary times that may be discerned from a PERT or CPM network:

The earliest time that a task can begin, when all preceding tasks are completed in the shortest possible time.

The latest time for task initiation before the minimum project completion time is delayed.

The earliest finish: the sum of the earliest start time and the task duration.

The latest finish: the latest start time added to the task duration.

The total float: the amount of surplus time, or leeway, allowed in scheduling tasks so that the network critical path is maintained on schedule.

Boundary time calculations lead to a determination of the critical path and provide the manager with a quantitative method for evaluating progress as tasks are completed. The planner must recognize that effort expended on software does not terminate at the end of development. Maintenance effort, although not easy to schedule at this stage, will ultimately become the largest cost factor. A primary goal of software engineering is to help reduce this cost.
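The boundary-time calculations described above can be sketched in Python as a forward pass (earliest start/finish) and a backward pass (latest start/finish) over a small task network; the four tasks and their durations are invented for illustration.

```python
# Task network: task -> (duration, list of predecessor tasks).
tasks = {
    "A": (2, []),
    "B": (3, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

order = ["A", "B", "C", "D"]          # a topological ordering of the tasks

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for t in order:
    dur, preds = tasks[t]
    ES[t] = max((EF[p] for p in preds), default=0)
    EF[t] = ES[t] + dur

project_end = max(EF.values())        # minimum project completion time

# Backward pass: latest finish (LF) and latest start (LS).
LS, LF = {}, {}
for t in reversed(order):
    succs = [s for s in order if t in tasks[s][1]]
    LF[t] = min((LS[s] for s in succs), default=project_end)
    LS[t] = LF[t] - tasks[t][0]

# Total float: surplus time per task; tasks with zero float form the critical path.
total_float = {t: LS[t] - ES[t] for t in order}
critical_path = [t for t in order if total_float[t] == 0]
```

For this network the critical path is A, C, D (duration 7), and task B has one unit of float: it can slip by one time unit without delaying the project.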

The time schedule for our project was as follows:

Project Analysis: Two Weeks

GUI Designing: Three Weeks

Core Coding and Algorithm: Four Weeks

Testing and Debugging: Two Weeks

Screen Shots

TESTING PROCESSES

All software intended for public consumption should receive some level of testing. The more complex or widely distributed a piece of software is, the more essential testing is to its success. Without testing, you have no assurance that software will behave as expected. The results in a public environment can be truly embarrassing.

For software, testing almost always means automated testing. Automated tests use a programming language to replay recorded user actions or to simulate the internal use of a component. Automated tests are reproducible (the same test can be run again and again) and measurable (the test either succeeds or fails). These two advantages are key to ensuring that software meets product requirements.

Developing a Test Plan

The first step in testing is developing a test plan based on the product requirements. The test plan is usually a formal document that ensures that the product meets the following standards:

Is thoroughly tested. Untested code adds an unknown element to the product and increases the risk of product failure.

Meets product requirements. To meet customer needs, the product must provide the features and behavior described in the product specification. For this reason, product specifications should be clearly written and well understood.

Does not contain defects. Features must work within established quality standards.

Having a test plan helps you avoid ad hoc testing: the kind of testing that relies on the uncoordinated efforts of developers or testers to ensure that code works. The results of ad hoc testing are usually uneven and always unpredictable. A good test plan answers the following questions:

How are tests written? Describe the languages and tools used for testing.

Who is responsible for the testing? List the teams or individuals who write and perform the tests.

When are the tests performed? The testing schedule closely follows the development schedule.

Where are the tests and how are test results shared? Tests should be organized so that they can be rerun on a regular basis.

What is being tested? Measurable goals with concrete targets let you know when you have achieved success.

The test plan specifies the different types of tests that will be performed to ensure that the product meets customer requirements and does not contain defects.

Types of Tests

Unit test: ensures that each independent piece of code works correctly.

Integration test: ensures that all units work together without errors.

Regression test: ensures that newly added features do not introduce errors to other features that are already working.
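A minimal sketch of such automated tests, written with Python's unittest framework (the function under test and its thresholds are invented): one case is a plain unit test, the other guards against the reintroduction of a previously fixed defect, and each test is reproducible and either passes or fails.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100


class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):          # unit test: one independent piece of code
        self.assertEqual(apply_discount(200, 25), 150)

    def test_rejects_bad_percent(self):     # regression test: guards a once-fixed defect
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
```

Running `python -m unittest` replays these checks on demand, giving the reproducible, measurable results the test plan calls for.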