Upload: rajiv-vaddi

Post on 03-Apr-2018


  • 7/28/2019 Mobile Sinks

    1/42

    EFFICIENT DATA COLLECTION IN WIRELESS SENSOR

    NETWORKS WITH PATH-CONSTRAINED MOBILE SINKS

    By

    Rajiv Vaddi

    ABSTRACT

    Recent work has shown that sink mobility along a constrained path can improve the

    energy efficiency in wireless sensor networks. However, due to the path constraint, a mobile sink

    with constant speed has limited communication time to collect data from the sensor nodes

    deployed randomly. This poses significant challenges in jointly improving the amount of data

collected and reducing the energy consumption. To address this issue, we propose a novel data collection scheme, called the Maximum Amount Shortest Path (MASP), which increases network throughput as well as conserves energy by optimizing the assignment of sensor nodes. MASP is

    formulated as an integer linear programming problem and then solved with the help of a genetic

    algorithm. A two-phase communication protocol based on zone partition is designed to

    implement the MASP scheme. We also develop a practical distributed approximate algorithm to

    solve the MASP problem. In addition, the impact of different overlapping time partition methods

    is studied. The proposed algorithms and protocols are validated through simulation experiments

using OMNeT++.

EXISTING SYSTEM

Due to the path constraint, a mobile sink moving at constant speed has limited communication time to collect data from the randomly deployed sensor nodes. In the existing approach, a Shortest Path Tree (SPT) is used to choose subsinks and to relay data from members. Some subsinks with long communication time may end up with few members, so the mobile sink may collect less data than expected; other subsinks with very short communication time may be assigned too many members, and the excess data traffic oversaturates them, so they cannot transmit all their data to the mobile sink within the limited communication duration. The SPT method therefore has low energy efficiency for data collection, and network throughput is also low, since throughput depends on the amount of data collected and on the number of members belonging to each subsink.

PROPOSED SYSTEM

To improve the amount of data collected and reduce energy consumption, we propose a novel data collection scheme, called the Maximum Amount Shortest Path (MASP), which increases network throughput and reduces energy consumption. Based on the communication range of the mobile sink, the monitored region is divided into two parts: the Direct Communication Area (DCA), whose nodes are called subsinks and transmit data directly to the mobile sink, and the Multihop Communication Area (MCA), whose nodes are called members and transmit data through the subsinks. MASP assigns members to subsinks according to the length of the communication time between the mobile sink and each subsink. The goal of MASP is to find an optimized mapping between members and subsinks that maximizes the amount of data collected while keeping energy consumption minimized under the given conditions.
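The DCA/MCA split described above can be sketched as follows. This is a minimal illustration, assuming the sink moves along the straight line y = 0; the range value and node coordinates are invented for the example and are not taken from the paper.

```java
// Illustrative sketch of the DCA/MCA split: a node whose distance to the
// sink's path is within the communication range is a subsink; all other
// nodes are members. Range and coordinates below are made up.
public class MaspClassifier {
    // Assumed communication range of the mobile sink (arbitrary units).
    static final double COMM_RANGE = 10.0;

    // The sink is assumed to travel along the line y = 0, so the distance
    // from a node to the path is simply |y|.
    static boolean isSubsink(double x, double y) {
        return Math.abs(y) <= COMM_RANGE;
    }

    public static void main(String[] args) {
        double[][] nodes = { {5, 4}, {20, 15}, {12, -8}, {3, 30} };
        for (double[] n : nodes) {
            String role = isSubsink(n[0], n[1]) ? "subsink (DCA)" : "member (MCA)";
            System.out.println("node (" + n[0] + ", " + n[1] + ") -> " + role);
        }
    }
}
```

MASP then only has to decide, for each member, which subsink should relay its data.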

    SYSTEM REQUIREMENTS:

    HARDWARE REQUIREMENTS

    Processor : Any Processor above 500 MHz.

RAM : 128 MB.

Hard Disk : 10 GB.

Compact Disk : 650 MB.

    Input device : Standard Keyboard and Mouse.

    Output device : VGA and High Resolution Monitor.

    SOFTWARE REQUIREMENTS

    Operating System : Windows 2000 and Above.

    Language : JDK 1.6

    Data Bases : MS SQL Server 2005


    Front End : Java Swing

    Efficient Data Collection in Wireless Sensor Networks with Path-Constrained Mobile Sinks

    Modules

1. Assign Member and Subsinks

1.1 Process of Assign Subsinks

1.2 Process of Assign Member

2. Genetic Algorithm Process

2.1 Building Shortest Path Tree process

2.2 Process of Fitness value calculation

2.3 Process of Subsink Confirmation

3. Data Collection

3.1 Zone-Partition

3.2 Data Collection


    Modules Description

    1. Assign Member and Subsinks

    1.1 Process of Assign Subsinks

In this module we assign the subsinks in the network. A node that is within the communication range of the mobile sink is called a subsink. The initial solutions, generated randomly, satisfy some constraints but possibly violate others, so the initial population may contain some infeasible solutions. Our objective function is based on the GA and also serves to improve the feasibility of the solutions.

1.2 Process of Assign Member

In this module we assign the members in the network. A node that is outside the communication range of the mobile sink is called a member. The broadcast message consists of the list of mapping relations between each member and its destination subsink. The optimized member assignment information is disseminated to the entire network.

    2. Genetic Algorithm Process

    2.1 Building Shortest Path Tree process

In this module we build a shortest path tree for each subsink found in the first module. The subsinks start building the shortest path trees (SPTs) rooted at themselves over the entire network. As a result, each node obtains the shortest hop count from itself to every subsink and then sends the related hop information to the corresponding subsink. The mobile sink needs to record the time when each node enters and leaves its communication range. Data collection proceeds in the forward direction.
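The hop counts this module relies on can be sketched with a plain breadth-first search. The real scheme computes this in a distributed fashion; the topology and node ids below are invented for illustration.

```java
import java.util.*;

// Illustrative sketch, not the paper's protocol code: shortest hop counts
// from a subsink to every node via breadth-first search on an adjacency list.
public class HopCount {
    static int[] hopsFrom(int root, List<List<Integer>> adj) {
        int[] hops = new int[adj.size()];
        Arrays.fill(hops, -1);                 // -1 marks "not reached yet"
        hops[root] = 0;
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            for (int v : adj.get(u)) {
                if (hops[v] == -1) {           // first visit = fewest hops
                    hops[v] = hops[u] + 1;
                    queue.add(v);
                }
            }
        }
        return hops;
    }

    // Small made-up topology: subsink 0 -- 1 -- 2, node 3 disconnected.
    static int[] demoHops() {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<Integer>());
        adj.get(0).add(1); adj.get(1).add(0);
        adj.get(1).add(2); adj.get(2).add(1);
        return hopsFrom(0, adj);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(demoHops()));
    }
}
```

Each node would report its hop count to the corresponding subsink, giving the GA the cost information it needs.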

    2.2 Process of Fitness value calculation

In this module we calculate the fitness value for the shortest path tree; it is used to apply the genetic algorithm. Binary tournament selection is used to select parents from the initial population.

First, two pairs of solutions are drawn randomly from the population. Then, for each pair, the solution with the lower fitness value is discarded, and the other one is chosen as one parent for crossover. Here, the fitness value is the only criterion for choosing parents.
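Binary tournament selection can be sketched as below. The two-solution population and its fitness values are dummies; the real fitness would come from the MASP objective.

```java
// Sketch of binary tournament selection: draw two candidate solutions at
// random and keep the fitter one as a parent. Fitness values are dummies.
public class Tournament {
    static int selectParent(double[] fitness, java.util.Random rnd) {
        int a = rnd.nextInt(fitness.length);
        int b = rnd.nextInt(fitness.length);
        return fitness[a] >= fitness[b] ? a : b;  // fitter of the pair wins
    }

    // Count how often the fitter of two dummy solutions gets picked.
    static int winsOfFitter(int trials, long seed) {
        double[] fitness = {0.0, 1.0};
        java.util.Random rnd = new java.util.Random(seed);
        int wins = 0;
        for (int i = 0; i < trials; i++)
            if (selectParent(fitness, rnd) == 1) wins++;
        return wins;
    }

    public static void main(String[] args) {
        System.out.println(winsOfFitter(1000, 42) + "/1000 picks were the fitter solution");
    }
}
```

Because ties and repeated draws are possible, the fitter solution wins most, but not all, tournaments.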

    2.3 Process of Subsink Confirmation

In this module we confirm the subsink that each member communicates with. The solution with the highest unfitness value is replaced by the child solution if the latter has a lower unfitness value, which helps eliminate infeasible solutions from the population more quickly. If all solutions are feasible, with zero unfitness values, the individual with the lowest fitness is replaced by the child solution if the fitness of the latter is higher.
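The unfitness-based replacement step can be sketched as follows; the unfitness values are invented, and in MASP unfitness would measure how badly a solution violates the constraints.

```java
// Sketch of the unfitness-based replacement described above: the solution
// with the highest unfitness is replaced when the child's unfitness is lower.
public class Replacement {
    static boolean replaceMostInfeasible(double[] unfitness, double childUnfitness) {
        int worst = 0;                             // index of highest unfitness
        for (int i = 1; i < unfitness.length; i++)
            if (unfitness[i] > unfitness[worst]) worst = i;
        if (childUnfitness < unfitness[worst]) {
            unfitness[worst] = childUnfitness;     // child takes its place
            return true;
        }
        return false;                              // child discarded
    }
}
```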


    3. Data Collection

    3.1 Zone-Partition

In this module we partition the zone based on the member count. We divide the whole monitored area into several zones, and the MASP scheme is then executed separately in each zone to obtain the optimal assignment of the members to the subsinks.
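A minimal sketch of zone partition, assuming equal-width zones along the sink's path; the width and zone count are arbitrary, and sizing zones by member count, as described above, would be a refinement of this.

```java
// Illustrative zone partition: split the monitored area into equal-width
// zones along the sink's path and run MASP separately in each zone.
public class ZonePartition {
    // x: node position along the path; areaWidth: total width; zones: count.
    static int zoneOf(double x, double areaWidth, int zones) {
        int z = (int) (x / (areaWidth / zones));
        return Math.min(Math.max(z, 0), zones - 1);  // clamp boundary values
    }
}
```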

    3.2 Data Collection

In this module we collect data for transmission. The members send their sensed data, or forward data, to the destination subsinks. To deal with network dynamics caused by node failure or node addition, existing on-demand routing protocols may be used to find the closest valid subsink as a temporary destination for a node that cannot reach its assigned subsink.

USE CASE DIAGRAM


    DATA FLOW DIAGRAM

    Level 0


    Level 1

    Level 2


    ARCHITECTURE DIAGRAM

    CLASS DIAGRAM


    DATABASE DIAGRAM

    ACTIVITY DIAGRAM


    SEQUENCE DIAGRAM


    Functional Requirements:

Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks, or functions the system is required to perform. In product development, it is useful to distinguish between the baseline functionality necessary for any system to compete in that product domain and the features that differentiate the system from competitors' products and from variants in your company's own product line/family. Features may be additional functionality, or may differ from the basic functionality along some quality attribute (such as performance or memory utilization).

One strategy for quickly penetrating a market is to produce the core, or stripped-down, basic product, and to add features to variants of the product to be released shortly thereafter. This

    release strategy is obviously also beneficial in information systems development, staging core

    functionality for early releases and adding features over the course of several subsequent

    releases.

    In many industries, companies produce product lines with different cost/feature variations per

    product in the line, and product families that include a number of product lines targeted at

somewhat different markets or usage situations. What makes these product lines part of a family are some common elements of functionality and identity. A platform-based development approach leverages this commonality, utilizing a set of reusable assets across the family.

    These strategies have important implications for software architecture. In particular, it is not just

    the functional requirements of the first product or release that must be supported by the

    architecture. The functional requirements of early (nearly concurrent) releases need to be

    explicitly taken into account. Later releases are accommodated through architectural qualities

    such as extensibility, flexibility, etc. The latter are expressed as non-functional requirements.

    Use cases have quickly become a widespread practice for capturing functional requirements.

    This is especially true in the object-oriented community where they originated, but their

    applicability is not limited to object-oriented systems.


    Non Functional Requirements

1. Usability

The purpose is to establish the usability requirements, which can be tested later in the system development process. This section lists all requirements relevant to the usability of the system.

    1.1 Attractiveness

    The screen layout and color of the system must be attractive and appealing to the users.

    The system should guide the users with helpful information and all keyboard shortcuts

    will be customized.

1.2 Operability

The second part of the usability requirements concerns system operability. This section explains the actions or other methods that should be taken if any errors occur. The operability requirements consist of the following:

Undo should be available for most actions

The system must be customizable to meet specific user needs

The interface actions and layout should be consistent

Error messages should explain to users how to recover from the error

1.3 Learnability

Another part of the usability requirements is learnability. It consists of the following:

The system should be easy for users to learn

The help information should explain how to achieve common tasks


1.4 Understandability

This is the last part that should be considered a relevant requirement to include in system development. Interface elements such as menus should be easy to understand and use.

2. Reliability

Reliability requirements are typically part of a technical specifications document. Reliability metrics are measurable by test during product development.

2.1 Mean Time between Failures

Our system needs to exceed the users' expectations. The mean time between failures shall exceed our expected hours.

2.2 Availability

Our system shall be available to users 24 hours a day, 7 days a week.

    3. Performance

    The performance characteristics can be described with performance constraints, design

    constraints or quality constraints.

3.1 Design Constraints

The developer has limited time to develop the system, which constrains the functions, programming language, and materials that can be incorporated.

3.2 Database Access Response Time

The time taken to access forms and the catalog database from the system should be less than 10 seconds.


3.3 Transactions Response Time

Any transaction done by users in the system should complete in no more than 3 minutes.

    ABOUT THE SOFTWARE

    The various technologies used in the software system are

    Client-Server Architecture:

    Over the years there have been 3 different approaches to application development

    Traditional Approach

    Client/Server Approach

    Component-based Approach

    In a traditional approach there was a single application that handles the presentation logic,

    business logic, and database interactivity. These applications were also called Monolithic applications.

    The drawback of this approach was that if even a minor change, extension, or enhancement was required

    in the application, the entire application had to be recompiled and integrated again.

Due to the disadvantages of the traditional approach, the client/server architecture (also called 2-tier architecture) was introduced. In this architecture, the data is separated from the client side and stored at a centralized location that acts as a server. The business logic is combined with the presentation logic either at the client side or at the server side, which holds the database connectivity code.

If the business logic is combined with the presentation logic at the client side, the client is called a fat client. If the business logic is combined with the database server, the server is called a fat server.

    Thus a 2-tiered architecture divides an application into 2 pieces:

    The GUI (client)

    Database (server).

    The client sends a request to the server. The server being a more powerful machine does all the

    fetching and processing and returns only the desired result set back to the client for the finishing touches.


    In effect, weaker machines virtually have a shared access to the strength of the server at the back-end.

    Faster execution at the server side results in less network traffic and increased response time for the

    program to fetch and process the data.

    However the client/server architecture also had certain disadvantages:

    Any change in the business policies required a change in the business logic. To change the

    business logic, either the presentation logic or database connectivity code has to be changed,

    depending on the location of the business logic.

    Applications developed using 2-Tier architecture might be difficult to scale up because of the

    limited number of database connections available to the client. Connections requests beyond a

    particular limit are simply rejected by the server.

    The disadvantages with the client/server architecture led to the development of

    the 3-Tier Architecture. In 3-Tier architecture, the presentation logic resides at the client-

    side, the database is controlled by the server-side, and the business logic resides between the

    two layers. This business logic layer is referred to as the application server (also called

    middle-tier of component based architecture).This type of architecture is also called server-

    centric.

    Since the middle-tier handles the business logic, the work load is balanced

    between the client, the database server, and the server handling the business logic. This

    architecture also provides efficient data access. The problem with database connection

    limitation is minimized since the database sees only the business logic layer and not all its

    clients. In the case of a two-tier application, a database connection is established early and is

maintained, whereas in a three-tier application, a database connection is established only when data access is required and is released as soon as the data is retrieved or sent to the server.

The application where the presentation logic, the business logic, and the database reside on multiple computers is called a distributed application.


    3.3.1 Features of Java Programming Language

    Java technology is both a programming language and a platform.

    The Java Programming Language

    The Java Programming language is a high-level language that can be characterized by all of the

    following buzzwords:

Simple

Architecture neutral

Object oriented

Portable

Distributed

High performance

Multithreaded

Robust

Dynamic

Secure

In the Java programming language, all source code is first written in plain text files ending with the .java extension. Those source files are then compiled into .class files by the Java compiler (javac). A .class file does not contain code that is native to your processor; it instead contains bytecodes, the machine language of the Java Virtual Machine. The Java launcher tool (java) then runs your application with an instance of the Java Virtual Machine.

Because the Java Virtual Machine is available on many different operating systems, the same .class files are capable of running on Microsoft Windows, the Solaris Operating System (Solaris OS), Linux, or Mac OS. Some virtual machines, such as the Java HotSpot Virtual Machine, perform additional steps at runtime to give your application a performance boost. These include various tasks such as finding performance bottlenecks and recompiling (to native code) frequently used sections of your code.
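The source-to-bytecode pipeline described above can be seen with a minimal program (the class name is arbitrary):

```java
// Minimal program for the pipeline above: plain-text source compiled by
// javac into bytecode, then run by the java launcher on any JVM.
public class HelloWorld {
    static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

Compiling with javac HelloWorld.java produces the bytecode file HelloWorld.class, which any JVM can then run with java HelloWorld.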

    The Java Platform

A platform is the hardware or software environment in which a program runs. We have already mentioned some of the most popular platforms, like Microsoft Windows, Linux, Solaris OS, and Mac OS. Most platforms can be described as a combination of the operating system and underlying hardware. The Java platform differs from most other platforms in that it's a software-only platform that runs on top of other hardware-based platforms.


The Java platform has two components:

The Java Virtual Machine

The Java Application Programming Interface (API)

The API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. It is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights some of the functionality provided by the API.

    The following figure depicts how the API and the Java Virtual Machine insulate the

    program from the hardware.

As a platform-independent environment, the Java platform can be a bit slower than native code. However, advances in compiler and virtual machine technologies are bringing performance close to that of native code without threatening portability.

    The general-purpose, high-level Java programming language is a powerful software

    platform. Every full implementation of the Java platform gives you the following features:

Development Tools: The development tools provide everything you'll need for compiling, running, monitoring, debugging, and documenting your applications. As a new developer, the main tools you'll be using are the Java compiler (javac), the Java launcher (java), and the Java documentation tool (javadoc).

Application Programming Interface (API): The API provides the core functionality of the Java programming language. It offers a wide array of useful classes ready for use in your own applications. It spans everything from basic objects, to networking and security, to XML generation and database access. The core API is very large; to get an overview of what it contains, consult the release documentation linked to at the bottom of this page.

Deployment Technologies: The JDK provides standard mechanisms, such as Java Web Start and Java Plug-in, for deploying your applications to end users.

User Interface Toolkits: The Swing and Java 2D toolkits make it possible to create sophisticated Graphical User Interfaces (GUIs).

Integration Libraries: Integration libraries such as IDL, JDBC, JNDI, RMI, and RMI-IIOP enable database access and manipulation of remote objects.


    3.3.2 Features of the Java Technology

Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.

Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people's tested code and introduce fewer bugs.

Develop programs more quickly: Your development time may be as much as twice as fast versus writing the same program in C++. Why? You write fewer lines of code, and it is a simpler programming language than C++.

Avoid platform dependencies: You can keep your program portable by avoiding the use of libraries written in other languages.

Write once, run anywhere: Because Java applications are compiled into machine-independent bytecodes, they run consistently on any Java platform.

Distribute software more easily: With Java Web Start technology, users will be able to launch your applications with a single click of the mouse. An automatic version check at startup ensures that users are always up to date with the latest version of your software. If an update is available, Java Web Start will automatically upgrade their installation.


3.3.3 J2EE in Client-Server:

Java 2 Enterprise Edition is more flexible and secure and allows a high density of data transactions through its powerful implicit middleware services. The J2EE architecture enables systems to execute under a multi-tier client-server architecture and in a distributed environment, enhancing the functional approach of the systems.

    Client Script : HTML, DHTML, JavaScript


    Middleware : JSP

3.3.4 Features of Server-Side Programming (JSP):

Java Server Pages (JSP) technology enables you to mix regular, static HTML with dynamically generated content from servlets. Many Web pages that are built by CGI programs are primarily static, with the parts that change limited to a few small locations. For example, the initial page at most on-line stores is the same for all visitors, except for a small welcome message giving the visitor's name if it is known. But most CGI variations, including servlets, make you generate the entire page via your program, even though most of it is always the same. JSP lets you create the two parts separately. Listing 1.1 gives an example. Most of the page consists of regular HTML, which is passed to the visitor unchanged. Parts that are generated dynamically are marked with special HTML-like tags and mixed right into the page.
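Listing 1.1 itself is not reproduced in this copy. As a rough stand-in (the page content and the name parameter are invented here, not taken from the original listing), a JSP page in that spirit keeps most of the page as static HTML and marks only the welcome message as dynamic:

```jsp
<html>
  <body>
    <h1>Welcome to Our Store</h1>
    <%-- Only this small part is generated dynamically --%>
    <p>Hello<%= request.getParameter("name") != null
                 ? ", " + request.getParameter("name") : "" %>!</p>
  </body>
</html>
```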

    JSP:

    The JavaServer Pages technology offers a number of advantages:

    Write Once, Run Anywhere properties

The Java Server Pages technology is platform independent, in its dynamic Web pages, its Web servers, and its underlying server components. You can author JSP pages on any platform, run them on any Web-enabled application server, and access them from any Web browser. You can also build the server components on any platform and run them on any server.

High quality tool support

The Write Once, Run Anywhere properties of JSP allow the user to choose best-of-breed tools. Additionally, an explicit goal of the Java Server Pages design is to enable the creation of high-quality portable tools.

Separation of dynamic and static content

The Java Server Pages technology enables the separation of static content from the dynamic content that is inserted into the static template. This greatly simplifies the creation of content. This separation is supported by beans specifically designed for the interaction with server-side objects and, especially, by the tag extension mechanism.

    Support for scripting and actions


The Java Server Pages technology supports scripting elements as well as actions. Actions permit the encapsulation of useful functionality in a convenient form that can also be manipulated by tools; scripts provide a mechanism to glue together this functionality in a per-page manner.

Web access layer for N-tier enterprise application architecture(s)

The Java Server Pages technology is an integral part of the Java 2 Platform, Enterprise Edition (J2EE), which brings Java technology to enterprise computing. You can now develop powerful middle-tier server applications, using a Web site that uses JavaServer Pages technology as a front end to Enterprise JavaBeans components in a J2EE-compliant environment.

    JSP Page

A JSP page is a text-based document that describes how to process a request to create a response. The description intermixes template data with some dynamic actions and leverages the Java Platform. The features in the JSP technology support a number of different paradigms for authoring dynamic content; some of them are described in Section 1.6. The next couple of examples only attempt to present the technical components of the JSP specification and do not prescribe good or bad paradigms.

    Overview of JSP Page Semantics

    Translating and Executing JSP Pages

    A JSP page is executed in a JSP container, which is installed on a Web server, or on a web

    enabled application server. The JSP container delivers requests from a client to a JSP page and responses

    from the JSP page to the client. The semantic model underlying JSP pages is that of a servlet; a JSP page

    describes how to create a response object from a request object for a given protocol, possibly creating

    and/or using in the process some other objects.

    All JSP containers must support HTTP as a protocol for requests and responses, but a container

    may also support additional request/response protocols. The default request and response objects are of

    type HttpServletRequest and HttpServletResponse, respectively.

    A JSP page may also indicate how some events are to be handled. In JSP 1.1 only init and

    destroy events can be described; the first time a request is delivered to a JSP page a jspInit() method, if

    present, will be called to prepare the page. Similarly, a JSP container can reclaim the resources used by a

    JSP page at any time that a request is not being serviced by the JSP page by invoking first its jspDestroy()

    method; this is the same life-cycle as that of servlets.


A JSP page is represented at request time by a JSP page implementation class that implements the javax.servlet.Servlet interface. JSP pages are often implemented using a JSP page translation phase that is

    done only once, followed by some request processing phase that is done once per request. The translation

    phase creates the JSP page implementation class. If the JSP page is delivered to the JSP container in

    source form, the translation of a JSP source page can occur at any time between initial deployment of the

    JSP page into the runtime environment of a JSP container and the receipt and processing of a client

    request for the target JSP page.

A JSP page contains some declarations, some fixed template data, some (perhaps nested) action instances, and some scripting elements. When a request is delivered to a JSP page, all these pieces are used to create a response object that is then returned to the client. Usually, the important part of this response object is the result stream.

    SQL SERVER 2005

    DATABASE

    A database management, or DBMS, gives the user access to their data and helps them transform

    the data into information. Such database management systems include dBase, paradox, IMS, and SQL

    Server. These systems allow users to create, update and extract information from their database.

A database is a structured collection of data. Data refers to the characteristics of people, things, and events. SQL Server stores each data item in its own field. In SQL Server, the fields relating to a

particular person, thing or event are bundled together to form a single complete unit of data, called a

record (it can also be referred to as a row or an occurrence). Each record is made up of a number of fields.

    No two fields in a record can have the same field name.

    During an SQL Server Database design project, the analysis of your business needs identifies all

    the fields or attributes of interest. If your business needs change over time, you define any additional

    fields or change the definition of existing fields.
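As an illustration (the table and column names below are hypothetical, not part of this project), a T-SQL definition in which each column is a field and each inserted row is a record:

```sql
-- Illustrative T-SQL only: a hypothetical Customer table in which each
-- column is a field holding one attribute, and each row is one record.
CREATE TABLE Customer (
    CustomerID   INT          NOT NULL,  -- one field per attribute of interest
    CustomerName VARCHAR(50)  NOT NULL,
    City         VARCHAR(50)  NULL,
    JoinDate     DATETIME     NULL
);

-- Each INSERT adds one complete record made up of those fields.
INSERT INTO Customer (CustomerID, CustomerName, City, JoinDate)
VALUES (1, 'A. Smith', 'Chennai', '2005-06-01');
```

If business needs change over time, additional fields are added to, or existing field definitions changed in, this table with ALTER TABLE.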


    SQL Server:

Microsoft SQL Server 2005 automatically tunes many of the server configuration options,

    therefore requiring little, if any, tuning by a system administrator. Although these configuration

    options can be modified by the system administrator, it is generally recommended that these options

    be left at their default values, allowing SQL Server to automatically tune itself based on run-time

    conditions.

    However, if necessary, the following components can be configured to optimize server

    performance:

    SQL Server Memory

    I/O subsystem

    Microsoft Windows NT options

    Indexes are structured to facilitate the rapid return of result sets. The two types of indexes

    that SQL Server supports are clustered and non-clustered indexes. Indexes are applied to one or more

    columns in tables or views. The characteristics of an index affect its use of system resources and its

    lookup performance. The Query Optimizer uses an index if it will increase query performance.

    An index in SQL Server assists the database engine with locating records, just like an index

    in a book helps you locate information quickly. Without indexes, a query causes SQL Server to

    search all records in a table (table scan) in order to find matches. A database index contains one or

    more column values from a table (called the index key) and pointers to the corresponding table

    records. When you perform a query using the index key, the Query Optimizer will likely use an

    index to locate the records that match the query.
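For example, assuming a hypothetical Employee table, the two index types SQL Server supports might be created as follows in T-SQL (the names are illustrative only):

```sql
-- Illustrative T-SQL, assuming a hypothetical Employee table: a clustered
-- index on the key column, and a non-clustered index to speed lookups on a
-- frequently queried column.
CREATE CLUSTERED INDEX IX_Employee_EmpID
    ON Employee (EmpID);

CREATE NONCLUSTERED INDEX IX_Employee_EmpName
    ON Employee (EmpName);

-- The Query Optimizer can now use IX_Employee_EmpName instead of a table scan:
SELECT EmpID, EmpName
FROM Employee
WHERE EmpName = 'A. Smith';
```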

    A B-tree is analogous to an upside-down tree with the root of the tree at the top, the leaf

    levels at the bottom, and intermediate levels in between. Each object in the tree structure is a group

    of sorted index keys called an index page. A B-tree facilitates fast and consistent query performance

    by carefully balancing the width and depth of the tree as the index grows. Sorting the index on the

    index key also improves query performance. All search requests begin at the root of a B-tree and

    then move through the tree to the appropriate leaf level. The number of table records and the size of

    the index key affect the width and depth of the tree. Index key size is called the key width.


    A table that has many records and a large index key width creates a deep and wide B-tree.

    The smaller the tree, the more quickly a search result is returned.

    SQL Server Tables

    SQL Server stores records relating to each other in a table. Different tables are

    created for the various groups of information. Related tables are grouped together to form

    a database.

    Primary Key

    Every table in SQL Server has a field or a combination of fields that uniquely

    identifies each record in the table. The Unique identifier is called the Primary Key, or

simply the Key. The primary key provides the means to distinguish one record from all

others in a table. It allows the user and the database system to identify, locate and refer to

    one particular record in the database.

    Relational Database

    Sometimes all the information of interest to a business operation can be stored in

    one table. SQL Server makes it very easy to link the data in multiple tables. Matching an

    employee to the department in which they work is one example. This is what makes SQL

Server a relational database management system, or RDBMS. It stores data in two or

more tables and enables you to define relationships between the tables.


    Foreign Key

When a field in one table matches the primary key of another table, it is referred to as

    a foreign key. A foreign key is a field or a group of fields in one table whose values

    match those of the primary key of another table.

    Referential Integrity

    Not only does SQL Server allow you to link multiple tables, it also maintains

    consistency between them. Ensuring that the data among related tables is correctly

    matched is referred to as maintaining referential integrity.
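The three ideas (primary key, foreign key, and referential integrity) can be illustrated with a hypothetical pair of tables in T-SQL; the names are invented for this sketch:

```sql
-- Illustrative T-SQL: Department's primary key is referenced by Employee's
-- foreign key, so SQL Server enforces referential integrity between them.
CREATE TABLE Department (
    DeptID   INT         NOT NULL PRIMARY KEY,  -- unique identifier (the Key)
    DeptName VARCHAR(50) NOT NULL
);

CREATE TABLE Employee (
    EmpID   INT         NOT NULL PRIMARY KEY,
    EmpName VARCHAR(50) NOT NULL,
    DeptID  INT         NOT NULL,
    CONSTRAINT FK_Employee_Department
        FOREIGN KEY (DeptID) REFERENCES Department (DeptID)
);

-- An INSERT into Employee with a DeptID that does not exist in Department
-- is rejected; that rejection is referential integrity in action.
```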

    Data Abstraction

    A major purpose of a database system is to provide users with an abstract view of

    the data. This system hides certain details of how the data is stored and maintained. Data

    abstraction is divided into three levels.

    Physical level: This is the lowest level of abstraction at which one describes how the data

    are actually stored.

Conceptual level: At this level of database abstraction, all the attributes, what data

are actually stored, and the entities and relationships among them are described.

    View level: This is the highest level of abstraction at which one describes only part of

    the database.

    Advantages of RDBMS

    Redundancy can be avoided

    Inconsistency can be eliminated

    Data can be Shared


    Standards can be enforced

Security restrictions can be applied

    Integrity can be maintained

    Conflicting requirements can be balanced

    Data independence can be achieved.

    Disadvantages of DBMS

    A significant disadvantage of the DBMS system is cost. In addition to the cost of

purchasing or developing the software, the hardware has to be upgraded to allow for the

    extensive programs and the workspace required for their execution and storage. While

    centralization reduces duplication, the lack of duplication requires that the database be

    adequately backed up so that in case of failure the data can be recovered.

    FEATURES OF SQL SERVER (RDBMS)

    SQL SERVER is one of the leading database management systems (DBMS)

because it is the only database that meets the uncompromising requirements of today's

most demanding information systems. From complex decision support systems (DSS) to

the most rigorous online transaction processing (OLTP) applications, even applications that

require simultaneous DSS and OLTP access to the same critical data, SQL Server leads

the industry in both performance and capability.

SQL SERVER is a truly portable, distributed, and open DBMS that delivers unmatched performance, continuous operation and support for every database.

The SQL SERVER RDBMS is a high-performance, fault-tolerant DBMS which is

specially designed for online transaction processing and for handling large database

applications.


SQL SERVER with the transaction processing option offers two features which

contribute to a very high level of transaction processing throughput:

    The row level lock manager

    Enterprise wide Data Sharing

    The unrivaled portability and connectivity of the SQL SERVER DBMS enables

    all the systems in the organization to be linked into a singular, integrated computing

    resource.

    Portability

    SQL SERVER is fully portable to more than 80 distinct hardware and operating

system platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary

platforms. This portability gives complete freedom to choose the database server platform

    that meets the system requirements.

    Open Systems

    SQL SERVER offers a leading implementation of industry standard SQL. SQL

Server's open architecture integrates SQL SERVER and non-SQL SERVER DBMSs

with the industry's most comprehensive collection of tools, applications, and third-party

software products. SQL Server's open architecture provides transparent access to data

from other relational databases and even non-relational databases.

    Distributed Data Sharing

SQL Server's networking and distributed database capabilities allow access to data

stored on a remote server with the same ease as if the information were stored on a single

    local computer. A single SQL statement can access data at multiple sites. You can store

    data where system requirements such as performance, security or availability dictate.


    Unmatched Performance

    The most advanced architecture in the industry allows the SQL SERVER DBMS

    to deliver unmatched performance.

    Sophisticated Concurrency Control

Real-world applications demand access to critical data. With most database

systems, applications become contention bound, where performance is limited not by

CPU power or by disk I/O, but by users waiting on one another for data access. SQL

Server employs full, unrestricted row-level locking and contention-free queries to

minimize, and in many cases entirely eliminate, contention wait times.

    No I/O Bottlenecks

SQL Server's fast commit, group commit, and deferred write technologies

dramatically reduce disk I/O bottlenecks. While some databases write whole data blocks to

disk at commit time, SQL Server commits transactions with at most one sequential write to the

log file on disk. On high-throughput systems, one sequential write typically commits

multiple transactions as a group. Data read by a transaction remains in shared memory so

that other transactions may access that data without reading it again from disk. Since fast

commit writes all data necessary for recovery to the log file, modified blocks are

written back to the database independently of the transaction commit when flushed from

memory to disk.

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and a business proposal is put forth

    with a very general plan for the project and some cost estimates. During system analysis the

    feasibility study of the proposed system is to be carried out. This is to ensure that the proposed

    system is not a burden to the company. For feasibility analysis, some understanding of the major

    requirements for the system is essential.


    Three key considerations involved in the feasibility analysis are :

    ECONOMICAL FEASIBILITY

    TECHNICAL FEASIBILITY

    SOCIAL FEASIBILITY

    ECONOMICAL FEASIBILITY:

    This study is carried out to check the economic impact that the system will have on the

    organization. The amount of fund that the company can pour into the research and development

of the system is limited. The expenditures must be justified. Thus the developed system is well

    within the budget and this was achieved because most of the technologies used are freely

    available. Only the customized products had to be purchased.

    TECHNICAL FEASIBILITY:

    This study is carried out to check the technical feasibility, that is, the technical

requirements of the system. Any system developed must not place a high demand on the available

technical resources, as this would lead to high demands being placed on the client. The

developed system must have modest requirements, as only minimal or no changes are required

for implementing this system.


    SOCIAL FEASIBILITY:

This aspect of the study checks the level of acceptance of the system by the user. This

    includes the process of training the user to use the system efficiently. The user must not feel

    threatened by the system, instead must accept it as a necessity. The level of acceptance by the

    users solely depends on the methods that are employed to educate the user about the system and

    to make him familiar with it. His level of confidence must be raised so that he is also able to

    make some constructive criticism, which is welcomed, as he is the final user of the system.

    TESTING

    5.1 TEST PROCEDURE

    5.1.1 SYSTEM TESTING

    Testing is performed to identify errors. It is used for quality assurance.

    Testing is an integral part of the entire development and maintenance process. The

goal of testing during this phase is to verify that the specification has been

accurately and completely incorporated into the design, as well as to ensure the

correctness of the design itself. For example, any logic fault in the design must be

detected before coding commences; otherwise the cost of

fixing the fault will be considerably higher. Detection of design faults

can be achieved by means of inspections as well as walkthroughs.

    Testing is one of the important steps in the software development phase.

    Testing checks for the errors, as a whole of the project testing involves the

    following test cases:

    Static analysis is used to investigate the structural properties of the

    Source code.


    Dynamic testing is used to investigate the behavior of the source code

    by executing the program on the test data.

    5.2 TEST DATA AND OUTPUT

    5.2.1 UNIT TESTING

    Unit testing is conducted to verify the functional performance of each

    modular component of the software. Unit testing focuses on the smallest unit of the

    software design (i.e.), the module. The white-box testing techniques were heavily

    employed for unit testing.
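A minimal sketch of such a unit test in plain Java follows. A real project would typically use JUnit, and the clamp method under test here is hypothetical, not code from this project.

```java
// Unit-test sketch: one module (the smallest unit of the design) is
// exercised in isolation. The clamp() method is a hypothetical unit.
public class UnitTestSketch {
    // Hypothetical unit under test: limit a value to the range [lo, hi].
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }

    public static void main(String[] args) {
        // White-box style cases: each branch of clamp() is exercised once.
        check(clamp(5, 0, 10) == 5,   "in range");
        check(clamp(-3, 0, 10) == 0,  "below lower bound");
        check(clamp(42, 0, 10) == 10, "above upper bound");
        System.out.println("all unit checks passed");
    }
}
```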

    5.2.2 FUNCTIONAL TESTS

    Functional test cases involved exercising the code with nominal input

    values for which the expected results are known, as well as boundary values and

    special values, such as logically related inputs, files of identical elements, and

empty files. Three types of tests are used in functional testing:

    Performance Test

    Stress Test

    Structure Test

    5.2.3 PERFORMANCE TEST

    It determines the amount of execution time spent in various parts of the

unit, program throughput, response time, and device utilization by the program

    unit.

    5.2.4 STRESS TEST

A stress test is a test designed to intentionally break the unit. A great

deal can be learned about the strengths and limitations of a program by examining

the manner in which a program unit breaks.


    5.2.5 STRUCTURED TEST

    Structure Tests are concerned with exercising the internal logic of a

program and traversing particular execution paths. A white-box

test strategy was employed to ensure that the test cases could guarantee that all

independent paths within a module have been exercised at least once.

    Exercise all logical decisions on their true or false sides.

    Execute all loops at their boundaries and within their operational

    bounds.

    Exercise internal data structures to assure their validity.

    Checking attributes for their correctness.

    Handling end of file condition, I/O errors, buffer problems and textual

errors in output information.

    5.2.6 INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program

structure while at the same time conducting tests to uncover errors associated with

interfacing; i.e., integration testing is the complete testing of the set of modules

which makes up the product. The objective is to take unit-tested modules and build a

program structure. The tester should identify critical modules. Critical modules should

be tested as early as possible. One approach is to wait until all the units have

passed testing, and then combine and test them. This approach evolved

from unstructured testing of small programs. Another strategy is to construct the product in increments of tested units. A small set of modules is integrated

together and tested, to which another module is added and tested in combination,

and so on. The advantage of this approach is that interface discrepancies can be

easily found and corrected.


    The major error that was faced during the project is linking error. When

    all the modules are combined the link is not set properly with all support files.

    Then we checked out for interconnection and the links. Errors are localized to the

    new module and its intercommunications. The product development can be staged,

and modules integrated as they complete unit testing. Testing is completed when

    the last module is integrated and tested.

5.3 TESTING TECHNIQUES / TESTING STRATEGIES

    5.3.1 TESTING

    Testing is a process of executing a program with the intent of finding an

    error. A good test case is one that has a high probability of finding an as-yet

    undiscovered error. A successful test is one that uncovers an as-yet undiscovered

    error. System testing is the stage of implementation, which is aimed at ensuring

    that the system works accurately and efficiently as expected before live operation

    commences. It verifies that the whole set of programs hang together. System

    testing requires a test consists of several key activities and steps for run program,

    string, system and is important in adopting a successful new system. This is the last

    chance to detect and correct errors before the system is installed for user

    acceptance testing. The software testing process commences once the program is

    created and the documentation and related data structures are designed. Software

    testing is essential for correcting errors. Otherwise the program or the project is not

    said to be complete. Software testing is the critical element of software quality

assurance and represents the ultimate review of specification, design and

coding. Testing is the process of executing the program with the intent of finding

errors. A good test case design is one that has a high probability of finding an as-yet


undiscovered error. A successful test is one that uncovers an as-yet undiscovered

error. Any engineering product can be tested in one of two ways:

    5.3.2 WHITE BOX TESTING

This testing is also called glass box testing. In this testing, by knowing

the internal operation of a product, tests can be

conducted to ensure that all gears mesh, that is, the internal operation

performs according to specification and all internal components have been adequately exercised. It is a test case design method that uses the

control structure of the procedural design to derive test cases. Basis path testing is

a white box technique.

Basis path testing:

Flow graph notation

Cyclomatic complexity

Deriving test cases

Graph matrices
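A small illustration of basis path testing, using a hypothetical method: with two decisions, its cyclomatic complexity is 2 + 1 = 3, so three independent paths must each be exercised at least once.

```java
// Basis path sketch: grade() has two if-decisions, so its cyclomatic
// complexity is 2 + 1 = 3, giving three independent paths to exercise.
// The method is hypothetical, not code from this project.
public class BasisPathSketch {
    static String grade(int score) {
        if (score >= 90) return "A";     // path 1
        if (score >= 60) return "pass";  // path 2
        return "fail";                   // path 3
    }

    public static void main(String[] args) {
        // One test case per independent path.
        System.out.println(grade(95) + "," + grade(70) + "," + grade(40));
        // prints A,pass,fail
    }
}
```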

    5.3.3 BLACK BOX TESTING

In this testing, by knowing the specified functions that a product has been

designed to perform, tests can be conducted that demonstrate each function is

fully operational while at the same time searching for errors in each

function. It fundamentally focuses on the functional requirements of the software.

The steps involved in black box test case design are:

    Graph based testing methods

    Equivalence partitioning

    Boundary value analysis


    Comparison testing
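Boundary value analysis, for instance, can be sketched as follows; the validator and its accepted range of 1 to 100 are hypothetical, chosen only to show how test values cluster at the edges of the input domain.

```java
// Boundary value analysis sketch: for an input accepted in the range 1..100,
// black-box cases are chosen just outside, on, and just inside each boundary.
// The accepts() validator is a hypothetical example.
public class BoundarySketch {
    static boolean accepts(int age) {
        return age >= 1 && age <= 100;
    }

    public static void main(String[] args) {
        int[] cases = {0, 1, 2, 99, 100, 101};  // values around both boundaries
        for (int c : cases) {
            System.out.println(c + " -> " + accepts(c));
        }
    }
}
```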

    5.3.4 SOFTWARE TESTING STRATEGIES:

    A software testing strategy provides a road map for the software

    developer. Testing is a set activity that can be planned in advance and conducted

systematically. For this reason a template for software testing (a set of steps into

which we can place specific test case design methods) should be defined. Any testing strategy should

have the following characteristics:

    Testing begins at the module level and works outward toward the

    integration of the entire computer based system. Different testing techniques are appropriate at different points in time.

    The developer of the software and an independent test group conducts

    testing.

    Testing and Debugging are different activities but debugging must be

    accommodated in any testing strategy.

    5.3.5 INTEGRATION TESTING:

    Integration testing is a systematic technique for constructing the program

structure while at the same time conducting tests to uncover errors associated with interfacing.

Individual modules, which are highly prone to interface errors, should not be

assumed to work instantly when we put them together. The problem, of course, is

putting them together: interfacing. Data may be lost across an interface;

one module's sub-functions, when combined, may not produce the desired major

function; individually acceptable imprecision may be magnified to unacceptable

levels; and global data structures can present problems.


    5.3.6 PROGRAM TESTING:

    The logical and syntax errors have been pointed out by program testing.

A syntax error is an error in a program statement that violates one or more rules

of the language in which it is written. An improperly defined field dimension or

omitted keywords are common syntax errors. These errors are shown through error

messages generated by the computer. A logic error, on the other hand, deals with

incorrect data fields, out-of-range items and invalid combinations. Since the

compiler will not detect logical errors, the programmer must examine the output.

    Condition testing exercises the logical conditions contained in a module. The

possible types of elements in a condition include a Boolean operator, a Boolean

variable, a pair of Boolean parentheses, a relational operator, or an arithmetic

expression. The condition testing method focuses on testing each condition in the

program; the purpose of condition testing is to detect not only errors in the conditions

of a program but also other errors in the program.
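A minimal condition testing sketch in Java: each Boolean sub-condition of a compound predicate is driven to both true and false. The predicate here is hypothetical, not code from this project.

```java
// Condition testing sketch: the compound condition (age >= 18 && member)
// has two sub-conditions; the test cases drive each one to true and false.
// The eligible() predicate is a hypothetical example.
public class ConditionSketch {
    static boolean eligible(int age, boolean member) {
        return age >= 18 && member;  // compound condition under test
    }

    public static void main(String[] args) {
        System.out.println(eligible(20, true));   // both sub-conditions true
        System.out.println(eligible(17, true));   // first sub-condition false
        System.out.println(eligible(20, false));  // second sub-condition false
        // prints true, false, false
    }
}
```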

    5.3.7 SECURITY TESTING:

Security testing attempts to verify that the protection mechanisms built into

a system will, in fact, protect it from improper penetration. The system security

must be tested for invulnerability from frontal attack and must also be tested for

invulnerability from rear attack. During security testing, the tester plays the role of an

individual who desires to penetrate the system.

    5.3.8 VALIDATION TESTING

    At the culmination of integration testing, software is completely

    assembled as a package. Interfacing errors have been uncovered and corrected and

    a final series of software test-validation testing begins. Validation testing can be

    defined in many ways, but a simple definition is that validation succeeds when the


software functions in a manner that is reasonably expected by the customer.

Software validation is achieved through a series of black box tests that

demonstrate conformity with requirements. After a validation test has been

    conducted, one of two conditions exists.

The function or performance characteristics conform to specifications

and are accepted.

A deviation from specification is uncovered and a deficiency list is created.

Deviations or errors discovered at this step in this project are corrected

prior to completion of the project with the help of the user by negotiating to establish a method for resolving deficiencies. Thus the proposed system under

    consideration has been tested by using validation testing and found to be working

    satisfactorily. Though there were deficiencies in the system they were not

    catastrophic.

    5.3.9 USER ACCEPTANCE TESTING

User acceptance of the system is a key factor for the success of any system.

The system under consideration is tested for user acceptance by constantly keeping

in touch with prospective system users at the time of development and making

changes whenever required. This is done with regard to the following points.

    Input screen design.

    Output screen design.
