

    CHAPTER 1

    INTRODUCTION

    1.1 ABOUT THE PROJECT

We propose two intelligent localization schemes for wireless sensor networks. Soft computing plays a crucial role in both schemes. In the first scheme, we consider the edge weight of each anchor node separately and combine these weights to compute the location of a sensor node; the edge weights are modeled by a fuzzy logic system (FLS). In the second scheme, we consider localization as a single problem and approximate the entire mapping from the anchor node signals to sensor locations by a neural network (NN). Simulation and experimental results demonstrate the effectiveness of the proposed schemes by comparing them with previous methods.

    1.2 Problem Statement

In this paper, two range-free localization schemes based on RSS information are presented. They employ soft computing techniques to overcome the limitations of previous range-free localization methods. In the first scheme, localization is decomposed into a collection of individual problems in which we compute the proximity of a sensor node to each anchor node. That is, we consider the edge weight of each anchor node separately and combine these weights to compute the location of the sensor node. The edge weights are modeled by the fuzzy logic system (FLS). In contrast to the first scheme, the second scheme treats localization as a single problem and approximates the whole mapping from the anchor node signals to the locations of sensor nodes by a neural network (NN).

    1.3 Objective

The main objective is to design localization schemes that work within the constraints typical of wireless sensor networks:

Limited hardware: storage, processing, communication, and energy supply (battery power).
Limited support for networking: peer-to-peer topology, unreliable communication, dynamically changing links.
Limited support for software development: real-time tasks that involve dynamic collaboration among nodes, and a software architecture that must be co-designed with the information processing architecture.

    CHAPTER 2


    LITERATURE REVIEW

    2.1 Localization in Cooperative Wireless Sensor Networks: A Review

Localization in Wireless Sensor Networks has become a significant research challenge, attracting many researchers over the past decade. This paper provides a review of basic techniques and state-of-the-art approaches for wireless sensor localization. The challenges and future research opportunities are discussed in relation to the design of collaborative workspaces based on cooperative wireless sensor networks.

    2.2 Localization in Wireless Sensor Networks: A Probabilistic Approach

In this paper we consider a probabilistic approach to the problem of localization in wireless sensor networks and propose a distributed algorithm that helps unknown nodes determine confident position estimates. The proposed algorithm is RF based, robust to range measurement inaccuracies, and can be tailored to varying environmental conditions. The proposed position estimation algorithm considers the errors and inaccuracies usually found in RF signal strength measurements. We also evaluate and validate the algorithm with an experimental testbed. The testbed results indicate that the actual positions of nodes are well bounded by the obtained position estimates despite ranging inaccuracies.


2.3 Robust Node Localization for Wireless Sensor Networks

The node localization problem in wireless sensor networks has received considerable attention, driven by the need to obtain higher location accuracy without incurring a large per-node cost (dollar cost, power consumption, and form factor). Despite the efforts made, no system has emerged as a robust, practical solution for the node localization problem in realistic, complex, outdoor environments. In this paper, we argue that the existing localization algorithms, individually, work well for single sets of assumptions. These assumptions do not always hold, as in the case of outdoor, complex environments. To solve this problem, we propose a framework that allows the execution of multiple localization schemes. This "protocol multi-modality" enables robustness against any single protocol failure due to its assumptions. We present the design of the framework and show a 50% decrease in localization error in comparison with state-of-the-art node localization protocols. We also show that complex, more robust localization systems can be built from localization schemes that have limitations.

2.4 SOFT COMPUTING IN WIRELESS SENSOR NETWORKS

An embedded soft computing approach for wireless sensor networks is suggested. The approach combines embedded fuzzy logic and neural network models for information processing in complex environments with uncertain, imprecise, fuzzy measurement data. It is a generalization of the soft computing concept for embedded, distributed, adaptive systems.

2.5 A Probabilistic Fuzzy Approach for Sensor Location Estimation in Wireless Sensor Networks

Nowadays, wireless sensor networks are widely used in a variety of applications such as military systems, vehicle tracking, disaster management, and environmental monitoring. Accurate estimation of the sensor position can be crucial in many of these applications. In this research, we propose a localization algorithm based on probabilistic fuzzy logic systems (PFLS) for range-free localization. The algorithm utilizes received signal strength (RSS) from the anchor nodes embedded with varying degrees of environmental noise. The proposed system is compared with another algorithm based on fuzzy logic systems (FLS) under varying amounts of noise. Simulation results demonstrate that FLS can be much more accurate than the PFLS method if the environment is noise-free; however, as the environmental noise increases, PFLS achieves better performance.


    CHAPTER 3

    SYSTEM ANALYSIS

    3.1 EXISTING SYSTEM


Bulusu et al. proposed a range-free, proximity-based, coarse-grained localization algorithm. In this method, the anchor nodes broadcast their positions (Xi, Yi) and each sensor node computes its position as the centroid of the positions of all the anchor nodes connected to it:

Xest = (X1 + X2 + ... + XN) / N,   Yest = (Y1 + Y2 + ... + YN) / N

where (Xest, Yest) represents the estimated position of the sensor node and N is the number of anchor nodes connected to the sensor node. This scheme is simple and economical but exhibits a large amount of error. Kim and Kwon proposed an improved version of this method in which the anchor nodes are weighted according to their proximity to the sensor node, and each sensor node computes its position by

Xest = (w1*X1 + w2*X2 + ... + wN*XN) / (w1 + w2 + ... + wN),
Yest = (w1*Y1 + w2*Y2 + ... + wN*YN) / (w1 + w2 + ... + wN).

This method has the weakness that the choice of the weights (w1, w2, ..., wN) is very heuristic, and the performance highly depends on the design of the weights.
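
To make the two estimators above concrete, the following minimal MATLAB sketch computes both the plain centroid of Bulusu et al. and the weighted centroid of Kim and Kwon. The anchor positions and weights are arbitrary placeholders, not values from this project.

    % Hypothetical positions (Xi, Yi) of the N connected anchor nodes.
    X = [0 10 10 0];                 % x-coordinates of the anchors
    Y = [0  0 10 10];                % y-coordinates of the anchors
    N = numel(X);

    % Centroid scheme (Bulusu et al.): simple average of the anchor positions.
    Xest_centroid = sum(X) / N;
    Yest_centroid = sum(Y) / N;

    % Weighted centroid (Kim and Kwon): anchors weighted by proximity.
    % These weights are placeholders; in practice they are chosen heuristically
    % (or, in the proposed scheme, produced by the FLS).
    w = [0.7 0.2 0.05 0.05];
    Xest_weighted = sum(w .* X) / sum(w);
    Yest_weighted = sum(w .* Y) / sum(w);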

    3.2 PROPOSED SYSTEM

We propose two intelligent localization schemes for wireless sensor networks (WSNs). The two schemes introduced in this paper perform range-free localization, utilizing the received signal strength (RSS) from the anchor nodes. Soft computing plays a crucial role in both schemes. In the first scheme, we consider the edge weight of each anchor node separately and combine them to compute the location of sensor nodes; the edge weights are modeled by the fuzzy logic system (FLS). This proposal leads to the following advantages:

Detection: improved signal-to-noise ratio by reducing the average distance between source and sensor.
Energy: a path with many short hops consumes less energy than a path with few long hops.
Robustness: individual sensor node or link failures can be tolerated.

The system is intended for applications such as environmental monitoring (traffic, habitat, security), industrial sensing and diagnostics (manufacturing, supply chains), context-aware computing (intelligent homes), military applications (multi-target tracking), and infrastructure protection (power grids).

    3.3 SYSTEM SPECIFICATION

    3.3.1 HARDWARE SPECIFICATION

Hard disk       : 40 GB
RAM             : 512 MB
Processor speed : 3.00 GHz
Processor       : Pentium IV

3.3.2 SOFTWARE SPECIFICATION

OS       : Windows XP
Language : MATLAB
Version  : 2009

    3.4 SOFTWARE DESCRIPTION


MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran. You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions, available separately) extend the MATLAB environment to solve particular classes of problems in these application areas. MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.

    3.4.1 Key Features


High-level language for technical computing
Development environment for managing code, files, and data
Interactive tools for iterative exploration, design, and problem solving
Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration
2-D and 3-D graphics functions for visualizing data
Tools for building custom graphical user interfaces
Functions for integrating MATLAB-based algorithms with external applications and languages, such as C, C++, Fortran, Java, COM, and Microsoft Excel

MATLAB is a well-respected software environment and programming language created by MathWorks and now available directly from Agilent as an option with most signal generators, signal analyzers, and spectrum analyzers. MATLAB extends the capabilities of Agilent signal analyzers and generators to make custom measurements, analyze and visualize data, create arbitrary waveforms, control instruments, and build test systems. It provides interactive tools and command-line functions for data analysis tasks such as signal processing, signal modulation, digital filtering, and curve fitting. MATLAB has over 1,000,000 users in diverse industries and disciplines, and it is a standard at more than 3,500 colleges and universities worldwide. Three MATLAB configurations are available, ranging from basic MATLAB capabilities that allow acquisition and analysis of data to full support for signal processing, communications, filter design, and automated testing.

    3.4.2 MATLAB Capabilities

Extend the functionality of Agilent signal and spectrum analyzers with MATLAB by analyzing and visualizing measurements, testing modulation schemes, and automating measurements
Excite electronic devices using Agilent signal generators with simple or complex waveforms created in MATLAB
Test the functionality of electronic devices by making measurements with Agilent instruments and comparing them against known baselines in MATLAB
Develop a GUI or application that enables users to perform data analysis or testing
Characterize an electronic device to determine how closely it matches the design
Verify new algorithms or measurement routines using live data from Agilent instruments
Design custom digital filters in MATLAB and apply them to signals acquired from an Agilent instrument

    3.5 INTRODUCTION TO FUZZY LOGIC & FUZZY CONTROL

    * "Fuzzy logic" has become a common buzzword in machine control. However,

    the term itself inspires a certain skepticism, sounding equivalent to "half-baked

    logic" or "bogus logic". Some other nomenclature might have been preferable, but

    15

  • 8/3/2019 Back Prop Doc

    16/40

    it's too late now, and fuzzy logic is actually very straightforward. Fuzzy logic is a

    way of interfacing inherently analog processes, that move through a continuous

    range of values, to a digital computer, that likes to see things as well-defined

    discrete numeric values.

For example, consider an antilock braking system directed by a microcontroller chip. The microcontroller has to make decisions based on brake temperature, speed, and other variables in the system.

The variable "temperature" in this system can be divided into a range of "states", such as "cold", "cool", "moderate", "warm", "hot", and "very hot". Defining the bounds of these states is a bit tricky. An arbitrary threshold might be set to divide "warm" from "hot", but this would result in a discontinuous change when the input value passed over that threshold.

The way around this is to make the states "fuzzy", that is, allow them to change gradually from one state to the next. You could define the input temperature states using "membership functions" such as the following:

With this scheme, the input variable's state no longer jumps abruptly from one state to the next. Instead, as the temperature changes, it loses value in one membership function while gaining value in the next. At any one time, the "truth value" of the brake temperature will almost always be, to some degree, part of two membership functions: 0.6 moderate and 0.4 warm, or 0.7 moderate and 0.3 cool, and so on.

The input variables in a fuzzy control system are in general mapped by sets of membership functions similar to this, known as "fuzzy sets". The process of converting a crisp input value to a fuzzy value is called "fuzzification".

A control system may also have various types of switch, or "ON-OFF", inputs along with its analog inputs, and such switch inputs will of course always have a truth value equal to either 1 or 0, but the scheme can deal with them as simplified fuzzy functions that are either one value or the other.

Given "mappings" of input variables into membership functions and truth values, the microcontroller then makes decisions about what action to take based on a set of "rules", each of the form:

IF brake temperature IS warm AND speed IS not very fast
THEN brake pressure IS slightly decreased.

In this example, the two input variables are "brake temperature" and "speed", which have values defined as fuzzy sets. The output variable, "brake pressure", is also defined by a fuzzy set that can have values like "static", "slightly increased", "slightly decreased", and so on.


This rule by itself is very puzzling, since it looks like it could be used without bothering with fuzzy logic, but remember that the decision is based on a set of rules:

All the rules that apply are invoked, using the membership functions and truth values obtained from the inputs, to determine the result of the rule.
This result in turn will be mapped into a membership function and truth value controlling the output variable.
These results are combined to give a specific ("crisp") answer, the actual brake pressure, a procedure known as "defuzzification".

This combination of fuzzy operations and rule-based "inference" describes a "fuzzy expert system".

Traditional control systems are based on mathematical models in which the control system is described using one or more differential equations that define the system response to its inputs. Such systems are often implemented as "proportional-integral-derivative" (PID) controllers. They are the product of decades of development and theoretical analysis, and are highly effective.

If PID and other traditional control systems are so well developed, why bother with fuzzy control? It has some advantages. In many cases, the mathematical model of the control process may not exist, or may be too "expensive" in terms of computer processing power and memory, and a system based on empirical rules may be more effective.

Furthermore, fuzzy logic is well suited to low-cost implementations based on cheap sensors, low-resolution analog-to-digital converters, and 4-bit or 8-bit one-chip microcontrollers. Such systems can be easily upgraded by adding new rules to improve performance or add new features. In many cases, fuzzy control can be used to improve existing traditional controller systems by adding an extra layer of intelligence to the current control method.

Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing stage, and an output stage. The input stage maps sensor or other inputs, such as switches, thumbwheels, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each, then combines the results of the rules. Finally, the output stage converts the combined result back into a specific control output value.

The most common shape of membership functions is triangular, although trapezoids and bell curves are also used, but the shape is generally less important than the number of curves and their placement. From three to seven curves are generally appropriate to cover the required range of an input value, or the "universe of discourse" in fuzzy jargon.

As discussed earlier, the processing stage is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy control systems have dozens of rules.

Consider a rule for a thermostat:

IF (temperature is "cold") THEN (heater is "high")

This rule uses the truth value of the "temperature" input, which is some truth value of "cold", to generate a result in the fuzzy set for the "heater" output, which is some value of "high". This result is used with the results of other rules to finally generate the crisp composite output. Obviously, the greater the truth value of "cold", the higher the truth value of "high", though this does not necessarily mean that the output itself will be set to "high", since this is only one rule among many.

In some cases, the membership functions can be modified by "hedges" that are equivalent to adjectives. Common hedges include "about", "near", "close to", "approximately", "very", "slightly", "too", "extremely", and "somewhat". These operations may have precise definitions, though the definitions can vary considerably between different implementations. "Very", for one example, squares membership functions; since the membership values are never greater than 1, this narrows the membership function. "Extremely" cubes the values to give greater narrowing, while "somewhat" broadens the function by taking the square root.

In practice, the fuzzy rule sets usually have several antecedents that are combined using fuzzy operators such as AND, OR, and NOT, though again the definitions tend to vary: AND, in one popular definition, simply uses the minimum weight of all the antecedents, while OR uses the maximum value. There is also a NOT operator that subtracts a membership function from 1 to give the "complementary" function.

There are several different ways to define the result of a rule, but one of the most common and simplest is the "max-min" inference method, in which the output membership function is given the truth value generated by the premise.

Rules can be solved in parallel in hardware or sequentially in software. The results of all the rules that have fired are "defuzzified" to a crisp value by one of several methods. There are dozens in theory, each with various advantages and drawbacks.


    The "centroid" method is very popular, in which the "center of mass" of the

    result provides the crisp value. Another approach is the "height" method,

    which takes the value of the biggest contributor. The centroid method favors

    the rule with the output of greatest area, while the height method obviously

    favors the rule with the greatest output value.

    3.5.1 What Are Fuzzy Inference Systems?

Fuzzy inference is the process of formulating the mapping from a given input to an output using fuzzy logic. The mapping then provides a basis from which decisions can be made or patterns discerned. The process of fuzzy inference involves all of the pieces described in the previous sections: membership functions, logical operations, and if-then rules. You can implement two types of fuzzy inference systems in the toolbox: Mamdani-type and Sugeno-type. These two types of inference systems vary somewhat in the way outputs are determined. Fuzzy inference systems have been successfully applied in fields such as automatic control, data classification, decision analysis, expert systems, and computer vision. Because of their multidisciplinary nature, fuzzy inference systems are associated with a number of names, such as fuzzy-rule-based systems, fuzzy expert systems, fuzzy modeling, fuzzy associative memory, fuzzy logic controllers, and simply (and ambiguously) fuzzy systems.


Mamdani's fuzzy inference method is the most commonly seen fuzzy methodology. Mamdani's method was among the first control systems built using fuzzy set theory. It was proposed in 1975 by Ebrahim Mamdani as an attempt to control a steam engine and boiler combination by synthesizing a set of linguistic control rules obtained from experienced human operators. Mamdani's effort was based on Lotfi Zadeh's 1973 paper on fuzzy algorithms for complex systems and decision processes. Although the inference process described in the next few sections differs somewhat from the methods described in the original paper, the basic idea is much the same.

Mamdani-type inference, as defined for the toolbox, expects the output membership functions to be fuzzy sets. After the aggregation process, there is a fuzzy set for each output variable that needs defuzzification. It is possible, and in many cases much more efficient, to use a single spike as the output membership function rather than a distributed fuzzy set. This type of output is sometimes known as a singleton output membership function, and it can be thought of as a pre-defuzzified fuzzy set. It enhances the efficiency of the defuzzification process because it greatly simplifies the computation required by the more general Mamdani method, which finds the centroid of a two-dimensional function. Rather than integrating across the two-dimensional function to find the centroid, you use the weighted average of a few data points. Sugeno-type systems support this type of model. In general, Sugeno-type systems can be used to model any inference system in which the output membership functions are either linear or constant.

    3.5.2 Fuzzify Inputs

The first step is to take the inputs and determine the degree to which they belong to each of the appropriate fuzzy sets via membership functions. In Fuzzy Logic Toolbox software, the input is always a crisp numerical value limited to the universe of discourse of the input variable (in this case the interval between 0 and 10) and the output is a fuzzy degree of membership in the qualifying linguistic set (always the interval between 0 and 1). Fuzzification of the input amounts to either a table lookup or a function evaluation.

This example is built on three rules, and each of the rules depends on resolving the inputs into a number of different fuzzy linguistic sets: service is poor, service is good, food is rancid, food is delicious, and so on. Before the rules can be evaluated, the inputs must be fuzzified according to each of these linguistic sets. For example, to what extent is the food really delicious? Suppose the food at the hypothetical restaurant is rated 8 on a scale of 0 to 10; given the graphical definition of delicious, this rating corresponds to a membership value of 0.7 for the delicious membership function.
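
As a rough sketch of that fuzzification step, the fragment below evaluates an assumed membership function for "delicious" at a food rating of 8; the ramp parameters are invented here and chosen only so that the result matches the 0.7 quoted above.

    % Fuzzification as a function evaluation (assumed membership function).
    delicious = @(food) max(min((food - 1) / 10, 1), 0);   % linear ramp, assumed shape

    food_rating  = 8;
    mu_delicious = delicious(food_rating);                 % returns 0.70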


    3.5.3 Apply Fuzzy Operator

After the inputs are fuzzified, you know the degree to which each part of the antecedent is satisfied for each rule. If the antecedent of a given rule has more than one part, the fuzzy operator is applied to obtain one number that represents the result of the antecedent for that rule. This number is then applied to the output function. The input to the fuzzy operator is two or more membership values from fuzzified input variables. The output is a single truth value.

As described in the Logical Operations section, any number of well-defined methods can fill in for the AND operation or the OR operation. In the toolbox, two built-in AND methods are supported: min (minimum) and prod (product). Two built-in OR methods are also supported: max (maximum) and the probabilistic OR method probor. The probabilistic OR method (also known as the algebraic sum) is calculated according to the equation

probor(a,b) = a + b - ab

In addition to these built-in methods, you can create your own methods for AND and OR by writing any function and setting it to be your method of choice.
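
A minimal MATLAB sketch of these operator choices applied to two fuzzified antecedent values (the numbers are arbitrary):

    a = 0.6;  b = 0.3;               % membership values of two antecedent parts

    and_min   = min(a, b);           % AND via minimum   -> 0.3
    and_prod  = a * b;               % AND via product   -> 0.18
    or_max    = max(a, b);           % OR via maximum    -> 0.6
    or_probor = a + b - a*b;         % probabilistic OR  -> 0.72
    not_a     = 1 - a;               % NOT (complement)  -> 0.4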


    3.5.4 Apply Implication Method

Before applying the implication method, you must determine the rule's weight. Every rule has a weight (a number between 0 and 1), which is applied to the number given by the antecedent. Generally, this weight is 1 (as it is for this example) and thus has no effect at all on the implication process. From time to time you may want to weight one rule relative to the others by changing its weight value to something other than 1.

After the proper weighting has been assigned to each rule, the implication method is implemented. A consequent is a fuzzy set represented by a membership function, which weights appropriately the linguistic characteristics that are attributed to it. The consequent is reshaped using a function associated with the antecedent (a single number). The input for the implication process is a single number given by the antecedent, and the output is a fuzzy set. Implication is implemented for each rule. Two built-in methods are supported, and they are the same functions used by the AND method: min (minimum), which truncates the output fuzzy set, and prod (product), which scales the output fuzzy set.

    3.5.5 Aggregate All Outputs

Because decisions are based on the testing of all of the rules in a FIS, the rules must be combined in some manner in order to make a decision. Aggregation is the process by which the fuzzy sets that represent the outputs of each rule are combined into a single fuzzy set. Aggregation occurs only once for each output variable, just prior to the fifth and final step, defuzzification. The input of the aggregation process is the list of truncated output functions returned by the implication process for each rule. The output of the aggregation process is one fuzzy set for each output variable.

As long as the aggregation method is commutative (which it always should be), the order in which the rules are executed is unimportant. Three built-in methods are supported:

max (maximum)
probor (probabilistic OR)
sum (simply the sum of each rule's output set)

    3.5.6 Defuzzify

The input for the defuzzification process is a fuzzy set (the aggregate output fuzzy set) and the output is a single number. As much as fuzziness helps the rule evaluation during the intermediate steps, the final desired output for each variable is generally a single number. However, the aggregate of a fuzzy set encompasses a range of output values, and so it must be defuzzified in order to resolve a single output value from the set.

Perhaps the most popular defuzzification method is the centroid calculation, which returns the center of the area under the curve. Five built-in methods are supported: centroid, bisector, middle of maximum (the average of the maximum value of the output set), largest of maximum, and smallest of maximum.
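
The following minimal MATLAB sketch applies these defuzzification methods to a sampled aggregate output set; the output range and the trapezoidal aggregate membership function are arbitrary examples, not taken from the project.

    y  = linspace(0, 30, 301);                        % output universe of discourse (assumed)
    mu = min(max(min((y - 5)/10, (25 - y)/5), 0), 1); % trapezoidal aggregate set (arbitrary)

    centroid = sum(y .* mu) / sum(mu);                % center of the area under the curve

    cs       = cumsum(mu);
    bisector = y(find(cs >= cs(end)/2, 1));           % splits the area into two equal halves

    idx = find(mu >= max(mu) - 1e-9);                 % samples where the aggregate is maximal
    som = y(idx(1));                                  % smallest of maximum
    lom = y(idx(end));                                % largest of maximum
    mom = mean(y(idx));                               % middle of maximum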

Fig. 1. An example of the fuzzy logic system (FLS).

    CHAPTER 4

    SYSTEM DESIGN

    4.1 Modules

Fuzzy Rule Generation
Node Distribution
Neural Network
Localization

1. Fuzzy Rule Generation

Fuzzy input variable X: RSS from an anchor node, range [0, RSSmax].
Fuzzy output variable Y: edge weight of each anchor node for a given sensor node, range [0, max].

Rule base for the edge weight:

Rule   IF RSS is    THEN weight is
1      very low     very low
2      low          low
3      medium       medium
4      high         high
5      very high    very high
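
As an illustration of how this rule base can turn an RSS reading into an edge weight, here is a minimal base-MATLAB sketch. It uses evenly spaced triangular input sets and singleton output levels with weighted-average defuzzification; these shapes, RSSmax, and the output levels are assumptions for illustration, not the exact FLS built in the project.

    RSSmax = 100;                                    % assumed maximum RSS value
    rss    = 63;                                     % example RSS reading from one anchor

    % Evenly spaced triangular sets for "very low" ... "very high" RSS.
    tri     = @(t, a, b, c) max(min((t - a) ./ (b - a), (c - t) ./ (c - b)), 0);
    centers = linspace(0, RSSmax, 5);
    width   = RSSmax / 4;

    % Firing strength of each of the five rules.
    strength = zeros(1, 5);
    for r = 1:5
        strength(r) = tri(rss, centers(r) - width, centers(r), centers(r) + width);
    end

    % Singleton output levels for the edge weight ("very low" ... "very high").
    levels = [0 0.25 0.5 0.75 1];

    % Weighted-average defuzzification gives the edge weight for this anchor node.
    edge_weight = sum(strength .* levels) / sum(strength);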


Node Distribution:

The width and length of the area in which the nodes are to be placed are determined, and the nodes are then laid out over that area. In the simulation this is done by the following condition (assuming a 10 x 10 grid with 10-unit spacing):

for i = 1:10, for j = 1:10
    x(j+(i-1)*10) = (i-1)*10;   % x-coordinate of node (i,j) on the grid
    y(j+(i-1)*10) = (j-1)*10;   % y-coordinate of node (i,j) on the grid
end, end

Neural Network:

1. Set the number of network replications.
2. Set the number of hidden units in the network.
3. Set the total number of iterations.
4. Set the number of epochs for the network.
5. Load the rules generated by the fuzzy system into the neural network.
6. Implement the cluster structure.
7. Compute the global training and test errors (a sketch of this training loop follows).
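
A rough sketch of this training procedure is shown below. It assumes the Neural Network Toolbox interface (newff, train, sim) of MATLAB 2009; the data set is synthetic, and the fuzzy-rule loading and cluster structure of steps 5 and 6 are omitted.

    % Synthetic data: columns of P are anchor RSS readings for one sensor node,
    % columns of T are the corresponding true (x, y) positions.
    P = rand(4, 200) * 100;           % 4 anchor RSS values per sensor node (assumed)
    T = rand(2, 200) * 90;            % true positions of the 200 sensor nodes (assumed)

    nReplications = 5;                % step 1: number of network replications
    nHidden       = 10;               % step 2: number of hidden units
    nEpochs       = 300;              % step 4: epochs per training run

    trainErr = zeros(1, nReplications);
    testErr  = zeros(1, nReplications);
    for rep = 1:nReplications
        % Random 70/30 split into training and test sets.
        idx = randperm(size(P, 2));
        nTr = round(0.7 * numel(idx));
        tr  = idx(1:nTr);
        te  = idx(nTr+1:end);

        net = newff(P(:, tr), T(:, tr), nHidden);     % MLP with one hidden layer
        net.trainParam.epochs = nEpochs;
        net = train(net, P(:, tr), T(:, tr));

        % Step 7: global training and test errors (mean squared error).
        trainErr(rep) = mean(mean((sim(net, P(:, tr)) - T(:, tr)).^2));
        testErr(rep)  = mean(mean((sim(net, P(:, te)) - T(:, te)).^2));
    end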

    Fig. 2. An example of the MLP neural networks.

Each sensor node finds its position by the following procedure:

STEP 1: Find the adjacent anchor nodes using connectivity.
STEP 2: Collect the IDs and positions of the anchor nodes and measure their RSSs.
STEP 3: Calculate the edge weight of each anchor node. The edge weight should already be modeled by the FLS; the FLS modeling is explained in the subsequent subsections.
STEP 4: Use the edge weights of all anchor nodes to determine the position of the sensor node. More specifically, when the positions of the anchor nodes are (X1,Y1), (X2,Y2), ..., (Xk,Yk) and their edge weights are w1, w2, ..., wk, the position of the sensor node is estimated as

Xest = (w1*X1 + w2*X2 + ... + wk*Xk) / (w1 + w2 + ... + wk),
Yest = (w1*Y1 + w2*Y2 + ... + wk*Yk) / (w1 + w2 + ... + wk).

Back Propagation:

A back propagation neural network uses a feed-forward topology, supervised learning, and the back propagation learning algorithm. Back propagation is a general-purpose learning algorithm. It is powerful but also expensive in terms of computational requirements for training. A back propagation network with a single hidden layer of processing elements can model any continuous function to any degree of accuracy (given enough processing elements in the hidden layer).

Fig. 3. Back propagation network.

There are literally hundreds of variations of back propagation in the neural network literature, and all claim to be superior to basic back propagation in one way or the other. Indeed, since back propagation is based on a relatively simple form of optimization known as gradient descent, mathematically astute observers soon proposed modifications using more powerful techniques such as conjugate gradient and Newton's methods. However, basic back propagation is still the most widely used variant. Its two primary virtues are that it is simple and easy to understand, and it works for a wide range of problems. The basic back propagation algorithm consists of three steps.

The input pattern is presented to the input layer of the network. These inputs are propagated through the network until they reach the output units. This forward pass produces the actual or predicted output pattern. Because back propagation is a supervised learning algorithm, the desired outputs are given as part of the training vector. The actual network outputs are subtracted from the desired outputs and an error signal is produced. This error signal is then the basis for the back propagation step, whereby the errors are passed back through the neural network by computing the contribution of each hidden processing unit and deriving the corresponding adjustment needed to produce the correct output. The connection weights are then adjusted and the neural network has just learned from an experience.

Two major learning parameters are used to control the training process of a back propagation network. The learn rate is used to specify whether the neural network is going to make major adjustments after each learning trial or only minor adjustments. Momentum is used to control possible oscillations in the weights, which could be caused by alternately signed error signals. While most commercial back propagation tools provide anywhere from 1 to 10 or more parameters for you to set, these two will usually have the most impact on the neural network training time and performance.
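
To make the forward pass, the error signal, and the role of the learn rate and momentum concrete, here is a minimal hand-written MATLAB sketch of back propagation for one hidden layer. The toy data, learning rate, and momentum values are assumptions for illustration, not the settings used in the project.

    % Toy training set: two inputs, one target output per pattern.
    X = rand(2, 100);                 % input patterns (2 x 100)
    T = sum(X);                       % desired outputs (1 x 100)

    nHidden = 8;  lr = 0.05;  momentum = 0.9;  nEpochs = 500;
    W1 = 0.1 * randn(nHidden, 2);  b1 = zeros(nHidden, 1);   % input  -> hidden weights
    W2 = 0.1 * randn(1, nHidden);  b2 = 0;                   % hidden -> output weights
    dW1 = zeros(size(W1));  dW2 = zeros(size(W2));           % previous updates (for momentum)

    for epoch = 1:nEpochs
        % Forward pass: propagate the input patterns to the output unit.
        H = tanh(W1 * X + repmat(b1, 1, size(X, 2)));
        Y = W2 * H + b2;

        % Error signal: desired output minus actual output.
        E = T - Y;

        % Backward pass: contribution of the output and hidden units to the error.
        deltaOut    = E;                                % linear output unit
        deltaHidden = (W2' * deltaOut) .* (1 - H.^2);   % derivative of tanh

        % Gradient-descent updates; momentum damps oscillations in the weights.
        dW2 = momentum * dW2 + lr * (deltaOut * H') / size(X, 2);
        dW1 = momentum * dW1 + lr * (deltaHidden * X') / size(X, 2);
        W2 = W2 + dW2;   b2 = b2 + lr * mean(deltaOut, 2);
        W1 = W1 + dW1;   b1 = b1 + lr * mean(deltaHidden, 2);
    end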


Localization:

Calculate the deviation between the real positions and the estimated positions of the sensor nodes.

    Fig. 5. Distribution of all nodes for the simulation.

Fig. 6. Simulation result of the proposed overall method by NN: (a) result of location estimation; (b) error result.

    CONCLUSION:

Two intelligent range-free localization schemes for wireless sensor networks (WSNs) have been presented. In the proposed methods, the sensor nodes do not need any complicated hardware to obtain distance or TOA/AOA information; they can estimate their positions with only the RSS information between themselves and their neighboring anchor nodes. Soft computing techniques play a crucial role in both schemes. In the first scheme, we consider the edge weight of each anchor node that neighbors a sensor node separately and combine these edge weights to compute the location of the sensor nodes. In the second scheme, the whole mapping from the anchor node signals to the sensor locations is approximated by a neural network.

FUTURE WORK:

Future work includes adapting the proposed localization methods to noisy indoor environments and reducing the time required to optimize the FLS and to train the NN.
