
OCTOPUS - Scalable Routing Protocol for Wireless Ad Hoc Networks

Lily Itkin - Evgeny Gurevich - Inna Vaisband

Lab Chief Engineer: Dr. Ilana David
Instructor: Roie Melamed

Table of Contents

1. Introduction
2. Detailed Description
   2.1. The Octopus Node
   2.2. OctopusAgent - Structure and Main Features
      2.2.1. Octopus Agent Class Diagram
      2.2.2. Main Type Definitions Used in OctopusAgent
      2.2.3. NS2 Packets
      2.2.4. The Octopus Packet Header
      2.2.5. Main Tcl Commands Bound to Octopus Agent
      2.2.6. OctopusAgent::recv Method
      2.2.7. OctopusAgent::forward_pkt Method
      2.2.8. OctopusAgent Modules and Components
         2.2.8.1. Octopus Core Module
            2.2.8.1.1. Core Neighbor Tables Management
            2.2.8.1.2. Core One-Hop Update
            2.2.8.1.3. Core Strip Update
            2.2.8.1.4. Core End node
            2.2.8.1.5. Core Bypassing (Optimization)
            2.2.8.1.6. Core Queue (Optimization)
            2.2.8.1.7. Core Two Hop Neighbor Table (Optimization)
            2.2.8.1.8. Core Neighbor Tables Validation (Optimization)
            2.2.8.1.9. Low Energy (unstable) Nodes (Experiment)
         2.2.8.2. Octopus Find Location Module
            2.2.8.2.1. FL Seek Target
            2.2.8.2.2. FL Reply (Optimization)
            2.2.8.2.3. FL Cache (Optimization)
            2.2.8.2.4. FL Queue (Optimization)
            2.2.8.2.5. FL Step Queue (Optimization)
            2.2.8.2.6. FL Bypass (Optimization)
            2.2.8.2.7. FL Estimated Location (Optimization)
            2.2.8.2.8. FL Forward to Target (Optimization)
            2.2.8.2.9. Multiple Sending Directions (Optimization)
         2.2.8.3. The OctopusDB Class
3. Octopus Agent Installation and Usage
   3.3. Prerequisites
      3.3.1. Development Platform
      3.3.2. NS2
      3.3.3. Octopus Agent Integration into NS2
         3.3.3.1. C++ Source Code Installation
         3.3.3.2. OTcl Source Code Installation
         3.3.3.3. Building NS2 with Octopus Agent
   3.4. Octopus Agent Parameters
      3.4.1. Tcl Parameters
      3.4.2. C++ Parameters
   3.5. Running Octopus Simulations
      3.5.1. Octopus Shell Scripts
      3.5.2. Running a Single Octopus Simulation
         3.5.2.1. single_test.csh CShell Script
         3.5.2.2. Responsibility
         3.5.2.3. Execution
         3.5.2.4. Input Parameters
         3.5.2.5. Output
         3.5.2.6. file.tcl Tcl Script
            3.5.2.6.1. Responsibility
            3.5.2.6.2. Execution
            3.5.2.6.3. Input Parameters
            3.5.2.6.4. Output
      3.5.3. Running Multiple Octopus Simulations
         3.5.3.1. test_all.csh CShell Script
            3.5.3.1.1. Responsibility
            3.5.3.1.2. Execution
            3.5.3.1.3. Input Parameters
            3.5.3.1.4. Output
      3.5.4. Editing Octopus Simulations
      3.5.5. Analyzing Simulation Results
4. Octopus Agent Terms

1. Introduction

Imagine that an Octopus network is spread over the following area:

Figure 1

The Octopus routing protocol divides the area into horizontal and vertical "strips", as follows:

Figure 2

The strip width is defined by the STRIPE_RESOLUTION C++ parameter and can easily be changed (see the C++ Parameters chapter for more information regarding this parameter). Several experiments have been performed to evaluate the optimal strip width that provides maximum effectiveness.

The nodes (shown as laptops on the map) are randomly spread over the grid. Nodes located in the same vertical column are said to belong to the same vertical strip; nodes located in the same horizontal row are said to belong to the same horizontal strip.
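To make the division concrete, the following standalone sketch (illustrative only; not taken from the project sources) shows how a node's vertical and horizontal strip indices could be derived from its coordinates and STRIPE_RESOLUTION. Two nodes belong to the same vertical strip exactly when their vertical indices match, and to the same horizontal strip when their horizontal indices match.

#include <cmath>
#include <cstdio>

// Illustrative value only; the real STRIPE_RESOLUTION is defined in octopus_definitions.h.
const double STRIPE_RESOLUTION = 250.0;

// Index of the vertical strip (column) a node falls into.
int verticalStripIndex(double x)   { return (int)std::floor(x / STRIPE_RESOLUTION); }

// Index of the horizontal strip (row) a node falls into.
int horizontalStripIndex(double y) { return (int)std::floor(y / STRIPE_RESOLUTION); }

int main() {
    // Two nodes in the same column (same vertical strip), but in different rows.
    std::printf("%d %d\n", verticalStripIndex(120.0), verticalStripIndex(240.0));     // 0 0
    std::printf("%d %d\n", horizontalStripIndex(120.0), horizontalStripIndex(600.0)); // 0 2
    return 0;
}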

2. Detailed Description

2.1. The Octopus Node

Each wireless node in NS2 is represented by an instance of a MobileNode object. Among other things, the MobileNode object holds information on the node's current geographic location, the routing protocol used to communicate, the node's velocity, etc. So-called routing Agents, one Agent per protocol, represent the routing protocols in NS2. Each MobileNode object holds its own single instance of the routing Agent. The routing Agent to be used in a given experiment is defined by the input parameters of the simulation and cannot be altered during NS2 execution. For a more detailed description of the NS2 MobileNode structure, see the NS Manual.

[Figure: an NS2 MobileNode and its list of routing agent modules - Octopus Agent, DSDV Agent*, DSR Agent*.]

* DSR Agent and DSDV Agent (as well as many others) were already implemented in NS2.

The Octopus routing Agent is represented by the OctopusAgent class (octopus.h/cc). As stated before, each MobileNode holds one instance of the OctopusAgent object, and the routing itself is done by Octopus Agents of different nodes communicating one with the other.

The following information is retrieved by the OctopusAgent from its parent MobileNode in order to perform the routing:

Current geographical location (in terms of X and Y)

Speed

Energy level

Direction of movement (in terms of start and destination locations)

Implementation Aspect

Each Octopus agent holds, among other members, a reference to its parent MobileNode instance:

MobileNode *node_;

For example, to retrieve the current geographical location, the following calls can be used:

double x = octAgent_->node_->X();
double y = octAgent_->node_->Y();

2.2. OctopusAgent - Structure and Main Features

2.2.1. Octopus Agent Class Diagram

2.2.2. Main Type Definitions Used in OctopusAgent

OctPktType – defines the module and component that an Octopus packet belongs to. OCT_DEFAULT_TYPE is used to identify new, uninitialized packets. For a more detailed description of each value, see the relevant module's description.

enum octpkttypes {OCT_DEFAULT_TYPE = -1,
                  OCT_CORE_HOP_UPDATE = 0,
                  OCT_CORE_STRIPE_UPDATE_TO_NORTH,
                  OCT_CORE_STRIPE_UPDATE_TO_EAST,
                  OCT_CORE_STRIPE_UPDATE_TO_SOUTH,
                  OCT_CORE_STRIPE_UPDATE_TO_WEST,
                  OCT_FIND_LOCATION,
                  OCT_GF,
                  OCT_END_STRIPE_UPDATE,
                  OCT_FIND_LOCATION_REPLY,
                  OCT_BYPASSED,
                  OCT_RETURN_BYPASSED};

typedef octpkttypes OctPktType;

OctRouteEntry – represents Octopus Route Entry, which is the unit

of data Octopus Agents use to store information on other nodes.

typedef struct oct_route_entry {
    int id_;                        // ID of the node
    double xLoc_;                   // Latest X location
    double yLoc_;                   // Latest Y location
    double xLocPrev_;               // Previous X location
    double yLocPrev_;               // Previous Y location
    double timetag_loc_prev_;       // Timestamp of the previous location
    double velocity_;               // Node's speed
    double lastUpdateTime_;         // Timestamp of the last update of this entry
    SquareDirection sqrDirection_;  // The direction in which the node is located
} OctRouteEntry;

2.2.3. NS2 Packets

The communication between nodes in NS2 is performed by simulating the sending and receiving of data packets (packets are represented by the Packet class). Each packet holds several headers, one header for each communication layer. Headers are used to enable data transfer between the same communication layers of different nodes. For further details regarding the packet structure in NS2, see the NS Manual.

2.2.4. The Octopus Packet Header

The Octopus Header (struct oct_header) was added to NS2 packets in order to enable communication between the Octopus Agents of different nodes. The following list describes the Octopus Header fields that are used by all Octopus modules and components (for a description of module-specific Octopus header fields, see the relevant module's description):

double send_time;      // Simulation time at which the current packet is sent
OctPktType octType;    // The module and component that the packet is intended for
double YoctLocation;   // Current sending node's Y location
double XoctLocation;   // Current sending node's X location
int myaddr_;           // Address of the currently transmitting node

2.2.5. Main Tcl Commands Bound to Octopus Agent

NS2 supports controlling routing agents from the Tcl input file. The method responsible for receiving such commands on the C++ side is <agent-class>::command. For further details regarding the handling of Tcl commands in NS2, see the NS Manual.

The OctopusAgent supports the following Tcl commands:

start-octopus – starts the Octopus Agent (triggers proactive data collection).

fl_debug seek_loc <target id> – initiates a Find Location query to locate the <target id> node.

2.2.6. OctopusAgent::recv Method

Each Agent in NS2 has an <agent-name>::recv method. This method is the entry point of all packets into the routing agent.

The OctopusAgent::recv method handles the following tasks:

Adds all valid information to the Find Location Cache (for more information, see the Find Location Module detailed description).

Checks whether the packet was intended to be received by this node (if not, the packet is ignored).

Checks the Octopus Type of the packet and invokes the relevant module and component (in the case of the OCT_DEFAULT_TYPE type, the packet is assumed to have been received from the application layer of this node, therefore the invoked module is Find Location Seek Target).

2.2.7. OctopusAgent::forward_pkt Method

The method handles sending an Octopus packet (including updating

the general fields of the Octopus header with the relevant data).

2.2.8. OctopusAgent Modules and Components

2.2.8.1. Octopus Core Module

The Octopus Core Module represents the proactive part of the Octopus protocol. The module is responsible for managing the neighbor tables of the Octopus Agent.

The following is a list of terms used in the Octopus Core Module:

End Node – (used by Core Strip Update) a node that does not have any neighbors in one (or more) of the four geographical directions in the current strip. End Nodes are the ones that initiate Strip Updates.

Grid – the area covered by the Octopus network and divided into Octopus Strips.

Node's Database – all of a node's Neighbor Tables (One-Hop and Strips).

Radio Range – currently defined to be 250 m; the range within which a wireless transmission of one node may be received by another node.

2.2.8.1.1. Core Neighbor Tables Management

Each node manages five neighbor tables:

One-Hop Neighbors Table – consists of the nodes located within the Radio Range (see Octopus Terms chapter). For example, in Figure 2 there are three such areas:

° Node E's one-hop area contains nodes T and I.

° Node M's one-hop area contains no nodes.

° Node V's one-hop area contains node U.

In this case, the one-hop neighbor table of node E will contain nodes T and I, the one-hop table of M will be empty, and the one-hop table of V will contain node U.

West Strip Neighbors Table – consists of the nodes located to the left (west) of the current node within its horizontal strip.

East Strip Neighbors Table – consists of the nodes located to the right (east) of the current node within its horizontal strip.

North Strip Neighbors Table – consists of the nodes located above (north of) the current node within its vertical strip.

South Strip Neighbors Table – consists of the nodes located below (south of) the current node within its vertical strip.

For example, in Figure 2, the strip tables of node V will be as follows:

West Strip Table: C, T, I, E

East Strip Table: M

South Strip Table: R, K

North Strip Table: J

Note: Nodes located in the one-hop neighbors table do not appear in the strip tables. Thus, node U is not contained in node V's strip tables.

Implementation Aspect

The tables maintained by the Core module are instances of the

OctopusDB class (see OctopusDB Class chapter).

Each node initializes the following data structures before it becomes

active:

OctopusDB * hopTable         = new OctopusDB(HOP_TABLE);
OctopusDB * northStripeTable = new OctopusDB(NORTH_STRIPE_TABLE);
OctopusDB * southStripeTable = new OctopusDB(SOUTH_STRIPE_TABLE);
OctopusDB * westStripeTable  = new OctopusDB(WEST_STRIPE_TABLE);
OctopusDB * eastStripeTable  = new OctopusDB(EAST_STRIPE_TABLE);
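As an illustration of how an update about another node might be routed into one of these five tables, the following standalone sketch (simplified, and not the project's actual code) classifies a neighbor by distance and by strip indices, assuming strip indices are computed from coordinates and STRIPE_RESOLUTION as shown in the Introduction, and that one-hop neighbors are excluded from the strip tables:

#include <cmath>

const double STRIPE_RESOLUTION = 250.0;  // illustrative value
const double RADIO_RANGE       = 250.0;  // as defined in the Octopus Terms chapter

enum TableId { HOP_TABLE, NORTH_STRIPE_TABLE, SOUTH_STRIPE_TABLE,
               WEST_STRIPE_TABLE, EAST_STRIPE_TABLE, NO_TABLE };

static int stripOf(double coord) { return (int)std::floor(coord / STRIPE_RESOLUTION); }

// Decide which table an entry for node (nx, ny) belongs to, as seen from
// the local node located at (myX, myY).
TableId classifyNeighbor(double myX, double myY, double nx, double ny) {
    double dx = nx - myX, dy = ny - myY;
    if (std::sqrt(dx * dx + dy * dy) <= RADIO_RANGE)
        return HOP_TABLE;                              // one-hop neighbors never enter strip tables
    if (stripOf(nx) == stripOf(myX))                   // same vertical strip (column)
        return (ny > myY) ? NORTH_STRIPE_TABLE : SOUTH_STRIPE_TABLE;
    if (stripOf(ny) == stripOf(myY))                   // same horizontal strip (row)
        return (nx > myX) ? EAST_STRIPE_TABLE : WEST_STRIPE_TABLE;
    return NO_TABLE;                                   // not in any of this node's strips
}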

2.2.8.1.2. Core One-Hop Update

Every OCT_BROADCAST_INTERVAL (see Tcl Parameters chapter) seconds, each node broadcasts its ID and location. Each node within its Radio Range (see Octopus Terms chapter) receives this message and updates its one-hop neighbors table accordingly. Experiments have been performed in order to evaluate the optimal OCT_BROADCAST_INTERVAL: on the one hand, a short interval creates congestion and packets are lost; on the other hand, a longer OCT_BROADCAST_INTERVAL decreases the reliability of the data in the neighbor tables.

Implementation Aspect of the Sending Side

When created, the Octopus Core module schedules a location broadcast event in OCT_BROADCAST_INTERVAL seconds (see the NS Manual for information regarding event scheduling in NS2). Octopus scheduled events are handled by the OctopusPeriodicHandler::handle method, which initiates the location broadcast and schedules the next broadcast event.

void OctopusPeriodicHandler::handle(Event *e)
{
    octAgent_->broadcastMyLocation();
    Scheduler::instance().schedule(this, e,
        octAgent_->OCT_BROADCAST_INTERVAL + jitter(0.3,1));
}

Note: the jitter(0.3,1) function produces a random float number, which is added to the time of the next broadcast event. It helps avoid loading the network at every OCT_BROADCAST_INTERVAL (see Tcl Parameters chapter).

The broadcastMyLocation() method creates a new packet and initializes its Octopus header in the following way:

void OctopusAgent::updateHeaderSenderDetailes(hdr_octopus* hdr)
{
    hdr->send_time        = CURRENT_TIME;
    hdr->myaddr_          = myaddr_;               /* sender id */
    hdr->XoctLocation     = node_->X();            /* sender X location */
    hdr->YoctLocation     = node_->Y();            /* sender Y location */
    hdr->velocity_        = node_->speed();        /* sender velocity */
    hdr->XoctLocationPrev = myPrevX_;              /* sender X location in the previous update */
    hdr->YoctLocationPrev = myPrevY_;              /* sender Y location in the previous update */
    hdr->send_timePrev    = my_prev_loc_timetag_;  /* time stamp of the previous update */
    hdr->send_timeq       = CURRENT_TIME;          /* time stamp of the current update */
    return;
}

Implementation Aspect of the Receiving Side

In the case of "Hello" packet, the packet type will be set to

OCT_CORE_HOP_UPDATE. The octopus agent will perform the following

tasks:

Check whether hello packets from the sending node had already

been received in the past.

If sending node ID already appears in the table, all relevant

information, including sending node's location, and the last update

time*, will be updated in the appropriate entry in the table.

Otherwise, new entry will be created in the table. The entry will

hold all the relevant information.

The appropriate case in the recv(Packet* pkt) method looks as follows:

case OCT_CORE_HOP_UPDATE:
    entry = hopTable->findEntry(hdr->myaddr_);
    octopusCore->handleCoreHopUpdate(hdr, entry.xLoc_, entry.yLoc_,
                                     entry.lastUpdateTime_);
    // here appears additional code that handles the GF/CORE/FL queues
    break;

where hdr is a reference to the Octopus header of the received packet. In case the entry is not found in the table, a default entry is returned by the findEntry method, so handleCoreHopUpdate will add a new entry to the table.

*The Last Update Time is saved for each entry in the table in order to support the validation mechanism. It may occur that, after node B has successfully received a "Hello" message from node A, node A moves away from node B so that the two nodes are no longer within each other's Radio Range (see Octopus Terms chapter). In such a case, B's table entry for A should be invalidated after a certain period of time (otherwise B will forever "think" that A is its one-hop neighbor). The timeout for invalidating such entries in the one-hop table is defined by HOP_NEIGHBOR_OUTDATE_INTERVAL (see Tcl Parameters chapter).

2.2.8.1.3. Core Strip Update

Every STRIPE_BROADCAST_INTERVAL (see Tcl Parameters chapter) seconds, strip updates are initiated. Only End Nodes (see Octopus Terms chapter) are able to initiate them. Therefore, every STRIPE_BROADCAST_INTERVAL seconds, each node checks whether it is currently an End Node in any of the four geographical directions. If the node happens to be an End Node, it initiates a strip update.

Regarding the choice of STRIPE_BROADCAST_INTERVAL: on the one hand, a short interval creates congestion and packets are lost; on the other hand, a longer interval decreases the reliability of the neighbor tables' data.

Implementation Aspect of the Sending Side

In case the node happens to be an End Node (see Octopus Terms chapter), it initiates a strip update. In that case, a new neighbor table is created in the Octopus header; it will hold information on which nodes belong to the strip for which the update is initiated. A packet with this header is sent across the relevant strip, from the End Node to the opposite end of the strip, so that all nodes in between receive the list of their strip neighbors.

Let's discuss the example of node R in Figure 2. Node R appears to be an End Node of its vertical strip from the South and initiates an update in the North direction. R updates the Octopus header neighbor table with all of its one-hop neighbors that belong to its vertical strip.

As opposed to the "Hello" messages, which are broadcast across the Grid (see Octopus Terms chapter), strip update packets are unicast, i.e. they have a specific destination.

The destination of a strip update packet is always the furthest node in the relevant strip that can be reached by the current transmitter. If no such node is found, the bypassing mechanism is invoked. The destination is found by the findNextDest method: the first parameter is the direction of the strip update, the second and third parameters are the sending node's coordinates, and the last parameter is set to true if the bypassing mechanism should be used, or to false otherwise.
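A simplified standalone sketch of this destination search is given below (the real findNextDest signature and bookkeeping may differ): scan the one-hop table for the node that lies furthest in the update direction while remaining in the same strip as the sender.

#include <vector>
#include <cmath>

const double STRIPE_RESOLUTION = 250.0;  // illustrative value

struct Neighbor { int id; double x, y; };
enum Direction { NORTH, SOUTH, EAST, WEST };

static int stripOf(double c) { return (int)std::floor(c / STRIPE_RESOLUTION); }

// Returns the id of the furthest one-hop neighbor in direction `dir` that shares
// the sender's strip, or -1 if none exists (in which case bypassing is attempted).
int findNextDestSketch(Direction dir, double myX, double myY,
                       const std::vector<Neighbor>& oneHop) {
    int best = -1;
    double bestAdvance = 0.0;
    for (size_t i = 0; i < oneHop.size(); ++i) {
        const Neighbor& n = oneHop[i];
        bool sameStrip = (dir == NORTH || dir == SOUTH)
                             ? stripOf(n.x) == stripOf(myX)   // vertical strip: same column
                             : stripOf(n.y) == stripOf(myY);  // horizontal strip: same row
        if (!sameStrip) continue;
        double advance = (dir == NORTH) ? n.y - myY :
                         (dir == SOUTH) ? myY - n.y :
                         (dir == EAST)  ? n.x - myX : myX - n.x;
        if (advance > bestAdvance) { bestAdvance = advance; best = n.id; }
    }
    return best;
}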

Implementation Aspect of the Receiving Side

Each node receiving a strip update packet (one of the OCT_CORE_STRIPE_UPDATE_TO_* types, see Main Octopus Agent Type Definitions chapter) checks whether it is in the relevant strip. In case the strip update is relevant for the current node, it updates its strip tables according to the data in the Octopus header.

If the receiving node is also the destination of the packet, it performs all the functionality of the sending side (described earlier) and continues the strip update further in the same direction.

2.2.8.1.4. Core End node

Each node that does not have one-hop neighbors in a specific geographic direction is defined as an End Node in that direction.

Implementation Aspect of the Receiving Side

Note: This is relevant only when the Core Queue is ON.

When a node receives a strip update and finds that

° it is a destination of the current packet, and

° there is no destination it can transmit the updated packet to,

it creates a new packet of type OCT_END_STRIPE_UPDATE (see Main Octopus Agent Type Definitions chapter) and transmits it to the sender of the received packet. The purpose of this packet is to prove to the sending node that the packet was not lost, but was received by the destination, and that the current update is finished.

The following question can be asked: why are such packets not needed for the other nodes, which are not located at the end of the strip?

The answer is simple. The proof for the other nodes is the fact that they can hear whether their strip update was continued, because the destination they chose is within their Radio Range (see Octopus Terms chapter), so each packet the destination transmits can be heard by the sender.

Implementation Aspect of the Sending Side

In order to be able to initiate a strip update, each node verifies that it is an End Node (see Octopus Terms chapter) in at least one of the four geographic directions. The OctopusAgent::amIEndNode method performs this check.

Let's discuss the example of Figure 2. When the strip update timer of J expires, J checks whether it is the End Node for each one of the geographic directions.

When the check for the NORTH direction is performed, the one-hop table of J is examined. If there is any node whose Y coordinate is greater than that of J, J is not the End Node for the NORTH direction. In the described example, however, J will not find any node to the North of itself; therefore J is the End Node in this case and will initiate a Strip Update to the South. J creates a new packet of type OCT_CORE_STRIPE_UPDATE_TO_SOUTH (see Main Octopus Agent Type Definitions chapter). All nodes in J's one-hop list that also belong to J's vertical strip are added to the Octopus header of the new packet. Before sending the packet, J examines its one-hop table again in order to find the best destination for this packet. Assuming V is within J's Radio Range (see Octopus Terms chapter) and U is not, V is chosen as the destination and will be the next node to handle the update.
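The check itself can be pictured with the following standalone sketch (an assumed shape of the logic, not the actual amIEndNode body): a node is an End Node in a given direction if its one-hop table contains no neighbor lying further in that direction. Per the definition above, the real check is confined to the node's current strip; that restriction is omitted here for brevity.

#include <vector>

struct OneHopNeighbor { int id; double x, y; };
enum Dir { NORTH, SOUTH, EAST, WEST };

bool amIEndNodeSketch(Dir dir, double myX, double myY,
                      const std::vector<OneHopNeighbor>& oneHop) {
    for (size_t i = 0; i < oneHop.size(); ++i) {
        const OneHopNeighbor& n = oneHop[i];
        if (dir == NORTH && n.y > myY) return false;  // someone lies further North
        if (dir == SOUTH && n.y < myY) return false;
        if (dir == EAST  && n.x > myX) return false;
        if (dir == WEST  && n.x < myX) return false;
    }
    return true;  // e.g. node J in Figure 2 finds nobody to its North
}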

2.2.8.1.5. Core Bypassing (Optimization)

When projecting the protocol onto real geographic conditions, there are "hurdles" that can be overcome by adding the functionality called "bypassing". The obstacles are the "holes" in the topology – parts of the Grid (see Octopus Terms chapter) where no nodes are located. Such holes make the strip in which the hole is located non-continuous and create situations where nodes in the same strip do not "know" each other, even though they belong to the same strip and have a continuous route between them (through nodes that are not necessarily located in the same strip).

For example, such a situation, based on Figure 2, happens for nodes N and O in the upper horizontal strip. The bypassing algorithm offers to "bypass" the strip update via node J. If J is reachable from N and O is reachable from J, the upper horizontal strip becomes continuous and will contain nodes A, O, N and Q.

Note: the bypassing node (J in our case) is not part of the strip, although it actually takes part in creating/updating it.

Implementation Aspect

When findNextDest fails to find the destination for the strip update (by

the regular way) and Core Bypass is enabled, it will search for a

bypassing node.

The search will be performed in one-hop table of the sending node.

The goal is to find the furthest node in strip update direction, which

does not belong to the same strip as sending node (otherwise, the

next destination would be found by regular way).

The method implementing the looking for bypass node is

OctopusCore::findBypassNode.

As the input parameters, it receives the direction of the location and

sending node location. It will return the receiving node location

(parameters transferred by reference) and the bypassing node ID. If

there is no such node, invalid values will be returned for location and

ID. The search for the bypassing node is based on a "grading" system.

For each node in one-hop neighbors table, a grade is calculated. The

node with the maximal grade will be the bypassing node. When a node

gets a strip update, it will check whether this is a strip update of

node's strip or it was chosen by representative of the neighbor strip to

be the bypassing node for it. The node can check it by reading the

packet type in the header of the packet it received.

For a bypassed update, the type is OCT_BYPASSED (see Main Octopus Agent Type Definitions chapter). The node updates neither its own tables nor the header table with any new information.

The packet type is changed to OCT_RETURN_BYPASSED (see Main Octopus Agent Type Definitions chapter) as soon as the bypassing node finds the next destination for the strip update in the original strip. It acquires the information about the specific strip from the location of the sending node (the one that initiated the bypass) and calculates the strip update direction from the relative locations of the sending node and itself.

Whenever a node receives a packet of type OCT_RETURN_BYPASSED, it continues the strip update as if it were a regular strip update.

2.2.8.1.6. Core Queue (Optimization)

The purpose of this optimization is to avoid the loss of strip updates. Each node that takes an active part in an update (initiates it or happens to be a destination) maintains a queue into which it inserts the strip update packets it has scheduled. In case the node does not "hear" the destination it chose continue the update within CR_Q_TIMEOUT (see C++ Parameters chapter) seconds, it looks for another destination and reschedules the packet. The queue is implemented in the octopus_core_queue.cc module.

Note: the OCT_END_STRIPE_UPDATE (see Main Octopus Agent Type Definitions chapter) packet helps avoid endless rescheduling for the receiving End Nodes (see Octopus Terms chapter).

Implementation Aspect

The core queue is different from queues used in the other two

modules. Its maximal length is four packets (as the number of

geographic directions). For each direction, only the last scheduled

packet (in the relevant direction) is stored. Each CR_Q_TIMEOUT

period, the agent reschedules all valid packets that appear to be

stored in the queue. This is performed by the purgeCoreQueue()

method.

Each time a node hears an update transmitted by some other node

and the locations of the sender and receiver match the direction of

strip update (calculated according to the packet type), the node will

dequeue the packet:

case OCT_CORE_STRIPE_UPDATE_TO_NORTH:
    if ((CRQ) && (CR_QUEUE->foundInCoreQueue(NORTH, hdr->myaddr_)))
    {
        CR_QUEUE->dequeCoreQueue(NORTH);
    }
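A minimal standalone model of the queue described above is sketched below (the real octopus_core_queue.cc implementation may differ): one slot per geographic direction, holding only the most recently scheduled update, with lookup/dequeue operations analogous to foundInCoreQueue and dequeCoreQueue used in the case above.

enum QueueDirection { Q_NORTH = 0, Q_SOUTH, Q_EAST, Q_WEST, Q_NUM_DIRECTIONS };

struct PendingUpdate {
    bool   valid;     // is a packet waiting for confirmation in this direction?
    int    destId;    // destination chosen for the scheduled strip update
    double sendTime;  // simulation time at which it was (re)scheduled
};

class CoreQueueSketch {
public:
    CoreQueueSketch() {
        for (int d = 0; d < Q_NUM_DIRECTIONS; ++d) slots_[d].valid = false;
    }

    // Keep only the last scheduled packet per direction.
    void enqueue(QueueDirection d, int destId, double now) {
        slots_[d].valid = true; slots_[d].destId = destId; slots_[d].sendTime = now;
    }

    // Called when the node overhears `nodeId` continuing an update in direction d.
    bool foundInCoreQueue(QueueDirection d, int nodeId) const {
        return slots_[d].valid && slots_[d].destId == nodeId;
    }

    void dequeCoreQueue(QueueDirection d) { slots_[d].valid = false; }

    // Helper for a rescheduling pass (cf. purgeCoreQueue): returns how many slots
    // have waited longer than CR_Q_TIMEOUT and would be resent towards a newly
    // chosen destination (the destination selection itself is omitted here).
    int countExpired(double now, double crQTimeout) const {
        int expired = 0;
        for (int d = 0; d < Q_NUM_DIRECTIONS; ++d)
            if (slots_[d].valid && now - slots_[d].sendTime > crQTimeout) ++expired;
        return expired;
    }

private:
    PendingUpdate slots_[Q_NUM_DIRECTIONS];
};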

2.2.8.1.7. Core Two Hop Neighbor Table (Optimization)

The idea of this optimization came from existing network protocols: nodes share their one-hop tables with their one-hop neighbors. Each time a "hello" message is broadcast, the broadcasting node attaches its one-hop neighbor table to the header. Each time a "hello" message is received by a node, it copies all the attached entries that do not already appear in its one-hop table into its two-hop neighbor table.

Implementation Aspect

Each time 'hello' message received, the following code is invoked (the

relevant line is underlined. The case is the part of the octopus agent

'recv' function):

case OCT_CORE_HOP_UPDATE:

entry = hopTable->findEntry(hdr->myaddr_);

octopusCore->handleCoreHopUpdate(hdr,entry.xLoc_, entry.yLoc_,

entry.lastUpdateTime_); //previous location

if (GFQ) GF_QUEUE->purge();

if (FL_STEP_Q) FL_STEP_QUEUE -> purge();

if (FLQ) FL_QUEUE->purge();

if (CRQ) purgeCoreQueue(); //Lily 12/03

twohopTable->editDB(hdr->neiTable);

break;

The editDB method synchronizes the node's own table with the table passed as a parameter; the data in the parameter table is considered to be of higher priority.

The positive influence of this optimization is that each node "knows" more nodes in the Grid (see Octopus Terms chapter). This can shorten the time needed for finding other nodes and, as a result, improve the algorithm's performance.

The negative influence of the optimization is a significantly higher load on the network, since "hello" messages are sent frequently. Increasing the size of the "hello" messages leads to longer transmission times and bigger packets, and as a result to a greater packet loss rate in the network.

2.2.8.1.8. Core Neighbor Tables Validation (Optimization)

In order to avoid situations where a node broadcasts the "hello" message, its one-hop neighbors update their one-hop tables, and then the node 'disappears' (moves away or turns off) while the information it sent is still stored in the tables of its neighbors, validation mechanisms can be used.

Implementation Aspect of validation based on time

Each time a node sends a 'hello' packet, its tables can be validated by time. For each entry, the timestamp of its last update is stored. The information is considered valid for a period of HOP_NEIGHBOR_OUTDATE_INTERVAL seconds (see Tcl Parameters chapter). The method responsible for validating the table by time is found in the octopusDB module: OctopusDB::validateTableByTime.

In other words, all table entries that are older than the defined interval are invalidated.

Implementation Aspect of validation based on location

Each time a node sends a 'hello' packet, its tables can also be validated by location. Each entry contains two locations: the one received from the latest update and the one from the update before the latest. Based on these locations, the direction of the node's movement can be estimated. A table entry is invalidated if the calculated approximate location (X+dX, Y+dY) of the entry is out of the one-hop range of the validating node.

The method responsible for validating the table by location is found in the octopusDB module: OctopusDB::validateTableByLocation.
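The two validation passes can be modelled with the following standalone sketch (an assumption based on the OctRouteEntry fields shown earlier; the actual OctopusDB::validateTableByTime and OctopusDB::validateTableByLocation implementations may differ):

#include <vector>
#include <cmath>

const double RADIO_RANGE = 250.0;     // see Octopus Terms chapter

struct TableEntry {                   // subset of OctRouteEntry
    double xLoc_, yLoc_;              // latest known location
    double xLocPrev_, yLocPrev_;      // location from the update before the latest
    double timetag_loc_prev_;         // timestamp of the previous location
    double lastUpdateTime_;           // timestamp of the latest update
    bool   valid_;
};

// Invalidate entries older than the outdate interval (validation by time).
void validateByTime(std::vector<TableEntry>& table, double now, double outdateInterval) {
    for (size_t i = 0; i < table.size(); ++i)
        if (now - table[i].lastUpdateTime_ > outdateInterval) table[i].valid_ = false;
}

// Extrapolate each entry's movement and invalidate it if the estimated position
// (X+dX, Y+dY) has left the validating node's one-hop range (validation by location).
void validateByLocation(std::vector<TableEntry>& table, double now, double myX, double myY) {
    for (size_t i = 0; i < table.size(); ++i) {
        TableEntry& e = table[i];
        double dt = e.lastUpdateTime_ - e.timetag_loc_prev_;
        if (dt <= 0) continue;                      // not enough history to extrapolate
        double elapsed = now - e.lastUpdateTime_;
        double dX = (e.xLoc_ - e.xLocPrev_) / dt * elapsed;
        double dY = (e.yLoc_ - e.yLocPrev_) / dt * elapsed;
        if (std::hypot(e.xLoc_ + dX - myX, e.yLoc_ + dY - myY) > RADIO_RANGE)
            e.valid_ = false;
    }
}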

2.2.8.1.9. Low Energy (unstable) Nodes (Experiment)

Introduction

The experiment is based on the Energy Model implemented in NS2 (see Octopus Terms chapter), which is an attribute of an NS2 node. The Energy Model represents the level of energy in a mobile host.

Each Mobile Node is assigned an initial amount of energy at the beginning of the simulation by setting the initialEnergy_ Tcl parameter (see the NS Manual for further information regarding the Energy Model).

Each node also receives the txPower_ and rxPower_ Tcl parameters, which represent the amount of energy spent on transmitting and receiving packets respectively. Further details regarding the Energy Model in NS2 can be found in the NS Manual.

When the energy level of a node goes down to zero, no more packets can be received or transmitted by the node.

Motivation

The idea of the experiment is taken from real-life conditions, where each mobile host has a battery whose charge can run out, so that the mobile node is unable to receive or transmit packets for a certain period of time. As soon as the mobile node charges its battery again, it returns to its usual activity.

Experiment-related terms

"Sleep Mode" – when the node's battery is at zero level (no packets can be received or transmitted).

"Awake Mode" – when the node's battery is not at zero level (regular network activity).

"Stable Node" – a mobile node that never enters "Sleep Mode".

"Unstable Node" – a mobile node that enters "Sleep Mode" at random periods of time.

Simulation Input

The total number of nodes that take part in the simulation (both stable and unstable).

The percentage of unstable nodes.

For each unstable node: the periods of time during which it will be in "Sleep Mode" (defined randomly).

For each node: the list of simulation times at which the node will initiate a packet transmission.

For each node and each simulation time: a randomly chosen destination (can be either stable or unstable).

Simulation Output

[Chart: "Success Rates vs. Nodes' Stability" – success rate [%] (0.96–0.99) plotted against strip resolution (200–300 m) for static and mobile nodes.]

2.2.8.2. Octopus Find Location Module

The Find Location module is responsible for supplying the source node with information regarding the target node's geographic location.

The Find Location module is represented by the OctopusFindLocation and Octopus_FLStepQueue classes and is implemented in the octopus_find_location.h/cc files.

The following is a list of terms used in the Find Location Module:

Access Node – (used by Find Location Seek Target) a node whose Database contains data on both the Source and the Target nodes. In other words, any node located in the cell at the crossing of the Source's strip and the Target's strip may be an Access Node. In most cases, the Access Node is the node that has first found the Target.

Best Bypass Destination – (used by FL Seek Target and FL Reply) in case the FL Seek Target or FL Reply components are unable to find a Best Next Destination and the FL Bypass feature is ON, the Query is forwarded to the Best Bypass Destination, which is a node chosen in the same way as the Best Next Destination but without the restriction of being in the same strip as the current node. In case more than one suitable node is found, the Best Bypass Destination is the one closest to the original strip.

Best Next Destination – (used by FL Seek Target and FL Reply) the node to which the FL Query will be forwarded by the current node. It may only be a node in the current node's One-Hop Table, and it is always a node located in the current node's strip. The priority for choosing the Best Next Destination is as follows:

° The furthest node in the cell next to the current cell, in the Sending Direction.

° The furthest node in the cell after the next one, in the Sending Direction.

° The furthest reachable node in the Sending Direction.

° The furthest node in the current cell.

Bypass Offsets – (used by FL Seek Target and FL Reply), are the

horizontal and vertical offsets of current FL Seek Target or FL Reply

Query from its original strip.

Grid – the area covered by Octopus network and divided into

Octopus Strips.

Node’s Database – all node’s Neighbors Tables (One-Hop and

Strips).

Radio Range – currently defined to be 250m, is the range within

which wireless transmission of one node may be received by

another node.

Sending Direction – (used by FL Seek Target and FL Reply) is the

direction in which FL Query should be forwarded across the Grid.

The Sending Direction of each query is defined by Source and

cannot be altered by other nodes.

Source (node) – (used by the Find Location Module) the node that has initiated the search for the Target; it is the node that receives the result of the FL Query.

Target (node) – (used by the Find Location Module) the node whose location is being searched for by the Source.

2.2.8.2.1. FL Seek Target

Responsible for the actual seeking of the target node's location.

Represented by the OctopusFindLocation::Oct_FL_SeekTarget method.

Invoked whenever a packet of type OCT_FIND_LOCATION (see Main Octopus Agent Type Definitions chapter) is received or needs to be sent.

In the case of the initiating node: builds the FL part of the Octopus packet header and redirects the packet to the Best Next Destination node or nodes (defined by the simulation state, initial settings, the chosen FL algorithm and other input parameters).

In the case of a forwarding node:

° Makes local snapshots of all of the node's neighbor tables (in order to retain DB integrity while the function is running).

° Searches for the target node in the local snapshots. If found – searches for the Access Node, updates the relevant data in the FL part of the Octopus packet header and invokes the chosen reply algorithm according to Route_Reply_Algorithm (see Tcl Parameters chapter).

° If not found – defines the appropriate Sending Direction, updates the relevant data in the FL part of the Octopus packet header and redirects the packet to the Best Next Destination.

2.2.8.2.2. FL Reply (Optimization)

Responsible for transferring the target location data from the Access Node back to the source (invoked as soon as the target location is determined).

Represented by the OctopusFindLocation::FL_REPLY_GetNextDest method.

Invoked whenever a packet of type OCT_FIND_LOCATION_REPLY is received.

Makes a local snapshot of the one-hop neighbors table (in order to retain DB integrity while the function is running).

Searches for the source node in the local snapshot. If found – sends the data directly to the source node.

If not found – updates the relevant data in the FL Reply part of the Octopus packet header and redirects the packet to the Best Next Destination.

2.2.8.2.3. FL Cache (Optimization)

Responsible for maintaining cached location data, its retrieval and validation.

Represented by the OctopusFindLocation::Oct_FL_UpdateCache and OctopusFindLocation::Oct_FL_GetCachedTarget methods.

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require any packets to be sent or received (passive "listener").

On every execution of the Octopus::recv method (every time a packet is received), the cache table is updated with valid location data from the packet's Octopus header, together with its timestamp.

On every execution of the Find Location Seek Target component, the cache table is checked for valid target location data.

Cached target location data is considered valid if its timestamp is not older than the cache_validity_period_ parameter (see Tcl Parameters chapter).

In case the target location data is valid, it is used and the Find Location Seek Target component terminates.

In case the data is invalid, it is removed from the cache table and Find Location Seek Target proceeds as usual.
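A minimal standalone model of this cache lookup is given below (field and method names are illustrative; the real Oct_FL_UpdateCache / Oct_FL_GetCachedTarget may differ):

#include <map>

struct CachedLocation { double x, y, timestamp; };

class FlCacheSketch {
public:
    // Refresh the entry for nodeId on every received packet.
    void update(int nodeId, double x, double y, double now) {
        CachedLocation c = { x, y, now };
        cache_[nodeId] = c;
    }

    // Returns true and fills (x, y) only if the cached entry is still fresh;
    // stale entries are removed and the regular Seek Target flow continues.
    bool getTarget(int nodeId, double now, double cacheValidityPeriod,
                   double& x, double& y) {
        std::map<int, CachedLocation>::iterator it = cache_.find(nodeId);
        if (it == cache_.end()) return false;
        if (now - it->second.timestamp > cacheValidityPeriod) {
            cache_.erase(it);
            return false;
        }
        x = it->second.x;
        y = it->second.y;
        return true;
    }

private:
    std::map<int, CachedLocation> cache_;
};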

2.2.8.2.4. FL Queue (Optimization)

Responsible for reinitiating failed target location queries in different Sending Directions, in order to increase the FL Seek Target component's reliability and success rate.

Represented by the OctopusQueue class (octopus_queue.h/cc).

Relevant only for nodes that are the first to initiate the FL Seek Target query.

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require any packets of special structure to be sent or received.

The FL Queue can be turned ON and OFF by setting the FLQ_Switch Tcl parameter to 1 or 0 respectively (see Tcl Parameters chapter).

When a target search query is originated, once the FL Seek Target component has built an appropriate search packet and is ready to send it, a copy of the packet to be sent is added to the FL Queue.

In case no result for the query is received during the FL_Q_TIMEOUT (see C++ Parameters chapter) period, the FL Queue invokes the FL Seek Target component to initiate the same query in a different Sending Direction.

The number of times the query will be reinitiated is defined by the FLQ_MAX_RETRIES (see C++ Parameters chapter) parameter.

As soon as the result of the query is received, the packet is removed from the FL Queue.

2.2.8.2.5. FL Step Queue (Optimization)

Responsible for retransmitting packets that were not successfully received by the Best Next Destination.

Represented by the Octopus_FLStepQueue class (located in octopus_find_location.h/cc).

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require any packets to be sent or received (passive "listener").

The FL Step Queue may be turned ON and OFF by setting the FL_STEP_Q_Switch Tcl parameter to 1 or 0 respectively (see Tcl Parameters chapter).

Just before the FL Seek Target and FL Reply components send a packet, a copy of the packet is added to the FL Step Queue.

In case the FL Step Queue does not "hear" the destination forward the packet further within the time period defined by the FL_STEP_Q_TimeOut Tcl parameter, the packet is rescheduled.

The number of times the same packet will be rescheduled is defined by the FL_STEP_Q_MaxRetries Tcl parameter.

In case the FL Step Queue "hears" that the destination has forwarded the packet to the next destination (or has found the target of the packet), the packet is removed from the queue.

2.2.8.2.6. FL Bypass (Optimization)

Responsible for overcoming obstacles and "holes" in the Octopus strips that appear in the path of FL queries.

The FL Bypass feature is integrated directly into the FL Seek Target and FL Reply components (octopus_find_location.h/cc).

The FL Bypass feature may be turned ON and OFF by setting the FL_Bypass_Enabled Tcl parameter to 1 or 0 respectively.

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require packets of special format to be sent or received.

Sending side: when the FL Seek Target or FL Reply components are unable to find a Best Next Destination for a query, the query is forwarded to the Best Bypass Destination.

Receiving side: when the Bypass Offsets in the Octopus header of the received packet are not 0, the FL components calculate the original strip of the query, and the search for the Best Next Destination is done as if the current node were located inside the original strip.

There is no limitation on the distance a query may "travel" outside its original strip, nor on the maximal offset from the strip it can reach. However, the query will always "strive" to return to its original strip (by minimizing the Bypass Offsets as much as possible).

2.2.8.2.7. FL Estimated Location (Optimization)

Responsible for calculating the estimated source/target location for FL queries.

Integrated directly into the FL Reply component (located in octopus_find_location.h/cc).

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require any packets of special structure to be sent or received.

The FL Reply component is responsible for transferring the query back to the source node or directly to the target node (according to the Reply_To_Source Tcl parameter, see Tcl Parameters chapter).

Once invoked, the FL Reply component checks the status of the relevant use-estimated-location flag (the Source_Loc_Type Tcl parameter (see Tcl Parameters chapter) when returning to the source node, the Target_Loc_Type Tcl parameter when forwarding to the target).

In case the estimated location should be used, it is calculated from the node's previous location, its speed and its most recent known location.
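A standalone sketch of one way such an estimate can be computed from the values carried in the Octopus header (previous location and timestamp, latest location and speed) is given below; the exact formula used by the project is not spelled out in this report, so treat this as an assumption:

#include <cmath>

// Extrapolate a node's position `elapsed` seconds after its latest report, assuming it
// keeps moving in the direction implied by its last two reports at the reported speed.
void estimateLocation(double prevX, double prevY,    // location before the latest update
                      double curX,  double curY,     // latest reported location
                      double speed, double elapsed,  // reported speed, seconds since report
                      double& estX, double& estY) {
    double dx = curX - prevX, dy = curY - prevY;
    double len = std::hypot(dx, dy);
    if (len == 0.0) { estX = curX; estY = curY; return; }  // no movement observed
    estX = curX + dx / len * speed * elapsed;
    estY = curY + dy / len * speed * elapsed;
}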

2.2.8.2.8. FL Forward to Target (Optimization)

Responsible for forwarding the packet directly to the target, as soon as the target location is determined.

Integrated directly into the FL Reply component (located in octopus_find_location.h/cc).

No Octopus Packet Type (see Main Octopus Agent Type Definitions chapter) is associated with the component, since it does not require any packets of special structure to be sent or received.

Forwarding to the target may be turned ON and OFF by setting the Reply_To_Source Tcl parameter (see Tcl Parameters chapter) to 0 or 1 respectively.

When invoked, the FL Reply algorithm determines the Sending Direction, and therefore the Best Next Destination, based on the above settings.

2.2.8.2.9. Multiple Sending Directions (Optimization)

Responsible for initiating multiple instances of the same FL Seek Target query in different Sending Directions.

Integrated directly into the FL Seek Target component (located in octopus_find_location.h/cc).

The number of Sending Directions the query will be initiated in is defined by the FindLoc_Mode Tcl parameter (valid values are 1, 2 and 4).

Note: Since all nodes within the Radio Range will receive the transmitted packet, there is no need to send multiple packets (one packet for each Best Next Destination); instead, multiple Best Next Destinations are defined for one packet (for more information regarding sending packets to multiple destinations, see the detailed description of the Octopus Packet Header).

2.2.8.3. The OctopusDB Class

The OctopusDB class represents a single neighbors table.

The OctopusDB class holds an array of OCT_MAX_TABLE_SIZE (see C++ Parameters chapter) entries of type OctRouteEntry (see Main Octopus Agent Type Definitions chapter).

In addition to the basic methods of such a storage object (add entry, delete entry, update entry), other methods have been implemented in order to make working with the tables easier and quicker:

addToDb – adds the whole table passed as a parameter to this table.

editDB – updates the local table with the transferred entries. Helpful when updating neighbor tables from received tables.
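A minimal standalone model of such a table is sketched below (simplified relative to the real OctopusDB, which uses a fixed array of OCT_MAX_TABLE_SIZE entries and the full OctRouteEntry fields):

#include <vector>
#include <cstddef>

struct RouteEntrySketch { int id_; double xLoc_, yLoc_, lastUpdateTime_; };

class OctopusDbSketch {
public:
    // Returns a copy of the entry for `id`, or a default entry (id_ == -1) if it is
    // absent, mirroring the behaviour relied upon by handleCoreHopUpdate.
    RouteEntrySketch findEntry(int id) const {
        for (size_t i = 0; i < entries_.size(); ++i)
            if (entries_[i].id_ == id) return entries_[i];
        RouteEntrySketch def = { -1, 0.0, 0.0, 0.0 };
        return def;
    }

    // editDB: merge entries from another table, letting the parameter's data win.
    void editDB(const std::vector<RouteEntrySketch>& other) {
        for (size_t i = 0; i < other.size(); ++i) addOrUpdate(other[i]);
    }

private:
    void addOrUpdate(const RouteEntrySketch& e) {
        for (size_t j = 0; j < entries_.size(); ++j)
            if (entries_[j].id_ == e.id_) { entries_[j] = e; return; }
        entries_.push_back(e);
    }

    std::vector<RouteEntrySketch> entries_;
};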

3. Octopus Agent Installation and Usage

3.3. Prerequisites

3.3.1. Development Platform

The latest version of our code has been checked and found fully functional in a Linux environment. Portions of the project were developed on Cygwin under Windows XP, but large simulations in that environment were very time consuming, which is why the project was ported to Linux.

3.3.2. NS2

NS2.26 – Network Simulator (version 2.26) is a platform for simulating large-scale networks (static and dynamic). In our project we used the package called ns-allinone-2.26. The NS2 simulator and all relevant information regarding installation and usage are available at http://www.isi.edu/nsnam/ns/index.html. A complete manual for NS2 can be found at http://www.isi.edu/nsnam/ns/ns-documentation.html.

3.3.3. Octopus Agent Integration into NS2

3.3.3.1. C++ Source Code Installation

The Octopus Agent requires the following C++ source code files to be

placed in ns-2.26/octopus directory:

octopus.h/cc – contain implementation of the OctopusAgent class

octopus_core.h/cc – contain implementation of the Octopus Core

Module

octopus_core_queue.h/cc – contain implementation of the Octopus

Core Queue component

octopus_find_location.h/cc – contain implementation of the Octopus

Find Location module

octopus_gf.h/cc – contain the implementation of the Octopus GF

module (not covered by this report)

octopus_definitions.h – contains general Octopus Agent definitions

used by all modules

octopus_db.h/cc – contain implementation of the OctopusDB class

octopus_queue.h/cc – contain implementation of the OctopusQueue

class

The following C++ source code files should be placed in ns-

2.26/common directory:

mobilenode.h – header file of the MobileNode class

packet.h – header file of the Packet class

For a more detailed description of each module, see the Octopus

Modules chapter.

3.3.3.2. OTcl Source Code Installation

The Octopus Agent requires the following OTcl source code files in

NS2.26/tcl/mobility directory:

octopus.tcl – contains implementation of the OctopusAgent OTcl

class

The following OTcl source code files should be placed in NS2.26/tcl/lib

directory:

ns-lib.tcl – general purpose OTcl module

ns-packet.tcl – contains implementation of the Packet OTcl class

For a more detailed description of each module, see Tcl Parameters

chapter.

3.3.3.3. Building NS2 with Octopus Agent

In order to build NS2 after integrating the Octopus Agent, the provided Makefile should be placed in the ns-2.26 directory. After that, typing "make" while in ns-2.26 will build NS2. To execute NS2, simply type "ns <input-file.tcl>".

3.4. Octopus Agent Parameters

3.4.1. Tcl Parameters

NS2 supports setting the simulation configuration by passing Tcl parameters to it. Each Tcl parameter has a corresponding C++ parameter that is updated with the value of the Tcl parameter as soon as the simulation starts. Further information regarding passing Tcl parameters to NS2 can be found in the NS Manual.

The following is a list of Octopus Agent-specific Tcl parameters that were defined during project development (the parameters are first defined and assigned a default value in the octopus.tcl source file):

OCT_BROADCAST_INTERVAL – The interval, in seconds, at which "Hello" (one-hop update) messages are broadcast (see Core Module description for further information).

STRIPE_BROADCAST_INTERVAL – The interval, in seconds, at which strip update messages are broadcast (see Core Module description for further information).

HOP_NEIGHBOR_OUTDATE_INTERVAL – The interval, in seconds,

after which an entry in one-hop neighbors table becomes outdated

(see Core Module description for further information).

STRIPE_NEIGHBOR_OUTDATE_INTERVAL – The interval, in

seconds, after which an entry in strip neighbors table becomes

outdated (see Core Module description for further information).

VALIDATE_BY_LOCATION – Enable (1) / Disable (0) neighbors

tables validation by location (see Core Module description for

further information).

FindLoc_Mode – Defines the number of Sending Directions a FL

Seek Target query will be initialized in (valid values are 1, 2 and 4,

see Find Location Module description for further information).

FindLoc_Init_Dir – Defines the Sending Direction that will be used

first to initialize FL Seek Target query (valid values are 1 for North,

and 9 for North East, see Find Location Module description for

further information).

Source_Loc_Type – Defines whether the real (0) or estimated (1)

source location will be used for initiating Route Reply (see Find

Location Module description for further information).

Reply_To_Source – Defines whether the Access Node will send the

packet back to Source (1) or forward the packet directly to the

Target (0) (see Find Location Module description for further

information).

Target_Loc_Type – Defines whether the real (0) or estimated (1) target location will be used to transfer the packet from the Access Node to the Target (see Find Location Module description for further information).

Route_Reply_Algorithm – Defines the algorithm to be used when

transferring packets further from Access Node (to the Source or the

Target). 0 to use GF (not covered in current project); 1 to use FL

Reply (see Find Location Module description for further

information).

FL_STEP_Q_Switch – Enables (1) / Disables (0) the FL Step Queue

(see Find Location Module description for further information).

FL_STEP_Q_TimeOut – Timeout, in seconds, before FL Step Queue

reschedules a packet (see Find Location Module description for

further information).

FL_STEP_Q_MaxRetries – Number of times a packet in FL Step

Queue will be rescheduled (see Find Location Module description for

further information).

Second_Chance – Input parameter used by the GF Module (not

covered in current project).

Correct_Location – Input parameter used by the GF Module (not

covered in current project).

Cache_Valid_Period – The time, in seconds, before an entry in FL

Cache table becomes outdated (see Find Location Module

description for further information).

Routing_Mode - Input parameter used by the GF Module (not

covered in current project).

FLQ_Switch – Enables (1) / Disables (0) the FL Queue (see Find

Location Module description for further information).

GFQ_Switch - Input parameter used by the GF Module (not covered

in current project).

CRQ_Switch – Enables (1) / Disables (0) the Core Queue (see Core

Module description for further information).

Node_Has_GPS – Parameter used by module not covered in current

project.

FL_Bypass_Enabled – Enables (1) / Disables (0) the FL Bypass

feature (see Find Location Module description for further

information).

CR_Bypass_Enabled – Enables (1) / Disables (0) the Core Bypass

feature (see Core Module description for further information).

Asleep_Awake_Sim_Type – Indicates whether current simulation is

Low Energy Simulation (1) or not (0) (see Core Module description

for further information).

3.4.2. C++ Parameters

The following is a list of Octopus Agent-specific C++ parameters (defined in the octopus_definitions.h C++ header file):

FL_Q_TIMEOUT – Timeout, in seconds, before FL Queue

reschedules a packet (see Find Location Module description for

further information).

CR_Q_TIMEOUT – Timeout, in seconds, before Core Queue

reschedules a packet (see Core Module description for further

information).

FLQ_MAX_RETRIES – Number of times a packet in FL Queue is

rescheduled (see Find Location Module description for further

information).

STRIPE_RESOLUTION – The width of Octopus strips.

OCT_MAX_TABLE_SIZE – The maximal number of nodes in

simulation (does not have to be exactly the number of nodes, but

cannot be lower than the actual number).

3.5. Running Octopus Simulations

3.5.1. Octopus Shell Scripts

In order to automate the process of running experiments, the following scripts were written (they should be placed in the ns-2.26/sim directory):

single_test.csh <parameter list>

file.tcl

test_all.csh <output file>

found_but_not_replied.pl

3.5.2. Running a Single Octopus Simulation

3.5.2.1. single_test.csh CShell Script

3.5.2.2. Responsibility

Setting the desired configuration

Running the experiment several times (for better accuracy)

Collecting and processing the relevant statistical data

3.5.2.3. Execution

While in the ns-2.26 directory, type ./sim/single_test.csh <list of parameters>

3.5.2.4. Input Parameters

1. results file

2. log file

3. horizontal side of Grid - [m] (see Octopus Terms chapter)

4. vertical side of Grid - [m] (see Octopus Terms chapter)

5. number of nodes - integer

6. Static/Mobile mode - ON/OFF

7. ROUTING_MODE - ON/OFF (see Tcl Parameters chapter)

8. FLQ_SWITCH - ON/OFF (see Tcl Parameters chapter)

9. GFQ_SWITCH - ON/OFF (see Tcl Parameters chapter)

10. CRQ_Switch - ON/OFF (see Tcl Parameters chapter)

11. STRIPE_RESOLUTION - [m] (see C++ Parameters chapter)

12. OCT_BROADCAST_INTERVAL - [sec] (see Tcl Parameters chapter)

13. STRIPE_BROADCAST_INTERVAL - [sec] (see Tcl Parameters chapter)

14. CORRECT_LOCATION - ON/OFF (see Tcl Parameters chapter)

15. VALIDATE_BY_LOCATION - ON/OFF (see Tcl Parameters chapter)

16. SECOND_CHANCE - ON/OFF (see Tcl Parameters chapter)

17. FL_BYPASS - ON/OFF (see Tcl Parameters chapter)

18. CR_BYPASS - ON/OFF (see Tcl Parameters chapter)

3.5.2.5. Output

A single text file with the processed results of the NS2 simulations. The following is an example of such a file after running two experiments (the FL Cache feature is disabled):

Sent by    Forwardings    Forwardings      Net-found    Net-found    Net-found    Replied to
source     by FL          by FL_REPLY      by FL        in cache     Total        source
297        1355           309              295          0            295          284
297        1537           553              295          0            295          284

3.5.2.6. file.tcl Tcl Script

3.5.2.6.1. Responsibility

file.tcl is the actual input parameter passed to the NS2 application

Configures most of the NS2 simulator options and the Octopus Agent features and parameters

Among other responsibilities, randomly defines each node's route on the Grid during the simulation

Dynamically updated by single_test.csh on each execution

3.5.2.6.2. Execution

While in the ns-2.26 directory, type ns sim/file.tcl

3.5.2.6.3. Input Parameters

No input parameters

The configuration should be set in the file itself

3.5.2.6.4. Output

The output is a log file that contains data on the nodes' locations and communication during the simulation. This information is processed and displayed by the single_test.csh script.

Examples from the output of an NS execution, with file.tcl containing a mobile configuration:

node 68: 582.02331961591221 937.00960219478725
setdest: node 68 will move at 0 from (582 937) to (544 853) with velocity 1.6526491769547325
setdest: node 68 will move at 55 from (544 853) to (920 947) with velocity 1.6526491769547325
node 138: 521.94101508916322 511.49519890260632
setdest: node 138 will move at 0 from (521 511) to (810 1465) with velocity 5.9684927983539096
rand_source = 68
rand_target 138, start_time 30.116126543209877
rand_target 92, start_time 49.111882716049379
rand_target 193, start_time 58.640046296296298
Starting Simulation...
num_nodes is set 200
Node 68 (561.57, 891.64) thinks he is originating this packet (ip_dst = 138, dst = 0).
Cache: Node 68 found target 138 (573.95, 683.49). The search was originated by 68 (561.57, 891.64).
FL_REPLY: Node 68 got route_reply for target 138 (573.946460, 683.487522).

3.5.3. Running Multiple Octopus Simulations

3.5.3.1. test_all.csh CShell Script

3.5.3.1.1. Responsibility

A list of single_test.csh executions with different parameters

3.5.3.1.2. Execution

While in the ns-2.26 directory, type ./sim/test_all.csh <results file>

3.5.3.1.3. Input Parameters

The only input parameter is the file that will eventually contain all the processed results.

3.5.3.1.4. Output

A single text file with the processed results of all the NS2 simulations that were defined in test_all.csh.

3.5.4. Editing Octopus Simulations

To add a new test parameter (PARAM), follow these steps:

Add the desired definition, with a unique comment at the end, to the appropriate file (sim/file.tcl or octopus/octopus_definitions.h); a C++-side sketch is given after these steps. For example, adding a new parameter (PARAM) to the file.tcl file:

PARAM <default value> ;# set_PARAM

Add to single_test.csh a new input parameter and the lines responsible for its initialization. The lines should be added to the loop that handles the appropriate file (sim/file.tcl – the first loop, or octopus/octopus_definitions.h – the second loop). For example, the lines that handle the above parameter:

if ( "$line" =~ *set_PARAM* ) then
  echo "PARAM $19 ;# set_PARAM" >> "sim/oct_tcl"
  set written = 1
endif
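For the C++ header, the analogous step would presumably look like the sketch below. The default value and the exact comment tag that the second loop of single_test.csh matches on are assumptions for this example and should be verified against the actual script:

// Hypothetical addition to octopus/octopus_definitions.h (value is illustrative)
#define PARAM 1   // set_PARAM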

3.5.5. Analyzing Simulation Results

The main output of the simulation is the processed results file whose name is supplied as an input parameter to the test_all.csh script. It can be opened in Microsoft Office Excel (Open -> Files of type: All Files (*.*) -> Delimited -> Comma) and processed according to the desired information. Useful commands: Insert function (sort, average, sum, max, ...) and Insert chart.

4. Octopus Agent Terms

The following is a list of Octopus Agent specific terms used in this project, with their explanations (in alphabetical order):

Access Node – (used by Find Location Seek Target) is a node whose Database contains data on both the Source and the Target nodes. In other words, any node located in a cell that is the crossing of the Source's strip and the Target's strip may be an Access Node. In most cases, the Access Node is the node that has first found the Target.

Best Bypass Destination – (used by FL Seek Target and FL Reply). If the FL Seek Target or FL Reply component is unable to find a Best Next Destination and the FL Bypass feature is ON, the Query is forwarded to the Best Bypass Destination, which is a node chosen in the same way as the Best Next Destination but without the restriction of being in the same strip as the current node. If more than one suitable node is found, the Best Bypass Destination is the one closest to the original strip.

Best Next Destination – (used by FL Seek Target and FL Reply) is the node to which the FL Query will be forwarded by the current node. It may only be a node in the current node's One-Hop Table, and it is always a node located in the current node's strip. The priority for choosing the Best Next Destination is as follows (a sketch of this selection order is given after the list):

° The furthest node in the cell next to the current one, in the Sending Direction.
° The furthest node in the cell after the next one, in the Sending Direction.
° The furthest reachable node in the Sending Direction.
° The furthest node in the current cell.
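The sketch below restates this priority order in C++-like form. The Neighbor structure, the "cells ahead" abstraction, and the function names are illustrative simplifications and do not correspond to the actual OctopusAgent code:

#include <vector>
#include <cstddef>

// Illustrative sketch of the Best Next Destination priority described above.
struct Neighbor {
    int    cellsAhead;   // 0 = current cell, 1 = next cell in the Sending Direction, 2 = the one after it, ...
    double distance;     // progress from the current node along the Sending Direction
};

// Returns the index of the furthest neighbor whose cell lies in [lo, hi] cells ahead, or -1 if none.
static int furthestInRange(const std::vector<Neighbor>& strip, int lo, int hi) {
    int best = -1;
    for (std::size_t i = 0; i < strip.size(); ++i) {
        if (strip[i].cellsAhead < lo || strip[i].cellsAhead > hi) continue;
        if (best < 0 || strip[i].distance > strip[best].distance) best = (int)i;
    }
    return best;
}

// Applies the four priority tiers from the definition above, in order.
int bestNextDestination(const std::vector<Neighbor>& sameStripNeighbors) {
    int idx;
    if ((idx = furthestInRange(sameStripNeighbors, 1, 1)) >= 0) return idx;       // next cell
    if ((idx = furthestInRange(sameStripNeighbors, 2, 2)) >= 0) return idx;       // the cell after it
    if ((idx = furthestInRange(sameStripNeighbors, 1, 1 << 30)) >= 0) return idx; // any reachable cell ahead
    return furthestInRange(sameStripNeighbors, 0, 0);                             // current cell
}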

Bypass Offsets – (used by FL Seek Target and FL Reply) are the horizontal and vertical offsets of the current FL Seek Target or FL Reply Query from its original strip.

End Node – (used by Core Strip Update) is a node that, within the current strip, has no neighbors in one (or more) of the four geographical directions. End Nodes are the ones that initiate Strip Updates.

Energy Model – (used for the Low Energy Nodes Experiment) is an attribute of each Mobile Node in NS2 that is responsible for simulating the node's battery.

Grid – the area covered by the Octopus network and divided into Octopus Strips.

Node's Database – all of a node's Neighbor Tables (One-Hop and Strips).

Radio Range – currently defined to be 250 m, is the range within which the wireless transmission of one node may be received by another node.

Sending Direction – (used by FL Seek Target and FL Reply) is the direction in which the FL Query should be forwarded across the Grid. The Sending Direction of each query is defined by the Source and cannot be altered by other nodes.

Source (node) – (used by the Find Location Module) is the node that has initiated the search for the Target, and it is the node that receives the result of the FL Query.

Target (node) – (used by the Find Location Module) is the node whose location is being searched for by the Source.