Advances in Computer Science: an International Journal (ACSIJ)
Vol. 2, Issue 2, No. 3, May 2013
© ACSIJ PUBLICATION, www.ACSIJ.org
ISSN: 2322-5157



ACSIJ Editorial Board 2013

• Dr. Seyyed Hossein Erfani (Chief Editor), Azad University, Tehran, Iran
• Dr. Indrajit Saha, Department of Computer Science and Engineering, Jadavpur University, India
• Mohammad Alamery, University of Baghdad, Baghdad, Iraq

ACSIJ Reviewers Committee

• Prof. José Santos Reyes, Faculty of Computer Science, University of A Coruña, Spain
• Dr. Dariusz Jacek Jakóbczak, Technical University of Koszalin, Poland
• Dr. Anandakumar H., PSG College of Technology (Anna University of Technology), India
• Dr. Mohammad Nassiri, Faculty of Engineering, Computer Department, Bu-Ali Sina University, Hamedan, Iran
• Dr. Indrajit Saha, Department of Computer Science and Engineering, Jadavpur University, India
• Prof. Zhong Ji, School of Electronic Information Engineering, Tianjin University, Tianjin, China
• Dr. Heinz Dobler, University of Applied Sciences Upper Austria, Austria
• Dr. Ahlem Nabli, Faculty of Sciences of Sfax, Tunisia
• Dr. Ajit Kumar Shrivastava, TRUBA Institute of Engg. & I.T., Bhopal, RGPV University, India
• Mr. S. Arockiaraj, Mepco Schlenk Engineering College, Sivakasi, India
• Prof. Noura Aknin, Abdelmalek Essaadi University, Morocco

ACSIJ published papers are indexed by:

1. Google Scholar
2. EZB, Electronic Journals Library (University Library of Regensburg, Germany)
3. DOAJ, Directory of Open Access Journals
4. Academic Journals Database
5. Bielefeld University Library - BASE (Germany)
6. AcademicKeys
7. WorldCat (OCLC)
8. Technical University of Applied Sciences (TH Wildau, Germany)
9. University of Rochester (River Campus Libraries, New York, USA)

TABLE OF CONTENTS

1. Proposed New Mechanism to Detect and Defend the Malicious Attackers in AODV (pg 1-6)
   Vijay Kumar, Ashwani Kush
2. An Adaptive K-random Walks Method for Peer-to-Peer Networks (pg 7-12)
   Mahdi Ghorbani
3. Strategic Evaluation of Web-based E-learning; a review on 8 articles (pg 13-18)
   Shahriar Mohammadi, Sajad Homayoun
4. Distributed Data Storage Model for Cattle Health Monitoring Using WSN (pg 19-24)
   Ankit R. Bhavsar, Disha J. Shah, Harshal A. Arolkar
5. Presentation of an approach for adapting software production process based ISO/IEC 12207 to ITIL Service (pg 25-30)
   Samira Haghighatfar, Nasser Modiri, Amir Houshang Tajfar
6. Applying a natural intelligence pattern in cognitive robots (pg 31-37)
   Seyedeh Negar Jafari, Jafar Jafari Amirbandi, Amir Masoud Rahmani
7. A Multi Hop Clustering Algorithm For Reduce Energy Consumption in Wireless Sensor Networks (pg 38-43)
   Mohammad Ahmadi, Mahdi Darvishi
8. Enterprise Microblogging tool to increase employee participation in organizational knowledge management (pg 44-47)
   Jalal Rezaeenour, Mahdi Niknam
9. Security-Aware Dispatching of Virtual Machines in Cloud Environment (pg 48-52)
   Mohammad Amin Keshtkar, Seyed Mohammad Ghoreyshi, Saman Zad Tootaghaj

Proposed New Mechanism to Detect and Defend the Malicious Attackers in AODV

Vijay Kumar1, Ashwani Kush2

1Department of Computer Science & Applications, IEC University, Baddi (Solan), H.P., India
[email protected]

2Department of Computer Science & Applications, University College, Kurukshetra University, Kurukshetra, India
[email protected]

Abstract
Protecting the network layer from malicious attacks is an important and challenging security issue in MANETs. In this paper, a new mechanism is proposed to detect and defend the network against such attacks, which may be launched cooperatively by a set of malicious nodes. The proposed algorithm has been incorporated into the AODV routing protocol. It does not apply any cryptographic primitives to the routing messages; instead, it protects the network by detecting and deactivating the malicious activities of nodes. Simulations have been carried out using NS-2, and the results show that the proposed algorithm gives encouraging results.

Keywords: Mobile ad hoc networks, malicious attack, routing misbehavior.

1. Introduction

Due to recent performance advancements in computer and wireless communication technologies, mobile wireless computing is becoming increasingly widespread. One type of wireless network that is quickly evolving is the Mobile Ad Hoc Network (MANET). Unlike other mobile network paradigms, such as cell phone networks with fixed radio towers and centrally accessible routers and servers, MANETs have dynamic, rapidly-changing, random, multi-hop topologies composed of bandwidth-constrained wireless links and no centrally accessed routers or servers. While these characteristics are essential for the flexibility of MANETs, they introduce specific security concerns that are either absent or less severe in wired networks. MANETs are vulnerable to various types of attacks including passive eavesdropping, active interfering, impersonation, and denial-of-service. Intrusion prevention measures such as strong authentication and redundant transmission should be complemented by detection techniques to monitor the security status of these networks and identify malicious behavior of any participating node(s).

The rest of the paper is organized as follows. Section II states the problem. Section III describes the malicious node detection process using the proposed Modi_AODV. Section IV presents the simulations and the performance analysis of the scheme. Section V concludes the paper.

2. Statement of the Problem

Recently, the use of deceptive mechanisms for security and stability has become very common in wired and infrastructure-based wireless networks. Such networks have traffic concentration and control points such as switches, routers or gateways, where wired/wireless resources are deliberately deployed to lure and capture attackers. A MANET does not have such concentration or control points, so no proper architecture has been proposed so far for the use of deceptive techniques in MANETs. However, the specific features of deception techniques, such as reliability, control over deployed resources, and their luring capabilities, can be used to overcome the limitations of earlier security schemes used in general ad hoc environments.

3. Malicious Node Detection Process Using the Proposed Modi_AODV

The basic idea of the Modi_AODV protocol is to identify the malicious nodes using the proposed algorithm and to select all possible alternative routes from the source to the destination node that do not pass through a malicious node. In Modi_AODV, the number of distinct paths from the source node to the destination node is determined by the number of edges emitting from the source node. The messages are then delivered through the several paths detected by the algorithm, with the selection process conducted sequentially over the paths. In Modi_AODV it is assumed that a malicious node will not succeed in disrupting communication between the source and the destination nodes.

Proposed Algorithm to Detect and Deactivate Malicious Nodes

The following assumptions are made in designing the proposed algorithm.

1. A node interacts with its 1-hop neighbours directly and with other nodes via intermediate nodes using multi-hop packet forwarding.

2. Every node has a unique ID in the network, which is assigned to a new node collaboratively by existing nodes.

3. The network is considered to be layered.

4. The source and destination nodes are assumed not to be malicious.

Steps of Modi_AODV Algorithm

Algorithm Section 1: Working of the RREQ packet

Step 1: Set htype to "0" or "1".
        htype = "0" means a non-malicious node.
        htype = "1" means a malicious node.
Step 2: The source node broadcasts the RREQ packet (p).
Step 3: If a node's htype = "0", broadcast the RREQ to this node.
        If htype = "1", deactivate this node and do not broadcast the RREQ to it.
Step 4: Repeat Steps 2 and 3 until the RREQ reaches the destination.

Algorithm Section 2: Working of the RREP packet

Step 1: The destination node rebroadcasts the RREP packet in the same way as the RREQ.
Step 2: All possible routes are searched by the RREP.
Step 3: If any node goes out of signal range or leaves the network after receiving the RREQ, an available route is selected during the RREP broadcast; there is no need to rebroadcast the RREQ and re-reply to select the routing path.
Step 4: Repeat Steps 2 and 3 until the RREP reaches the source node.
Step 5: The source node selects all possible paths for data transmission to the destination node.

Algorithm Section 3: Data packet transmission

Step 1: Select all possible paths from source to destination for sending the data packets.
Step 2: At every event, distribute the data packets equally among the selected paths.
Step 3: The source node receives an overhearing message from the destination node after the data packets have been received over all selected routes.
Step 4: If the source node does not receive an overhearing message for a selected route, that route is discarded from the routing table and the presence of a malicious node on it is assumed.
Step 5: The detected malicious node is discarded from the routing table and will not be included in any selected route for further events.

4. Comparative Simulation Results Between AODV and Modi_AODV

The working of routing largely depends upon successful transmission of packets to the destination, which requires proper selection of the routing path and algorithm. AODV and Modi_AODV have been used as the routing solutions. All the simulations have been performed using Network Simulator NS-2.32 on the Fedora 13 platform. The traffic sources are CBR (Constant Bit Rate). The source-destination pairs are spread randomly over the network. During the simulation, each node starts its journey from a random spot toward a randomly chosen destination. Once the destination is reached, the node pauses for a while, and another random destination is chosen after that pause time. This process repeats throughout the simulation, causing continuous changes in the topology of the underlying network. Different network scenarios with varying numbers of nodes and distinct node transmission ranges are generated.
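The mobility pattern described above is the classic random waypoint model. The sketch below illustrates it under the Table 1 settings (1000 m x 1000 m area, speeds up to 10 m/s); the function name, the fixed pause time and the seed are illustrative, not taken from the paper.

```python
import random

def random_waypoint_trace(n_moves, area=(1000.0, 1000.0),
                          max_speed=10.0, seed=42):
    """Return (x, y, speed) waypoints for one node: pick a random spot,
    travel there at a random speed, pause, then repeat."""
    rng = random.Random(seed)
    trace = []
    for _ in range(n_moves):
        x = rng.uniform(0.0, area[0])       # random destination in the area
        y = rng.uniform(0.0, area[1])
        v = rng.uniform(0.0, max_speed)     # random speed for this leg
        trace.append((x, y, v))
    return trace

trace = random_waypoint_trace(5)
```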

Table 1: Evaluation parameters

Parameter                         Value
Simulator                         NS-2.32
Routing protocol                  AODV and proposed modified AODV
Communication type                CBR
Number of nodes                   10, 20, 50
Maximum mobility speed of nodes   0, 4, 6, 8, 10 m/sec
Simulation area                   1000 m x 1000 m
Simulation time                   500 sec
Packet size                       512 bytes
Number of malicious nodes         1, 2, 4


Figure 1: Snapshot of the output file showing total attackers and dropped packets in the 10-node scenario.

Figure 2: Snapshot of the output file showing total attackers and dropped packets in the 20-node scenario.

Figure 3: Snapshot of the output file showing total attackers and dropped packets in the 50-node scenario.

Figure 4: Snapshot of the NAM file showing movement of nodes and dropped packets in the 50-node scenario.


5. Statistical Evaluation of Simulation Results

Table 2: % Overhead

Table 2 shows the % overhead. For the packet delivery ratio, across all three scenarios the overhead of Modi_AODV with malicious nodes is very small (0.482 to 0.946068093) compared with that of AODV with versus without malicious nodes (19.704 to 20.74721302).

For throughput, across all three scenarios the overhead of Modi_AODV with malicious nodes is again very small (0.405205372 to 1.71476344) compared with AODV with versus without malicious nodes (19.33919518 to 22.55081).

For the end-to-end delay ratio, the overhead in the 50-node scenario is lower for Modi_AODV with malicious nodes (17.305314) than for AODV with versus without malicious nodes (47.44427457), but in the remaining two scenarios it is very high for Modi_AODV with malicious nodes.

Table 3 shows the % overhead of data packets sent and received. AODV without a malicious attack gives very good results (0 to 0.483351235), whereas the results with malicious attacks in AODV are poor (19.82759 to 24). Modi_AODV under malicious attacks again gives good results (0.477327 to 1.32231405).

Table 4 shows the coefficients of correlation. For the packet delivery ratio, the correlation between AODV without and with malicious attacks lies between low negative and high negative across the three scenarios. On the other hand, the correlation between AODV without malicious nodes and Modi_AODV with malicious nodes lies between low positive and moderate positive.
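The statistics reported in Tables 2-4 can be reproduced with two small helpers. The paper does not state its % overhead formula, so the relative-difference definition used below is an assumption; the correlation is the standard Pearson coefficient.

```python
def percent_overhead(baseline, other):
    """Assumed definition: relative difference of a metric between two
    runs (e.g. AODV without vs. with malicious nodes), in percent."""
    return abs(baseline - other) / baseline * 100.0

def pearson(xs, ys):
    """Pearson coefficient of correlation between two metric series
    (e.g. one value per mobility speed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den
```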

(Table 2 columns, for each of the 10-, 20- and 50-node scenarios: (a) AODV without malicious attack vs. AODV with malicious attack; (b) AODV without malicious attack vs. Modi_AODV with malicious attack.)

Parameter                10 Nodes (a)  10 Nodes (b)  20 Nodes (a)  20 Nodes (b)  50 Nodes (a)  50 Nodes (b)
Packet delivery ratio    20.536        0.482         19.704        0.706         20.74721302   0.946068093
Throughput               22.55081      1.4763        19.33919518   0.405205372   20.99772417   1.71476344
End-to-end delay ratio   0.000168      124.147       22.88067898   99.05142287   47.44427457   17.305314


Table 3: % Overhead

Table 4: Co-efficient of Correlation

The coefficient of correlation in throughput between AODV without and with malicious attacks lies between low negative and high positive across the three scenarios. On the other hand, the correlation between AODV without malicious nodes and Modi_AODV with malicious attack lies between low positive and high positive.

The coefficient of correlation in end-to-end delay ratio between AODV without and with malicious attacks lies between low negative and high positive across the three scenarios. On the other hand, the correlation between AODV without malicious nodes and Modi_AODV with malicious attack lies between moderate positive and high positive.

(Table 3 columns, for each of the 10-, 20- and 50-node scenarios: % overhead of data packets sent and received for (a) AODV without malicious attack, (b) AODV with malicious attack, and (c) Modi_AODV with malicious attack.)

Parameter                        10 Nodes (a/b/c)        20 Nodes (a/b/c)            50 Nodes (a/b/c)
Data packets sent and received   0 / 24 / 0.477327       0 / 19.82759 / 0.719424     0.483351235 / 20.8652793 / 1.32231405

(Table 4 columns, for each of the 10-, 20- and 50-node scenarios: (a) correlation between AODV without malicious attack and AODV with malicious attack; (b) correlation between AODV without malicious attack and Modi_AODV with malicious attack.)

Parameter                10 Nodes (a)   10 Nodes (b)   20 Nodes (a)   20 Nodes (b)   50 Nodes (a)   50 Nodes (b)
Packet delivery ratio    -0.999398703   0.516905611    -0.112685559   0.508510726    -0.065082862   0.289847356
Throughput               -0.082760129   0.465276591    0.919687004    0.998570886    0.967430467    0.982844094
End-to-end delay ratio   0.492062764    0.759453643    0.873056475    0.703326942    -0.350595376   0.877024364


6. Summary

• The mean % overhead is much lower in Modi_AODV than in AODV for packet delivery ratio and throughput (0.711356 and 1.198756 versus 20.32907 and 20.96258), but for the end-to-end delay ratio the % overhead is much higher in Modi_AODV than in AODV (80.016791 versus 23.44171). This shows that the end-to-end delay ratio is high in Modi_AODV.

• The mean coefficient of correlation is better in Modi_AODV than in AODV: packet delivery ratio (-0.39239 and 0.438421), throughput (0.601452 and 0.815564) and end-to-end delay ratio (0.338175 and 0.779935).

References
[1] Loay Abusalah, Ashfaq Khokhar, and Mohsen Guizani, "A Survey of Secure Mobile Ad Hoc Routing Protocols", IEEE Communications Surveys & Tutorials, Vol. 10, No. 4, pp. 78-93, 2008.
[2] J. Arshad and M. A. Azad, "Performance Evaluation of Secure on-Demand Routing Protocols for Mobile Ad-hoc Networks", in Proc. 3rd Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (SECON '06), Vol. 3, pp. 971-975, Sept. 2006.
[3] Ye Tung, M. Alkhatib, and Q. S. Rahman, "Security Issues in Ad-Hoc on Demand Distance Vector Routing (AODV) in Mobile Ad-Hoc Networks", Proceedings of the IEEE, pp. 339-340, 2005.
[4] Payal N. Raj and Prashant B. Swadas, "DPRAODV: A Dynamic Learning System Against Blackhole Attack in AODV Based MANET", International Journal of Computer Science Issues, Vol. 2, pp. 54-59, 2009.
[5] A. Kush, R. Chauhan, C. Hwang, and P. Gupta, "Stable and Energy Efficient Routing for Mobile Adhoc Networks", in Proc. Fifth International Conference on Information Technology: New Generations, ISBN 978-0-7695-3099-4, available at the ACM Digital Portal, pp. 1028-1033, 2008.
[6] C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing", RFC 3561, IETF Network Working Group, July 2003.
[7] Harris Simaremare and Riri Fitri Sari, "Performance Evaluation of AODV Variants on DDOS, Blackhole and Malicious Attacks", IJCSNS International Journal of Computer Science and Network Security, Vol. 11, No. 6, June 2011.
[8] Vijay Kumar and Ashwani Kush, "Detection and Recovery of Malicious Node in Mobile Ad Hoc Networks", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 1, January 2012, ISSN 2277-128X.

First Author: Vijay Kumar is an Assistant Professor in the Department of Computer Science & Applications, I.E.C. University, Baddi (Solan), H.P., India. He has 9 years of teaching experience. He holds an MCA and an M.Phil. and is pursuing a Ph.D. in Computer Science. He has 18 research papers to his credit in various international and national journals and conferences. His research interests are in mobile ad hoc networks.

Second Author: Dr. Ashwani Kush is Head and Associate Professor in the Department of Computer Science & Applications, University College, Kurukshetra University, Kurukshetra. He completed his Ph.D. in Computer Science in association with the Indian Institute of Technology, Kanpur, India, and Kurukshetra University, Kurukshetra, India. He is a professional member of ACM, IEEE, SCRA, CSI India, IACSIT Singapore and IAENG Hong Kong. He has more than 60 research papers to his credit in various international and national journals and conferences, and has authored 15 books in computer science for undergraduate and school students. His research interests are in mobile ad hoc networks, e-governance and security. Dr. Kush has chaired many sessions at international conferences in the USA and Singapore. He is a member of the syllabus, timetable and quiz contest committees of Kurukshetra University, Kurukshetra, India, and is on the panel of eminent resource persons in Computer Science for the EDUSAT project, Department of Higher Education, Government of Haryana; his lectures are also broadcast via satellite in Haryana, India.


An Adaptive K-random Walks Method for Peer-to-Peer Networks

Mahdi Ghorbani

Electrical, Computer and IT Engineering Dept., Qazvin Branch, Islamic Azad University Qazvin, Iran

[email protected]

Abstract

Designing an intelligent search method for peer-to-peer networks significantly affects the efficiency of the network, since a search query should be sent to the nodes that have most probably stored the desired object. Machine learning techniques such as learning automata can be used as an appropriate tool for this purpose. This paper presents a search method based on learning automata for peer-to-peer networks, in which each node that receives a search query is selected according to values stored in its memory rather than being selected randomly. These probability values are stored in tables and reflect the history of the node in previous searches for the desired object. For evaluation, simulation is used to demonstrate that the proposed algorithm outperforms the K-random walk method, which sends the search queries to nodes randomly.
Keywords: Peer-to-peer, Search, K-random Walk, Learning Automata Theory.

1. Introduction

The main characteristic of peer-to-peer networks is that nodes can connect to the network or leave it at any time. In other words, there is no central control over the behavior of the nodes, with all nodes acting similarly in terms of their role. One of the important issues in these networks is searching for the objects stored in the nodes. Finding an object faces some challenges depending on how the nodes are located in the network. Peer-to-peer networks are divided into structured and unstructured with respect to their structure. In the former, the location of the nodes is predefined by distributed hash tables (DHT), so finding an object is done readily using hash functions. The latter, on the other hand, distribute their contents in a completely random fashion, and the nodes have no information about the network status or the location of the objects stored in other nodes [1-3]. Therefore, finding an object in this structure is not simple, and search mechanisms must be utilized. It is very important to design a search method for these networks, as it can considerably affect their efficiency. Based on the information the nodes have about the contents, search techniques are classified as informed and blind [4-5]. In informed methods, the nodes store some metadata about their neighborhood; through this information, the nodes are informed about the network status as well as the location of the contents of other nodes. In blind search methods, the nodes have no information about the location of the objects, so they employ flooding algorithms to direct their search queries. One of these methods is random walk [7-8]. When a search for an object is unsuccessful, random walk selects some of the neighbors randomly and sends the search queries to them through a flooding algorithm until the search time is over. Each of these two search families (informed and blind) has its own advantages and disadvantages, which can noticeably affect network criteria such as search success, average response time, average number of objects found in the search process, and network load. According to previous studies [6-9,13-18], using artificial intelligence methods such as reinforcement learning techniques [11-12] improves search performance to a large extent. With these methods, each node in the network learns which of its neighbor nodes to select for sending the search query. This paper proposes a search algorithm based on learning automata [16], where the nodes select their neighbors intelligently rather than randomly for sending the queries. Each node stores a number in its history table based on the feedback received from the neighbor nodes. In the next levels of the search, the nodes which have assigned themselves a greater number for the current object will probably contain that object.
The suggested algorithm is compared with K-random walk [8-10] on search success, the number of discovered objects per query, and the amount of overhead due to the messages generated for each search query. The OverSim [22] software has been used for the simulation, and the obtained results show the advantage of the proposed algorithm.

The rest of the text is organized as follows. A short history of the relevant work conducted so far on the proposed protocol is presented in Section 2. In Section 3, learning automata are briefly explained as the main learning strategy in the developed algorithm. The proposed algorithm is then introduced in Section 4, while the results of simulation are provided in Section 5. Finally, some conclusions are drawn in Section 6.

2. Background

Search strategies in peer-to-peer networks are categorized into blind and informed methods according to the amount of information the nodes have about the network status. In blind methods, each node directs the query to all its neighbors, and the search is terminated once success or failure occurs, or when the TTL expires. Blind methods waste a lot of network bandwidth and cause a large overhead [6]. In the K-random walk method, the queries are sent to just k randomly selected neighbors instead of all neighbors. The complexity of the generated messages is small as well, but the success rate varies because of the random selection of the neighbors to which the search queries are directed. Moreover, random selection of the neighbor nodes can negatively affect the response time of the search queries. In informed methods, each node stores some information about its neighbors and the network status as well. Adaptive probabilistic search (APS) [9,10], SALA [17] and LARW [18] are examples of informed methods which utilize the experiences of previous searches in the next ones. APS works using k independent walkers and random direction of the queries; probabilistic selection instead of purely random selection is an advantage of this technique. Each intermediate node directs the queries toward the neighbor for which it has stored a probabilistic value in its local index, and the index values are updated by feedback received from the walkers. APS enhances the reliability of the search and optimizes bandwidth consumption. In [17] a novel self-adaptive learning-automata-based search algorithm (SALA) is introduced. In this method, each node uses a learning automata algorithm to select suitable neighbors. By applying three tables comprising the previous experiences of nodes, the neighbors are trained and increase their chance of participating in the search. Although SALA adapts to the network size, its learning rate is low because three tables must be updated in each iteration.

LARW [18] is a newer version of the k-random walks algorithm which utilizes a new form of learning automata algorithm called KSALA [16]. KSALA helps the nodes decide which neighbors are suitable for the search; the decision is based on the probability values of the neighbors' participation in previous iterations. The search performance is good enough, but in the first steps of the search the performance is low because of the slow learning rate of the nodes.

3. Overview of Learning Automata Theory

A learning automaton [19-21] is a machine which can perform a finite number of actions. Each selected action is evaluated by a probabilistic environment, and the result of the evaluation is given to the automaton in the form of a positive or negative signal. The automaton takes this response into account when selecting its next action. The final goal is for the automaton to learn to choose the best action among the available ones, that is, the action which maximizes the probability of receiving a reward from the environment. The environment can be represented by a triple E ≡ {α, β, c}, in which α ≡ {α1, α2, ..., αr} is the set of inputs, β ≡ {β1, β2, ..., βm} is the set of outputs, and c ≡ {c1, c2, ..., cr} is the set of penalty probabilities, where ci is the probability of an undesirable result for action αi. In a static environment the values of ci remain unchanged, while in a non-static environment these values change over time. Learning automata are divided into two groups, with fixed and variable structures; the following introduces the learning automaton with variable structure. It can be shown as a quadruple {α, β, p, T}, where α ≡ {α1, α2, ..., αr} and β ≡ {β1, β2, ..., βm} represent the sets of actions and inputs of the automaton, respectively, p = {p1, p2, ..., pr} gives the vector of selection probabilities for the actions, and p(n+1) = T[α(n), β(n), p(n)] denotes the learning algorithm. The following is an example of a linear learning algorithm. Assume that action αi is selected at the n-th step.

Favorable response:

    pi(n+1) = pi(n) + a[1 − pi(n)]
    pj(n+1) = (1 − a) pj(n),  for all j ≠ i                     (1)

Unfavorable response:

    pi(n+1) = (1 − b) pi(n)
    pj(n+1) = b/(r − 1) + (1 − b) pj(n),  for all j ≠ i         (2)


In Equations (1) and (2), a is the reward parameter and b is the penalty parameter. Three cases can be distinguished according to the values of a and b: when a and b are equal, the algorithm is called LRP; when b is much smaller than a, the algorithm is called LRεP; and when b is zero, the algorithm is called LRI.
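The linear update rules (1) and (2) can be written as one small function, with a = b giving LRP and b = 0 giving LRI. The sketch below is illustrative, not code from the paper.

```python
def la_update(p, i, favorable, a=0.1, b=0.1):
    """Apply Eq. (1) (favorable) or Eq. (2) (unfavorable) to the
    probability vector p after action i was selected; returns a new
    vector. The vector stays a probability distribution."""
    r = len(p)
    q = []
    for j, pj in enumerate(p):
        if favorable:
            # Eq. (1): reward action i, shrink the others.
            q.append(pj + a * (1.0 - pj) if j == i else (1.0 - a) * pj)
        else:
            # Eq. (2): penalize action i, redistribute to the others.
            q.append((1.0 - b) * pj if j == i
                     else b / (r - 1) + (1.0 - b) * pj)
    return q
```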

4. Proposed Search Algorithm

In this paper, the learning automaton has been used for training the nodes in the network for selection of the desired neighbor nodes. At first, the data structure of the proposed algorithm is explained and then, the proposed algorithm is introduced.

4.1 Data Structure of Search Algorithm

Two tables are used in the developed search algorithm for learning the network status and the location of objects. Each row of these tables indicates an existing object in the network, whereas each column shows one neighbor of the node. For each neighbor node, there is a value called the LA-value, which denotes its history of finding objects in previous searches. The table called the Query-LA-table contains the neighbor nodes which have experienced a search before. During a search, when a query is going to be sent, the values in this table are used. Selection of the neighbors is done randomly based on the learning algorithm; however, the larger their LA-values are, the greater the probability of their selection. Figure 1 shows a sample of these tables.

Keywords   | LA-values to neighbours
           | N1    N2    ...   Nm
Object 1   | V11   V12   ...   V1m
Object 2   | V21   V22   ...   V2m
...        | ...   ...   ...   ...
Object n   | Vn1   Vn2   ...   Vnm

Fig. 1 A sample Query-LA-table with n objects and m neighbours.

The second table which is used in this algorithm is Neighbor-LA-table and contains those set of neighbor nodes which have not experienced a search with the current query before. The values in this table are indicative of the performance for each of these neighbors obtained in the previous searches. Figure 2 shows a sample Neighbor-LA table.

N1    N2    ....  Nm
V11   V12   ....  V1m

Fig. 2 A sample Neighbor-LA-table with m neighbors.

4.2 How the Search Algorithm Works

As mentioned above, two tables are used to learn the network status and the locations of objects. Both tables are initially empty. A row is added for each query that is sent, with all neighbors initialized to a uniform probability distribution. The values in these tables are then updated based on the search results: neighbor nodes with higher search performance are rewarded and their probabilities increase. The size of the reward is determined by the distance of the discovered object from the requester, i.e., the number of hops taken to reach the desired object. Nodes that do not participate in the search are neither rewarded nor penalized. In subsequent searches, the table values are used to select neighbors so that queries are sent intelligently. When a query is sent to a node, a hit is returned if the search succeeds; otherwise the search continues by sending the query to a number of neighbor nodes with the highest LA-values in the Query-LA-table. If the queried object has no row in that table, the search falls back to the Neighbor-LA-table, choosing a number of neighbors from it. The search terminates when the desired object is found or the TTL expires. Finally, the table values are updated according to the feedback received from the environment. Figure 3 gives the pseudo code of the search algorithm.

// Query keyword Q; query source S; total number of walkers K
1. User submits a query
2. Search source node S for Q
3. If Q is not in S
   3.1 Search for Q in the Query-LA-table
   3.2 If Q is found
         Select K walkers from Query-LA-table with the automata algorithm
         Generate K query messages
         Search starts with K walkers
       Else

ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 2, No. 3, May 2013, www.ACSIJ.org

         Select K walkers from Neighbor-LA-table
4. If a hit occurs, send back the result on the reverse path.
5. All nodes on the path update the appropriate LA-table.

Fig. 3 Pseudo code of the proposed search algorithm.
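The pseudo code can be fleshed out roughly as follows. This is a simplified sketch under our own assumptions: the network is a plain dictionary, walkers are explored recursively rather than in parallel, walkers are taken greedily by LA-value for determinism, and the Neighbor-LA-table fallback is reduced to picking any K neighbors:

```python
def search(network, tables, source, query, k=2, ttl=3):
    """Sketch of the proposed search (names and layout are ours).

    network -- node -> {"objects": set of keywords, "neighbors": [node, ...]}
    tables  -- node -> Query-LA-table rows: {query: {neighbor: LA-value}}
    Returns the node holding the object, or None if the TTL runs out.
    """
    if query in network[source]["objects"]:      # steps 2-3: held locally
        return source
    row = tables.get(source, {}).get(query)
    if row:                                      # step 3.2: known query ->
        walkers = sorted(row, key=row.get, reverse=True)[:k]  # best LA-values
    else:                                        # step 3.3: fallback choice
        walkers = network[source]["neighbors"][:k]
    if ttl <= 0:
        return None
    for node in walkers:                         # forward the K walkers
        hit = search(network, tables, node, query, k, ttl - 1)
        if hit is not None:
            return hit                           # step 4: result travels back
    return None
```

A real deployment would also perform step 5, updating the LA-tables of every node on the reverse path with the reward/penalty rule; that bookkeeping is omitted here for brevity.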

5. Simulations

In this section, the simulation assumptions are stated first, and then the simulation results are examined. In addition to search-quality criteria, such as search success rate and the average number of objects found per query, network criteria such as the overhead of messages generated in the network are compared with the well-known K-random walk method.

5.1 Simulation Assumptions

The OverSim simulation framework is used to simulate the proposed algorithm. The network used here is a random graph [4] of 1000 nodes, where the minimum out-degree of each node is 6. In this network, 200 distinct objects are randomly distributed among the nodes, with each node holding a number of objects according to its assigned memory capacity. The maximum lifetime of a query in the network (TTL) is set to 6 hops. Network dynamism is modeled by a node failure rate, expected to be 30% on average; in other words, 30% of the nodes in the network are either inactive or leaving the network. The maximum number of walkers in the simulations is 15. A standard learning automaton with the LRP algorithm is used, in which the initial values of a and b are both set to 0.5.

5.2 Evaluation of Proposed Search Algorithm

The performance of the proposed search algorithm is evaluated through several simulations, and the method is compared with the K-random walk method.

5.2.1 Success Rate

When a query is sent to one or more nodes, a hit message is generated once the desired object is found, indicating that the search succeeded. A search is called successful when at least one hit message is received; the average number of such successes can also be calculated. Figure 4 compares the search success of the proposed algorithm and the K-random walk method. As the diagram shows, the success rate is significantly higher in the proposed method, owing to its intelligent selection of neighbors as opposed to the random selection of K-random walk.

Fig. 4 Comparing success rate for two algorithms.

5.2.2 Number of Discovered Objects

Figure 5 shows the number of objects found per query. The greater this number, the higher the accuracy, speed and resilience of the algorithm. Resilience improves because whenever a node containing several objects fails, other replicas of those objects still exist in the network, which strengthens resistance against node failures. Figure 5 demonstrates that the proposed algorithm performs very well in finding objects in the network, whereas the random selection of neighbors in the K-random walk method leads to a larger number of misses.

Fig. 5 Comparing number of objects found for two algorithms.

5.2.3 Overhead from Generated Messages

Figure 6 compares the proposed algorithm and the K-random walk algorithm with respect to the message overhead generated per query. It can be observed in this diagram that the proposed


algorithm generates, on average, far fewer messages per query than K-random walk, because no additional messages or queries are sent into the network. The message overhead is very significant in the K-random walk method, since the queries are sent randomly.

Fig. 6 Comparing overhead from generated messages for two algorithms.

6. Conclusions

In the proposed method, each node stores tables that keep the history of its neighbors in previous searches. These values are probabilistic and are updated, using a learning automaton, according to the hit/miss messages received from neighbor nodes. The proposed search algorithm was compared with the K-random walk method. The results show that, thanks to the per-node history tables, nodes with a greater probability of containing the desired object are selected, and thus the success rate increases. Moreover, the intelligence of the proposed algorithm leads to fewer queries being sent, so fewer messages are generated in the network and the overhead is reduced. The number of hops visited to reach the chosen nodes also decreases. Therefore, the proposed algorithm outperforms the K-random walk method in terms of success rate, network overhead and average number of discovered objects per query.

References
[1] S. Androutsellis-Theotokis and D. Spinellis, "A survey of peer-to-peer content distribution technologies," ACM Computing Surveys, vol. 36, no. 4, December 2004, pp. 335-371.

[2] E. K. Lua, J. Crowcroft, M. Pias, R. Sharma, and S. Lim, "A survey and comparison of peer-to-peer overlay network schemes," IEEE Communications Surveys and Tutorials, March 2004.

[3] A. Saghiri and A. Bagheri, "Enhance Your Search Engine Functionality with Peer-to-Peer Systems," Proc. of the 2nd Int. Conf. on Computing and Automation Engineering, 2010, pp. 583-586.

[4] A. Saghiri and A. Bagheri, "An adaptive architecture for personalized search engine in ubiquitous environment with peer-to-peer systems," Proc. of the Int. Conf. on Information and Multimedia Technology, 2009, pp. 107-111.

[5] D. Tsoumakos and N. Roussopoulos, "Analysis and comparison of P2P search methods," Proc. of the 1st Int. Conf. on Scalable Information Systems, 2006, Article no. 25.

[6] S. M. Thampi and C. K. Sekaran, "Survey of search and replication schemes in unstructured P2P networks," Network Protocols and Algorithms, vol. 2, no. 1, 2010, pp. 93-131.

[7] R. Dorrigiv, A. López-Ortiz, and P. Pralat, "Search algorithms for unstructured peer-to-peer networks," Proc. of the 32nd IEEE Conference on Local Computer Networks, 2007, pp. 343-349.

[8] C. Gkantsidis, M. Mihail, and A. Saberi, "Random walks in peer-to-peer networks," in IEEE INFOCOM 2004, Hong Kong, 2004, vol. 1, pp. 120-130.

[9] D. Tsoumakos and N. Roussopoulos, "Adaptive probabilistic search for peer-to-peer networks," Proc. of the 3rd Int. Conf. on P2P Computing, 2003, pp. 102-109.

[10] D. Tsoumakos and N. Roussopoulos, "Probabilistic knowledge discovery and management for P2P networks," P2P Journal, 2003, pp. 134-141.

[11] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.

[12] M. E. Harmon and S. S. Harmon, Reinforcement Learning: A Tutorial, Wright Laboratory, 1996.

[13] S. M. Thampi and C. K. Sekaran, "Collaborative load-balancing scheme for improving search performance in unstructured P2P networks," Proc. of the 1st Int. Conf. on Contemporary Computing, August 2008, pp. 161-169.

[14] S. M. Thampi and C. K. Sekaran, "An efficient distributed search technique for unstructured peer-to-peer networks," Int. J. of Computer Science and Network Security, vol. 8, no. 1, January 2008, pp. 128-135.

[15] F. Torabmostaedi and M. R. Meybodi, "An intelligent search algorithm for peer-to-peer networks," Int. Conf. on Contemporary Issues in Computer and Information Sciences, June 2011, pp. 495-500.

[16] M. Ghorbani, A. M. Saghiri, and M. R. Meybodi, “A novel learning-based search algorithm for unstructured peer-to-peer networks,” Technical Journal of Engineering and Applied Sciences, January 2013, vol. 3, no. 2, pp. 145-149.

[17] M. Ghorbani, M. R. Meybodi, and A. M. Saghiri, "A novel self-adaptive search algorithm for unstructured peer-to-peer networks utilizing learning automata," Proc. of the 3rd Joint Conference of Robotics & AI and the 5th RoboCup IranOpen International Symposium, April 2013, pp. 42-47.

[18] M. Ghorbani, M. R. Meybodi, and A. M. Saghiri, "A new version of the k-random walks algorithm in peer-to-peer networks utilizing learning automata," Proc. of the 5th Conference on Information and Knowledge Technology, May 2013.

[19] C. J. C. H. Watkins and P. Dayan, "Q-learning," Machine Learning, vol. 8, Springer, 1992, pp. 279-292.

[20] K. Najim and A. S. Poznyak, Learning Automata: Theory and Applications, Tarrytown, New York: Elsevier Science Publishing Ltd., 1994.


[21] K. S. Narendra and M. A. L. Thathachar, Learning Automata: An Introduction, New York: Prentice-Hall, 1989.

[22] I. Baumgart, and B. Heep, Oversim community site, [Online], Available: http://www.oversim.org/wiki

Mahdi Ghorbani received his B.Sc. degree in computer science from Shahid Bahonar University of Kerman in 2008 and his M.Sc. degree in computer networks from Qazvin Azad University (QIAU) in 2013. He works as a network administrator and researcher in the state organization of deeds and properties in Iran. His main research interests include peer-to-peer networks, learning systems and complex systems. He has published more than 10 papers in international journals and conferences, including IEEE and Springer publications.


Strategic Evaluation of Web-based E-learning: A Review of 8 Articles

1Shahriar Mohammadi, 2Sajad Homayoun

1 IT Group, Industrial Engineering, K. N. Toosi University of Technology, Tehran
E-mail: [email protected]

2 Graduated from IT Group, Industrial Engineering, K. N. Toosi University of Technology, Tehran
E-mail: [email protected]

Abstract

Electronic learning is an important educational topic today, and choosing an appropriate, applicable approach to education is of great importance. One may ask: why should we evaluate systems? The answer can be any of the following: (1) immediate management of strengths and weaknesses; (2) design of a long-term strategy; (3) evaluation of managers' performance. The main point of this article is that we evaluate systems based on their strategy, goals and objectives. Drawing on 8 articles, we have compiled a set of criteria for evaluating e-learning systems. The criteria are divided into four dimensions, collectively called LISC: (1) Learning, (2) Interface, (3) Social, and (4) Content. The results are then displayed in a single visual stage diagram for managers.
Keywords: E-learning, web-based learning, strategic evaluation.

1. Introduction

Information technology has opened a wide window onto education. Its advantages include, but are not limited to, (1) low expense, (2) educational justice, (3) distance education, and (4) repeatable instruction. Most articles on evaluation revolve around electronic business websites [1, 2, 3, 4, 5], while the number of papers on the evaluation of electronic education systems is limited. The existing articles on e-learning evaluation employ criteria defined within each article itself. Like W.C. Chiou et al. [6], we believe that to evaluate a system, we should consider that system's objectives and strategies; in other words, it is not appropriate to compare two systems with different objectives and strategies using the same criteria. As such, a novel approach is needed for the evaluation of electronic educational systems. Section 2 provides the set of suggested criteria. In Section 3, our research proposal is given. Section 4 reports a case study, and Section 5 concludes.

2. Criteria for Evaluation of Electronic Educational Systems: A Review

Most researchers take the existing research on a topic as the proper starting point for a new study. It should be noted that there is no strong body of knowledge on the evaluation of electronic educational systems. Therefore, we reviewed the existing literature to assemble a simple evaluation of electronic systems. We started with a search for articles on two web portals, Google Scholar and ScienceDirect.com. In total, 47 articles were found, and 8 were selected as the most relevant after their abstract and introduction sections were read and analyzed. Table 1 displays the selected articles.

Table 1: Articles selected in the search process.

No.  Authors                                              Reference
1    Daniel Y. Shee, Yi-Shun Wang                         [8]
2    Sevgi Ozkan, Refika Koseler                          [9]
3    Rafael Andreu, Kety Jáuregui                         [10]
4    Kum Leng Chin, Patrice Ng Kon                        [11]
5    Ru-Jen Chao, Yueh-Hsiang Chen                        [12]
6    Yi-Shun Wang                                         [13]
7    Gwo-Hshiung Tzeng, Cheng-Hsin Chiang, Chung-Wei Li   [14]
8    Yi-Shun Wang, Hsiu-Yuan Wang, Daniel Y. Shee         [15]

Next, we classified the criteria introduced by the selected articles. After our initial analysis, the electronic educational system is divided into four main


dimensions, called LISC: (1) Learning, (2) Interface, (3) Social, and (4) Content. Each criterion is classified under its related dimension. Table 2 displays the dimensions and criteria.

Table 2: Dimensions and criteria.

1. Learning: Capability of controlling learning progress; Capability of recording learning performance; Learning models; Synchronous learning; Asynchronous learning; Learning record; Self learning; Participant motivation and system interaction; Interactive course; Learn from past performance; Consideration for disabled students.

2. Interface: Ease of use; User-friendliness; Ease of understanding; Operational stability; Quality of website platform; Personalization; Webpage connection; Multimedia tools/technologies; Download speed.

3. Social: Learner cognitive process; Environment facilities; Ease of discussion with other learners; Ease of discussion with teachers; Ease of accessing shared data; Ease of exchanging learning with others; Learning community; Personalization; Student commitment; IT support; Protection of students' details and privacy; Intellectual property rights.

4. Content: Learning models; Course design; Up-to-date content; Sufficient content; Useful content; E-learning material; Self learning; Course quality; Instruction materials; Interactive course; Up-to-date course information; Offline/online resources; Language support; Intellectual property rights; Qualified e-learning course designer; Course materials prepared in advance; Library facilities/support; Availability; Content personalization; Provides information you need at the right time; Easy to understand.

Now we have a strong set of criteria, and, as stated earlier, we want to evaluate an e-learning system based on its strategy and goals.

3. Proposal

We divide the e-learning process into 3 phases. Registration and before registration is the phase in which a new user enters the environment or a recently registered user navigates the components of the e-learning environment. Learning is the next phase, in which learning is conducted, and Exam and Quiz is the last phase, focusing on the evaluation process (Figure 1).

Fig. 1 Our viewpoint on e-learning system phases. [Diagram: three phases, Registration and before registration, Learning, and Exam and Quiz, each driven by strategies and objectives and each evaluated along the four LISC dimensions: L (Learning), I (Interface), S (Social), C (Content).]


The four introduced dimensions and their criteria apply to each phase and require evaluation. Finally, we arrive at a diagram that visually displays how close each dimension is to the ideal intended by the manager. Chiou et al. [6, 7] presented a five-stage model for evaluating e-commerce websites, shown here in Figure 2. We employ this model for the final evaluation of an e-learning website based on our classified criteria.

Fig. 2 Five-stage model proposed by W.C. Chiou et al. [6] to evaluate e-commerce websites.

4. Case Study

We have selected an e-learning system to show how the proposed method is applied. The evaluated website is called Z; it teaches English conversation and writing to Persian-speaking learners.

First Stage: Identification of website strategy and criteria.

Step 1. Identify the e-learning system's goals and objectives. The manager of website Z stated the goal as "Strong Resource and Easy Learning" and the objectives as below:

1. Variety of resource
2. Strong conversation
3. Strong writing
4. Ease of use
5. Good interface
6. Interactive quiz

Step 2. Choose proper criteria considering the goals and objectives.
Step 3. Construct a hierarchical evaluation structure.
Step 4. Assign a weight to each criterion. We ask the manager to rate the importance of each criterion with the fuzzy linguistic terms "very unimportant," "unimportant," "somewhat unimportant," "neutral," "somewhat important," "important," and "very important," whose fuzzy quantities are 0.09, 0.23, 0.36, 0.50, 0.64, 0.78, and 0.91, respectively.
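The term-to-quantity mapping of Step 4 is straightforward to encode; a minimal sketch (the dictionary layout and function name are ours):

```python
# Fuzzy quantities for the importance terms used by the manager (Step 4).
IMPORTANCE = {
    "very unimportant": 0.09,
    "unimportant": 0.23,
    "somewhat unimportant": 0.36,
    "neutral": 0.50,
    "somewhat important": 0.64,
    "important": 0.78,
    "very important": 0.91,
}

def weight_of(term):
    """Translate a manager's linguistic judgment into a criterion weight."""
    return IMPORTANCE[term.lower()]
```

For example, a criterion rated "important" receives the weight 0.78, which is exactly the value attached to most criteria in Figure 3.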

Fig. 3 Hierarchical evaluation structure and criteria weights of site Z.

Second Stage: Web-based evaluation instrument development.

Step 1. Convert the criteria into questions that can be scored.

[Content of Fig. 2, reconstructed:
Stage One - Web manager interview: (1) identify website strategy; (2) determine criteria weights.
Stage Two - Instrument development: (1) list the website's intended goals, objectives and actions; (2) develop questionnaires from the criteria.
Stage Three - Website evaluation: conduct the website evaluation by a panel of experts using fuzzy linguistic terms.
Stage Four - Weights & scores: (1) transform fuzzy terms into numbers; (2) normalize the criteria weights; (3) calculate weighted scores.
Stage Five - Data analysis: (1) list the weight, score and gap of each criterion; (2) construct a criterion performance matrix chart; (3) construct a radar chart for the dimensions.]

[Content of Fig. 3, reconstructed: Goal: Strong Resource and Easy Learning. Objectives and their criteria (weights): 1. Variety of resource - Up-to-date content (0.36), Sufficient content (0.64), Related content (0.78). 2. Strong conversation - Conversation with others (0.78), Learning community (0.50). 3. Strong writing - Learning community (0.50), Recording performance (0.23). 4. Ease of use - User friendliness (0.78), Ease of resource use (0.91). 5. Good interface - Visual content (0.50), Site style (0.78). 6. Interactive quiz - Synchronous quiz (0.78), Asynchronous quiz (0.78), Learning after quiz (0.50).]


Step 2. Design a questionnaire with respect to the selected criteria.

Third Stage: Execution of the website evaluation.
Step 1: Choose a panel of experts as evaluators.
Step 2: Evaluation by the evaluators, scoring with fuzzy linguistic terms. The fuzzy terms of this section are "strongly disagree," "disagree," "somewhat disagree," "neutral," "somewhat agree," "agree," and "strongly agree," with quantities 0.09, 0.23, 0.36, 0.50, 0.64, 0.78, and 0.91, respectively.

Fourth Stage: Calculation of weights and scores. Let Sijk denote the score given to criterion j of objective i by evaluator k.

Step 1. Normalization of criteria weights. Normalization is performed using formula (1):

NWij = Wij / Σj Wij    (1)

Step 2. Calculate average scores, weighted scores, and objective scores. The average score ASij of criterion j under objective i is obtained from the evaluators' scores Sijk:

ASij = ( Πk=1..n Sijk )^(1/n)    (2)

where n is the number of evaluators. The weighted score of criterion j (WSij) and the weighted score of an objective (OWSi) are calculated using the following equations:

WSij = NWij × ASij    (3)

OWSi = Σj WSij    (4)

where n is the number of criteria j under an objective i.

Fifth Stage: Web strategy consistency analysis.
Step 1. Analyze the gap value of each criterion. The manager should pay attention to criteria with low average scores. The gap, together with a threshold announced by the manager, determines strategy deviation: if the magnitude of Gij exceeds the threshold, that criterion is recognized as incompatible with the strategy and should be reconsidered. Needless to say, the threshold depends on the resources the manager has available.

Gij = ASij − Wij    (5)

where i is an objective and j is its related criterion.
Step 2. Construct a criteria performance matrix chart. This diagram graphically shows the status of the criteria to the managers, enabling them to prioritize their plans for removing inconsistencies between criteria and strategy.
Step 3. Analyze the LISC dimensions and the efficiency of the 3-phase process:

AWd = (1/n) Σj Wdj    (6)

ASd = (1/n) Σj ASdj    (7)

where d is a LISC dimension (d = 1 to 4), j is a criterion number, and n is the number of criteria under the LISC dimension.
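Under this reading of formulas (1)-(5), the per-objective computation can be sketched as below. The sample numbers are those of objective 1 in Table 3; small rounding differences from the published values are expected, since the table rounds intermediate results:

```python
def evaluate_objective(weights, avg_scores):
    """Normalized weights (1), weighted scores (3), objective score (4)
    and gap values (5) for the criteria of one objective."""
    total = sum(weights)
    nw = [w / total for w in weights]                    # formula (1)
    ws = [n * s for n, s in zip(nw, avg_scores)]         # formula (3)
    ows = sum(ws)                                        # formula (4)
    gaps = [s - w for s, w in zip(avg_scores, weights)]  # formula (5)
    return nw, ws, ows, gaps

# Objective 1 "Variety of resource": Up-to-date, Sufficient, Related content.
nw, ws, ows, gaps = evaluate_objective([0.36, 0.64, 0.78], [0.32, 0.35, 0.32])
```

Running this reproduces the normalized weights 0.20, 0.36, 0.44 and an objective score near 0.33, matching Table 3 up to rounding.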

Table 3: Z site’s LISC dimensional average weights and scores in three phases. OWSi WSij NWij Gij ASij Wij Criteria (Cij) Objective (Oi) 0.32 0.06

0.12 0.14

0.20 0.36 0.44

-0.02 -0.29 -0.46

0.32 0.35 0.32

0.36 0.64 0.78

1. Up-to-date Content 2. Sufficient Content 3. Related Content

1. Variety of resource

0.38 0.27 0.11

0.61 0.39

-0.34 -0.22

0.44 0.28

0.78 0.50

1. Conversation with others 2. Learning community

2. Strong conversation

0.35 0.30 0.05

0.68 0.32

-0.06 -0.07

0.44 0.16

0.50 0.23

1. Learning community 2. Capabilities of recording

performance

3. Strong writing

0.41 0.19 0.22

0.46 0.54

-0.36 -0.49

0.42 0.42

0.78 0.91

1. User friendliness 2. Ease of resource use

4. Ease of Use

0.24 0.05 0.19

0.39 0.61

-0.36 -0.46

0.14 0.32

0.50 0.78

1. Visual content 2. Site style

5. Good Interface

0.35 0.16 0.15 0.04

0.38 0.38 0.24

-0.36 -0.38 -0.34

0.42 0.40 0.16

0.78 0.78 0.50

1. Synchronous Quiz 2. Asynchronous Quiz 3. Learning after Quiz

6. Interactive Quiz

The average LISC dimensional weight (AWtd) and average score (AStd) in each phase can be calculated with formulas (8) and (9), respectively:

AWtd = (1/n) Σj Wtdj    (8)

AStd = (1/n) Σj AStdj    (9)

where t is the transactional phase (t = 1 to 3), d is a LISC dimension (d = 1 to 4), j is the criterion number (j = 1 to n), n is the total number of criteria under the LISC dimension in each phase, Wtdj is the


weight of criterion j under a dimension d in phase t, and AStdj is the average score of criterion j under a dimension d in phase t.

Table 4: Dimensions and related criteria.

Dimension (d)   Related criteria (Cij)   AWd   ASd
1. Content      C11, C12, C13            0.59  0.33
2. Social       C21, C22, C31            0.59  0.38
3. Interface    C41, C42, C51, C52       0.74  0.32
4. Learning     C32, C61, C62, C63       0.57  0.28
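Formulas (6) and (7) are plain averages over the criteria of a dimension; a small sketch using the Content dimension's values (C11-C13) from Table 3 (function name is ours):

```python
def dimension_averages(weights, scores):
    """Average weight AWd (6) and average score ASd (7) for one dimension."""
    n = len(weights)
    return sum(weights) / n, sum(scores) / n

# Content dimension: criteria C11, C12, C13 (weights and average scores).
awd, asd = dimension_averages([0.36, 0.64, 0.78], [0.32, 0.35, 0.32])
```

This yields AWd ≈ 0.59 and ASd = 0.33, the Content row of Table 4.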

Fig. 4 Result in a radar chart.

Table 5: Status of the criteria of each dimension in the different phases.

              Phase 1: Registration            Phase 2: Learning               Phase 3: Final quiz
              and before registration
Dimension     Criteria       AWtd  AStd        Criteria            AWtd  AStd  Criteria       AWtd  AStd
1. Interface  C41, C51, C52  0.68  0.29        C41, C42, C51, C52  0.74  0.32  C41, C51       0.64  0.28
2. Learning   N/A                              C32, C51            0.50  0.29  C61, C62, C63  0.68  0.32
3. Content    C11, C13       0.57  0.32        C11, C12, C13       0.59  0.33  N/A
4. Social     C21, C22, C31  0.59  0.38        C21, C22, C31       0.59  0.38  N/A

Fig. 5 Final status of each dimension compared with the normal status, for the manager. [The radar chart of Fig. 4 plots Weight (AWd) and Score (ASd) for the Content, Social, Interface and Learning dimensions on a 0 to 0.8 scale.]



5. Conclusion

Considering the growth of electronic services, attention to website evaluation is of great importance. As the body of literature on e-learning evaluation criteria is limited, we selected 8 high-quality articles. The audience of this article is managers who intend to increase service quality, as well as researchers working on e-learning. As stated, evaluations should pay special attention to the website's strategy. Employing the method proposed by W.C. Chiou et al. [6, 7], we evaluated an e-learning website using the criteria introduced in our review of the literature.

References
[1] Wei-Hsi Hung and Robert J. McQueen, "Developing an evaluation instrument for e-commerce web sites from the first-time buyer's viewpoint," Electronic Journal of Information Systems Evaluation, vol. 7, no. 1, 2004, pp. 31-42.
[2] Xiuli Cao, Yanhua Liu, Bing Shen, and Min Wang, "Research on evaluation of B to C e-commerce website based on AHP and grey evaluation," IEEE Second International Symposium on Electronic Commerce and Security, 2009, pp. 405-408.
[3] Bindu Madhuri Ch., Padmaja M., Srinivasa Rao T., and Anand Chandulal J., "Evaluating web sites based on grey clustering theory combined with AHP," International Journal of Engineering and Technology, vol. 2, no. 2, 2010, pp. 71-76.
[4] Layla Hasan and Emad Abuelrub, "Assessing the quality of web sites," Journal of Applied Computing and Informatics, vol. 9, 2011, pp. 11-29.
[5] R. Ufuk Bilsel, Gülçin Büyüközkan, and Da Ruan, "A fuzzy preference-ranking model for a quality evaluation of hospital web sites," International Journal of Intelligent Systems, vol. 21, 2006, pp. 1181-1197.
[6] Wen-Chih Chiou, Chin-Chao Lin, and Chyuan Perng, "A strategic framework for website evaluation based on a review of the literature from 1995-2006," Information & Management, vol. 47, 2010, pp. 282-290.
[7] Wen-Chih Chiou, Chin-Chao Lin, and Chyuan Perng, "Case study: A strategic website evaluation of online travel agencies," Tourism Management, 2011, pp. 1-11.
[8] Daniel Y. Shee and Yi-Shun Wang, "Multi-criteria evaluation of the web-based e-learning system: A methodology based on learner satisfaction and its applications," Computers & Education, vol. 50, 2006, pp. 894-905.
[9] Sevgi Ozkan and Refika Koseler, "Multi-dimensional students' evaluation of e-learning systems in the higher education context: An empirical investigation," Computers & Education, vol. 53, 2009, pp. 1285-1296.

[10] Rafael Andreu, Kety Jáuregui, Key Factors of e-Learning: A Case Study at a Spanish Bank, Journal of Information Technology Education, vol. 4, 2005, pp. 1-31.

[11] Kum Leng Chin and Patrice Ng Kon, "Key factors for a fully online e-learning mode: A Delphi study," 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE), 2003, pp. 589-592.
[12] Ru-Jen Chao and Yueh-Hsiang Chen, "Evaluation of the criteria and effectiveness of distance e-learning with consistent fuzzy preference relations," Expert Systems with Applications, vol. 36, 2009, pp. 10657-10662.

[13] Yi-Shun Wang, "Assessment of learner satisfaction with asynchronous electronic learning systems," Information & Management, vol. 41, 2003, pp. 75-86.

[14] Gwo-Hshiung Tzeng, Cheng-Hsin Chiang, and Chung-Wei Li, "Evaluating intertwined effects in e-learning programs: A novel hybrid MCDM model based on factor analysis and DEMATEL," Expert Systems with Applications, vol. 32, 2006, pp. 1028-1044.

[15] Yi-Shun Wang, Hsiu-Yuan Wang, and Daniel Y. Shee, "Measuring e-learning systems success in an organizational context: Scale development and validation," Computers in Human Behavior, vol. 23, 2007, pp. 1792-1808.


Distributed Data Storage Model for Cattle Health Monitoring Using WSN

Ankit R. Bhavsar1, Disha J. Shah2 and Harshal A. Arolkar3

1 PhD Scholar PACIFIC University/Assistant Professor, Gujarat University, GLSICA Ahmedabad, Gujarat 380006, India

[email protected]

2 PhD Scholar PACIFIC University/Assistant Professor, Gujarat University, GLSICA Ahmedabad, Gujarat 380006, India

[email protected]

3 Associate Professor, Gujarat Technological University, GLSICT Ahmedabad, Gujarat 380006, India

[email protected]

Abstract Nowadays, wireless sensor networks (WSNs) are deployed in various applications such as industrial, environmental, health-care and societal monitoring. Sensor networks tend to generate huge amounts of data, so data storage techniques become a critical issue for the success of these applications. In this paper, we propose a distributed data storage model for WSN-based cattle health monitoring and define its structure. The model is divided into two levels, a local level and a central level. The main aim of storing data locally is to obtain quick responses to queries raised by users. The central level, where the data is consolidated, is used for long-term decisions, planning and policy making for cattle health monitoring. Key words: Wireless Sensor Network, Mobile Network, Internet, Cattle, Health, Data Storage Model.

1. Introduction

Wireless sensor networks (WSNs) are widely used for monitoring physical phenomena in the environment. The data gathered by a WSN is bulky, heterogeneous and distributed throughout the network. Since the data is massive, the gathering process needs appropriate data storage.

In WSNs, three data storage and retrieval methods are generally used: External Storage (ES), Local Storage (LS) and Data-Centric Storage (DCS) [15]. In external storage, the data is stored on an external device: the nodes send data to the base station without any query being generated by the user, and no aggregation is performed. This results in high traffic, which causes unbalanced energy consumption and delayed service. With local storage, each node has a built-in database that keeps the data locally; the node is unaware of the target node to which the data should be transmitted, which results in more energy and resource consumption. With data-centric storage, data of the same type is stored at the same geographic location; hence a query for a particular type of data always goes to a specific location, which avoids data flooding.

Our aim is to propose a data storage model for animal health monitoring that uses WSNs. In this model, data storage and manipulation are done at two levels, local and central. The local level responds immediately to any query raised by the animal owner or health worker; this response is transmitted over the mobile network. The data stored at the central level is used for decision making and for long-term policy making to improve the health monitoring infrastructure.
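The two-level split can be pictured with a toy sketch (entirely illustrative; the class and field names are ours, not part of the proposed model): the local store answers an owner's query immediately, while a periodic synchronization step pushes records to the central store for long-term analysis.

```python
class LocalStore:
    """Node-level storage: answers queries about locally monitored cattle."""

    def __init__(self):
        self.records = {}              # cattle_id -> latest health reading

    def update(self, cattle_id, reading):
        self.records[cattle_id] = reading

    def query(self, cattle_id):
        # Quick local response for the owner or health worker.
        return self.records.get(cattle_id)


class CentralStore:
    """Central level: accumulates all readings for long-term planning."""

    def __init__(self):
        self.history = []

    def sync(self, local):
        # Periodic upload of local records to the central repository.
        self.history.extend(local.records.items())
```

The design choice mirrors the text: reads hit the local level with no network round-trip, while the central level only needs to be consistent eventually, which suits batched uploads over an intermittent mobile link.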

Section 2 of the paper introduces wireless sensor networks, Section 3 presents related work, Section 4 identifies the basic parameters used to monitor cattle health, Section 5 shows how to spot diseases in cattle and lists different types of disease, and Section 6 presents the proposed data storage model, followed by the conclusion in Section 7.

2. Wireless Sensor Network

Wireless Sensor Network technology enables the design and implementation of novel and intriguing applications that address numerous industrial, environmental, health care, societal, and economic challenges [13][20]. A node consists of a sensor interface, a microcontroller, memory, and a battery unit

ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 2, No. 3, May 2013, www.ACSIJ.org

together with a radio module. Wireless sensor nodes are thus able to carry out distributed sensing and data processing, as well as share the collected data over a radio communication channel [7][14][15].

The ideal wireless sensor is networked, scalable, and consumes very little power [8][9]. It is smart, software-programmable, capable of fast data acquisition, and provides reliable and accurate data transmission over extended periods of time [11][16][17].

In the early era, the development of wireless sensors was limited to military applications, but the introduction of civilian wireless sensor systems has greatly diversified the application domain, which has further boosted research efforts in the field of WSNs [3][5][31]. WSN technology is now implemented in a variety of fields and can monitor a wide range of conditions: animal tracking, temperature, humidity and rainfall, forests, health, vehicular movement, fire detection, etc. [18][19][23][24][28][29].

3. Related Work

Wireless sensor networks and monitoring environments require effective solutions to present data to the user in a simple and efficient manner [4][10]. With the broadcasting capability of mobile devices, it is now possible to use a mobile device in collaboration with a WSN to provide a user interface for viewing all the information gathered by the network [25][1]. This section presents some applications in which different data storage mechanisms are used [32][33].

Abhishek Ghose et al. proposed Resilient Data-Centric Storage (R-DCS), a method to achieve scalability by replicating data at strategic locations in the sensor network [2]. They show that this scheme leads to significant energy savings in reasonably large networks and scales well with increasing node density and query rate.

Bo Sheng et al. considered the storage node placement problem, aiming to minimize the total energy cost of gathering data at the storage nodes and replying to queries [6]. They examined deterministic placement of storage nodes and presented optimal algorithms based on dynamic programming.

Norbert Siegmund et al. presented an approach to provide robust data storage for wireless sensor networks [26]. They achieved this goal with FAME-DBMS, a customizable database management system that can be tailored to the varying requirements of a sensor network.

4. Basic Parameters of Cattle Body

"Prevention is better than cure" is a true saying and worth remembering by every animal owner. The first step in identifying any disease in cattle is to measure the TPR (Temperature, Pulse, and Respiratory Rate) [W1][W2]. These basic body parameters help the animal owner or health worker check for symptoms of disease in cattle. Each category of animal has standard parameter values that represent normal, healthy behavior. A change in some or all of these parameters indicates a sign of disease [W3][W4]. Table 1 shows the standard parameters of cow and buffalo that should be considered as threshold values in any health monitoring system.

Table 1: Normal Body Parameters for Cow and Buffalo

  Cattle Type   Body Temperature (Fahrenheit)   Heart Beats/Minute   Respirations/Minute
  Cow           101.5 F                         50-60                20-25
  Buffalo       98.3 F                          40-50                15-20
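The threshold values in Table 1 can be encoded directly in the monitoring software. The sketch below checks one sensor reading against the table; the +/-1 degree temperature tolerance around the nominal values is an assumption for illustration only, since Table 1 gives a single nominal temperature rather than a range.

```python
# Minimal sketch: flag parameters that fall outside the Table 1 ranges.
# Temperature ranges (+/-1 F around the nominal value) are assumed, not from the paper.

THRESHOLDS = {
    "cow":     {"temp_f": (100.5, 102.5), "pulse": (50, 60), "respiration": (20, 25)},
    "buffalo": {"temp_f": (97.3, 99.3),   "pulse": (40, 50), "respiration": (15, 20)},
}

def abnormal_signs(cattle_type, reading):
    """Return the list of parameters outside the normal range for this cattle type."""
    limits = THRESHOLDS[cattle_type]
    return [param for param, (lo, hi) in limits.items()
            if not lo <= reading[param] <= hi]

print(abnormal_signs("cow", {"temp_f": 104.0, "pulse": 55, "respiration": 30}))
# -> ['temp_f', 'respiration']
```

Any non-empty result would trigger the intimation to the animal owner and health worker described in Section 6.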

5. Diseases in Cattle

Animal owners should develop a keen eye to spot ill animals; any change in an animal's behavior or appearance should be identified by the owner. Common signs of illness, such as loss of appetite and cessation of rumination, change in the quality and quantity of milk yield, sunken eyes with a fixed stare, change in the consistency of dung, change in the color of urine or blood mixed in it, change in pulse rate, and coarse, dry skin, have been identified in [W2][W5] as indicative of disease. Using the parameters mentioned in the above section, diseases can be classified as contagious or non-contagious.

5.1 Contagious diseases in cattle
Contagious diseases are generally caused by bacteria and viruses spreading among the cattle. In India, and particularly in Gujarat, the number of crossbred cows has increased, and these crossbred animals are highly susceptible to tropical diseases and variations in climate. Contagious


diseases usually take a heavier toll on crossbred cattle than on native cattle.

Animals contract these diseases through any one or a combination of sources: other ill animals; unhygienic food; polluted grass, water, air, or soil infected with bacteria; insanitary conditions in the cattle shed; infected dung; or even the hands of the cattleman. Table 2 lists a few commonly occurring bacterial and viral diseases in cattle.

Table 2: Commonly occurring Contagious Diseases in Cattle

  Bacterial Diseases          Viral Diseases
  Anthrax                     Cow Pox
  Black Quarter               Foot and Mouth Disease
  Haemorrhagic Septicaemia    Rinderpest
  Mastitis
  Tuberculosis

5.2 Non-contagious diseases in cattle
In cattle, non-contagious diseases occur due to improper care, dietetic issues, or sometimes toxic substances. The most common non-contagious diseases of cattle are listed in Table 3.

Table 3: Most Common Non-contagious Diseases in Cattle

  Milk fever     Metritis      Tympanites
  Mammitis       Diarrhoea     Constipation

6. Proposed Data Storage Model

In [4], we proposed a health monitoring and reporting system that uses a WSN architecture. Using this architecture, we intend to monitor the health and environmental conditions of animals located in rural areas of the State of Gujarat. The system consists of heterogeneous wireless sensor devices capable of sensing and transferring data. When combined, the devices form a network capable of collecting, aggregating, and processing the data gathered when various events occur. Once the data is available, it can be properly analyzed and the stakeholders can be informed about the health status of the animals if required.

In the scenario of animal health monitoring using WSN, the data storage model should be capable of storing large amounts of data and of quickly executing queries or problem reports sent by a health worker or animal owner. The main goals in designing the data storage model are that it should be scalable, able to balance load, consume little power, and be robust.

For cattle health monitoring, we will require different types of sensor nodes connected to a database server through the WSN. Some sensors will be fitted on the body of the cattle, and others will be fitted in the surrounding environment where the cattle roam. Body sensors will send data such as the body temperature, pulse rate, and respiratory rate of the cattle at a specific, pre-configured time interval; the frequency of these transmissions will be very high. The second type of sensory data is environmental data, such as water pollution level, soil infection level, dust level in the air, and humidity; these transmissions will be comparatively infrequent.
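The two reading types described above, with their different transmission rates, could be represented as simple records. The field names and interval values in this sketch are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the two sensory data types: high-frequency per-animal
# body readings and low-frequency per-site environmental readings.
from dataclasses import dataclass

@dataclass
class BodyReading:            # sent frequently, one per monitored animal
    animal_id: str
    temp_f: float
    pulse: int
    respiration: int

@dataclass
class EnvReading:             # sent comparatively infrequently, one per site
    site_id: str
    water_pollution: float
    soil_infection: float
    dust_level: float
    humidity: float

# Pre-configured transmission intervals in seconds (assumed values).
SEND_INTERVAL = {BodyReading: 60, EnvReading: 3600}
```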

For this type of data transmission, we propose a two-level distributed data storage model. In the first level, data accumulates at a local database server that may be kept in the nearest Gram Panchayat office. To transfer this data to the required stakeholders, both locally and globally, this database server should be connected to the Internet. Figure 1 shows the architecture of the proposed local data storage and response model.

Fig. 1 Proposed Local Data Storage Model


In the proposed data storage model, data transmitted by the different sensors in the WSN is received by the nearest database server. When the sensors detect data that does not match the thresholds, this database server sends an intimation to the animal owner as well as the health worker, using the mobile network. The animal owner can also send a query to the local system, which processes it and gives an immediate response. As the stakeholders are more comfortable using the regional language, at present the response is given in the regional language (Gujarati). Figure 2 shows the architecture of the proposed central data storage and response model.

Fig. 2 Proposed Centralized Data Storage Model

In the second level of the data storage model, data is copied to the central database server through the Internet using batch processing. It is also important to keep track of data pertaining to animal health for future reference; to achieve this, the data should be gathered at a common location. Therefore, the second level of the model is a centralized database server, which may be situated at a veterinary hospital or any other location as required. Here, the veterinary doctor may review the data through an interface in the form of different types of reports. He can make appropriate decisions and policy for animal health care and can also obtain annual statistics

regarding the animal census, death ratio, illness ratio, and other similar figures.
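The batch replication step described above can be sketched as follows. The schema (a readings table with a synced flag) and the SQLite back end are illustrative assumptions, not part of the proposed system.

```python
# Sketch of level-2 batch processing: rows recorded at the local (Gram
# Panchayat) server are periodically copied to the central server.
# Table name, columns, and the `synced` flag are assumed for illustration.
import sqlite3

def replicate_batch(local_db, central_db, batch_size=100):
    """Copy unsynced sensor readings from the local DB to the central DB."""
    rows = local_db.execute(
        "SELECT id, animal_id, recorded_at, temp_f, pulse, respiration "
        "FROM readings WHERE synced = 0 LIMIT ?", (batch_size,)).fetchall()
    central_db.executemany(
        "INSERT INTO readings VALUES (?, ?, ?, ?, ?, ?)", rows)
    local_db.executemany(
        "UPDATE readings SET synced = 1 WHERE id = ?",
        [(row[0],) for row in rows])
    local_db.commit()
    central_db.commit()
    return len(rows)
```

Running this on a schedule keeps the central server eventually consistent with each local server while tolerating intermittent Internet connectivity.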

7. Conclusion

This paper suggests a distributed data storage model for storing and analyzing the data used in animal health monitoring. The model will help many users, such as the animal owner, the health worker in the village, and even nearby veterinary hospitals. Implementing this model will help users take appropriate and immediate action in case of any eventuality. Using this system, information and symptoms of possible illness and disease in an animal are obtained at runtime. Since monitoring is done in the animals' living space, the animals travel less often, which is safer and more convenient. Thus, an overall improvement in healthcare can be provided, which in turn will increase the annual yield of products and improve the quality of life in rural areas of the State of Gujarat.

References

[1] Abdalkarim Awad, Reinhard German, Falko Dressler, "Data-Centric Cooperative Storage in Wireless Sensor Networks", 2nd International Symposium on Applied Sciences in Biomedical and Communication Technologies, IEEE, 2009, pp. 1-6, E-ISBN: 978-1-4244-4641-4.
[2] Abhishek Ghose, Jens Grossklags, John Chuang, "Resilient Data-Centric Storage in Wireless Ad-Hoc Sensor Networks", Proceedings of the 4th International Conference on Mobile Data Management, 2003, pp. 45-62.
[3] Al-Sakib Khan Pathan, Choong Seon Hong, Hyung-Woo Lee, "Smartening the Environment using Wireless Sensor Networks in a Developing Country", ICACT, 2006, pp. 705-709, ISBN: 89-5519-129-4.
[4] Ankit Bhavsar, Harshal Arolkar, "Wireless Sensor Networks: A Possible Solution for Animal Health Issues in Rural Area of Gujarat", IJECBS, 2012, Vol. 2, Issue 2, ISSN: 2230-8849.
[5] Ashraf Darwish, Aboul Ella Hassanien, "Wearable and Implantable Wireless Sensor Network Solutions for Healthcare Monitoring", Sensors, 2011, Vol. 11 (6), pp. 5561-5595, ISSN: 1424-8220.
[6] Bo Sheng, Qun Li, Weizhen Mao, "Data Storage Placement in Sensor Networks", MobiHoc '06: Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2006, pp. 344-355, ISBN: 1-59593-368-9.
[7] Chris Townsend, Steven Arms, "Wireless Sensor Networks: Principles and Applications", Wilson, 2005, pp. 439-450.
[8] Christof Rohrig, Sarah Spieker, "Tracking of Transport Vehicles for Warehouse Management using a Wireless Sensor Network", International Conference on Intelligent Robots and


Systems (IROS 2008), IEEE/RSJ, 2008, pp. 3260-3265, ISBN: 978-1-4244-2057-5.
[9] D. D. Chaudhary, S. P. Nayse, L. M. Waghmare, "Application of Wireless Sensor Networks for Greenhouse Parameter Control in Precision Agriculture", IJWMN, 2011, Vol. 3, No. 1, pp. 140-149, ISSN: 0975-3834.
[10] Deepak Ganesan, Ben Greenstein, Deborah Estrin, John Heidemann, Ramesh Govindan, "Multi-resolution Storage and Search in Sensor Networks", ACM Transactions on Storage, 2005.
[11] Gang Zhao, "Wireless Sensor Networks for Industrial Process Monitoring and Control: A Survey", Macrothink Institute, 2011, Vol. 3, No. 1, pp. 46-63, ISSN: 1943-3581.
[12] Guillermo Barrenetxea, Francois Ingelrest, Gunnar Schaefer, Martin Vetterli, "Wireless Sensor Networks for Environmental Monitoring: The SensorScope Experience", The 20th IEEE International Zurich Seminar on Communications (IZS 2008).
[13] Hong Duc Chinh, Yen Kheng Tan, "Smart Wireless Sensor Networks", Intech, 2010, ISBN: 978-953-307-261-6.
[14] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, "Wireless Sensor Networks: A Survey", Computer Networks, 2002, pp. 393-422, ISSN: 1389-1286.
[15] I. F. Akyildiz, Tommaso Melodia, Kaushik R. Chowdhury, "A Survey on Wireless Multimedia Sensor Networks", Computer Networks, 2006, pp. 921-960, ISSN: 1389-1286.
[16] Ijaz M. Khan, Nafaa Jabeur, Muhammad Zahid Khan, Hala Mokhtar, "An Overview of the Impact of Wireless Sensor Networks in Medical Health Care", ICCIT, 2012, ISSN: 2070-1918.
[17] James Agajo, Alumona Theophilus, Inyiama H. C., "Wireless Sensor Networks Application for Industrial Monitoring", IJRRCS, 2011, Vol. 2, Issue 4, pp. 1069-1074, ISSN: 2079-2557.
[18] JeongGil Ko, Chenyang Lu, Mani B. Srivastava, John A. Stankovic, "Wireless Sensor Networks for Healthcare", Proceedings of the IEEE, 2010, Vol. 98, No. 11, pp. 1947-1960, ISSN: 0018-9219.
[19] Kavi K. Khedo, Rajiv Perseedoss, Avinash Mungur, "A Wireless Sensor Network Air Pollution Monitoring System", IJWMN, 2010, Vol. 2, No. 2, pp. 31-45, ISSN: 0975-3834.
[20] Kazem Sohraby, Daniel Minoli, Taieb Znati, "Wireless Sensor Networks: Technology, Protocols, and Applications", Wiley, 2007, ISBN: 978-0-471-74300-2.
[21] Kewei Sha, Weisong Shi, "Modeling Data Consistency in Wireless Sensor Networks", Proceedings of the 27th International Conference on Distributed Computing Systems Workshops, 2007, p. 16, ISBN: 0-7695-2838-4.
[22] Khandakar Ahmed, Mark A. Gregory, "Techniques and Challenges of Data Centric Storage Scheme in Wireless Sensor Network", Journal of Sensor and Actuator Networks, 2012, pp. 59-85, ISSN: 2224-2708.
[23] Louise Lamont, Mylene Toulgoat, Mathieu Deziel, Glenn Patterson, "Tiered Wireless Sensor Network Architecture for Military Surveillance Applications", Fifth International Conference on Sensor Technologies and Applications, 2011, pp. 288-294, ISBN: 978-1-61208-144-1.
[24] Michel Winkler, Klaus-Dieter Tuchs, Kester Hughes, Graeme Barclay, "Theoretical and practical aspects of military wireless sensor networks", Journal of Telecommunications and Information Technology, 2008.

[25] Michele Albano, Stefano Chessa, "Distributed Erasure Coding in Data Centric Storage for Wireless Sensor Networks", IEEE Symposium on Computers and Communications, 2009, pp. 22-27, ISBN: 978-1-4244-4672-8.
[26] Norbert Siegmund, Marko Rosenmuller, Guido Moritz, Gunter Saake, Dirk Timmermann, "Towards Robust Data Storage in Wireless Sensor Networks", IETE Technical Review, 2009, Vol. 26, pp. 335-340, ISSN: 0256-4602.
[27] Sanjeev Gupta, Mayank Dave, "Model of Real Time Architecture for Data Placement in Wireless Sensor Network", Scientific Research, 2010, pp. 53-61.
[28] Satish V. Reve, Sonal Choudhri, "Management of Car Parking System Using Wireless Sensor Network", IJETAE, 2012, Vol. 2, No. 7, pp. 262-268, ISSN: 2250-2459.
[29] Tareq Alhmiedat, Anas Abu Taleb, Mohammad Bsoul, "A Study on Threats Detection and Tracking Systems for Military Application using WSNs", IJCA, 2012, Vol. 40, No. 15, pp. 12-18, ISSN: 0975-8887.
[30] T. H. Arampatzis, J. Lygeros, S. Manesis, "A Survey of Applications of Wireless Sensors and Wireless Sensor Networks", Mediterranean Conference on Control and Automation, 2005, ISBN: 0-7803-8936-0.
[31] V. R. Singh, "Smart Sensors: Physics, Technology, and Applications", Indian Journal of Pure & Applied Physics, 2005, Vol. 43, pp. 7-16, ISSN: 0975-1041.
[32] Xu Li, Kaiyuan Lu, Nicola Santoro, "Alternative Data Gathering Schemes for Wireless Sensor Networks", Proceedings of the International Conference on Relations, Orders and Graphs: Interaction with Computer Science (ROGICS), 2008, pp. 577-586.
[33] Y. Li, M. Thai, W. Wu, "Modeling Data Gathering in Wireless Sensor Networks", Springer, 2005, pp. 572-591.

Web References

[W1] "Animal Longevity and Scale", http://www.sjsu.edu/faculty/watkins/longevity.htm
[W2] "Cattle Diseases, Animal Husbandry - Cattle: CAS - 6", http://ebookbrowse.com/ca/cattle-disease
[W3] "Common Diseases in Cattle", http://www.cherokeeanimalclinic.com/cattlediseases.htm
[W4] "Diseases in Cattle", http://www.ikisan.com/AnimalHusbandary/dairy/DiseasesCattle.htm
[W5] "Physical Examination of a Dairy Cow", www.mosesorganic.org

Ankit R. Bhavsar is Assistant Professor at GLS (I & R K Desai) Institute of Computer Application (BCA), Ahmedabad, India. He has earned a Masters in Computer Applications from Gujarat Vidyapith and a Bachelors degree in Mathematics from Gujarat University. At present he is pursuing a PhD in Computer Science from PACIFIC University, Udaipur. With 8 years of experience in academics, he has authored 2 books and published 2 research papers.

Disha J. Shah is Assistant Professor at GLS (I & R K Desai) Institute of Computer Application (BCA), Ahmedabad, India. She has earned Masters and Bachelors degrees in Computer Applications from Gujarat University. At present she is pursuing a PhD in Computer Science from PACIFIC University, Udaipur.


With 8 years of experience in academics, she has co-authored 1 book and published 2 research papers.

Harshal A. Arolkar is Associate Professor at GLS Institute of Computer Technology (MCA), Ahmedabad, India. He has earned a PhD in Computer Science and a Masters in Computer Applications from Bhavnagar University, and a Bachelors in Electronics from Gujarat University. A life member of the Computer Society of India,

he is registered as a PhD guide at Gujarat Technological University and PACIFIC University. An ardent practitioner and teacher, he possesses more than 13 years of teaching experience in Computer Science. He has published and presented several research papers in international and national conferences and journals, and has co-authored 9 books. His areas of interest include Wireless Sensor Networks, Cloud Security, Assistive Technologies, and Technology for Education.


Presentation of an approach for adapting a software production process based on ISO/IEC 12207 to ITIL services

Samira Haghighatfar1, Nasser Modiri 2 and Amir Houshang Tajfar3

1 Department of Computer Engineering, Payam Noor University Tehran, Iran

[email protected]

2 Department of Computer Engineering, Zanjan Branch, Islamic Azad University Zanjan, Iran

[email protected]

3 Department of Information Technology, Payame Noor University Tehran, Iran

[email protected]

Abstract: ISO/IEC 12207 is a software life cycle standard that not only provides a framework of effective, executable methods for software production and development, but can also ensure that organizational goals are realized properly. In this paper, the ITIL standard is used for better process control and management and for providing a common language and syntax between stakeholders. In addition, the mapping between these two standards is considered. Keywords: ISO/IEC 12207, Information Technology Infrastructure Library (ITIL).

1. Introduction

Currently, competition is a key factor in ensuring the survival of firms in the market. Information and communication technology is an essential element that improves competition among companies, and strategic management as a discipline is the key element allowing companies to achieve their competitive advantages. In many organizations, the role of information and communication technology in achieving business goals is often limited to the operational level. However, effective management and improvement of internal processes in the development, operation, and maintenance of software helps improve competitiveness in the information technology market [1]. Several factors can impact the software life cycle and make planning and operating processes difficult. Recently, economic strategies and market forces have added new complexities to the software life cycle, so organizations need an explicit framework based on process management principles. In this paper, in order to control the software life cycle, ISO/IEC 12207, and in order to

manage it and provide a common language and syntax, the IT Infrastructure Library (ITIL) is recommended. This paper is organized as follows: Section 2 reviews the ISO/IEC 12207 standard and its processes, Section 3 reviews the ITIL processes, Section 4 presents a mapping between ITIL processes and ISO/IEC 12207, and finally we conclude the discussion.

2. ISO/IEC 12207 standard

The International Organization for Standardization (ISO), in association with the IEC and through the Joint Technical Committee (JTC1), began to develop international standards for the production and documentation of software products. ISO/IEC 12207, published in 1995, presented recommendations for the whole life cycle and construction of a software product [2]. ISO/IEC 12207 is a central standard for software engineering processes, establishing a common framework for software life cycle processes that provides a common language between the buyers, suppliers, developers, maintainers, operators, managers, and technicians involved in the development of software. The standard contains the processes, activities, and tasks of the software life cycle, covering acquisition, supply, development, operation, maintenance, and disposal of software products and of the software components of a system, whether inside or outside an organization. The standard also defines a process for controlling and improving the software life cycle processes themselves [3], [4]. ISO/IEC 12207 defines a classification of software life cycle processes (SLPs); it is flexible, modular, and compatible with all software life cycles.


This software life cycle standard is composed of tasks and activities; from another viewpoint, it is classified into two groups of sub-processes, system context processes (SCPs) and software-specific processes (SSPs). System context processes are divided into four groups of processes:

• Agreement Processes: These include operational activities to establish and maintain collaboration and agreement between two organizations.

• Project Processes: Including processes related to planning, evaluation and control that can be used in the field of management.

• Technical Processes: These processes include activities ranging from the definition of system requirements to the disposal of the product when it is withdrawn from service.

• Organizational Support Processes: These types of processes have been designed to manage the capability of acquiring and supplying products or services.

Software specific processes are also divided into three groups:

• Software Implementation Processes: These processes are defined for producing and implementing specific elements of the software.

• Software Support Processes: These processes support the implemented processes by defining activities such as document management and configuration management.

• Software Reuse Processes: These processes are designed to support the organization's ability to reuse software items across software projects [5].

ISO/IEC 12207 processes are shown in Table 1.

Table 1: ISO/IEC 12207 Processes [6]

6 System Life Cycle Processes
  6.1 Agreement Processes
      6.1.1 Acquisition Process
      6.1.2 Supply Process
  6.2 Organizational Project-Enabling Processes
      6.2.1 Life Cycle Model Management Process
      6.2.2 Infrastructure Management Process
      6.2.3 Project Portfolio Management Process
      6.2.4 Human Resource Management Process
      6.2.5 Quality Management Process
  6.3 Project Processes
      6.3.1 Project Planning Process
      6.3.2 Project Assessment and Control Process
      6.3.3 Decision Management Process
      6.3.4 Risk Management Process
      6.3.5 Configuration Management Process
      6.3.6 Information Management Process
      6.3.7 Measurement Process
  6.4 Technical Processes
      6.4.1 Stakeholder Requirements Definition Process
      6.4.2 System Requirements Analysis
      6.4.3 System Architectural Design
      6.4.4 Implementation Process
      6.4.5 System Integration Process
      6.4.6 System Qualification Testing Process
      6.4.7 Software Installation
      6.4.8 Software Acceptance Support
      6.4.9 Software Operation Process
      6.4.10 Software Maintenance Process
      6.4.11 Software Disposal Process


3. Information Technology Infrastructure Library (ITIL)

ITIL was developed by the Office of Government Commerce of Great Britain, based on the collected experience of commercial and governmental experts all over the world, in order to promote a suitable approach to using information systems for achieving business effectiveness and efficiency [7]. Today, organizations are highly dependent on information technology. One of the key factors for ITIL success is having suitable processes, which should not only be implemented, but also tracked and maintained [8], [9]. ITIL version 3 divides its processes and procedures into five sections:

• Service Strategy: Provides guidance on how to design, develop, and implement service management as an organizational capability and a strategic asset.

• Service Design: Provides guidelines for the design and development of services and of service management.

• Service Transition: Provides guidelines for developing and improving the capabilities needed to deliver new services and changes into the operating environment.

• Service Operation: Covers achieving effectiveness and efficiency in the delivery and support of services.

• Continual Service Improvement: A tool for creating and maintaining value for customers through better design and exploitation of the services [10].

ITIL processes are shown in Figure 1.

4. Mapping between ITIL processes and ISO/IEC 12207:2008 processes

Today, software companies are becoming more specialized in various scopes (e.g., some companies focus on design, analysis, requirements gathering, or software testing) to develop and produce their software products. At the same time, outsourcing of software development and global software development (GSD) have spread, which requires methods and standards for managing and controlling software development processes. Lack of integration between software engineering processes and stakeholders causes widespread inefficiencies in software operational processes, especially in the maintenance phase of a software product [11]. To manage and control the software production process effectively, the process should be clear and visible. The software production process is sometimes subtle and different from physical activities, so the only suitable way to manage the project and the people involved is to use a common language in all phases of software production. Besides, this common language minimizes the problems and issues that may occur during analysis, design, implementation, and maintenance of the system. It is essential for organizations to coordinate development and software production processes undertaken under different standards.

Table 1 (continued): ISO/IEC 12207 Processes [6]

7 Software Life Cycle Processes
  7.1 Software Implementation Processes
      7.1.1 Software Implementation Process
      7.1.2 Software Requirements Analysis Process
      7.1.3 Software Architectural Design Process
      7.1.4 Software Detailed Design Process
      7.1.5 Software Construction Process
      7.1.6 Software Integration Process
      7.1.7 Software Qualification Testing Process
  7.2 Software Support Processes
      7.2.1 Software Documentation Management Process
      7.2.2 Software Configuration Management Process
      7.2.3 Software Quality Assurance Process
      7.2.4 Software Verification Process
      7.2.5 Software Validation Process
      7.2.6 Software Review Process
      7.2.7 Software Audit Process
      7.2.8 Software Problem Resolution Process
  7.3 Software Reuse Processes
      7.3.1 Domain Engineering Process
      7.3.2 Reuse Asset Management Process
      7.3.3 Reuse Program Management Process


For this purpose, we need a common language and common understanding between developers. This common vocabulary is presented in the ITIL framework, which includes ways to solve issues and difficulties across the whole software life cycle, while as the software life cycle process standard

ISO/IEC 12207 is used. Table 2 shows the mapping between the ISO/IEC 12207 and ITIL processes.

Table 2: The mapping between ISO/IEC 12207 and ITIL

ITIL Process: corresponding ISO/IEC 12207 process(es)

Service Strategy
  Service Portfolio Management: 6.2.3 Project Portfolio Management Process
  Demand Management: -
  Financial Management: 6.1.2 Supply Process

Service Design
  Service Catalog Management: 6.1.1 Acquisition Process; 6.1.2 Supply Process; 6.2.4 Human Resource Management Process; 6.4.1 Stakeholder Requirements Definition Process; 6.4.9 Software Operation Process

Fig. 1 ITIL processes (Service Strategy: Service Portfolio Management, Demand Management, Financial Management; Service Design: Service Catalog Management, Service Level Management, Capacity Management, Availability Management, IT Service Continuity Management, Information Security Management, Supplier Management; Service Transition: Change Management, Service Asset & Configuration Management, Release & Deployment Management, Service Validation & Testing, Evaluation, Risk Management, Knowledge Management; Service Operation: Event Management, Incident Management, Request Management, Problem Management, Access Management; Continual Service Improvement: 7-Step Improvement, Service Reporting & Measurement)


  Service Level Management: 6.1.1 Acquisition Process; 6.1.2 Supply Process; 6.4.1 Stakeholder Requirements Definition Process; 6.4.2 System Requirements Analysis; 7.1.3 Software Architectural Design Process; 7.1.4 Software Detailed Design Process
  Capacity Management: 6.2.2 Infrastructure Management Process; 6.3.1 Project Planning Process; 6.4.9 Software Operation Process
  Availability Management: 6.4.9 Software Operation Process
  IT Service Continuity Management: 6.2.2 Infrastructure Management Process; 6.4.8 Software Acceptance Support; 6.4.10 Software Maintenance Process
  Information Security Management: 6.3.6 Information Management Process
  Supplier Management: 6.1.1 Acquisition Process; 6.1.2 Supply Process

Service Transition
  Change Management: 6.3.5 Configuration Management Process; 6.4.11 Software Disposal Process
  Service Asset & Configuration Management: 6.3.5 Configuration Management Process
  Release & Deployment Management: 6.4.7 Software Installation; 6.4.11 Software Disposal Process
  Service Validation & Testing: 6.4.6 System Qualification Testing Process; 7.1.7 Software Qualification Testing Process; 7.2.4 Software Verification Process; 7.2.5 Software Validation Process
  Evaluation: 7.2.4 Software Verification Process; 7.2.5 Software Validation Process
  Risk Management: 6.3.4 Risk Management Process
  Knowledge Management: 6.2.4 Human Resource Management Process

Service Operation
  Event Management: 6.4.9 Software Operation Process; 7.3.2 Reuse Asset Management Process
  Incident Management: 6.3.3 Decision Management Process; 6.4.9 Software Operation Process
  Request Management: 6.1.1 Acquisition Process; 6.4.1 Stakeholder Requirements Definition Process; 6.4.9 Software Operation Process; 7.3.2 Reuse Asset Management Process
  Problem Management: 7.2.8 Software Problem Resolution Process; 6.4.9 Software Operation Process
  Access Management: 6.4.9 Software Operation Process

Continual Service Improvement
  7-Step Improvement: 6.4.11 Software Disposal Process; 7.3.2 Reuse Asset Management Process; 7.3.3 Reuse Program Management Process
  Service Reporting & Measurement: 6.3.1 Project Planning Process; 6.3.7 Measurement Process


4. Conclusions

In this paper, a method for producing software products based on mapping and communication between ITIL processes and software engineering processes through the ISO/IEC 12207 standard was presented. In general, ITIL is a broad framework for controlling and managing ISO/IEC 12207 processes, but it lacks a precise process for managing the software life cycle on its own; therefore, the mapping between these two standards was considered. Developing a software product requires a defined, goal-oriented framework that categorizes and prioritizes the steps of software development from the beginning (the contract phase) to the end (software disposal and replacement by another product). Beyond the benefits of standardization, the process orientation of this standard is very significant. ITIL is also a comprehensive, consistent, and coherent set of best practices for service management processes, promoting a qualitative approach to achieving business effectiveness and efficiency in the production of software products. Using ITIL in software production leads to simplification, organization, and process management, and to the establishment of a common language, while reducing costs and increasing quality. Companies also need to plan, develop, manage, and improve their infrastructure, products, and services, including marketing strategies for presenting new products and services based on customer needs, known or unforeseen; ITIL can be used efficiently to cover most of these needs.

References
[1] J. G. Guzmán, H. A. Mitre, A. Amescua, and M. Velasco, "Integration of strategic management, process improvement and quantitative measurement for managing the competitiveness of software engineering organizations," Software Quality Journal, vol. 18, no. 3, pp. 341-359, 2010.
[2] O. Marbán, J. Segovia, E. Menasalvas, and C. Fernández-Baizán, "Toward data mining engineering: A software engineering approach," Information Systems, vol. 34, no. 1, pp. 87-107, 2009.
[3] B. Jereb, "Software describing attributes," Computer Standards & Interfaces, vol. 31, no. 4, pp. 653-660, 2009.
[4] R. S. Pressman, Software Engineering: A Practitioner's Approach, European Adaptation, 2005.
[5] J. Portillo-Rodriguez, A. Vizcaino, C. Ebert, and M. Piattini, "Tools to support global software development processes: a survey," in Global Software Engineering (ICGSE), 2010 5th IEEE International Conference on, 2010, pp. 13-22.
[6] "Systems and software engineering - Software life cycle processes - Redline," ISO/IEC 12207:2008(E), IEEE Std 12207-2008 - Redline, pp. 1-195, 2008.
[7] S. Zhang, Z. Ding, and Y. Zong, "ITIL process integration in the context of organization environment," in Computer Science and Information Engineering, 2009 WRI World Congress on, 2009, vol. 7, pp. 682-686.
[8] T. Lucio-Nieto, R. Colomo-Palacios, P. Soto-Acosta, S. Popa, and A. Amescua-Seco, "Implementing an IT service information management framework: The case of COTEMAR," International Journal of Information Management, vol. 32, no. 6, pp. 589-594, Dec. 2012.
[9] B. Barafort, B. Di Renzo, and O. Merlan, "Benefits resulting from the combined use of ISO/IEC 15504 with the Information Technology Infrastructure Library (ITIL)," in Product Focused Software Process Improvement, pp. 314-325, 2002.
[10] A. Nabiollahi, R. A. Alias, and S. Sahibuddin, "A service based framework for integration of ITIL V3 and enterprise architecture," in Information Technology (ITSim), 2010 International Symposium in, 2010, vol. 1, pp. 1-5.
[11] R. Oberhauser and R. Schmidt, "Improving the Integration of the Software Supply Chain via the Semantic Web," in Software Engineering Advances, 2007. ICSEA 2007. International Conference on, 2007, p. 79.

Samira Haghighatfar received the BS degree in software engineering in 2010. She is currently working toward the MS degree in software engineering at Payam Noor University of Iran. Her research interests include software development methodologies, software quality and improvement, and IT governance.

Nasser Modiri received the MS degree in Microelectronics from the University of Southampton, UK, in 1986, and the PhD degree in Computer Networks from the University of Sussex, UK, in 1989. He is a lecturer at the Department of Computer Engineering, Islamic Azad University of Zanjan, Iran. His research interests include network operation centres, frameworks for securing networks, virtual organizations, RFID, and product life cycle development.

Amir Houshang Tajfar has over 18 years of experience in the IT industry, working as an international IT consultant in the US, Europe, and the Middle East and teaching various IT subjects at universities.


Applying a natural intelligence pattern in cognitive robots

Seyedeh Negar Jafari1, Jafar Jafari Amirbandi2 and Amir-Masoud Rahmani3

1 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran; [email protected]

2 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran; [email protected]

3 Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran; [email protected]

Abstract

The human brain has always been a mysterious subject to explore: much about it remains to be discovered, and it is studied in many respects by different branches of science. On the other hand, one of the biggest concerns of the next generation of Artificial Intelligence (AI) is to build robots that can think like humans. To achieve this, AI engineers use theories of human intelligence suggested by well-known psychologists to improve intelligent systems; to control such complicated systems, they can benefit greatly from studying how the human mind works. In this article, cognitive robots equipped with a system built on the functioning of the human brain search a virtual environment and try to survive for as long as possible. To build the cognitive system for these robots, Sigmund Freud's psychoanalytic theory (id, ego, and super-ego) was used. Finally, the survival times of cognitive robots and normal robots in similar environments were compared. The simulation results showed that cognitive robots had a better chance of surviving.
Keywords: Cognitive robotics, Artificial Intelligence, Natural intelligence.

1. Introduction

To benefit from technology in our lives, we need to understand all aspects of human life, specifically its needs and problems. Adapting technology to human needs requires the study of a wide range of branches of science. Cognitive science is a body of knowledge that aims to describe the cognitive abilities of all living things and the mechanisms of their brains' functions. To solve a problem and make a decision, humans must first understand their surroundings; then, with this information and the experience or skills gained earlier, they can act appropriately in the situation. It is assumed that the structure of an automated system must be designed with thousands of sensors to be able to process new data and make a suitable decision [1]. Cognitive science, one of this generation's scientific approaches and very useful for human needs, is a collection of epistemology, cognitive neuroscience, cognitive psychology, and artificial intelligence. Robotics, in turn, a technological offshoot of AI, combines mechanics, computer science, and electronic control. Robotics and cognitive science together form a new branch of science called cognitive robotics, which uses the functioning of living organisms' and human brains in its algorithms. One of the main applications of cognitive science is in AI (building human-like computers). From this point of view, the human mind is a kind of computer that sends information received from its sensors (e.g. vision) to its processing centre (the mind), and as a result of this processing we talk, walk, and so on. Behaviour patterns inspired by human intelligence can greatly help engineers to improve such systems; specifically, the way the human mind controls complicated matters suggests new approaches to scientists. Articles [2], [3] show that a wide range of sciences, such as psychology, psychotherapy, cognitive science, and neuroscience, can help engineers understand how the human brain works. Many AI researchers are investigating patterns of natural intelligence in order to apply them in automated robots [5].



In article [4], a memory-based theory is used to explore brain functions and to argue that memory is the basis of any kind of intelligence, natural or artificial. The authors also believe that most of the unsolved problems in computing, software engineering, informatics, and AI stem from not understanding the mechanisms of natural intelligence and the cognitive functions of the brain. Article [6] presents theoretical advances in the mechanisms of AI and Natural Intelligence (NI); the classification of intelligence and the role of information in the development of brain functions are also studied there, and a general intelligence model is developed that helps describe the mechanisms of natural intelligence.

2. Topic Plan

Simulating the brain patterns and behaviour models of humans and other living organisms in a virtual environment has always interested scientists, to the extent that some of the results can be seen in the modernisation of our industries and their effects on our lives. To follow the human cognitive structure, our research was based on a psychological framework. Accordingly, we chose a suitable NI model from those suggested and designed our cognitive robots in a virtual environment based on it. Our NI model comes from the cognitive field of psychology, which studies the different aspects of a person's personality, perception, excitement, intention, and physical reactions, and their adaptation to the environment. Sigmund Freud, the founder of psychoanalysis, states that personality consists of three elements which together control human behaviour; his theory is discussed later in this article. The plan of this article was to simulate the behaviour of cognitive robots, with an NI pattern, in a virtual environment. The designed cognitive robots searched their surroundings, and in order to survive longer they needed to use the energy supplies in the virtual environment; otherwise their energy level dropped too low and they were eliminated. The cognitive robots were equipped with Learning Automata (LA), which helped them make decisions by recognizing their situation in each episode. Normal robots, on the other hand, had no LA available, so they moved randomly and tried to survive as long as they could. The virtual environment, the LAs, and the decision-making algorithms (based on Freud's theory) are discussed in detail below.

2.1 The Design of the Virtual Ecosystem

The simulated environment was divided into two zones: a Rocky (dangerous) zone and a Flat (safe) zone. The two zones were exactly the same size and were separated by a hazardous border. Because the Rocky zone was dangerous, the energy spent per move there was twice the energy spent in the Safe zone. The other difference between the zones was the number of energy packs spread in them, which was higher in the dangerous zone. The robots were created at random positions in the Safe zone, all with the same (highest) level of energy, and had to search the environment and try to survive as long as possible. With each movement, a robot loses one energy unit in the Safe zone and two energy units in the Rocky zone. When a robot picks up an energy pack, its energy returns to its highest level; if its energy drops to zero, it is eliminated. Two groups of robots (cognitive and normal) search the virtual environment under identical conditions, and at the end the average survival times of the two groups are compared. The first group consists of normal robots with no decision-making ability: they move randomly in the virtual environment and never enter the Rocky zone. This means they constantly lose less energy per episode, but they also have less chance of finding energy packs, since more packs lie in the dangerous zone. The other group consists of cognitive robots with a decision-making mechanism (based on Freud's theory) that act according to their surroundings. Unlike the first group, these robots use their decision-making ability to learn about their environment and which action is best in each situation, so they can survive longer.

Fig. 1 Virtual ecosystem
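The energy bookkeeping described in Section 2.1 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; the function and constant names are ours, and the 200-unit maximum is taken from the test setup in Section 3.5.

```python
MAX_ENERGY = 200                  # robots start at the highest energy level
COST = {"safe": 1, "rocky": 2}    # a move in the Rocky zone costs twice as much

def step(energy, zone, found_energy_pack):
    """Apply one movement: pay the zone's cost, refill on an energy pack.

    Returns the new energy level; 0 means the robot is eliminated.
    """
    energy -= COST[zone]
    if found_energy_pack:
        energy = MAX_ENERGY       # a pack restores energy to its highest level
    return max(energy, 0)
```

For example, a robot at full energy that moves once in the Safe zone drops to 199 units, while the same move in the Rocky zone leaves it at 198.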

2.2 Learning Automata (LA)

Learning Automata are abstract models. They act randomly in an environment and are able to update their actions based on the information they receive from it, which helps them improve their performance. An LA can perform a limited number of actions.


Each chosen action is evaluated by the environment and answered with a reward (for a correct action) or a fine (for a wrong action). The LA uses this response to choose its next action [7], [8]. In simpler words, a learning automaton is an abstract model that randomly chooses one of its limited set of actions and applies it to the environment; the environment evaluates this action and sends the result back to the LA as a reinforcement signal. With this data, the LA updates its information and chooses its next action. The following diagram shows this relationship between the LA and the environment.

Fig. 2 Relationship between LA and environment

There are two types of LA: fixed structure and variable structure. We used the variable-structure type in our research. For more detailed information about these two types, please refer to [9], [10], [11].
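As a rough illustration of a variable-structure LA, the following Python sketch keeps one probability per action and samples the next action from that distribution. The class and method names are our assumptions; how the probabilities are updated after each reply is described in Section 3.1.

```python
import random

class LearningAutomaton:
    """Minimal variable-structure LA: one probability per action,
    next action sampled from the current distribution (a sketch,
    not the authors' implementation)."""

    def __init__(self, actions):
        self.actions = list(actions)
        n = len(self.actions)
        self.p = [1.0 / n] * n        # all actions start equally likely

    def choose(self, rng=random.random):
        # Roulette-wheel selection over the current probabilities.
        x, acc = rng(), 0.0
        for action, prob in zip(self.actions, self.p):
            acc += prob
            if x < acc:
                return action
        return self.actions[-1]       # guard against rounding at the tail
```

With the five actions used in this article, a fresh automaton assigns each action probability 0.2, matching Table 1.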

2.3 Robots’ Decision Making, Based on Freud’s Theory

A Natural Intelligence (NI) model is a pattern of the natural behaviours of humans or other living organisms, and the resulting cognitive behaviour follows this model as well. As mentioned before, Freud's psychoanalytic theory was used as an outline for the actions of our designed cognitive robots. According to this theory, the structure of human personality is built from three elements: the 'id', the 'ego', and the 'super-ego'. These three aspects of personality, interacting together, create complicated human behaviours (see Fig. 3).

Fig. 3 Information flow between world modules [12]

The 'id' is present from birth, the 'ego' is responsible for dealing with reality, and the 'super-ego' holds all the inner moral standards and ideals that we acquire from our parents and society. The 'id' is driven by the pleasure principle, which strives for immediate gratification of all desires, wants, and needs; if these needs are not satisfied immediately, the result is a state of anxiety or tension. The 'id' tries to resolve the tension created by the pleasure principle through the primary process, i.e. by learning to find a suitable way to satisfy its needs. Based on this description, the cognitive robots start moving in their surroundings, and as their energy level drops with each movement and they are fined by the environment from time to time, they learn to move in such a way that, by using the available energy packs when necessary, they can survive longer. In this article, the 'needs' of the robots are defined as wanting to use the energy packs; if these needs are not satisfied, the robots enter their tense state. The 'super-ego' is the civilised aspect of personality and holds the ideals and moral standards gained from parents and society; it also provides guidelines for making judgements (our sense of right and wrong). Based on this description, the robots learn, through consecutive episodes and by means of the Learning Automata, that their energy level should not drop drastically, or they will be eliminated. However, to follow the moral standards, they should not use an energy pack as soon as they receive it, but keep it for when they are in the tense/excited state (it is not ideal for robots to use energy in their normal state), and ideally they should not make risky movements while in the normal state. According to the description of the 'super-ego' in Freud's theory, the robots can also take advantage of other robots' experiences.

Freud also explained that the 'ego' results from personal experience and is the executive as well as decision-making centre. The 'ego' strives to satisfy the 'id's desires while ensuring that the 'id's impulses are expressed in a manner acceptable in the real world (the super-ego); it plays the role of a mediator and makes the final decision. Following this pattern, our designed robots learn from their experiences of the surroundings to choose suitable actions; this is done by the LAs. Robots thus decide which action to take next based on the available LAs and their actions' probability ratios. According to Freud's theory, a cognitive robot experiences two conditions during its survival period (from the highest energy level down to zero): we call them the Normal and the Excited/Tense conditions. When a robot's energy drops below a certain amount, its normal condition changes to the tense condition; this is captured by a cognitive index. In the tense condition a robot takes more risks and is able to make better decisions in critical situations.


3. Proposed Solution

In this article, the robots start searching and evaluating their surroundings synchronously, and they can perform 5 actions in each episode. They can choose to stay fixed in any situation they think gives a better result, but they still lose one or two units of energy depending on the zone they are in. There is a noticeable difference between the behaviour of a cognitive robot and a normal one: a normal robot never risks its energy level in order to survive longer, so it never enters the unsafe zone. A cognitive robot, on the other hand, being equipped with an LA (which follows the NI pattern and Freud's theory), risks its life to reduce the tension in its system and enters the Rocky (dangerous) zone. This means that instead of choosing the ideals of the 'super-ego', the decision-making element (the ego) leans more towards whatever the 'id' element is asking for. With the help of the LA, the cognitive robots can update the probabilities of their actions (up, down, right, left, fix), so they can gradually improve their behaviour and increase their chance of surviving in the virtual environment. Based on Freud's theory, the robots are rewarded or fined depending on the chosen action; more details are given later. When the robots are created, all 5 actions have the same probability, because the robots have not yet experienced the environment's reaction (reward or fine). As the following table shows, the probabilities of the 5 actions sum to 1 at the beginning of the search; see (1).

Table 1: Probability of each action at the beginning

  Up: 0.2
  Right: 0.2
  Left: 0.2
  Down: 0.2
  Fix: 0.2

P(up) + P(right) + P(down) + P(left) + P(fix) = 1   (1)

A cognitive robot chooses an action at random; an action with a higher probability has a higher chance of being chosen next, but at the beginning of the search the robot picks actions uniformly at random and investigates its surroundings. As a result, the robot receives a reward or a fine for each chosen action, and this information and the updated probabilities (which always sum to 1) are recorded for later use. This continues in each iteration until all the robots are eliminated from the virtual environment.

3.1 The Algorithm for Updating the Action Probabilities in the Learning Automata

* If an action receives a reward (β = 0), its probability in that LA is increased, and the probabilities of the remaining actions are decreased.
* If the result is a fine (β = 1), the probability of that action in that LA is decreased, and vice versa for the remaining actions.

This is the standard linear reward-penalty update; see (2):

  Reward (β = 0):
    P_i(n+1) = P_i(n) + a [1 - P_i(n)]            for the chosen action i
    P_j(n+1) = (1 - a) P_j(n)                     for every other action j    (2)
  Fine (β = 1):
    P_i(n+1) = (1 - b) P_i(n)                     for the chosen action i
    P_j(n+1) = b / (r - 1) + (1 - b) P_j(n)       for every other action j

In this algorithm the variables are as follows:
P(n): the probability of an action at step n
P(n+1): the probability of an action at step n+1 (i.e. after updating)
β: the environment's response to the robot's action*
β = 0 means reward; β = 1 means fine
a: reward index
b: fine index
r: the number of actions defined for a robot

* The environment rewards or fines according to rules that follow Freud's theory of psychoanalysis.
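The update rules above can be written as a small Python function. We assume the standard linear reward-penalty scheme, since it uses exactly the variables the text lists (a, b, β, and r); the function name is ours.

```python
def update(p, chosen, beta, a, b):
    """Linear reward-penalty update for a variable-structure LA.

    p      : list of action probabilities (sums to 1)
    chosen : index of the action that was performed
    beta   : environment reply, 0 = reward, 1 = fine
    a, b   : reward and fine indices; r = len(p) actions
    """
    r = len(p)
    q = p[:]                                   # updated probabilities
    for j in range(r):
        if beta == 0:                          # reward: boost the chosen action
            q[j] = p[j] + a * (1 - p[j]) if j == chosen else (1 - a) * p[j]
        else:                                  # fine: shrink the chosen action
            q[j] = (1 - b) * p[j] if j == chosen else b / (r - 1) + (1 - b) * p[j]
    return q
```

Note that both branches keep the probabilities summing to 1, which matches the requirement stated in Section 3 that the new probabilities always add up to 1.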

3.2 How Cognitive Robots Acquire Information from Their Environment

Each robot, depending on its vision radius, was equipped with sensors that enabled it to recognize the contents of its neighbouring cells. So each robot present in the virtual environment could recognize:
• the zone it was in (Safe/Rocky);
• its state, based on its energy level (normal/excited);
• the zones its neighbours were in (Safe/Rocky);
• the contents of its neighbourhood (a robot, an energy pack, a barrier, or nothing).

Together, these abilities formed a Learning Automaton for a robot. Robots created many LAs during their survival time, saved them in their memories, and could update them in each episode. Each LA had 5 available actions (up, down, right, left, and fix), and each action was allocated a probability ratio. When a robot faced a previously experienced situation, it referred to its list of LAs, chose the matching LA, and then followed that


LA's actions according to their probabilities. In this way the robots made suitable movements and gradually learned how to increase their chance of surviving. In each episode, for whatever action the robot chose, it experienced a situation in its surroundings, so one LA would be produced. If the robot had not experienced that situation before, a new LA was built and added to the list of LAs in the robot's memory; the environment's response to the experience (based on the rules introduced) then updated the probability ratios of the actions in that LA. If the robot already had that specific LA in its list, it was not added again; instead, the recorded experience was used to decide which action to take, and in this way the robot increased its experience (its decision-making knowledge) and, as a result, its survival time. This corresponds to Freud's view that the feedback a person receives from society affects his or her later actions and helps him or her adapt better to society. He also believed that parents' rewards and punishments have the same effect on a child's behaviour: for example, a child who has been praised by his parents for good behaviour records this in his super-ego as an ideal reaction, so in a similar situation it is much easier for him to decide what to do based on his experience (ego). The same holds for our designed cognitive robots in the virtual environment.
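The episode-by-episode bookkeeping described above (reuse a stored LA for a known situation, create a fresh one for a new situation) amounts to a keyed lookup. A minimal sketch, with hypothetical names, could look like this:

```python
def get_automaton(memory, situation,
                  actions=("up", "right", "left", "down", "fix")):
    """Return the action-probability list for a perceived situation.

    memory    : dict mapping situation keys to probability lists
    situation : the robot's perception, e.g. (zone, state, neighbour contents)

    An unseen situation creates a new LA with uniform probabilities;
    a seen one reuses the stored experience.
    """
    key = tuple(situation)                     # hashable situation key
    if key not in memory:
        memory[key] = [1.0 / len(actions)] * len(actions)   # new LA
    return memory[key]
```

Because the returned list is the stored one, updating it in place accumulates the robot's experience for that situation across episodes.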

3.3 How LAs Are Built in Robots

There are many different ways to build LAs in robots. In this article, three methods were designed and tested; all three used the same zones and the same actions. Based on their functions, they were called: state based, model based, and state based with vision radius.

Method 1: State based
The LA was designed based on the robot's state and situation and on the four cells surrounding the robot, which the robot could sense with its sensors.

Method 2: Model based
In this method, in addition to the state of the robot and its neighbourhood information, the robot's coordinates in each episode were recorded as one of the fields of the LA. So when a robot did not know much about its surroundings but had its coordinates, it gradually collected other information about the environment, which resulted in better decisions and actions. One defect of this method was its slowness, due to the massive number of LAs.

Method 3: State based with vision radius
In this method the proposed algorithm was improved by increasing the robots' vision radius. As in the first method, the robot's state and its neighbourhood information (but not the coordinates) were available to the robot, with the difference that the four neighbouring cells were given points that helped the robot choose an action. This method gave the best results. The pointing system worked as follows: the highest point was given to the energy packs, and the other cells received points based on how far they were from an energy pack, so the points decreased with distance from the pack. The points were calculated with the following three equations (3, 4, 5):

  Max Point = 2^(r+1)   (3)
  Distance = |NodeX - ResourceX| + |NodeY - ResourceY|   (4)
  Node Point = Max Point / 2^Distance   (5)

in which:
r: the vision radius
Max Point: the highest point
Node Point: the point given to each neighbouring cell
Distance: the distance of the cell from an energy pack
(NodeX, NodeY): the coordinates of the cell that earns a point
(ResourceX, ResourceY): the coordinates of an energy pack

Interestingly, each cell's point is the sum of the points it earns from all the different energy packs, and cells outside the vision radius receive no points; in this way the robot can find an energy pack faster. In this method, a Learning Automaton with the above features was built in each episode. Its advantage over the other two methods was that the robots had a wider view of their surroundings, which helped them in decision making (ego) and gave better results. With a vision radius of 1, 2, or 3, the robots could recognize the contents of their neighbouring cells with the help of their sensors (Fig. 4).

Fig. 4 vision radius of 1, 2, and 3
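Reading Eq. (3) as Max Point = 2^(r+1) and Eq. (5) as a halving per unit of Manhattan distance, the pointing system of Method 3 can be sketched as follows. The function name and the convention that the caller passes only packs inside the vision radius (packs outside it give no points, per the text) are our assumptions.

```python
def node_point(node, visible_packs, r):
    """Point awarded to a neighbouring cell under the vision-radius scheme.

    node          : (x, y) coordinates of the cell being scored
    visible_packs : (x, y) coordinates of energy packs inside the vision radius
    r             : the vision radius

    Points from several packs add up; each unit of distance halves the score.
    """
    max_point = 2 ** (r + 1)                           # Eq. (3): highest point
    total = 0
    for px, py in visible_packs:
        dist = abs(node[0] - px) + abs(node[1] - py)   # Eq. (4): Manhattan distance
        total += max_point / 2 ** dist                 # Eq. (5): halves per step
    return total
```

For a vision radius of 3, a cell sitting on a pack scores 16, a cell one step away scores 8, and a cell seeing two packs accumulates both contributions.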


[Plot residue removed: life time (0-30000) against iteration (0-80) for the Common robots and the CognitiveLA1, CognitiveLA2, and CognitiveLA3 methods; see Fig. 5.]

3.4 The Rules for Rewarding or Fining a Robot by Its Surroundings

Rule 1: If a wrong action is chosen, the robot is fined, e.g. moving up into a barrier. (The super-ego, the social aspect of behaviour, is built up, and the ego records the experience gained from the surroundings.)

Rule 2: If an energy pack is used while the robot is in its normal state, a fine is applied. (The id tries to fulfil its pleasure where this contradicts the ideals of the super-ego, so the ego takes control over the id and balance is achieved.)

Rule 3: If the robot chooses a movement that leads to an energy pack, it is rewarded. (The robot learns that being next to an energy pack can satisfy its needs faster in times of tension, i.e. the ego is collecting information from the environment.)

Rule 4: If a robot in its normal state moves from the Safe zone into the Rocky zone, it is fined. (Taking risks when unnecessary leads to a fine; the ego is learning about the motives.)

Rule 5: If a robot in its normal state chooses to move into the Safe zone, it is rewarded. (Less energy is lost in the Safe zone, so the ego learns how to adapt or compromise.)

Rule 6: If a robot in its tense/excited state uses an energy pack, it is rewarded. (The ego must reduce the tension of the id by using the available resources.)

Rule 7: If a robot is in its tense state and, despite having an energy pack available, does not use it, it is fined. (The ego neglects the id's needs and, as a result, records the information about this experience.)
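Rules 1-7 can be condensed into a single reply function that returns β (0 = reward, 1 = fine). The predicate names and the order in which the rules are tested here are illustrative assumptions, not the authors' specification; the default reply of 0 when no rule fires is also our assumption.

```python
def environment_reply(state, hit_barrier, used_pack, has_pack,
                      entered_rocky, entered_safe, next_to_pack):
    """Return beta for one action: 0 = reward, 1 = fine (Rules 1-7)."""
    if hit_barrier:                            # Rule 1: wrong move into a barrier
        return 1
    if used_pack:                              # Rule 2 (normal) / Rule 6 (tense)
        return 1 if state == "normal" else 0
    if state == "tense" and has_pack:          # Rule 7: pack available but unused
        return 1
    if state == "normal" and entered_rocky:    # Rule 4: unnecessary risk
        return 1
    if next_to_pack:                           # Rule 3: moved next to a pack
        return 0
    if state == "normal" and entered_safe:     # Rule 5: staying safe
        return 0
    return 0                                   # assumption: no rule fired, no fine
```

Plugging this reply into the probability update of Section 3.1 closes the loop between the environment and the LA.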

3.5 The Results of the Simulation

To study the suggested algorithms, five robots with the maximum energy level (200 units) were put in an environment with 25 energy packs, split 80% (in the Rocky zone) to 20% (in the Safe zone), with a reward index of 0.8, a fine index of 0.7, and an excitement index of 0.5. Across 4 methods and 60 iterations, the robots' average survival time was calculated. In the fourth method, a vision radius of 3 was provided for the robots, which meant a better view of the environment and the energy packs, and thus better decision making. The results of these 60 consecutive iterations and the robots' life times in the environment are shown in Fig. 5.

Fig. 5 Total of robots’ life time in test

As the graph shows, after 60 iterations a cognitive robot had a better chance of surviving than a normal robot, which means that a robot with the privilege of a decision-making ability (LA) acts more suitably and acceptably in its surroundings. Among the cognitive robots, those using the third method had the best results, because their wider vision radius led to better decisions and better actions.

In this test, cognitive robots using method one (state-based) were less efficient than those using method two (model-based), in which more robots were available in the environment and, as a result, more information was recorded and more experience gained to support the decision-making process.

4. Conclusions

This article applied Freud's psychoanalytic theory (id, ego, super-ego) to suggest a suitable behaviour model for cognitive robots. The theory was simulated so that the robots could choose actions (which look better and closer to reality) based on their experiences of the environment. The cognitive robots used these algorithms to search their surroundings and gain information so that they could survive longer. To acquire this knowledge and make decisions, the cognitive robots used LAs and the responses (fine or reward) of their environment, based on Freud's theory.

For the simulation, three methods were suggested, and the robots' decision-making power was studied and compared. The results showed that the cognitive robots (given the faculties of sensing and decision making, to be closer to humans) adapted more easily to the environment and, after a few iterations, learned (by making better decisions at critical moments) how to survive longer than normal robots. Although their risk-taking approach to critical situations sometimes caused them trouble and did not end well, overall they produced much better outcomes than normal robots, according to the observations made.

ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 2, No. 3, May 2013, www.ACSIJ.org


Seyedeh Negar Jafari was born in August 1986 in Tehran. After graduating in computer engineering from the Islamic Azad University, Tehran North Branch, in September 2009, she began studying Artificial Intelligence at the Islamic Azad University, Science and Research Branch, Tehran, in February 2010 and completed her degree in September 2012. Her research interests are Cognitive Intelligence, Cognitive Robotics, Learning Automata, Neural Networks and Artificial Perception. Her paper was published at the 10th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), 2011.

Jafar Jafari Amirbandi was born in September 1983 in Kalachay. After graduating in computer engineering from the Islamic Azad University of Lahijan in February 2006, he began studying Artificial Intelligence at the Islamic Azad University, Science and Research Branch, Tehran, in October 2009 and completed his degree in February 2012. His research interests are Neural Networks, Machine Learning and Learning Automata.

Amir Masoud Rahmani received his B.S. and M.S. in computer engineering from Amirkabir University of Technology (Tehran Polytechnic), Tehran, 1992-1998, and his PhD in computer engineering from the Islamic Azad University, Science and Research Branch, Tehran, 1999-2005. He was a postdoctoral researcher at the University of Algarve, Portugal, 2012-2013. His research interests are distributed systems, grid computing, cloud computing, ad hoc and wireless sensor networks, and evolutionary computing. Many of his articles have been published in prestigious conferences and journals.


A Multi-Hop Clustering Algorithm for Reducing Energy Consumption in Wireless Sensor Networks

Mohammad Ahmadi1, Mahdi Darvishi2

1 Sama Technical and Vocational Training College, Islamic Azad University, Malayer Branch, Malayer, Iran
[email protected]

2 Sama Technical and Vocational Training College, Islamic Azad University, Malayer Branch, Malayer, Iran
[email protected]

Abstract

This paper introduces an energy-efficient clustering algorithm for sensor networks based on the LEACH protocol. The proposed algorithm adds features to LEACH and aims to reduce the power consumption of network resources in each round of data gathering and communication. It is a cluster-based routing algorithm that exploits the redundancy properties of sensor networks to address the traditional problems of load balancing and energy efficiency in wireless sensor networks. The algorithm forms two layers of multi-hop communication: a bottom layer for intra-cluster communication and a top layer for inter-cluster communication.
Keywords: Energy-based clustering, Efficient inter-cluster routing, Bacterial algorithm, Wireless sensor network

1. Introduction

The most important difference between wireless sensor networks and other wireless networks lies in the restrictions on their resources. The most important restriction is energy, a consequence of the small size of the sensors and their batteries. Monitoring often takes place in environments to which human access is difficult or impossible, so the probability of replacing or recharging dead nodes is very low. One of the important challenges in these networks is the constant control of network lifetime and network coverage; even increasing the lifetime of the network without considering its coverage is not desirable. For these reasons, the most important challenges in wireless sensor networks are the rate of consumed energy and the balanced distribution of energy over the network, which in turn balances the rate of dead nodes across the network. One way of reducing energy consumption is the management of energy resources; in [1], several algorithms are suggested for reducing energy consumption.

Routing in wireless sensor networks is a challenging task: first, because of the absence of global addressing schemes; second, because data flows from multiple sources to a single sink; third, because of data redundancy; and also because of the energy and computation constraints of the network [2]. Conventional routing algorithms are not effective in wireless sensor networks, and the efficiency of the existing routing algorithms differs from one to another. There is therefore a need to develop routing algorithms that can be used in a wide range of applications. Routing algorithms are divided into two groups; one group is based on protocol performance. Cluster-based routing in wireless sensor networks is a typical instance of hierarchical routing. Hierarchical routing is a combination of clusters in which the nodes with less energy perform the sensing operation, and the nodes with more energy perform the transmission. Cluster heads perform computations such as data gathering and data compression to reduce the number of transmissions to the sink, which decreases energy consumption. LEACH [3] is one of the first hierarchical routing algorithms for wireless sensor networks. Routing in LEACH includes two phases: a setup phase, in which the cluster heads are selected randomly, and a steady-state phase, in which packets are transmitted. In LEACH-F [4], the clusters are kept constant and the nodes of each cluster are selected as cluster head in rotation. This saves energy and strengthens the wireless operation; its weak point is reduced network scalability. The TEEN algorithm [5] is suitable for time-critical applications and is able to respond to sudden changes in the sensed data.
In this algorithm, cluster heads make use of two thresholds: a hard threshold and a soft threshold. The hard threshold is the minimum value of the attribute that triggers a transmission from a node to the cluster head, and the soft


threshold specifies a change in the sensed value; only a change equal to or greater than the soft threshold triggers a new transmission. Thus, if there is no considerable change in the sensed data, the soft threshold reduces the number of transmissions. The APTEEN protocol [6] extends TEEN into a hybrid protocol for both periodic data collection and time-critical data collection. The protocol in [7] presents a multi-gateway architecture to cover a large area of interest, balancing the density of the clusters at clustering time. Two kinds of nodes are used, and the gateways keep the sensor nodes in an optimal condition using multi-hop routes. Its weak point is that the cluster heads are static, which causes the energy of the nodes closer to a cluster head to run out sooner than that of the other nodes. In location-based routing, the sensor nodes are addressed based on their location. In this kind of algorithm, the environment is divided into virtual grids; the nodes in the same grid have the same value in routing, and only one active node is needed at a time. The most famous protocol of this kind is [8]. Hierarchical or cluster-based routing protocols, as potentially the most energy-efficient organization, have seen wide application in the past few years [9, 10], and numerous clustering algorithms have been proposed for energy conservation, such as [11, 12]. The rest of the paper is organized as follows. Section 2 describes the LEACH protocol. Section 3 describes the bacterial algorithm used for finding the shortest path. Section 4 presents the proposed algorithm, and the simulations and results are described in Section 5.

2. LEACH protocol

LEACH is an algorithm for clustering and saving energy in wireless sensor networks [13]. The basic features of this protocol are as follows:

• The base station is far from the sensor nodes.
• The base station is fixed.
• All sensor nodes have the same initial energy.

LEACH has a dynamic mechanism for clustering. Time is divided into rounds; in each round the cluster heads are chosen anew and the clusters are re-formed. At the beginning of each round, each sensor node produces a random number in [0, 1] and compares it with a pre-determined threshold T(i); if the random number is less than T(i), the node is selected as a cluster head for the current round. Suppose that a fraction P of the nodes should be cluster heads, let r be the current round, and let G be the set of nodes that have not been cluster heads in the last 1/P rounds. According to [15], the threshold for a sensor is:

T(i) = P / (1 - P * (r mod (1/P)))   if i ∈ G
T(i) = 0                             otherwise        (1)
When a node is selected as a cluster head, it broadcasts a message to its neighbors, and the nodes receiving this message decide which cluster head to join based on the respective signal strength. The sensed data are then sent to the related cluster heads. After gathering and compressing the data, the cluster heads send them toward the sink.

2.1 Problems of LEACH

The LEACH algorithm assumes a homogeneous distribution of nodes, and the selected cluster heads are also assumed to be far from one another. This scenario is not always the case in the real world. For instance, suppose the sensors are distributed as in Figure 1, with most of the nodes located near one or two cluster heads (cluster heads A and B).

Fig. 1 Sensor distribution in the LEACH algorithm

In this scenario, cluster heads A and B send a message to their neighbors, and many of the nodes receive it. This results in clusters with a great number of members, which causes the energy of cluster heads A and B to run out faster; as a result, part of the network loses its connection with the rest. We introduce a multi-hop routing algorithm for the internal and external connections of clusters; in this proposed algorithm, a bacterial algorithm is used to find the shortest path for sending the data.


3. Bacterial-based routing algorithm

The bacterial optimization algorithm is a stochastic procedure for finding optimal solutions to combinatorial problems such as routing, modeled on the behavior of bacteria during their life. Its application to finding the shortest path in a sensor network is as follows. First, the source node produces some bacteria and spreads them within its radio range; each node in the radio range of the source receives a bacterium. Each bacterium sends its new location id (the current node at which it is located), along with its distance from the destination, back to the source node. After receiving these packets, the source node saves the distance parameter in each packet, identifies the bacterium closest to the destination, and broadcasts the id carried by the selected bacterium within its radio range again. The sensor nodes within the radio range of the source receive the broadcast id and compare it with their own; if the two ids are the same, the node reproduces its bacterium, and the bacteria in the other nodes are eliminated. (In other words, the node closest to the destination is selected as the next source, and bacterium proliferation is performed there.) This process is repeated until the node selected as the next source is the destination itself. After the first bacterium arrives at the destination, the other bacteria are eliminated, and the route taken by this bacterium is considered the shortest path. The path is saved in the memory table of the bacterium as a series of ids, and finally the source sends its data using this table.
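The search just described can be approximated by a greedy next-hop loop: at every step the neighbour reporting the smallest distance to the destination "survives" and becomes the next source. The sketch below is a simplified, hypothetical rendering (it uses node coordinates instead of real radio messages, and assumes a neighbour closer to the destination always exists within radio range):

```python
import math

def bacterial_route(nodes, source, dest, radio_range):
    """Greedy path construction mimicking the bacterial search.

    nodes : dict mapping node id -> (x, y) position (illustrative)
    Returns the list of node ids from source to dest.
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = nodes[a], nodes[b]
        return math.hypot(x1 - x2, y1 - y2)

    path = [source]
    current = source
    while current != dest:
        # "bacteria" are spread to every node in radio range
        neighbours = [n for n in nodes
                      if n != current and dist(current, n) <= radio_range]
        # each bacterium reports its distance to the destination;
        # the closest one survives and proliferates at the next source
        current = min(neighbours, key=lambda n: dist(n, dest))
        path.append(current)
    return path
```

On a line of nodes spaced one unit apart with a radio range of 1.5, the routine hops node by node toward the destination.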

4. Proposed algorithm

The purpose of the proposed algorithm is to solve the problems of the LEACH algorithm. The proposed algorithm has the following capabilities:

• Using the remaining energy rate and the number of members of a cluster head as parameters for joining a node to a cluster.
• Using a routing algorithm based on the bacterial algorithm to find the shortest path for sending data in intra-cluster and inter-cluster connections.

We assume that nodes are aware of their own physical location. Each node has a processor, a memory, and the hardware needed for sensing, data gathering and establishing connections. The clustering mechanism in the proposed algorithm is similar to the one in LEACH, but it has some basic differences.

In LEACH, nodes select their cluster heads based on the received signal strength. This causes cluster heads located in high-density areas to have high overhead. The proposed algorithm operates instead on a confidence value broadcast by the cluster heads, based on the following parameters:

• the distance between the cluster head and the node;
• the number of member nodes of the cluster head;
• the current remaining energy of the cluster head.

The cluster head with the highest confidence value has the highest probability of new nodes joining it. The confidence value has a direct relationship with the remaining energy, and an inverse relationship with the number of current members of the cluster and the distance between the node and the cluster head. At the start of each round, each node saves the messages received from the cluster heads in its memory, and the cluster head with the highest confidence value is selected as the desirable one. Cluster-head selection and confidence computation can be written as the following pseudocode:

R = 1 / cluster-ratio
Th = threshold
BT = battery-threshold
MCM = max-cluster-members
while (current-round < total-rounds) do
    for (i = 0 to total-nodes) do
        if (node_i was not a head in the last R rounds) then
            if (random < Th and node_i.battery > BT) then
                node_i <- head
            end-if
        end-if
    end-for
    for (i, k = 0 to total-nodes) do
        if (node_k != head and node_i = head) then
            D  = bacterial-dist(node_i, node_k)
            B  = battery(node_i)
            CM = cluster-members(node_i)
            if (CM > MCM) or (B < battery needed to support CM + 1 nodes) then
                confidence-value = 0
            else
                confidence-value = B / (CM * D)
            end-if
        end-if
    end-for
end-while

Therefore, the algorithm can be summarized as follows:
• Random distribution of the sensors
• Cluster formation phase
  - selection of cluster heads according to LEACH


  - selection of the cluster head for ordinary nodes: the nodes select the optimal cluster head based on the confidence value
• Data transmission phase
  - transmission of data from ordinary nodes to the related cluster heads (by means of the bacterial algorithm)
  - gathering the data and sending the packets from the cluster heads to the sink (by means of the bacterial algorithm)
• Repetition of phases 2 and 3 until the energy of all nodes is exhausted.
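The confidence-value rule used in the cluster-formation phase above can be sketched as a small function. The names and the minimum-energy-per-member parameter are illustrative assumptions, not part of the original pseudocode:

```python
# Hypothetical rendering of the confidence value broadcast by a
# cluster head; parameter names are illustrative.
def confidence_value(energy, members, distance,
                     max_members, min_energy_per_member):
    """Confidence is proportional to the head's remaining energy and
    inversely proportional to its member count and to the
    node-to-head distance. It is zero when the cluster is full or
    the head cannot support one more member."""
    if members > max_members:
        return 0.0
    if energy < (members + 1) * min_energy_per_member:
        return 0.0
    # avoid division by zero for a head with no members yet
    return energy / (max(members, 1) * distance)
```

A joining node simply picks the cluster head with the largest returned value.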

5. Results and simulation

In this section, the efficiency of the proposed algorithm is evaluated. After the selection of cluster heads, the clusters are formed and the data transmission phase starts. Figure 2 shows the main phases of the algorithm in both the intra-cluster and inter-cluster states.

Fig. 2 Data transmission intra cluster and inter cluster

The parameters used in the algorithm are shown in Table 1.

Table 1: Parameters of simulation

The efficiency of the proposed algorithm is compared with that of LEACH in the two scenarios listed in Table 2. The comparison is made on three parameters: 1) the round in which the first node dies; 2) the round in which half of the nodes die; 3) the round in which the last node dies. The results are shown in Table 2.

Table 2: Comparison of Algorithms Results.

In Figures 3 and 4, the proposed algorithm and the LEACH algorithm are compared based on the energy remaining in the network. As shown in the figures, the proposed algorithm is more efficient than LEACH.

Fig. 3 Energy Remaining Comparison of the two Algorithms in First scene.


Fig. 4 Energy Remaining Comparison of the two Algorithms in Second scene.

In Figures 5 and 6, the proposed algorithm and the LEACH algorithm are compared based on the number of alive nodes. As observed, the proposed algorithm is more efficient than LEACH.

Fig. 5 Alive Nodes Comparison of the two Algorithms in First scene.

Fig. 6 Alive Nodes Comparison of the two Algorithms in Second scene.

6. Conclusions

Energy consumption in wireless sensor networks is of great importance, since reducing it can increase network lifetime. In this paper a new algorithm is proposed that uses the remaining energy rate and the number of members of a cluster head as parameters for joining a node to a cluster, and applies a routing algorithm based on the bacterial algorithm to find the shortest path for sending data in intra-cluster and inter-cluster connections. As can be seen, compared with the LEACH method the proposed method achieves good results in the number of alive nodes and the energy remaining in the network.

References
[1] Anastasi G., Conti M., Passarella A., "Energy conservation in wireless sensor networks: a survey", Ad Hoc Networks, Vol. 7, Issue 3, Elsevier, 2009.
[2] Akkaya K. and Younis M., "A survey on routing protocols for wireless sensor networks", Journal of Ad Hoc Networks, Vol. 3, 2005.
[3] Heinzelman W.R., Chandrakasan A., and Balakrishnan H., "Energy-efficient communication protocol for wireless microsensor networks", Proc. 33rd Hawaii Int'l Conf. on System Sciences, 2000.
[4] Heinzelman W.B., Chandrakasan A.P., and Balakrishnan H., "Application-specific protocol architecture for wireless sensor networks", PhD thesis, Massachusetts Institute of Technology, 2002.
[5] Manjeshwar A., Agrawal D.P., "TEEN: A protocol for enhanced efficiency in wireless sensor networks", Proc. 1st International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing.
[6] Manjeshwar A., Agrawal D.P., "APTEEN: A hybrid protocol for efficient routing and comprehensive information retrieval in wireless sensor networks", Proc. 2nd International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing, 2002.
[7] Gupta G., Younis M., "Load-balanced clustering of wireless sensor networks", Anchorage, AK, United States, 2003.
[8] Xu Y., Heidemann J., Estrin D., "Geography-informed energy conservation for ad-hoc routing", Proc. 7th Annual ACM/IEEE Int'l Conf. on Mobile Computing and Networking, 2001.
[9] Vlajic N., Xia D., "Wireless sensor networks: To cluster or not to cluster?", Proc. of WOWMOM, 2006.
[10] Wei D., "Clustering algorithms for sensor networks and mobile ad-hoc networks to improve energy efficiency", PhD thesis, University of Cape Town, 2007.
[11] Lindsey S., Raghavendra C., "PEGASIS: Power-efficient gathering in sensor information systems", IEEE Aerospace Conf. Proc., Vol. 3, 2002.
[12] Ye M., Li C.F., Chen G.H., Wu J., "EECS: An energy-efficient clustering scheme in wireless sensor networks", Proc. IEEE Int'l Performance Computing and Communications Conference (IPCCC), 2005.
[13] Heinzelman W.R., Chandrakasan A., and Balakrishnan H., "Energy-efficient communication protocol for wireless microsensor networks", HICSS, 2000.


An Enterprise Microblogging Tool to Increase Employee Participation in Organizational Knowledge Management

Jalal Rezaeenour1, Mahdi Niknam2

1 Department of Industrial Engineering, Qom University, Qom, Iran
[email protected]

2 Master's student in E-commerce, Qom University, Qom, Iran
[email protected]

Abstract

In this paper, we introduce and study microblogging as a Web 2.0 tool for increasing user participation in knowledge sharing and the transmission of experience. Case study data from 10 months of system activity, together with the results of a survey of the employees who use the system, show that such systems are very efficient for raising awareness and increasing employee participation in organizational criticism and suggestions.
Keywords: Microblogging, Enterprise Microblogging, Web 2.0, Knowledge Management, Knowledge Sharing.

1. Introduction

For decades, researchers in fields like knowledge management have been trying to produce tools for eliciting and explicating knowledge. One of the key features of such a tool is allowing users to participate with one another; participation is considered a key concept of Web 2.0. By equipping users with social technologies that are easy to use, Web 2.0 turns them into the main producers of content on the web; examples include Facebook, Twitter, MySpace, YouTube, Wikipedia and Flickr. Such platforms have lowered the barriers to sharing knowledge on the internet. One Web 2.0 platform is the microblogging system. It provides a new and simple type of communication in which users can share small pieces of information about activities, interests, ideas or anything else with others. Because the phenomenon is so new, few academic studies of it have been done yet. Reference [6], by analyzing Twitter posts, found that most users' posts focus on themselves, and only a few publish information. Reference [7], by analyzing geographic and placement information, discovered various

types of user intentions for using the system. Recently, microblogging has been promoted as a system for scientific and educational contexts. Since internet users publish their knowledge via Web 2.0 tools such as wikis, weblogs and social networks only with genuine intent, efforts have been made to use these tools in commercial and organizational contexts. Adapting the Web 2.0 approach to commercial contexts provides the invaluable opportunity for implicit knowledge and best practices to propagate throughout the organization. For intra-organizational use, these tools are configured for the organizational environment, and naturally the experience of extra-organizational tools is also drawn on for further enrichment. References [3, 4, 5] have shown that Web 2.0 programs and technologies, including wikis, weblogs and social network services, will sooner or later find their way into organizations, and we predict that this will happen for microblogging too. Reference [8] showed that using microblogging in an organization brings many advantages for the organization and its staff. According to the technology acceptance model [10], the simplicity inherent in microblogging has a positive effect on the acceptance of the system by users. Also, the message-length limitation of the system inhibits unnecessary information redundancy, which leads to more user sharing than other Web 2.0 tools. According to [9], one of the major challenges in knowledge management research, and one of the vital factors in knowledge management processes, is the simplification of knowledge sharing. Given the high potential for transferring, sharing and obtaining knowledge and personal experience, the Information Technology department of the Computer Research Center of Islamic Sciences (CRCIS) decided to implement a microblogging service.


In this paper we first introduce the microblogging tool; next we introduce the Computer Research Center of Islamic Sciences and the tool used in this organization. We then present usage statistics for the system and the results of a survey of its users. Finally, Section 4 concludes the paper with a summary of the findings.

2. Microblogging

Microblogging can be defined as a smaller version of blogging, enriched with social network features and with greater emphasis on mobility. Each user has his own public microblog in which he can post short messages. There is the ability to "follow" other users and add them to one's private network. As in blogs, messages are displayed in chronological order on a person's main page. Compared with traditional blogs, however, the microblogging tool functions quite differently: each person can create public or private messages, address them to one or more other users, and insert tags in a message to identify its subject. Microblogging services usually support a broad range of communication channels; for example, messages can be sent to Twitter via SMS, desktop programs or third-party programs. One of the key features of the microblogging tool is raising awareness. Reference [1] defines awareness as "an understanding of the activities of others, which provides a context for your own activity." Reference [2] divides awareness information into four types:

• Informal awareness: awareness of others' activities and interests.

• Social awareness: information about other people's feelings, usually obtained in an interactive social environment.

• Group-structural awareness: information about the group members, their roles and responsibilities.

• Workspace awareness: information about others' interaction with a shared workspace.

The items above show how microblogging can help to increase awareness in a firm. This awareness supports better communication, cooperation and coordination. Microblogging is a brand-new phenomenon, so few academic studies of it have been done. Most research focuses on describing and explaining Twitter, or introduces microblogging as a tool for learning; little work offers steps for designing such a tool or presents it as a mobile-based application.

3. A Case Study

The Computer Research Center of Islamic Sciences works to provide the infrastructure necessary for easy access to Islamic resources and content and to religious culture using information and communication technology. Given the IT department's knowledge and its employees' awareness of the benefits of social networking, a microblogging system was launched in the organization. The primary goal of launching this system was to raise the level of information about the activities of the different internal departments and to share staff experience in an informal environment, free of organizational procedures. To launch the system, an open-source software package named Sharetronix was used. Sharetronix is a powerful web-based social network platform in which users can send posts of up to 160 characters, with pictures, videos, links and files, and share their ideas and points of view by following other users. Groups, employees, colleagues and forums can communicate with each other by creating a private network, and they can track the newest posts via RSS. The most important features of this system are as follows:

• Sending public and private messages
• A personal page for each user
• Attaching files, links and pictures (like Facebook)
• Viewing the site via mobile (mobile version)
• Following users' posts via RSS
• Entering content via RSS feeds
• Creating public and private groups
• Simple membership, or membership with an activation email
• Advanced search and support for tags
• Searching posts, views and groups, and saving searches
• Following users and tracking posts
• An API programming interface (like Twitter)
• LDAP and PubSubHubbub integration
• Republishing of posts by users

Staff can log in to the system with the username and password defined in the central network and contribute to producing content. At first there was a fear of inappropriate and negative posts and of abuse of the system, but since all posts are shown with the user's full name, the risk of abuse was effectively removed. There is no limitation on the subject of posts; users can send both work-related and non-work-related posts. This approach avoids restricting users, and the resentment restriction can cause, and the mix of work-related and non-work-related posts preserves the liveliness and dynamism of the system.

ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 2, No. 3, May 2013, www.ACSIJ.org

3.1. Statistical data from using the system

The data stored in the system can be analyzed from different aspects and for different purposes. A summary of statistical data from ten months of system activity is presented in Table 1; it shows the dynamics of the system.

Table 1: Statistical summary of using the system

Topic                                    Count
Total number of registered users         254
Number of users who have posted          174
Number of posts                          41988
Total number of tags                     3865
Number of non-repetitive tags            1674
Number of responses to posts             8218

To encourage users to post organizational content, the tags #suggestion, #criticism, #current-status and #experience were placed beneath the post-sharing section, so that users could attach them to a post with one click and were thus prompted to share posts on these subjects. A survey of the tags used in this period shows that, of the 1674 non-repetitive tags, only 128 were repeated more than four times across different posts. The tags #suggestion, #current-status, #criticism and #experience, occurring 142, 101, 59 and 50 times respectively, rank among the most frequent. These data show the system's potential for gathering organizational criticism and suggestions and for improving organizational information flow.
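The tag statistics above reduce to simple frequency counting over the hashtags in posts. A minimal sketch, using a handful of made-up posts (the real system stored about 42,000), shows how the distinct-tag and repetition counts could be derived:

```python
from collections import Counter
import re

# Hypothetical sample posts; illustrative only, not data from the system.
posts = [
    "Build server is down again #criticism",
    "We could cache the index files #suggestion",
    "Deployment finished on node 3 #current-status",
    "Use -j8 to speed up local builds #experience",
    "Please review the new search UI #suggestion",
]

# Extract hashtags and count how often each one occurs across all posts.
tags = [t.lower() for post in posts for t in re.findall(r"#[\w-]+", post)]
counts = Counter(tags)

print(counts.most_common(2))   # the most frequent tags
print(len(counts))             # number of non-repetitive (distinct) tags
```

Running the same count over the full post archive yields the per-tag frequencies reported in the text.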

3.2. Survey results on the use of the system

After ten months of operating the system, an electronic survey was conducted to gather users' opinions on various aspects of the system. 49 people participated; the questions and results are presented below. The first question, intended to identify users' intrinsic motivation, was "Which subjects are you more willing to post about?" The results, in order of priority, are presented in Table 2:

Table 2: Which subjects are you more willing to post about?

Priority   Subject
1          Offering new ideas and suggestions
2          Technical information
3          Organizational criticism
4          Scientific news
5          Work experience
6          Technical information
7          Interesting content related to events

The statistics derived from the common tags described in the previous section are confirmed by the results of question one, so the system's potential for organizational activities can be detected. To identify the topics users find interesting, the question "What content in the system is most attractive to you?" was asked. The results, in order of priority, are presented in Table 3:

Table 3: What content in the system is most attractive to you?

Priority   Subject
1          Organizational criticism
2          Suggestions
3          Scientific news
4          Organizational information
5          Introduction to personalities and thoughts
6          Work experience
7          Irony
8          Miscellaneous

The results of Table 3 are consistent with the statistics obtained from the common tags and reflect the system's strength in achieving organizational goals. To plan ahead, two further questions were raised. To the first, "How much do you agree with continuing to operate the system?", more than 70% of users agreed to continue operating the system. In response to "Do you agree with limiting this system to certain subjects, such as scientific content, work experience or organizational ideas?", about 82% of users opposed such a limitation. One of the ongoing challenges in managing such systems is the amount of time users spend on them. For this reason, users were asked "On average, how many minutes a week do you use the system?". The results are shown in Figure 1.

Figure 1: On average, how many minutes a week do you use the system?

As Figure 1 shows, the time users spend on the system during the week is small relative to the benefits achieved.
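The priority orderings reported in Tables 2 and 3 can be produced by aggregating each respondent's ranking, for example by average rank (a Borda-style score). A minimal sketch with three hypothetical responses (the actual survey had 49 participants, whose raw rankings are not given here):

```python
from statistics import mean

# Hypothetical responses: each user ranks subjects, 1 = most preferred.
responses = [
    {"Organizational criticism": 1, "Suggestion": 2, "Scientific news": 3},
    {"Organizational criticism": 2, "Suggestion": 1, "Scientific news": 3},
    {"Organizational criticism": 1, "Suggestion": 3, "Scientific news": 2},
]

# Average rank per subject; a lower mean rank means a higher overall priority.
subjects = list(responses[0])
avg_rank = {s: mean(r[s] for r in responses) for s in subjects}
ordered = sorted(subjects, key=avg_rank.get)

print(ordered)
```

With the sample data, "Organizational criticism" wins overall even though not every respondent ranked it first, which is exactly the smoothing effect such aggregation provides.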

4. Conclusions

Increasing employee participation in problem solving and process improvement is a perennial concern of managers. For this purpose, employees should be given the tools and conditions to participate in disseminating experience and improving processes away from pressure and with internal motivation. This paper introduced microblogging as one of the Web 2.0 tools. Such tools can be used to increase employee participation in producing and sharing knowledge and experience and to raise the level of information. Statistical analysis of the survey shows that members used this system with intrinsic motivation, which increases employee satisfaction and supports organizational growth. Enterprise microblogging is a fascinating field of research, and we anticipate that future work in this area will be published soon. This research was presented at a preliminary stage; further questions will arise after extended use of the system. Future studies could examine microblogging performance (knowledge gained relative to time and effort) and compare it with other existing systems, so that the results can be generalized and better performance achieved. In providing conditions for publishing user content, in both speed and convenience, this tool has a high level of success. Continued use of microblogging in an organization enables the rapid spread of knowledge.

Furthermore, using Web 2.0 applications inside the organization can reduce employees' need to visit social networking sites on the Internet. Web 2.0 tools can therefore be used to better support business processes.

References
[1] Dourish, P. & Bellotti, V. "Awareness and Coordination in Shared Workspaces". Proceedings of CSCW, 107-114, 1992.

[2] Gutwin, C., Greenberg, S. & Roseman, M. "Workspace Awareness in Real-Time Distributed Groupware: Framework, Widgets, and Evaluation". Proceedings of HCI, 281–298, 1996.
[3] Efimova, L. & Grudin, J. "Crossing Boundaries: A Case Study of Employee Blogging". Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS-40), Los Alamitos, IEEE Press, 2008.
[4] Stocker, A. & Tochtermann, K. "Exploring the Value of Enterprise Wikis: A Multiple-Case Study". International Conference on Knowledge Management and Information Sharing (KMIS), Madeira, Portugal, 2009.
[5] Koch, M. & Richter, A. "Functions of Social Networking Services". Proceedings of COOP 2008, 8th International Conference on the Design of Cooperative Systems, Carry-le-Rouet, France, 2008.
[6] Naaman, M., Boase, J. & Lai, C.-H. "Is It Really About Me? Message Content in Social Awareness Streams". Proceedings of CSCW 2010.
[7] Java, A., Song, X., Finin, T. & Tseng, B. "Why We Twitter: Understanding Microblogging Usage and Communities". Proceedings of the Joint 9th WebKDD and 1st SNA-KDD Workshop, pp. 56-65, San Jose, United States, 2007.
[8] Ehrlich, K. & Shami, N.S. "Microblogging Inside and Outside the Workplace". Association for the Advancement of Artificial Intelligence, 2010.
[9] Strohmaier, M., Yu, E., Horkoff, J., Aranda, J. & Easterbrook, S. "Analyzing Knowledge Transfer Effectiveness – An Agent-Oriented Approach". Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS-40), January 3-9, IEEE Computer Society, Hawaii, USA, 2007.
[10] Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology". MIS Quarterly, 13(3), 319-340, 1989.

[Figure 1 is a bar chart of weekly usage time, with bins ranging from "less than 10 minutes" to "more than 2 hours".]

Security-Aware Dispatching of Virtual Machines in Cloud Environment

Mohammad Amin Keshtkar, Seyed Mohammad Ghoreyshi, Saman Zad Tootaghaj

Electrical and Computer Engineering Department, University of Tehran, Tehran, Iran [email protected]

[email protected] [email protected]

Abstract

Cloud computing, as a ubiquitous paradigm, provides different services to Internet users and Information Technology (IT) companies through datacenters located around the world. However, cloud providers face several problems, such as security and privacy issues, in cloud datacenters, and must handle these security challenges to gain more profit. In this paper, a Security-Aware Dispatching and Migration model for virtual machines is proposed in order to prevent Service Level Agreement (SLA) violations. The approach treats the lowest violation of required security as the most significant factor in the execution of VMs. The results show that our method achieves lower SLA violation than other common methods. It is also shown that SLA violation has an exponential relationship with VM computational capacity, which is defined in Million Instructions Per Second (MIPS).

Keywords: Cloud computing, Security Provisioning, Dispatching, Online Migration

1. Introduction

Cloud computing as a ubiquitous paradigm offers scalable on-demand services to users through various datacenters (DCs) located around the world, with greater flexibility and less infrastructure investment [1,2]. Users anywhere in the world can access anything as a service, such as infrastructure, platform and software, from the cloud and pay only for the service they use. IT companies can also benefit from this new paradigm by migrating their data to a cloud datacenter, eliminating the need to maintain an in-house datacenter [3].

On the other hand, security and privacy issues are the greatest concern for cloud providers and the biggest obstacle to cloud adoption. To solve these problems, applying security-aware techniques is vital for cloud providers. Users have different concerns about the security and privacy of their data; as a result, public adoption of cloud services will depend on the security needs of companies and users, and on the ability of cloud providers to satisfy them [4,5].

A Service Level Agreement (SLA) is a contract negotiated and established between users and a cloud provider that formalizes performance metrics and QoS parameters such as deadline, throughput, response time and latency. Cloud service providers often also use the SLA to address the security and privacy of submitted services.

For instance, when users submit applications with different security-level constraints to the system, they expect the system to meet those constraints with maximum success probability [6]. For this reason, we propose a security-aware algorithm in a cloud environment to reduce security violations in dispatching submitted VMs. Moreover, security-aware migration of virtual machines is another way our method reallocates VMs based on their security constraints. A further application of VM migration is that VMs can be consolidated to minimize the number of physical machines (PMs), so idle PMs can subsequently be turned off [7].

In this paper, in order to satisfy the security requirements of submitted VMs in a cloud environment, we propose a Security-Aware Dispatching model and security-aware migration of virtual machines between physical machines, considering variation in the security levels of VMs. We count an SLA violation whenever a submitted VM cannot run on a physical machine that meets its security requirements.

The remainder of this paper is organized as follows: we address related work in Section 2. The system model, including the datacenter model, virtual machine model and security model, is described in Section 3. The heuristic algorithm is presented in Section 4, and experimental results are provided in Section 5. Finally, Section 6 concludes the paper.

2. Related Work

In [8], the authors explain the new risks that face administrators and users of a cloud's image repository; to address these risks, they propose an image management system that controls access to images. Ning Cao et al. in [9] define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data, and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. In [10], a large amount of research work on the characterization of cloud computing is discussed and an efficient privacy-preserving keyword search scheme in cloud computing is proposed.

In [11], the authors describe how the combination of existing research thrusts has the potential to alleviate many of the concerns impeding cloud adoption. The paper [12] introduces a Trusted Third Party tasked with assuring specific security characteristics within a cloud environment.


In [13], PaaS-style "Privacy as a Service" (PasS) is presented as a set of security protocols for ensuring the privacy and legal compliance of customer data in cloud computing architectures. S. Pearson in [14] describes a privacy manager for cloud computing, which reduces the risk of the cloud user's private data being stolen or misused and assists the cloud provider in conforming to privacy law.

In [15], the authors seek to begin designing data protection controls into clouds from the outset, so as to avoid the costs associated with bolting on security as an afterthought. In [16], the authors investigate the complex security challenges introduced by the trend towards Infrastructure as a Service (IaaS) cloud computing.

3. System Model

In this paper, several physical machines distributed in one cloud datacenter are considered. As depicted in Fig. 1, the Security-Aware Dispatcher acts as an interface between physical machines and cloud users.

Fig. 1 Security-Aware Dispatcher model in a cloud datacenter

The Security-Aware Dispatcher must distribute VMs among physical machines based on parameters such as security and performance. Physical machines are assumed to be homogeneous with respect to computational capacity. Each physical machine has a local manager that handles the running and migration of VMs: it monitors the execution of VMs and issues orders for VM migrations. A submitted VM can easily be migrated between physical machines, assuming there is a network connection between each pair of physical machines. The approach also accounts for the network-induced time delay in the VM migration timing model.

3.1 Jobs and Virtual Machines Model

Users submit their VMs to the Security-Aware Dispatcher with a security requirement parameter. In this model, each VM has a specific security level that must be satisfied by the cloud provider. Furthermore, each job has a specific execution time and computational capacity, so that each VM is modelled as follows:

VM_i = (t_i, m_i, s_i, r_i)    (1)

where t_i and m_i indicate the execution time and computational capacity (MIPS) of the ith submitted VM, and r_i and s_i represent the amount of requested RAM and the security level of the ith VM, respectively. These virtual machines must meet the security-level constraints of the cloud users.
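The VM tuple (t_i, m_i, s_i, r_i) described above maps directly onto a small record type. A minimal sketch (field names are illustrative, not from the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class VM:
    """One submitted virtual machine, following the model VM_i(t_i, m_i, s_i, r_i)."""
    t: float  # execution time (seconds)
    m: int    # requested computational capacity (MIPS)
    s: int    # required security level (1 = Very Low ... 5 = Very High)
    r: int    # requested RAM (MB)

# Example: a VM needing 500 MIPS, security level 4 and 2000 MB of RAM for 120 s.
vm = VM(t=120.0, m=500, s=4, r=2000)
print(vm)
```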

3.2 Security Model

Since security attacks frequently occur in a cloud environment, continuous monitoring of VMs is indispensable. After the first allocation of VMs to physical machines, the local manager should check the security status of the VMs. In this paper, we consider each server to be configured with different security characteristics; dispatching that is unaware of security could lead to disastrous results in cloud datacenters. The local managers must continuously monitor the security of VMs and manage them according to their security needs, in order to prevent critical information being attacked by malicious insiders. Moreover, when processors are released by their VMs, local managers should inform the Security-Aware Dispatcher that their physical machines are ready to accept new VMs. The Security-Aware Dispatcher can then migrate VMs between physical machines to gain a higher security level for running VMs. Consequently, VMs can be migrated to other available physical machines to enhance the total security of the cloud datacenter.
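The local manager's check reduces to comparing each running VM's required security level with its host's level. A minimal sketch under assumed data structures (the function and field names are illustrative, not from the paper):

```python
def find_violations(pm_level, vms):
    """Return the VMs whose required security level exceeds the host's level."""
    return [vm for vm in vms if vm["s"] > pm_level]

# Three VMs running on a physical machine with security level 3.
running = [{"id": 1, "s": 2}, {"id": 2, "s": 5}, {"id": 3, "s": 3}]
violators = find_violations(pm_level=3, vms=running)

# These are the VMs the local manager would report to the Dispatcher
# as candidates for migration to a more secure physical machine.
print([vm["id"] for vm in violators])
```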

4. Heuristic Algorithm

Our goal in this paper is to introduce an approach for allocating VMs to physical machines, monitoring their running status and handling security attacks that may occur. First, users submit their VMs with their requirements to the Security-Aware Dispatcher. The Dispatcher sorts the VMs and the physical machines by security level, in decreasing order, into separate lists. It then takes each VM from the VM list and assigns it to the first server in the server list with adequate computational capacity.

This algorithm provides a fast solution, placing each VM into the first physical machine in which it fits. It requires Θ(n log n) time, where n is the number of VMs to be assigned. The algorithm is improved by first sorting the list of VMs into decreasing order (sometimes known as the first-fit decreasing algorithm).
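A minimal sketch of this first-fit-decreasing dispatch, ordering both VMs and physical machines by security level and placing each VM on the first machine with room (data structures and names are illustrative, not from the paper's implementation):

```python
def dispatch(vms, pms):
    """First-fit decreasing by security level.

    vms: list of (vm_id, mips, sec_level); pms: list of (pm_id, capacity, sec_level).
    Returns {vm_id: pm_id} for VMs that fit; unplaced VMs would count as violations.
    """
    vms = sorted(vms, key=lambda v: v[2], reverse=True)   # highest security first
    pms = sorted(pms, key=lambda p: p[2], reverse=True)   # most secure PM first
    free = {pm_id: cap for pm_id, cap, _ in pms}          # remaining MIPS per PM
    placement = {}
    for vm_id, mips, _ in vms:
        for pm_id, _, _ in pms:                           # first PM with room wins
            if free[pm_id] >= mips:
                free[pm_id] -= mips
                placement[vm_id] = pm_id
                break
    return placement

pms = [("pm1", 1000, 5), ("pm2", 1000, 2)]
vms = [("vm1", 600, 4), ("vm2", 500, 1), ("vm3", 400, 3)]
print(dispatch(vms, pms))
```

Note that the high-security VMs (vm1, vm3) land on the most secure machine, while the low-security vm2 falls through to pm2 once pm1's capacity is exhausted.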

The Dispatcher distributes virtual machines among physical machines based on their required MIPS and security levels. The main goal of the Security-Aware Dispatcher is to allocate each virtual machine to a physical machine in the cloud environment that can schedule the assigned VM with minimal violation of security provisioning. In order to schedule virtual machines on each physical machine, the following condition must be met [6]:

∑_k m_k ≤ PM_cap    (2)

where PM_cap represents the processing capacity of each physical machine. This means that multiple virtual machines can be allocated to a physical machine if their total required MIPS is less than the physical machine's capacity.

Unfortunately, due to insufficient available resources in cloud datacenters, the initial allocation made by the Dispatcher may not remain efficient at run time. Hence, local managers must constantly monitor the completion times of virtual machines so they can respond with efficient reallocation. After VMs complete, their computational capacity is released and becomes available to other VMs. For this reason, in the proposed model, VMs can migrate to a physical machine with a higher security level that has adequate released computational capacity. The migration time of each virtual machine is calculated from the amount of RAM of the VM and the available bandwidth between the physical machines:

t_mig = r_vm / bw_ik    (3)

where r_vm represents the amount of RAM used by the virtual machine and bw_ik indicates the available bandwidth between the ith and kth physical machines.
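The migration-time relation t_mig = r_vm / bw_ik is a unit conversion away from code. A small sketch, assuming RAM in MB (taken as 10^6 bytes) and bandwidth in Gbit/s:

```python
def migration_time(r_vm_mb, bw_gbits):
    """Migration time in seconds: RAM transferred over the inter-PM link."""
    bits = r_vm_mb * 8 * 1e6          # MB -> bits
    return bits / (bw_gbits * 1e9)    # divide by link rate in bit/s

# 2000 MB of VM RAM over a 1 Gbit/s link -> 16.0 seconds.
print(migration_time(2000, 1.0))
```

With the simulated bandwidths of 1 to 4 Gbit/s, migrating a 1000 to 4000 MB VM therefore takes on the order of seconds.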

5. Experimental Result

5.1 Simulation Setup

In this paper, the CloudSim simulator is used to simulate the physical machines and the cloud environment. To accomplish this, the following components were added to our simulation model:

• Migration of virtual machines between physical machines.

• Physical machines equipped with different security levels.

Several physical machines with different security levels, namely Very Low (1), Low (2), Medium (3), High (4) and Very High (5), have been simulated with the CloudSim toolkit. Physical machines are homogeneous with respect to computational capacity, but their security levels are heterogeneous. In this study, five security requirement levels are considered for each VM; the minimum and maximum security requirements of a VM correspond to the security levels of the physical machines. It is also assumed that the SLA violation time equals the time intervals during which VMs execute on a server that cannot satisfy the user's expected security level.

An additional assumption is that there is a network connection between each pair of physical machines for migrating virtual machines; the available bandwidth between each pair is selected from the range 1 Gbit/s to 4 Gbit/s. To compare the proposed algorithms, the cloud provider's revenue is set to the profit of the submitted VMs minus the SLA violation cost.

VMs have different security levels, such that a VM is profitable for the provider when it executes on a suitably secure physical machine throughout its execution period. It is assumed that 500 VMs arrive at time 0. The MIPS of each VM is selected randomly from 250 to 1000 and its security level is determined randomly from 1 to 5. The amount of RAM for each virtual machine is selected between 1000 MB and 4000 MB, based on the amount of MIPS it must support. The computational capacity of each physical machine is 1000 MIPS. In this model, after some VMs complete, the Dispatcher compares the security levels of the VMs running on undesirable physical machines, selects the VMs with the highest security needs, and migrates them to appropriate physical machines. Monitoring of VM completion is performed every 5 seconds. In addition, we compared the proposed algorithms based on SLA violation for different numbers of submitted VMs.
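The workload described above can be generated directly from the stated parameter ranges. A sketch, with the MIPS-to-RAM scaling chosen for illustration (the paper only says RAM depends on MIPS, not how):

```python
import random

random.seed(0)  # fixed seed so this sketch is reproducible

def make_workload(n=500):
    """Generate n VMs with the parameter ranges from the simulation setup."""
    vms = []
    for i in range(n):
        mips = random.randint(250, 1000)   # VM computational capacity
        sec = random.randint(1, 5)         # required security level
        # RAM scaled linearly with MIPS into the stated 1000-4000 MB range
        # (an illustrative choice; the exact mapping is not given in the paper).
        ram = int(1000 + 3000 * (mips - 250) / 750)
        vms.append({"id": i, "mips": mips, "sec": sec, "ram": ram})
    return vms

workload = make_workload()
print(len(workload), workload[0])
```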

5.2 Simulation Results

In this paper, we make two assumptions when evaluating the proposed methods. The first concerns the distribution model of VMs: the algorithm employs Security-Aware Dispatching (SAD) if VMs are distributed according to their security needs; otherwise, virtual machines are distributed by the Simple Consolidation Method (SCM), which is unaware of security. In the consolidation method, VMs are sorted only by computational capacity and then distributed so as to use a minimal number of physical machines. The second assumption concerns whether there is a VM migration policy between physical machines: we use Security-Migration (SM) for our policy of migrating VMs based on security, and Consolidation-Migration (CM) for migration based on consolidation. We therefore name our method SAD-SM: virtual machines are distributed based on the security levels of the VMs and physical machines, and can migrate between physical machines as occupied resources are released.

Several simulations show that the SLA violation of our method is significantly lower than that of the other methods. A comparison of our algorithm (SAD-SM) with SCM-CM and with SAD without migration is depicted in Fig. 2; the vertical and horizontal axes represent normalized SLA violation and the number of VMs, respectively.

SLA violation in SCM-CM is higher than in our migration method because SCM-CM is unaware of security and only consolidates VMs onto physical machines, whereas in our method, assigning VMs to appropriate physical machines and migrating them help prevent security violations and subsequently reduce SLA violation. Moreover, Security-Aware Dispatching without migration makes the model less flexible and hence increases SLA violation.

Fig. 2 SLA Violation comparison between different models

In addition, we evaluated the impact of VM MIPS on SLA violation, shown in Fig. 3. We varied the VM MIPS in order to evaluate the effect of VM computational capacity requirements on SLA violation.

Fig. 3 Impact of VM MIPS on SLA Violation

We examined the proposed methods at different VM MIPS levels: Very Low (MIPS=250-500), Low (MIPS=500-750), Medium (MIPS=750-1000), High (MIPS=1000-1250) and Very High (MIPS=1250-1500). Clearly, increasing MIPS reduces the available resources in physical machines. As a result, SLA violation increases, because the high-security physical machines lose capacity and cannot host all submitted VMs. As seen in Fig. 3, the SLA violation of the proposed methods increases exponentially with VM MIPS.

6. Conclusion

Cloud computing has faced many problems in its development, such as security and privacy issues; ignoring them could lead to disastrous results for the cloud provider. We therefore proposed a Security-Aware Dispatching and Migration model for virtual machines that manages security needs to prevent SLA violation, taking into account variation in security requirements. We consider the lowest increase in SLA violation, for the security desired by users, as the most significant factor in the management of each VM. Our method achieves lower SLA violation than the other methods considered. Moreover, we have shown that increasing VM MIPS increases the SLA violation of our method exponentially.

REFERENCES
1. Garg, Saurabh Kumar, Chee Shin Yeo, Arun Anandasivam, and Rajkumar Buyya. "Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers." Journal of Parallel and Distributed Computing, vol. 71, no. 6 (2011): 732-749.

2. S. Subashini and V. Kavitha, “A survey on security issues in service delivery models of cloud computing,” Journal of Network and Computer Applications, vol. 34, no. 1, pp. 1–11, Jan. 2011.

3. H. Takabi, J. B. D. Joshi, and G.-J. Ahn, “Security and Privacy Challenges in Cloud Computing Environments,” IEEE Security & Privacy Magazine, vol. 8, no. 6, pp. 24–31, Nov. 2010.

4. Wu, Hanqian, Yi Ding, Chuck Winer, and Li Yao. "Network security for virtual machine in cloud computing." In Computer Sciences and Convergence Information Technology (ICCIT), 2010 5th International Conference on, pp. 18-21. IEEE, 2010.

5. C. Modi, D. Patel, B. Borisaniya, A. Patel, and M. Rajarajan, “A survey on security issues and solutions at different layers of Cloud computing,” The Journal of Supercomputing, vol. 63, no. 2, pp. 561–592, Oct. 2012.

6. D. Svantesson and R. Clarke, “Privacy and consumer risks in cloud computing,” Computer Law & Security Review, vol. 26, no. 4, pp. 391–397, Jul. 2010.

7. B. Hay, K. Nance, and M. Bishop, “Storm Clouds Rising: Security Challenges for IaaS Cloud Computing,” 2011 44th Hawaii International Conference on System Sciences, pp. 1–7, Jan. 2011.

8. Wei, Jinpeng, Xiaolan Zhang, Glenn Ammons, Vasanth Bala, and Peng Ning. "Managing security of virtual machine images in a cloud environment." In Proceedings of the 2009 ACM workshop on Cloud computing security, pp. 91-96. ACM, 2009.

9. N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, “Privacy-preserving multi-keyword ranked search over encrypted cloud data,” 2011 Proceedings IEEE INFOCOM, pp. 829–837, Apr. 2011.

10. Q. Liu, G. Wang, and J. Wu, "An Efficient Privacy Preserving Keyword Search Scheme in Cloud Computing," 2009 International Conference on Computational Science and Engineering, vol. 2, pp. 715–720, 2009.

11. Chow, Richard, Philippe Golle, Markus Jakobsson, Elaine Shi, Jessica Staddon, Ryusuke Masuoka, and Jesus Molina. "Controlling data in the cloud: outsourcing computation without outsourcing control." In Proceedings of the 2009 ACM workshop on Cloud computing security, pp. 85-90. ACM, 2009.

12. D. Zissis and D. Lekkas, “Addressing cloud computing security issues,” Future Generation Computer Systems, vol. 28, no. 3, pp. 583–592, Mar. 2012.

13. W. Itani, A. Kayssi, and A. Chehab, “Privacy as a Service: Privacy-Aware Data Storage and Processing in Cloud Computing Architectures,” 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing, pp. 711–716, Dec. 2009.

14. Pearson, Siani, Yun Shen, and Miranda Mowbray. "A privacy manager for cloud computing." In Cloud Computing, pp. 90-106. Springer Berlin Heidelberg, 2009.

15. S. Creese, P. Hopkins, S. Pearson, and Y. Shen, "Data Protection-Aware Design for Cloud Services," December 2009.

16. B. Hay, K. Nance, and M. Bishop, “Storm Clouds Rising: Security Challenges for IaaS Cloud Computing,” 2011 44th Hawaii International Conference on System Sciences, pp. 1–7, Jan. 2011.
