
PISSN: 2519-5395 | EISSN: 2519-5409

Pakistan Journal of Computer and Information Systems

Volume 3, No. 1, 2018

Published by: Pakistan Scientific and Technological Information Centre (PASTIC), Islamabad, Pakistan

Tel: +92-51-9248103 & 9248104; Fax: +92-51-9248113; Website: www.pastic.gov.pk


Pakistan Journal of Computer and Information Systems

Pakistan Journal of Computer and Information Systems (PJCIS) is an open access, peer reviewed online journal published twice a year by Pakistan Scientific and Technological Information Centre (PASTIC) for providing a forum for researchers to debate and discuss interdisciplinary issues on Computer Science, Computer System Engineering and Information Systems. PJCIS welcomes contributions in all such areas for its forthcoming issues.

Acknowledgement

Pakistan Scientific and Technological Information Centre (PASTIC) highly acknowledges the contributions of the following subject experts for their critical review of the manuscripts and valuable suggestions on the research papers published in the current issue.

Dr. Muhammad Imtiaz Husain, Sindh Madresatul Islam University, Karachi, Pakistan

Dr. Nasrullah Memon, University of Southern Denmark, Odense, Denmark

Dr. Moazam Jawaid, Mehran University of Engineering and Technology, Jamshoro, Pakistan

Dr. Muhammad Akbar, Effat University, Jeddah, Kingdom of Saudi Arabia

Dr. Bushra Naz, Mehran University of Engineering and Technology, Jamshoro, Pakistan

Dr. Ali Daud, King Abdul Aziz University, Jeddah, Kingdom of Saudi Arabia

Dr. Muhammad Malook, Sindh Madresatul Islam University, Karachi, Pakistan

Dr. Gule Saman, University of Bath, UK

Dr. Tehmina Amjad, International Islamic University, Islamabad, Pakistan


Pakistan Journal of Computer and Information Systems

Editor-in-Chief
Prof. Dr. Muhammad Akram Shaikh, Director General, PASTIC

Editors
Nageen Ainuddin & Dr. Saima Huma Tanveer

Managing Editors
Saifullah Azim & Dr. Maryum Ibrar Shinwari

EDITORIAL ADVISORY BOARD

Dr. Muhammad Ilyas, Florida Atlantic University, Florida, USA
Dr. Khalid Mehmood, University of the Punjab, Pakistan
Dr. Muhammad Yakoob Siyal, Nanyang Technological University, Singapore
Dr. Khalid Saleem, Quaid-i-Azam University, Islamabad, Pakistan
Dr. Waseem Shahzad, National University of Computer and Emerging Sciences, Islamabad, Pakistan
Dr. Farooque Azam, National University of Sciences & Technology, Islamabad, Pakistan
Dr. Laiq Hassan, University of Engineering & Technology, Peshawar, Pakistan
Dr. M. Haroon Yousaf, University of Engineering & Technology, Taxila, Pakistan
Dr. Kanwal Ameen, University of the Punjab, Lahore, Pakistan
Dr. Muhammad Younas, Oxford Brookes University, Oxford, United Kingdom
Dr. Anne Laurent, University of Montpellier-LIRMM, Montpellier, France
Dr. M. Shaheen Majid, Nanyang Technological University, Singapore
Dr. Amjad Farooq, University of Engineering & Technology, Lahore, Pakistan
Dr. Nadeem Ahmad, National University of Sciences & Technology, Islamabad, Pakistan
Dr. B.S. Chowdhry, Mehran University of Engineering & Technology, Jamshoro, Pakistan
Dr. Jan Muhammad, Baluchistan University of Information Technology, Engineering & Management Sciences, Quetta, Pakistan
Dr. Muhammad Riaz Mughal, Mirpur University of Science & Technology, Mirpur (AJ&K)
Dr. Tehmina Amjad, International Islamic University, Islamabad, Pakistan

Pakistan Scientific and Technological Information Centre (PASTIC), Islamabad, Pakistan
Tel: +92-51-9248103 & 9248104; Fax: +92-51-9248113
Website: www.pastic.gov.pk; E-mail: [email protected]

Subscription
Annual: Rs. 1000/- (US$ 50.00); Single Issue: Rs. 500/- (US$ 25.00)

Articles published in Pakistan Journal of Computer and Information Systems (PJCIS) do not represent the views of the Editors or Advisory Board. Authors are solely responsible for the opinions expressed and the accuracy of data.

Assistant Managing Editor
Muhammad Usman

Composers
Naeem Sajid & Shazia Perveen


Aims and Scope

Pakistan Journal of Computer and Information Systems (PJCIS) is a peer-reviewed open access bi-annual publication of Pakistan Scientific and Technological Information Centre (PASTIC), Pakistan Science Foundation. It is aimed at providing a platform to researchers and professionals of Computer Science, Information and Communication Technologies (ICTs), Library & Information Science and Information Systems for sharing and disseminating their research findings, and to publish high quality papers on theoretical development as well as practical applications in related disciplines. The journal also aims to publish new attempts on emerging topics/areas, reviews and short communications. The scope encompasses research areas dealing with all aspects of computing and networking hardware, numerical analysis and mathematics, software programming, databases, solid and geometric modeling, computational geometry, reverse engineering, virtual environments and haptics, tolerance modeling and computational metrology, rapid prototyping, internet-aided design, information models, use of information and communication technologies for managing information, study of information systems, information processing, storage and retrieval, management information systems, etc.


Pakistan Journal of Computer and Information Systems

Contents

Volume 3, Number 1, 2018

1. Towards Recognition of Human Activities and in depth Analysis of Evaluation Datasets
   Muhammad Usman Ghani Khan, Omer Irshad, Salman Roubil ............ 1

2. A New Accelerated Method for Solving Nonlinear Application Problems with Convergence of Order Two
   Umair Khalid Qureshi, Abdul Ghafoor Shaikh, Wajid Ali Shaikh ............ 21

3. Mining the Twitter Data Stream: A Review
   Muhammad Usman, Muhammad Akram Shaikh ............ 27

4. Proposed Migration Approach to Implement IPv6 Network over IPv4
   Jawad Hussain Awan, Raza Hussain Shah, Shahzad Ahmed Memon, Muhammad Jurial Sangi, Rahat Ali Khan ............ 41

5. Real Time Face Recognition and Detection for Industrial Application
   Muhammad Usman Ghani Khan, Ali Farooq, Omer Irshad ............ 51


PJCIS (2018), Vol. 3, No. 1 : 1-19 Towards Recognition of Human


Towards Recognition of Human Activities and in depth Analysis of Evaluation Datasets

MUHAMMAD USMAN GHANI KHAN1,2*, OMER IRSHAD1,2, SALMAN ROUBIL1

1Al-Khawarizmi Institute of Computer Sciences
2Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan

*Corresponding author's e-mail: [email protected]

Abstract

Human activity detection is one of the key areas of interest in all surveillance applications. This work was earlier performed manually, but recent technologies have successfully converted it into an automated task. A variety of online dataset repositories are available for training such systems and testing them later. Some datasets consist solely of single human actions; in others, the interactions of humans with objects and with other humans have also been accounted for. This paper focuses on single human actions alone and surveys datasets dominated by single human actions.

Keywords: Human-object interaction; Human activity detection; Surveillance; Dataset evaluation

INTRODUCTION

Human activity recognition and human-to-object interaction [1], [2] are considered to be the key objectives of surveillance systems that watch over any suspicious activity taking place. The purpose can be achieved by using human resources to monitor CCTV footage. However, human beings cannot be exempted from errors, due to their inability to focus on a task for long uninterrupted durations of time. Therefore, there is a need to develop automated vision-based computer systems to deliver the desired functionality more accurately and efficiently. A variety of online dataset repositories are available for training and testing such systems.

The increased rate of terror attacks, coupled with crime activities in Pakistan, demands the development of comprehensive automated systems capable of detecting human presence; identifying and recognizing actions; detecting emotions, age and gender; and completely describing the scenario under observation. Furthermore, there is a need to predict these doubtful events prior to occurrence in order to prevent them from happening


altogether. Such a system requires training to predict an activity based on the unit human actions that lead to complex activities.

In the context of single human action recognition, many online datasets are available; however, most of them contain human-to-object interactions along with basic single human actions. This review focuses mainly on datasets dominated by single human actions, of which KTH [3], [4] is one of the most frequently used. This dataset may not include a large set of actions, but due to its uniform background it is generally employed for training purposes. The HMDB51 [15] dataset is significant for catering to real-life scenarios and is commonly used for testing trained systems. To account for actions captured from different view angles, datasets such as the MuHAVi [6]-[7] dataset play their part. The IXMAS [8-10] dataset is vital for training a system to deal with both occluded and non-occluded actions. The UT Tower action dataset [11] presents researchers with actions recorded from an aerial view.

Distinction in the datasets is made on the following parameters in the field of activity recognition or the CV (Computer Vision) domain:

i. Presence of Human
ii. Human Skeleton/Structure
iii. Inner parts of body
iv. Arm Movements
v. Leg Movements
vi. Bending Curves of Human Skeleton
vii. Position of Human with respect to objects
viii. Indoor/Outdoor Environments
ix. Camera Angle

DATASET REVIEW

The following eleven datasets have been reviewed in this paper.

1. KTH
2. MuHAVi-MAS
3. UT Tower
4. IXMAS Action
5. UIUC
6. HMDB51
7. WVU
8. MPII
9. WEIZMANN
10. CASIA
11. Berkeley MHAD


KTH Dataset

The KTH dataset was generated in 2004 by the Royal Institute of Technology [12]. Six human action classes are covered in the KTH dataset [4, 13]. It consists of 4 different conditions for recording actions, i.e. outdoors, outdoors with scale variation, outdoors with different clothes, and indoors. Each of the 25 subjects was made to perform these 6 actions, 4 times each. There are a total of 2391 video sequences in the KTH dataset [13, 3]. The resolution of these videos is 160x120 [13], [14]. It is considered to be one of the largest datasets available for single human action recognition [12]. The dataset has been divided into a training set (8 subjects), a validation set (8 subjects) and a test set (9 subjects) [13].

The action classes [1] included in this dataset are:

(i) Walking
(ii) Jogging
(iii) Running
(iv) Boxing
(v) Waving
(vi) Hands clapping

Apart from videos, the dataset also provides ground truths. The dataset consists of single action labels assigned to each action class [4]. This dataset is limited to clean, homogeneous and uniform backgrounds [2], [3]. This trait of KTH proves good for training, but some researchers demand rather complex and realistic backgrounds for testing purposes [2]. It is for this reason that KTH is mostly used for training, with action videos from other datasets used as testing data. Frames from a few instances of the KTH dataset, i.e. boxing, jogging and waving, are shown in Figure 1.

Figure 1: A few instances of the KTH dataset: (a) Boxing, (b) Jogging, (c) Waving
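As an illustrative sketch, the subject-wise partitioning described above (8 training, 8 validation and 9 test subjects) can be expressed as follows. The contiguous numbering of subjects is an assumption made for illustration; the official split assigns specific person IDs to each set.

```python
# Sketch of a subject-wise KTH split: 8 subjects for training, 8 for
# validation and 9 for testing, as described above. The contiguous ID
# assignment is an assumption for illustration only.
subjects = list(range(1, 26))          # the 25 KTH performers

train_subjects = subjects[:8]          # 8 training subjects
val_subjects = subjects[8:16]          # 8 validation subjects
test_subjects = subjects[16:]          # 9 test subjects

def split_of(subject_id):
    """Return the split a given subject's videos belong to."""
    if subject_id in train_subjects:
        return "train"
    if subject_id in val_subjects:
        return "validation"
    return "test"
```

Splitting by subject rather than by video ensures that no performer appears in both the training and the test data.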

MuHAVi-MAS Dataset

This dataset was originally generated by a team at the Digital Imaging Research Centre, Kingston University, in the year 2010. Each of the 17 different action classes, performed by 4 actors, has been recorded from 8 different angles using eight CCTV cameras. Each action has been performed several times by the actors in the action zone [6]. The resolution of these videos is 720x256.


The actions in this dataset are:

(i) Punch
(ii) Kick
(iii) Walk-fall
(iv) Walk-turn-back
(v) Run-stop
(vi) Shotgun-collapse
(vii) Pull heavy object
(viii) Pick up/throw object
(ix) Jump over gap
(x) Look in car
(xi) Crawl on knees
(xii) Wave arms
(xiii) Draw graffiti
(xiv) Jump over fence
(xv) Drunk walk
(xvi) Climb ladder
(xvii) Smash object

The action videos, with uneven backgrounds, have been recorded under night street-light illumination [6] and have been manually synchronized. Videos from 5 action classes have been manually annotated; silhouettes of human presence and actions in various frames of the videos have been provided for this subset of action classes [7]. The dataset comprises both unit actions and composite actions. For training purposes, these composite actions are often cropped to obtain unit-action videos [7]; e.g. walk-fall is a composite action and can be further divided into two unit actions, i.e. walking and falling. Figure 2A and Figure 2B display a few samples of the MuHAVi dataset and silhouettes of some actions of the same dataset, respectively.

Figure 2A: A few instances of the MuHAVi dataset: (i) Punch, (ii) Kick, (iii) Jump, (iv) Fall, (v) Crawl, (vi) Collapse


Figure 2B: A few instances of MAS silhouettes: (i) Collapse, (ii) Punch, (iii) Kick
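The cropping of composite actions into unit actions described above can be sketched as follows. The boundary frame index used here is a hypothetical value; in practice it would come from the dataset's manual annotations.

```python
# Sketch: dividing a composite MuHAVi action (walk-fall) into its two
# unit actions at an annotated boundary frame. The boundary value used
# here is hypothetical; real boundaries come from manual annotation.
def split_composite(frames, boundary):
    """Split a frame sequence into (first unit action, second unit action)."""
    return frames[:boundary], frames[boundary:]

walk_fall = [f"frame_{i:03d}" for i in range(100)]  # stand-in for decoded frames
walking, falling = split_composite(walk_fall, boundary=60)
```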

UT Tower Dataset

Nine action classes have been targeted in the UT Tower dataset (pointing, standing, digging and others) [11]. This aerial dataset has been recorded using a static camera mounted at the top of the main tower of the University of Texas at Austin. Each action has been performed 12 times by 6 individuals against 2 backgrounds, i.e. a concrete floor and a grassy lawn [15]. Five actions recorded in this dataset are:

(i) Carry
(ii) Run
(iii) Wave with one hand
(iv) Wave with two hands
(v) Jump

The remaining 4 actions, i.e. point, stand, dig and walk, have been recorded against the floor background. There are 108 videos in total [16]. All videos have low resolution, since the actions have been recorded from a distance; in these video frames, the person height is no more than 40 pixels. A figure-centralization detector has been applied to provide figure-centric patches in the video frames [15] in order to focus on the person of interest only. These poses are also common for recognizing actions in multiple games [11]. Figure 3A presents a few cases of actions showing the person of interest only, while Figure 3B shows complete frames of actions and the associated background taken from an aerial view.

Figure 3A: A few instances of the UT Tower dataset: (i) Pointing, (ii) Walking, (iii) Jumping, (iv) Running


Figure 3B: UT Tower dataset: (i) Bounding box, (ii) Corresponding frame
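A figure-centric crop of the kind produced by the figure-centralization step above can be sketched as follows. The frame size, person centre and patch size are assumed values; a real detector would supply the centre for each frame.

```python
# Sketch of extracting a figure-centric patch around the person of
# interest. Frame dimensions, centre coordinates and patch size below
# are assumed values for illustration.
import numpy as np

def figure_centric_patch(frame, cx, cy, size=48):
    """Crop a size x size patch centred on (cx, cy), clipped to the frame."""
    h, w = frame.shape[:2]
    half = size // 2
    top, left = max(0, cy - half), max(0, cx - half)
    return frame[top:min(h, top + size), left:min(w, left + size)]

frame = np.zeros((240, 320), dtype=np.uint8)  # stand-in low-resolution frame
patch = figure_centric_patch(frame, cx=160, cy=120)
```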

IXMAS Action Dataset

The IXMAS action dataset was first published in the year 2007. It comprises 1148 videos, of which 698 video sequences contain objects partially occluding the actor; the rest are without occlusions. Eleven actions have been included in this dataset, performed by 10 actors (5 males and 5 females). Each of these action instances has been recorded from five different view angles using five standard FireWire cameras [8]. The eleven actions include:

(i) Walking
(ii) Waving
(iii) Punching
(iv) Kicking
(v) Picking up
(vi) Sitting down
(vii) Checking watch
(viii) Crossing arms
(ix) Scratching head
(x) Getting up
(xi) Turning around

The resolution of these videos is 390x291. Ground truths and bounding boxes have also been provided along with the videos. The dataset consists of both un-occluded and partially occluded single human actions [9]. It aids in the development of more robust and realistic systems, where human actions are expected to be partially occluded in most general cases. Volumetric voxel representations and silhouettes of human actions, obtained by eliminating backgrounds, have been included in the dataset [10]. A few examples of un-occluded and occluded actions are displayed in Figure 4A and Figure 4B, respectively.

Figure 4A: IXMAS (without occlusion): (i) Turn around, (ii) Get up, (iii) Punch


Figure 4B: IXMAS (with occlusion): (i) Punch, (ii) Turn around, (iii) Get up

UIUC Dataset

The UIUC dataset includes video sequences of 14 general human actions and 4 badminton sequences [17]-[18]. Eight actors cooperated in the generation of this dataset. It comprises 532 videos in total.

The action classes included in this dataset are very diverse and are listed below:

(i) Crawling
(ii) Turning
(iii) Clapping
(iv) Walking
(v) Running
(vi) Jumping
(vii) Waving
(viii) Jumping jack
(ix) Jump from sit-up
(x) Raise one hand
(xi) Stretch out
(xii) Sit to stand
(xiii) Pushing up
(xiv) Stand to sit

Apart from these action classes, badminton sequences have also been included in this dataset. These videos have been recorded from a single camera view. All the video recordings have a uniform and clean background [18]. Foreground masks are also provided. Figure 5 shows a selected collection of actions from this dataset.

Figure 5: UIUC dataset: (i) Clapping, (ii) Waving, (iii) Pushing up, (iv) Turning, (v) Jumping, (vi) Crawling

HMDB51 Dataset

The HMDB dataset has been collected from various public sources such as YouTube and Google Videos. A total of 51 actions have been included in this dataset, of which nineteen are single human actions; the remaining include interactions of human beings with objects or with other human beings. Each category has 101 video clips on average [5]. It consists of footage taken from a static camera and, in some cases, from a moving camera. This dataset contains 6766 video sequences. Thirty percent of the data is used for testing and 70 percent for training within each action class [18]. The dataset provides both the originally collected videos and a stabilized version of these videos, i.e. standard image-stitching techniques were used to align the frames of each video [5], [18]. A few sample instances of the dataset are shown in Figure 6.

Figure 6: HMDB51 dataset: (i) Standing, (ii) Kicking, (iii) Clapping

Meta information for each video is also provided in the dataset along with the video sequences. The dataset was manually validated. This vast dataset has five subcategories [5]:

1) General facial actions
2) Facial actions with object interaction
3) Single human actions
4) Human-to-object interaction
5) Multiple human actions
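The per-class 70/30 train/test partition described above can be sketched as follows. The `clips_by_class` structure is assumed for illustration; the actual HMDB51 release ships predefined split files rather than computing the partition on the fly.

```python
# Sketch of a per-class 70/30 train/test partition, as described above.
# `clips_by_class` is an assumed structure mapping each action class to
# its list of clip names.
def split_per_class(clips_by_class, train_frac=0.7):
    """Partition each action class's clips into train and test lists."""
    train, test = {}, {}
    for action, clips in clips_by_class.items():
        cut = int(len(clips) * train_frac)
        train[action], test[action] = clips[:cut], clips[cut:]
    return train, test

clips = {"clap": [f"clap_{i:03d}.avi" for i in range(100)]}
train, test = split_per_class(clips)  # 70 training clips, 30 test clips
```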


WVU Dataset

This dataset consists of 2 main divisions, i.e. single human unit actions and single human prolonged composite actions. There are 12 videos for unit actions and 8 videos for composite actions available in this dataset. Jogging, clapping and nodding the head are a few of the many action classes. To help understand the dataset better, the WVU dataset also contains meta information and descriptions of the videos. The camera resolution is 640x480 pixels for unit actions and 960x720 pixels for composite actions. Each video sequence was recorded at twenty frames per second. The data has been recorded from 8 cameras that are synchronized in time [19]. All eight camera views of the nodding-head action are exhibited in Figure 7.

Figure 7: WVU dataset (8 camera views of nodding head)

MPII Dataset

This dataset comprises 21 main categories (e.g. running, home activities, sports and others), which are further subdivided into 823 individual human action classes [20], such as running, jogging, sitting, exercising and so forth. The image frames and short videos are taken mainly from YouTube. The dataset contains 24,920 videos in total. It covers a huge range of videos, some of which may contain occluded human actions, and the camera viewpoint may also vary from one video to another. It is also possible that many videos do not show the complete silhouette of the humans present [22]-[23]; rather, some portion of the human body may be truncated in some or all of the frames of a video [21].


Weizmann Dataset

The dataset is composed of ninety videos with a low resolution of 180x144 [22]-[23]. Ten natural action classes are covered in this dataset, including running, walking, skipping, bending and others. Each of the ten actions was performed once by all nine actors [22]-[23]. This dataset includes videos without occlusions for most actions; however, the walking action has also been covered with object occlusions, such as walking occluded by a bag, a log or a pole. In addition, moon-walk, limp-walk and normal walk captured from 10 different angles are also part of this dataset. These varying walking styles are included in the test data, while training is carried out with video sequences of normal walk. Extracted masks and background sequences are also part of this dataset. Frames of 3 actions, randomly selected for illustration, are shown in Figure 8.

Figure 8: WEIZMANN dataset: (i) Jumping, (ii) Idling, (iii) Skipping

CASIA Dataset

The CASIA action database [24]-[26] for recognition includes 8 single-person actions and 7 two-person interactions. The single human actions consist of:

(i) Bending
(ii) Crouching
(iii) Walking
(iv) Running
(v) Jumping
(vi) Fainting
(vii) Wandering
(viii) Punching a car

The two-person interaction instances consist of:

(i) Rob
(ii) Follow
(iii) Follow and gather
(iv) Meet and part
(v) Meet and gather
(vi) Fight and overtake

The resolution of the video sequences is 320x240. All actions have been recorded from three camera views, i.e. a top-down view, an angled view and a horizontal view. The recordings have been made in a parking lot, an outdoor cluttered background where the frame-to-human ratio is large. All 3 views of five actions from the CASIA dataset are displayed in Figure 9.


Figure 9: Three views of a few instances of the CASIA Action dataset: (i) Walking, (ii) Running, (iii) Bending, (iv) Jumping, (v) Crouching


Berkeley MHAD Dataset

The dataset consists of 11 actions performed 5 times by each of the 12 actors [27]-[29]. The actions include:

(i) Jumping
(ii) Jumping jacks
(iii) Throwing
(iv) Waving
(v) Clapping hands
(vi) Sitting
(vii) Standing
(viii) Bending

As an example, a sample frame of each of the eleven actions is depicted in Figure 10. The dataset exhibits diversity, since no specifications were provided on how to perform a particular action [27]. The actions have been recorded in an indoor environment. In addition to these actions, a T-pose of every subject is also provided in the dataset to estimate the structure of the subject. Moreover, background information is also available.

Figure 10: Berkeley MHAD dataset (some actions along with their point clouds obtained using a Kinect camera)

DISCUSSION AND SUMMARY

The collection of single human actions obtained from the datasets mentioned in this review can be categorized into 3 subdivisions: sports and fights, body gestures, and common actions (including both unit and composite actions). The sports and fights category summarizes all those actions in these datasets that are related to either sports or to fights and disputes, e.g. climbing, punching, running, etc. The gestures category is meant to familiarize the reader with all the instances in this collection which represent bodily gestures in one way or another; this category includes crossing arms, nodding the head, bending the body, etc. The third category, common actions, includes all those examples which cannot be linked to any particular class of actions but rather are commonly performed in daily routine, e.g. walking, sitting, standing, etc.
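The three-way grouping described above can be expressed as a simple lookup. The assignments below cover only the example actions named in the text, not the full action inventory of the surveyed datasets.

```python
# Illustrative lookup for the three categories described above.
# Only the example actions named in the text are mapped here.
CATEGORY_OF = {
    "climbing": "sports and fights",
    "punching": "sports and fights",
    "running": "sports and fights",
    "crossing arms": "gestures",
    "nodding head": "gestures",
    "bending": "gestures",
    "walking": "common actions",
    "sitting": "common actions",
    "standing": "common actions",
}

def category(action):
    """Return the category of an action, defaulting to common actions."""
    return CATEGORY_OF.get(action, "common actions")
```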

The following facts and figures are determined by comparing all datasets in Table 4.

From 2004 to 2010, KTH, MuHAVi-MAS, IXMAS Action, WEIZMANN, CASIA, UIUC and UT Tower were compiled.


- The KTH dataset's background consists of indoor as well as outdoor scenes; IXMAS Action and UIUC were captured indoors in a uniform environment, while WEIZMANN, CASIA, MuHAVi-MAS and UT Tower were recorded outdoors.

- Illumination in KTH is grey-scale; in MuHAVi-MAS, night street lights; in IXMAS Action, dim light; in UIUC, moderate light; and in WEIZMANN, CASIA and UT Tower, daylight.

- None of them contains occlusions except IXMAS Action, which contains both occluded and non-occluded videos.

- The resolution of KTH and UT Tower is very low; IXMAS Action and UIUC have normal resolution, while MuHAVi-MAS has high-quality resolution at 720 pixels. Camera views differ across all of these datasets.

- KTH and WEIZMANN have a single camera view only; UT Tower has an aerial view; MuHAVi-MAS consists of 8 camera views; IXMAS Action is recorded from 5 different angles; CASIA has 3 views; and UIUC has only a frontal view.

- KTH is sorted into 6 classes with 25 subjects, MuHAVi-MAS into 17 classes with 4 subjects, UT Tower into 9 classes with 6 subjects, IXMAS Action into 11 classes with 10 subjects, WEIZMANN into 10 classes with 9 subjects, CASIA into 8+7 classes with 24 subjects, and UIUC into 14 classes with 8 subjects.

From 2011 to 2013, HMDB51, WVU, MPII and Berkeley MHAD were collected.

- HMDB51 and MPII were recorded in varying lighting conditions, while WVU and Berkeley MHAD were recorded in moderate light.

- Background conditions vary for HMDB51 and MPII, whereas they are indoor for WVU and Berkeley MHAD.

- WVU and Berkeley MHAD have no occlusions, but the other two have various occlusions.

- The camera view for MPII and HMDB51 is single; WVU has 8 and Berkeley MHAD has 12 camera views.

- HMDB51 has 51 classes; WVU has 12+8 classes with 1 subject; MPII has 823 classes; and Berkeley MHAD has 11 classes with 12 subjects.

Moreover, an additional row of "others" has been added to the tables to accommodate all those actions that occur in a single dataset or are found to have rare occurrences in the discussed datasets. The HMDB51 and MPII action datasets are very versatile and cover a large number of actions. The details of action events in datasets other than these two have been summarized in Tables 1-3. Moreover, a detailed tabular summary and an analysis of these datasets are presented in Tables 4 and 5, respectively.


Table 1: Presence of action instances related to sports category in datasets
(datasets compared: KTH, MuHAVi-MAS, UT Tower, IXMAS, UIUC, Berkeley MHAD, CASIA, WVU, MPII, WEIZMANN)

Climbing: present in 1 dataset
Punching: present in 3 datasets
Kicking: present in 3 datasets
Running: present in 8 datasets
Jumping: present in 5 datasets
Jogging: present in 3 datasets
Others (dive, somersault, etc.): present in 1 dataset

Table 2: Body gesture instances in datasets
(datasets compared: KTH, MuHAVi-MAS, UT Tower, IXMAS, UIUC, Berkeley MHAD, CASIA, WVU, WEIZMANN)

Cross arms: present in 1 dataset
Scratch head: present in 1 dataset
Nod head: present in 1 dataset
Bend: present in 3 datasets
Waving: present in 8 datasets
Pointing: present in 1 dataset
Raising hand: present in 1 dataset

Table 3: Some common action instances in datasets
(datasets compared: KTH, MuHAVi-MAS, UT Tower, IXMAS, UIUC, Berkeley MHAD, CASIA, WVU, MPII, WEIZMANN)

Walking: present in 7 datasets
Crawling: present in 1 dataset
Clapping: present in 2 datasets
Sitting: present in 4 datasets
Standing: present in 5 datasets
Turning: present in 2 datasets
Pick up/carry: present in 4 datasets
Drop/throw: present in 2 datasets
Two-people interactions: present in 1 dataset
Others (crouching, fainting, etc.): present in 4 datasets


Table 4: A summary of datasets and their specifications

Dat

aset

s

Bac

kgro

und

Illu

min

atio

n

Occ

lusi

ons

Res

olut

ion

Cam

era

view

s

Scal

e va

riat

ions

No.

of

Vid

eos

Size

Cla

sses

Subj

ects

For

mat

Yea

r of

Pub

licat

ion

Ref

KT

H

Indoors,Outdoors,Uniform

Grey-scaleimages

No 160x120 1 Y 23911.14GB

6 25 AVI 2004[1-4],[12],[13]

MuH

AV

iM

AS Outdoor,

complex

Nightstreetlights

No 720x256 8 N 952171.8GB

17 4 AVI 2010[6],[7]

UT

Tow

ers

Outdoor,uniform

Daylight NoPerson heightno more than

40 pixelsAriel view N 108

670MB

9 6 BMP 2010[15],[16]

IXM

AS

Act

ion

indoors,Uniform

Dim light Both 390x2915 different

anglesN 1148

7.4GB

11 10 AVI 2006[8],[9],[10]

UIU

C Indoor,uniform

Moderatelight

No 400 pixels Front view N 53257.7GB

14 8 TIP 2008[17],[18]

HM

DB

51

VaryingVarying

lightconditions

Rarely Varying 1 Y 67665.5GB

51 - AVI 2011[5],[18]

WV

U Indoor,Complex

Moderatelight

No 640x480 8 N 624023.1GB

12+8

1 JPEG 2011 [19]

MP

II

Varying Varying Both Varying 1 Y 24920438GB

823 - - 2014[20].[21]

WE

IZM

AN

N

Outdoor,uniform

Daylightonly inwalkingactions

180x144Single withexception of

'walking'N 90

340MB

10 9 AVI 2005[22][23]

CA

SIA

Outdoor,complex

Daylight No 320x240 3 N - - 8+7 24 - 2007[24]-[26]

Ber

kele

yM

HA

D

IndoorModerate

lightNo 640x480 12 N 56

822.8GB

11 12GRB

G2013

[27]-[29]


Table 5: A brief analysis of datasets

KTH - Strengths: varying background. Weaknesses: poor resolution.
MuHAVi-MAS - Strengths: multi-camera views; silhouettes of the human body are provided in the MAS dataset. Weaknesses: frame-to-human-height ratio is high.
UT Tower - Strengths: recorded from an aerial view. Weaknesses: large frame-to-human ratio results in poor resolution of the area of interest.
IXMAS - Strengths: occluded instances of videos have also been made part of the dataset. Weaknesses: poor resolution and blurred videos.
UIUC - Strengths: high resolution. Weaknesses: slight edges appear in the background, which might cause trouble in training.
HMDB51 - Strengths: versatility in actions. Weaknesses: inconsistent recording conditions.
WVU - Strengths: 8 different camera views for each action; neat background. Weaknesses: edges in the background might become problematic in training.
MPII - Strengths: versatility in actions. Weaknesses: inconsistent recording conditions.
WEIZMANN - Strengths: uniform background. Weaknesses: low resolution.
CASIA - Strengths: unique classes have also been included. Weaknesses: frame-to-person ratio is large.
Berkeley MHAD - Strengths: versatility in style of performing actions is a plus in testing. Weaknesses: the same versatility becomes challenging in training.

CONCLUSION

Most of the 11 datasets discussed in this review provide uniform backgrounds in their recordings, which is vital for training a system. In contrast, some other datasets provide complex, real-world backgrounds, which aids testing. Relatively few instances of occluded actions are provided in this collection. The majority of these datasets provide annotations or bounding boxes around humans.

The mentioned datasets are dominated by common actions performed in daily routine. However, it is observed that these online repositories lack instances relevant to surveillance and security systems. The UT Tower and WEIZMANN datasets have a low frame-to-person-height ratio, and the KTH dataset comes with low resolution; for this reason, it may be tedious to utilize these datasets for body-part segmentation. HMDB51 and MPII comprise videos of real-life human actions with multiple backgrounds, different resolutions, varying camera views and scale variation. These traits are significant when testing the accuracy of a well-trained system; however, they provide little benefit in the training stage.


Analysis of these datasets reveals that there are negligible instances of locomotion-based actions captured from a frontal or back view, i.e. approaching or moving away from a static camera. Moreover, no datasets of Asian people are available, who are likely to have different clothing and styles of performing chores.

REFERENCES

[1] G. Khan et al., "Egocentric visual scene description based on human-object interaction and deep spatial relations among objects," Multimed. Tools Appl., pp. 1-22, 2018.

[2] H. Naveed et al., "Human activity recognition using mixture of heterogeneous features and sequential minimal optimization," Int. J. Mach. Learn. Cybern., pp. 1-12, 2018.

[3] A. Klaser et al., "A spatio-temporal descriptor based on 3D-gradients," in Proc. 19th British Machine Vision Conf. (BMVC), 2008, pp. 271-275.

[4] A. Fathi and G. Mori, "Action recognition by learning mid-level motion features," in IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1-8.

[5] H. Kuehne et al., "HMDB: A large video database for human motion recognition," in IEEE Int. Conf. Computer Vision (ICCV), 2011, pp. 2556-2563.

[6] S. Singh et al., "MuHAVi: A multicamera human action video dataset for the evaluation of action recognition methods," in 7th IEEE Int. Conf. Advanced Video and Signal Based Surveillance (AVSS), 2010, pp. 48-55.

[7] M. A. Khan et al., "An implementation of optimized framework for action classification using multilayers neural network on selected fused features," Pattern Anal. Appl., pp. 1-21, 2018.

[8] D. Weinland et al., "Free viewpoint action recognition using motion history volumes," Comput. Vis. Image Underst., vol. 104, no. 2-3, pp. 249-257, 2006.

[9] D. Weinland et al., "Making action recognition robust to occlusions and viewpoint changes," in European Conf. Computer Vision, 2010, pp. 635-648.

[10] R. Poppe, "A survey on vision-based human action recognition," Image and Vision Computing, vol. 28, no. 6, pp. 976-990, 2010.

[11] E. C. Alp and H. Y. Keles, "A comparative study of HMMs and LSTMs on action classification with limited training data," in Proc. SAI Intelligent Systems Conf., 2018, pp. 1102-1115.

[12] J. M. Chaquet et al., "A survey of video datasets for human action and activity recognition," Comput. Vis. Image Underst., vol. 117, no. 6, pp. 633-659, 2013.


[13] C. Schüldt et al., "Recognizing human actions: A local SVM approach," in Proc. Int. Conf. Pattern Recognition, 2004, vol. 3, pp. 32-36.

[14] Y. Tian et al., "Hierarchical filtered motion for action recognition in crowded videos," IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 42, no. 3, p. 313, 2012.

[15] C. Chen and J. K. Aggarwal, "Recognizing human action from a far field of view," in Workshop Motion and Video Computing (WMVC), 2009, pp. 1-7.

[16] M. Z. Khan et al., "Deep CNN based data-driven recognition of cricket batting shots," in Int. Conf. Applied Engineering Mathematics (ICAEM), 2018, pp. 67-71.

[17] J. Liu et al., "Recognizing human actions by attributes," in IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3337-3344.

[18] F. Najar et al., "A fixed-point estimation algorithm for learning the multivariate GGMM: Application to human action recognition," in Proc. IEEE Canadian Conf. Electrical & Computer Engineering (CCECE), 2018, pp. 1-4.

[19] V. Kulathumani et al., "WVU multi-view action recognition dataset," 2012. Available: http://csee.wvu.edu/~vkkulathumani/wvu-action.html

[20] M. Andriluka et al., "2D human pose estimation: New benchmark and state of the art analysis," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2014, pp. 3686-3693.

[21] L. Gorelick et al., "Actions as space-time shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 12, pp. 2247-2253, 2007.

[22] M. Blank et al., "Actions as space-time shapes," in 10th IEEE Int. Conf. Computer Vision (ICCV'05), 2005, vol. 2, pp. 1395-1402.

[23] Y. Wang et al., "Human activity recognition based on R transform," in IEEE Conf. Computer Vision and Pattern Recognition (CVPR'07), 2007, pp. 1-8.

[24] Z. Zhang et al., "Multi-thread parsing for recognizing complex events in videos," in European Conf. Computer Vision, 2008, pp. 738-751.

[25] Z. Zhang et al., "Trajectory series analysis based event rule induction for visual surveillance," in IEEE Conf. Computer Vision and Pattern Recognition (CVPR'07), 2007, pp. 1-8.

[26] F. Ofli et al., "Berkeley MHAD: A comprehensive multimodal human action database," in IEEE Workshop Applications of Computer Vision (WACV), 2013, pp. 53-60.


[27] F. Ofli et al., "Sequence of the most informative joints (SMIJ): A new representation for human skeletal action recognition," J. Vis. Commun. Image Represent., vol. 25, no. 1, pp. 24-38, 2014.

[28] R. Chaudhry et al., "Bio-inspired dynamic 3D discriminative skeletal features for human action recognition," in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, 2013, pp. 471-478.

[29] L. Pishchulin et al., "Fine-grained activity recognition with holistic and pose based features," in German Conf. Pattern Recognition, 2014, pp. 678-689.


PJCIS (2018), Vol. 3, No. 1 :21-26 A New Accelerated Method


A New Accelerated Method for Solving Nonlinear Application Problems with Convergence of Order Two

UMAIR KHALID QURESHI1, ABDUL GHAFOOR SHAIKH2, WAJID ALI SHAIKH3

1Department of Business Administration, Shaheed Benazir Bhutto University, Sanghar, Sindh, Pakistan
2Department of Basic Sciences & Related Studies, QUEST, Nawabshah, Sindh, Pakistan
3Department of Mathematics and Statistics, QUEST, Nawabshah, Sindh, Pakistan
*Corresponding author's e-mail: [email protected]

Abstract

The problem of locating roots of nonlinear equations occurs frequently in scientific work. This study suggests a new accelerated iterative method for solving nonlinear equations. The new iterative method converges quadratically. Some numerical problems illustrate that the new method can compete with the Newton-Raphson method. From the results and comparisons of the developed method, it has been observed that the new accelerated iterated method is superior to the Newton-Raphson method.

Keywords: Taylor series; Adomian's method; Nonlinear equation; Newton-Raphson method; Order of convergence

INTRODUCTION

One of the most significant and challenging problems is to find the real roots of nonlinear equations of the form f(x) = 0.

Many applications give rise to thousands of such nonlinear problems. These applications arise in an extensive assortment of real-world settings in applied science and engineering [1]-[3]. For this purpose, many numerical techniques have been suggested using different tools, including quadrature formulae, the homotopy perturbation method and its variant forms, Taylor series, divided differences and decomposition methods [4]-[10]. Correspondingly, in this study, we suggest an improved iterated method based on [11], [12] and a numerical condition for finding a real root of nonlinear equations. The results show that the recommended method converges quickly and is more competitive in comparison with the Newton-Raphson method. C++ programming is used to verify the results of the proposed method. Numerical results support this claim in comparison with the Newton-Raphson method for certain functions.


MATERIAL AND METHOD

Considering the nonlinear equation

    f(x) = 0                                                        (1)

and writing f(x + h) in a Taylor series expansion about x, we obtain

    f(x + h) = f(x) + h f'(x) + R(h)                                (2)

where the remainder term is

    R(h) = f(x + h) - f(x) - h f'(x)                                (3)

Supposing f'(x) ≠ 0 and searching for the value of h such that f(x + h) = 0, we have

    f(x) + h f'(x) + R(h) = 0                                       (4)

This is equivalent to finding the following h:

    h = -f(x)/f'(x) - R(h)/f'(x)                                    (5)

Eq (5) can be rewritten in the form

    h = c + N(h)                                                    (6)

where

    c = -f(x)/f'(x)                                                 (7)

and

    N(h) = -R(h)/f'(x) = -[f(x + h) - f(x) - h f'(x)]/f'(x)         (8)

Here c is a constant and N(h) is a nonlinear function. Applying the technique of [11] to Eq (8), we get

    h1 = -[f(x + h*) + f(x + c)]/[1 - N'(h*)]                       (9)

When x is sufficiently close to the real solution of f(x) = 0, h* ≈ 0. Thus Eq (9) is converted to

    h1 = -[f(x) + f(x + c)]/[1 - N'(0)]                             (10)

Applying Adomian's method to Eq (6), we get


    h1 = N(h0) = N(c) = N(c)/[1 - N'(c)] = f(x + c)/[2f'(x) - 2f'(x + c)]        (11)

Now we construct the iterative method. For h ≈ h0 = -f(x)/f'(x), one obtains x + h ≈ x - f(x)/f'(x), which yields the Newton method x_{n+1} = x_n - f(x_n)/f'(x_n). For k = 1, one obtains h ≈ h0 + N(h0) and x + h ≈ x + h0 + N(h0), which suggests the following iterative technique:

    x_{n+1} = x_n - f(x_n)/[2f'(x_n + h) - f'(x_n)]                 (12)

Eq (12) can also be written as

    x_{n+1} = x_n - f(x_n)/[2f'(y_n) - f'(x_n)]                     (13)

where

    y_n = x_n + h                                                   (14)

By using the numerical condition h = Δf(x_n) = f(x_n) in Eq (14) and then substituting into Eq (13), we get

    x_{n+1} = x_n - f(x_n)/[2f'(x_n + f(x_n)) - f'(x_n)]            (15)

Hence Eq (15) is the new accelerated iterated method.
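The paper reports a C++ implementation; as an illustration only, the iteration of Eq (15) can be sketched in Python. The test function, starting guess and tolerance below are our own illustrative choices (the function matches the first entry of Table-1):

```python
import math

def accelerated(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate Eq (15): x_{k+1} = x_k - f(x_k) / (2 f'(x_k + f(x_k)) - f'(x_k))."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k              # converged: return root and iteration count
        denom = 2.0 * df(x + fx) - df(x)
        if denom == 0.0:
            raise ZeroDivisionError("denominator of Eq (15) vanished")
        x -= fx / denom
    return x, max_iter

# First test function of Table-1: f(x) = e^(-x) - cos x, x0 = 5.
f = lambda x: math.exp(-x) - math.cos(x)
df = lambda x: -math.exp(-x) + math.sin(x)
root, its = accelerated(f, df, 5.0)
```

With these inputs the iteration settles near x ≈ 4.7213, the root reported in Table-1.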

RATE OF CONVERGENCE

The following section shows that the new accelerated method has quadratic convergence.

Proof: Let a denote the exact root and e_n = x_n - a. Expanding f(x_n), f'(x_n) and f'(x_n + f(x_n)) in Taylor series about a, setting c = f''(a)/(2f'(a)) and ignoring higher-order terms, we have

    f(x_n) = f'(a)(e_n + c e_n^2)                                   (16)

and

    f'(x_n) = f'(a)(1 + 2c e_n)                                     (17)

and


    f'(x_n + f(x_n)) = f'(a)[1 + 2c e_n (1 + f'(a))]                (18)

since x_n + f(x_n) - a = e_n(1 + f'(a)) + O(e_n^2) by Eq (16). By using Eq (17) and Eq (18), the denominator of Eq (15) becomes

    2f'(x_n + f(x_n)) - f'(x_n) = f'(a)[2 + 4c e_n(1 + f'(a)) - 1 - 2c e_n]
                                = f'(a)[1 + 2c e_n(1 + 2f'(a))]     (19)

Substituting Eq (16) and Eq (19) into Eq (15), we have

    e_{n+1} = e_n - (e_n + c e_n^2)/[1 + 2c e_n(1 + 2f'(a))]
            = e_n - (e_n + c e_n^2)[1 - 2c e_n(1 + 2f'(a))] + O(e_n^3)
            = c(1 + 4f'(a)) e_n^2 + O(e_n^3)                        (20)

Hence e_{n+1} = O(e_n^2), which proves that the proposed iterative method has second order of convergence.
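The quadratic rate can also be checked numerically: with errors e_k = |x_k - a|, the ratio log(e_{k+1}/e_k)/log(e_k/e_{k-1}) should approach 2. A small Python sketch, using an illustrative test function with a known root:

```python
import math

def step(f, df, x):
    # One application of Eq (15).
    fx = f(x)
    return x - fx / (2.0 * df(x + fx) - df(x))

# f(x) = x^2 - 2 has the known root sqrt(2), so the errors e_k are exact.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
a = math.sqrt(2.0)

xs = [1.5]
for _ in range(3):
    xs.append(step(f, df, xs[-1]))
errs = [abs(x - a) for x in xs]

# Observed order from three consecutive errors; approximately 2 for a quadratic method.
p = math.log(errs[3] / errs[2]) / math.log(errs[2] / errs[1])
```

Starting from x0 = 1.5, the observed order p comes out close to 2, consistent with Eq (20).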


RESULTS AND DISCUSSION

In this section, C++ programming is used to examine the results of the proposed method. The developed second-order method is applied to a few examples of nonlinear functions and compared with the Newton-Raphson method, as shown in Table-1. From the numerical results, it has been observed that the second-order accelerated method reduces the number of iterations below that of the Newton-Raphson method, with the accuracy depicted in the table.

Table-1: Numerical Results of New Method

Function (x0)                     Method            Iterations   Root       A.E.
e^(-x) - cos x (x0 = 5)           Newton-Raphson    3            4.72129    4.76837e-7
                                  New Method        2                       1.35655e-2
sin x - x + 1 (x0 = 0.5)          Newton-Raphson    8            1.93456    1.19209e-7
                                  New Method        7                       1.27554e-5
x^2 - e^x - 3x + 2 (x0 = 1.5)     Newton-Raphson    5            0.259171   2.98023e-8
                                  New Method        4                       1.77413e-4
2x - ln x - 7 (x0 = 6)            Newton-Raphson    4            4.21991    4.76837e-7
                                  New Method        4                       4.76837e-7
sin x - 0.5x (x0 = 2)             Newton-Raphson    4            1.89549    1.19209e-7
                                  New Method        4                       1.19209e-7

CONCLUSION

The problem of locating roots of nonlinear equations occurs frequently in scientific work. In this study, a new accelerated method is suggested and analyzed for solving nonlinear problems. The developed method converges quadratically. A few examples exemplify the performance of the new method, which makes it superior to the Newton iterative method in terms of both accuracy and iteration count. Hence, it is observed from the numerical outcomes that the proposed second-order accelerated method performs well and is more effective and easier to employ for solving nonlinear equations.

ACKNOWLEDGEMENT

The author is thankful to Dr. Asif Ali Shaikh (Professor, Department of Basic Sciences and Related Studies, Mehran University of Engineering and Technology, Jamshoro, Pakistan), who provided professional knowledge and guidance for this research work, and to his family, whose support made this research possible.


REFERENCES

[1] C. N. Iwetan et al., "Comparative study of the bisection and Newton methods in solving for zero and extremes of a single-variable function," J. NAMP, vol. 21, pp. 173-176, 2012.

[2] N. Yasmin and M. Junjua, "Some derivative free iterative methods for solving nonlinear equations," Acad. Res. Int., vol. 2, no. 1, pp. 75-82, 2012.

[3] B. N. Datta, "Lecture notes on numerical solution of root-finding problems, MATH 435," 2012. Available: http://www.math.niu.edu/~dattab/math435.2009/ROOT-FINDING.pdf

[4] U. K. Qureshi, A. A. Shaikh, et al., "Modified free derivative open method for solving non-linear equations," Sindh Univ. Res. Journal-SURJ (Science Ser.), vol. 49, no. 4, pp. 821-824, 2017.

[5] S. Q. E. Soomro and A. A. Shaikh, "On the development of a new multi-step derivative free method to accelerate the convergence of bracketing methods for solving f(x) = 0," Sindh Univ. Res. Journal-SURJ (Science Ser.), vol. 48, no. 3, pp. 601-603, 2016.

[6] S. Abbasbandy, "Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method," Appl. Math. Comput., vol. 145, no. 2-3, pp. 887-893, 2003.

[7] E. Babolian and J. Biazar, "Solution of nonlinear equations by modified Adomian decomposition method," Appl. Math. Comput., vol. 132, no. 1, pp. 167-172, 2002.

[8] C. Solanki, P. Thapliya, and K. Tomar, "Role of bisection method," Int. J. Comput. Appl. Technol. Res., vol. 3, no. 8, pp. 533-535, 2014.

[9] A. A. Sangah, A. A. Shaikh, et al., "Comparative study of existing bracketing methods with modified bracketing algorithm for solving nonlinear equations in single variable," Sindh Univ. Res. Journal-SURJ (Science Ser.), vol. 48, no. 1, pp. 171-174, 2016.

[10] A. A. Siyal et al., "Modified algorithm for solving nonlinear equations in single variable," J. Appl. Environ. Biol. Sci., vol. 7, no. 5, pp. 166-171, 2017.

[11] M. Basto et al., "A new iterative method to compute nonlinear equations," Appl. Math. Comput., vol. 173, no. 1, pp. 468-483, Feb. 2006.

[12] J. Feng, "A new two-step method for solving nonlinear equations," Int. J. Nonlinear Sci., vol. 8, no. 1, pp. 40-44, 2009.


PJCIS (2018), Vol. 3, No. 1 : 27-40 Mining the Twitter Data Stream


Mining the Twitter Data Stream: A Review

MUHAMMAD USMAN*, MUHAMMAD AKRAM SHAIKH

Pakistan Scientific and Technological Information Center, Islamabad, Pakistan

*Corresponding Author’s e-mail: [email protected]

Abstract

Nowadays, due to the recent growth of the Internet and the World Wide Web, a large volume of data is available everywhere. Various approaches have been proposed in the literature to analyze such large-volume data involving a large number of dimensions. Real-time data streams of large volume can be exploited for knowledge discovery. The current study reviews the different approaches adopted by various researchers to extract knowledge from Twitter data streams, particularly by creating a multi-dimensional architecture comprising the meta tags accompanying tweets. These approaches are reviewed in terms of data retrieval, storage, database design, analysis and visualization abilities, and the technologies used for these components. The study also discusses the technological gaps and proposes some recommendations for future research.

Keywords: Twitter, Data Streams, Data Mining, Data Warehouse, Multidimensional Mining

INTRODUCTION

Over the last two decades, data has been generated in large volumes due to the use of automated systems, enhancements in the storage capacity of devices and the availability of internet connectivity [1]. Large volumes of data contain important information related to specific domains and can be exploited further for knowledge extraction and high-level decision making. Many knowledge extraction techniques have been proposed by different researchers to help decision makers study the insights of data [2]. Data warehouses are used to store large volumes of data in aggregate form to enable multi-dimensional analysis. Moreover, data mining techniques have been used in the past by researchers to extract and predict trends [3]-[5].

Data streams are a specific type of big data in which real-time data is generated in large volume. Streaming data is available on all social networks, such as Twitter, Facebook and Instagram, in real time and in big volume. For example, according to one study [6], Twitter alone generates 500 million tweets per day. Analysis of these data streams is gaining popularity these days.


In previous studies, researchers have exploited the Twitter Streaming API and stored the tweets in local file systems or databases for analysis purposes. Some studies have focused on retrieving real-time data for real-time analysis, while others have saved data over a specific period of time for different analysis perspectives. Apart from this, the tweet text is targeted for textual and opinion mining, along with analysis of the meta tags associated with each tweet. Data storage has been done in file systems, open-source databases and licensed databases, as well as in aggregated storage repositories such as data warehouses. The targeted objectives of the research mainly include topic identification, location-based intensity of Twitter usage, improvements in the storage time of data streams, application of OLAP over multidimensional stream data, behavioral analysis, crisis detection and other event detection. Apart from such analyses, researchers have also focused on visualization of Twitter data streams in order to assist analysts in exploring streams graphically.

In this study, some recent research work related to the storage of large-volume Twitter data streams for performing analysis at large scale has been studied. At the end, these approaches are critically evaluated in terms of purpose, data storage methods and technologies used.

LITERATURE REVIEW

Bringay et al. [7] focused on defining and manipulating tweet cubes in a multi-dimensional architecture. The study proposes to create a STAR schema in a multi-dimensional environment for the analysis of data streams from Twitter. In the first step of the process, the authors define a STAR schema for the data warehouse by exploring the dimensions and facts in the tweet data. The second step uses an adaptive TF-IDF measure to find the most significant words at each hierarchy level of the cubes. If no predefined hierarchy exists in the first step, the model identifies the context of tweets by analyzing the word hierarchy using MeSH (Medical Subject Headings), a vocabulary for medicine. Although the authors are able to classify the most important words in dimension hierarchies, the approach seems to work only for STAR-schema multidimensional architectures. Moreover, the contextualization is done using a medical science library, so when a predefined hierarchy is not available, hierarchical contextualization only works for the medical sciences.
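The TF-IDF weighting underlying that second step can be sketched as follows; the three-tweet corpus is purely illustrative, and the adaptive variant of [7] additionally conditions the measure on cube hierarchy levels:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return per-document TF-IDF weights for a list of token lists."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

tweets = [["flu", "fever", "vaccine"],
          ["flu", "travel"],
          ["travel", "photo"]]
w = tf_idf(tweets)
# "vaccine" appears in only one tweet, so it outweighs the common word "flu".
```

Terms that are frequent in one cell of the cube but rare across cells get high weight, which is what makes them candidates for the "most significant words" of a hierarchy level.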

A similar approach to text cube mining was presented by Liu et al. [8], in which they aimed to study human social cultural behavior (HSCB) using Twitter stream data. For this purpose, the authors created text cube architectures in order to develop prediction models. The framework proposed in this research has three parts. The framework first extracts linguistic features from the data using a generic language function. The next part enables the selection of linguistic features from the list by exploiting psychological and sociological expectations. An HSCB layer is added to evaluate the given dimensions. These dimensions and facts are used to create a STAR schema at the next stage. Prediction models are applied in the last step on the data warehouse for each dimension using the selected features.


The methodology was tested for prediction of violence in Egypt during 2011. However, the methodology has not been tested with a huge amount of data.

In another approach, Rehman et al. [9] introduced a technique which creates a data warehouse based upon Twitter streams. The architecture presented in the technique works on five layers. The first layer captures Twitter streams using the Twitter APIs (REST API, Search API or Streaming API). The second layer reads the data, available in JSON format, and creates an XML document. MS SQL Server is used in the third layer to save the data in a data warehouse architecture. The remaining two layers of the architecture include front-end tools for analysis and presentation purposes. The technique was tested on data gathered from the Twitter stream related to an earthquake in Indonesia. Data was stored in the data warehouse, analyzed and mapped graphically for visualization purposes. The authors intend to perform content analysis involving spam filtering and language detection from texts. The authors also suggested future directions such as the usage of Yahoo, Wikipedia and similar services for event and entity detection.
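The second layer's job of flattening tweet JSON into loadable rows can be sketched as below; the field names follow the classic Twitter API v1.1 payload, the helper name is our own, and the sample tweet is invented:

```python
import json

def tweet_to_row(raw):
    """Flatten one tweet's JSON into the columns a warehouse layer might load.
    Field names assume the classic v1.1 payload; adjust for other API versions."""
    t = json.loads(raw)
    return {
        "id": t["id"],
        "created_at": t["created_at"],
        "user": t["user"]["screen_name"],
        "text": t["text"],
        "hashtags": [h["text"] for h in t.get("entities", {}).get("hashtags", [])],
    }

raw = '''{"id": 1, "created_at": "Mon Jan 01 00:00:00 +0000 2018",
          "user": {"screen_name": "alice"}, "text": "quake felt #earthquake",
          "entities": {"hashtags": [{"text": "earthquake"}]}}'''
row = tweet_to_row(raw)
```

Each extracted meta tag (timestamp, user, hashtags) becomes a candidate dimension for the STAR schema, while the tweet itself feeds the fact table.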

Extending their work further, Rehman et al. [10] worked on the analysis of unstructured and semi-structured data from Twitter data streams. In the first step of this methodology, the authors extract the facts and dimensions required for a multi-dimensional STAR schema to build the data warehouse. These facts and dimensions are extracted on the basis of aggregated information formed by semantic analysis. Furthermore, two APIs are used for the semantic analysis: the Alchemy API and Open Calais, used for sentiment analysis and topic generation respectively. A sentiment analysis is performed which categorizes the tweets into Negative, Neutral, Positive and No Sentiment categories. Afterwards, entity and event detection is performed to get further insights according to entities such as persons or countries. The aggregated data is then stored in the data warehouse, where data mining techniques are applied to explore the aggregated information for analysis purposes.

Liu et al. [11] also extended their work by studying different kinds of human, social and cultural behavior (HSCB) embedded in the Twitter stream. The authors extended their previous approach of sentiment analysis on text cubes by displaying the contents of cubes on a heat map, along with the application of mining models on the data from these cubes. To start the process, linguistic features, particularly emotional features, are extracted. These features are treated as measures, whereas the sentiments are targeted as facts, to create a STAR schema for the data warehouse. Afterwards, data mining methods such as LIBSVM, REGTree and IBL are used to exploit the data warehouse for analysis purposes. The contents of the cubes are further mapped onto a heat map, where the degree of opacity indicates the value of the behavioral, social or cultural measure, helping analysts to focus on hotspots of concern on the map. The methodology was tested with two data sets, taken from a U.S. city and the Arab Spring.

In a similar approach to behavior detection, Vioulès et al. [21] presented a methodology to automatically identify a sudden change in a Twitter user's online behavior. In this technique, the authors combined natural language processing techniques with some


text and behavioral features. The experiments show that this scoring method is able to capture the warning signs better than some other machine learning algorithms, such as Random Forest, Simple Logistic and J48 classifiers.

A location-based study was performed by Kumar et al. [12], in which they worked on identifying whether or not a tweet is coming from a crisis region. The authors argue that only 1% of tweets include geographic information, and therefore it is not easy to confirm whether a tweet is from a crisis region. Moreover, in such cases, it is time consuming to analyze the history of the tweeter to determine the same. The authors instead focus on the tweet text in order to identify whether it is coming from a crisis region. They found that if a tweet is from a crisis region, it is more likely to discuss a novel topic, less likely to seek attention and more likely to use external resources to convey its message. The authors used different parameters, such as mobile features, resource features, textual features, linguistic features and user features, to identify behavioral patterns. The experiments were conducted using Naïve Bayes and Random Forest classifiers through the Weka tool. In order to find the most important feature among the given features, logistic regression was applied; linguistic features were found to be the most important class of features. A case study was performed on Arizona wildfire tweets using the described approach, and it was found to be more effective than another similar approach. In future, the authors intend to extend this study in terms of the size of the training data in order to see its effect on classification.
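The study ran Naïve Bayes through Weka; a minimal hand-rolled Bernoulli Naïve Bayes over binary tweet features gives the flavor of that classification step. The three features and the toy training data below are invented for illustration, loosely echoing the cues the authors report:

```python
import math

def train_bernoulli_nb(X, y):
    """Tiny Bernoulli Naive Bayes over binary feature vectors.
    Returns log-priors and Laplace-smoothed per-class feature probabilities."""
    classes = sorted(set(y))
    n_feat = len(X[0])
    prior, prob = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        prior[c] = math.log(len(rows) / len(X))
        prob[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                   for j in range(n_feat)]
    return prior, prob

def predict(prior, prob, x):
    def score(c):
        s = prior[c]
        for j, v in enumerate(x):
            p = prob[c][j]
            s += math.log(p if v else 1.0 - p)
        return s
    return max(prior, key=score)

# Illustrative binary features: [novel_topic, uses_external_link, seeks_attention]
X = [[1, 1, 0], [1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [0, 0, 1]]
y = ["crisis", "crisis", "crisis", "other", "other", "other"]
prior, prob = train_bernoulli_nb(X, y)
label = predict(prior, prob, [1, 1, 0])
```

A novel-topic tweet with an external link and no attention seeking scores higher under the "crisis" class, mirroring the behavioral pattern the authors describe.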

In a different study, Koupaie et al. [13] presented an algorithm to detect outliers in data streams using a clustering method. According to the authors, unsupervised data mining approaches are more feasible than supervised approaches because they do not require class labels for objects, can detect unforeseen outlying cases, and do not require knowledge of the data in advance. The authors presented a cluster-based outlier detection algorithm for data streams, in which real-time outliers are detected in stream data using an incremental clustering algorithm. The methodology presented in this paper has not been compared with other similar approaches to draw a comparison of accuracy and efficiency.
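The paper gives no pseudocode; a single-cluster sketch of the general idea, with a running centroid, an illustrative distance threshold and incremental mean updates, might look like:

```python
import math

class StreamOutlierDetector:
    """Single-cluster sketch of cluster-based stream outlier detection:
    keep a running centroid, flag points far from it, and fold inliers
    into the centroid incrementally (no pass over past data needed)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.n = 0
        self.centroid = None

    def observe(self, point):
        if self.centroid is None:
            self.centroid = list(point)
            self.n = 1
            return False
        dist = math.dist(point, self.centroid)
        # Only start flagging once the cluster has a few members.
        if self.n >= 3 and dist > self.threshold:
            return True          # outlier: do not absorb into the cluster
        self.n += 1
        # Incremental mean update: c += (p - c) / n per coordinate.
        self.centroid = [c + (p - c) / self.n
                         for c, p in zip(self.centroid, point)]
        return False

det = StreamOutlierDetector(threshold=5.0)
flags = [det.observe(p) for p in [(0, 0), (1, 1), (0, 1), (1, 0), (40, 40), (1, 1)]]
```

A real incremental clustering scheme would maintain several micro-clusters and age them out over time; this sketch only shows why no class labels or advance knowledge of the data are needed.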

Kraiem et al. [14] worked on OLAP analysis of the Twitter data stream and proposed a multidimensional model. The methodology works on the specificity of the tweet text and creates a linkage between tweets and their responses. The text of the tweets is studied to create a multidimensional architecture, and a logical model is created by applying a set of rules. A tweet OLAP prototype (OLAP4Tweet), built in Java with an Oracle database, is used to analyze Twitter accounts. In the first step of the analysis, the stream of data is read into the database over a couple of days. Next, language detection is performed using a JSON-based API, and the tweets are categorized by language. Afterwards, a user category (Information Seeker, Friendship/Relationship, or Information Sharing) is used to allocate the data within these categories. In the next step, a tweet type (Normal Tweet, Response, Mention, or Retweet) is set against each tweet by analyzing the data. OLAP cubes are then created to perform


PJCIS (2018), Vol. 3, No. 1 : 27-40 Mining the Twitter Data Stream


the actual analysis as per requirements. The methodology allows Twitter data exploration; however, it requires the analyst to set up predefined categories, tweet types and other dimensions, so it is not a generalized approach and can be used only for specific cases.
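The tweet-type assignment step can be illustrated with simple rules; the field names below are illustrative stand-ins, not those of the OLAP4Tweet prototype:

```python
def classify_tweet_type(tweet):
    # Rule-of-thumb assignment into the four tweet types named above.
    # `tweet` is a plain dict; "text" and "in_reply_to" are hypothetical
    # field names chosen for this sketch.
    text = tweet.get("text", "")
    if text.startswith("RT @"):
        return "Retweet"
    if tweet.get("in_reply_to") is not None:
        return "Response"
    if any(word.startswith("@") for word in text.split()):
        return "Mention"
    return "Normal Tweet"
```

In a cube setting, the value returned here becomes one member of a "tweet type" dimension against which measures are aggregated.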

In order to test the effectiveness of open source databases, Murazza and Nurwidyantoro [15] tested the Cassandra NoSQL database for creating a data warehouse from real-time Twitter data. The authors wrote data into Cassandra and performed read operations on it, and compared the results with relational databases such as MySQL and PostgreSQL for both write and read operations. Cassandra was found to provide better write times than the other databases; however, it was not faster in terms of read operations. The authors created visualization features using jQuery libraries to show real-time hashtag counts, although the real-time visualization requires a one-second gap after every 20 seconds to load new data. The authors suggested two directions for future work: covering the join and aggregation limitations by integrating the system with a real-time analytics framework, and improving performance by creating clusters of data instead of analyzing all data at once.
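The write/read comparison in [15] can be sketched with a generic timing harness; sqlite3 stands in here for the databases the study actually benchmarked (Cassandra, MySQL, PostgreSQL), so only the harness shape, not the numbers, carries over:

```python
import sqlite3
import time

def time_write_read(conn, rows):
    """Time a bulk insert of (id, text) rows and then a full scan,
    returning (write_seconds, read_seconds, rows_read)."""
    conn.execute("CREATE TABLE IF NOT EXISTS tweets (id INTEGER, text TEXT)")
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO tweets VALUES (?, ?)", rows)
    conn.commit()
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    n = sum(1 for _ in conn.execute("SELECT id, text FROM tweets"))
    read_s = time.perf_counter() - t0
    return write_s, read_s, n
```

Running the same harness against each backend with the same rows is what makes the reported write-versus-read trade-off comparable.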

In order to receive Twitter data, perform analysis and provide visualization capabilities, Sechelea et al. [16] presented a methodology that exploits geo-location parameters. The authors created a Python script that harvests Twitter data based on geo-location parameters and saves it into a text file. Afterwards, the data is preprocessed by removing retweets, applying tokenization, removing stop-words, and applying stemming. Clusters of data are then created using a consensus matrix along with the K-Means and DBSCAN algorithms. The clustering technique allowed the authors to obtain hot topics from the data collected and preprocessed in the previous step. OpenStreetMap is used to provide 3-D visualization capability through a Python script; the visualization shows the geographic density of Twitter activity in a given location.
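The preprocessing steps (dropping retweets, tokenization, stop-word removal, stemming) can be sketched as follows; the crude suffix-stripping stemmer is purely illustrative, as the paper used a proper stemming algorithm:

```python
STOPWORDS = {"the", "a", "is", "in", "on", "of", "and", "to"}

def preprocess(tweets):
    """Turn raw tweet strings into cleaned token lists for clustering."""
    docs = []
    for t in tweets:
        if t.startswith("RT @"):
            continue  # drop retweets
        tokens = [w.strip(".,!?#:").lower() for w in t.split()]   # tokenize
        tokens = [w for w in tokens if w and w not in STOPWORDS]  # stop-words
        # toy stemmer: strip common suffixes
        tokens = [w[:-3] if w.endswith("ing") else w[:-1] if w.endswith("s") else w
                  for w in tokens]
        docs.append(tokens)
    return docs
```

The resulting token lists are what a term-document matrix for K-Means or DBSCAN clustering would be built from.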

Bordogna et al. [17] proposed a new technique to exploit time-stamped, geo-tagged tweets posted by users. The tweets in this work are tracked to trace users' trips using a clustering algorithm that groups similar tours together in order to analyze popular tours. The overall framework created by the authors is termed the Interoperable Framework for Trips Tracking and Analysis. It includes two tools, the Follow Me Suite and the Trips Analysis Suite, which are used for trip identification and geographical analysis respectively. Another component in the architecture, called GeoServer, publishes the trip data as open data for further analysis through OGC-standard geo-portal clients available online. The fourth component is the Tourist Tracker Portal, which enables online analysis of popular tours. The authors intend to integrate this tool with other social network platforms and, in future, to utilize another clustering technique to extract unexpected knowledge from the gathered trips.


Cuzzocrea et al. [18] presented a new methodology for arranging tweets in an OLAP cube within a conceptual hierarchy. To achieve this, the authors integrated time-aware Fuzzy Formal Concept Analysis (FFCA) theory with OLAP technology over multidimensional tweet streams. They also introduced a summarization algorithm that creates subjects of tweets according to the analysis perspectives.

In a recent study, Mallek et al. [19] presented an approach to create a data warehouse for Twitter data that uses a NoSQL database to store the data imported from Twitter. The tool created by the authors, termed BigDimETL, processes the unstructured data received from Twitter streams into a NoSQL database and then uses the Hadoop framework's MapReduce functions to load the data into a data warehouse based on a star schema. The data warehouse is then exploited to perform different kinds of analysis.
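The MapReduce-style aggregation that such a loading step relies on can be shown in miniature (single-process here; Hadoop distributes the same two phases across nodes, and the dimension chosen below is an arbitrary example):

```python
from collections import defaultdict

def map_phase(tweets):
    # map: emit one (dimension value, 1) pair per tweet;
    # "lang" is an example grouping dimension for this sketch
    for t in tweets:
        yield t["lang"], 1

def reduce_phase(pairs):
    # reduce: sum the emitted values for each key
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)
```

The reduced totals are exactly the kind of pre-aggregated measures that get loaded into the fact table of a star schema.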

In another recent approach, Ibrahim et al. [20] conducted a study of different techniques for topic detection from the Twitter data stream. The authors tested the SFM, BNgram, CSS and exemplar-based topic detection techniques on three Twitter datasets and measured their performance using term precision, term recall and topic recall, along with running time. It was found that SFM and BNgram provide better precision among the techniques, while CSS performs better in terms of topic and term recall. From the study, it can be concluded that each method has its advantages and disadvantages, and business analysts can select a technique according to the precision and recall values they require. It would be interesting to evaluate these techniques in terms of the F1 measure, which seeks a balance between these two measures.
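The measures involved are standard; a small sketch of set-based precision and recall, plus the F1 measure (the harmonic mean of the two) suggested above:

```python
def precision_recall(detected, ground_truth):
    """Precision and recall of a detected topic/term set against a
    ground-truth set."""
    detected, ground_truth = set(detected), set(ground_truth)
    tp = len(detected & ground_truth)
    p = tp / len(detected) if detected else 0.0
    r = tp / len(ground_truth) if ground_truth else 0.0
    return p, r

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)
```

Because F1 collapses the precision/recall trade-off into one number, it would allow a single ranking of SFM, BNgram, CSS and the exemplar-based method.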

Summary of previous research

A brief summary of the techniques used in mining the Twitter data stream is presented in Table 1, and a technological summary of these techniques is given in Table 2. According to the findings, the Twitter Streaming API was used in most of the studies to receive the data in JSON format and store it locally for processing purposes [2], [7], [8], [14], [17], [18]. However, some studies focus on geo-locations while receiving the data, resulting in location-specific analysis, and some approaches targeted specific cities instead of tweets from all over the world. At times, the streaming data was collected around a specific event, such as a crisis, a political campaign or a mega event like a football final [12], [17]. Such studies show that business analysts from all domains have targeted the Twitter data stream for knowledge discovery.

There are 67 variables in a tweet [10]. Some of these are numeric, whereas others involve timestamps. One of the major attributes is the tweet text itself. Most of the studies have focused on the tweet text for semantic analysis, textual mining, opinion mining, topic exploration or behavioral analysis [11], [12], whereas other studies have utilized the meta tags associated with tweets for knowledge discovery [9], [18].
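A sketch of pulling a few such metadata variables out of a raw tweet JSON payload; the field names follow the classic Twitter API format, and a real payload carries many more variables than shown here:

```python
import json

def tweet_meta(raw_json):
    """Extract a handful of the metadata variables that accompany the
    tweet text in a Twitter JSON payload."""
    t = json.loads(raw_json)
    return {
        "created_at": t.get("created_at"),
        "retweet_count": t.get("retweet_count", 0),
        "favorite_count": t.get("favorite_count", 0),
        "coordinates": t.get("coordinates"),  # often null: ~1% geo-tagged
    }
```

Fields like these, rather than the free text, are what the metadata-driven approaches [9], [18] aggregate over.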


It is also interesting to review the storage mechanisms used in the different approaches for knowledge discovery from Twitter data streams. Some approaches use text files, whereas others use XML objects to store the extracted data [10]. The usage of open source databases such as Cassandra, MySQL and PostgreSQL has also been witnessed [15], [17]. The approaches that target multidimensional analysis have preferred SQL Server to store the retrieved information. In multidimensional mining of data streams, most of the authors have used the star schema as the multidimensional architecture for storing the aggregated information for analysis purposes [8], [11].

The targeted objectives in the reviewed research mainly include topic identification, location-based intensity of Twitter usage, improvements in storage time of data streams, application of OLAP over multidimensional stream data, behavioral analysis, crisis detection and other event detection. These objectives are met either by analyzing the tweet text or by analyzing the meta information of tweets in a multidimensional environment [9], [14].

It is also found that most of the techniques include a visualization component to help the analyst explore the tweet streams graphically. Some researchers created pie charts or column bar charts to display the aggregated data from the databases, whereas others plotted Twitter activity on Google Maps or similar components to display the location-based intensity of Twitter data streams; some created heat maps as well. For visualization purposes, researchers have utilized open source and licensed components including GeoServer, Google Maps services, Web Map Services, HoloVizio and jQuery Highcharts components [15]-[17].

Different tools, technologies, algorithms and libraries have been used in previous research for such analyses, including open source and licensed databases and languages such as MySQL, PostgreSQL, PHP, SQL Server, jQuery, JSON and XML.

Discussion and Future Recommendations

A discussion of the gaps identified in previous research is given below, along with future recommendations to fill these gaps:

1. Almost all approaches require the data to be in a file system, database or data warehouse before it can be processed for analysis. Some approaches use XML structures, and these approaches therefore take more time to process the data.

2. The approaches that store data in a database lack a mechanism to deal with continuous incoming data streams. In our opinion, there should be a precise way to continuously update the underlying storage with the latest data stream so that nearly real-time analysis is possible.
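One simple realisation of such continuous updating is micro-batch loading, sketched here with a plain list standing in for the underlying store (a real implementation would issue a bulk database insert per batch):

```python
def stream_to_store(stream, store, batch_size=100):
    """Drain an (unbounded) tweet stream into the store in small
    batches, so the store stays current instead of being re-loaded
    wholesale before each analysis run."""
    batch = []
    for tweet in stream:
        batch.append(tweet)
        if len(batch) >= batch_size:
            store.extend(batch)  # stand-in for a bulk insert/commit
            batch.clear()
    if batch:
        store.extend(batch)      # flush the final partial batch
```

Tuning `batch_size` trades freshness of the store against per-insert overhead.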


Table 1: Summary of techniques used in mining the Twitter data stream

1. Towards an on-line analysis of tweets processing [7]
   Objectives: Development of a data warehouse for multi-dimensional analysis; contextual analysis of multidimensional data.
   Findings/Achievements: Usage of the TF-IDF adaptative measure for finding significant words at hierarchical levels; usage of the MeSH library for hierarchical levels in the absence of predefined hierarchies.
   Limitations: The approach has targeted the dietary domain; it will be interesting to test the methodology in other domains.

2. SocialCube: A text cube framework for analyzing social media data [8]
   Objectives: Multidimensional modeling of the Twitter data stream for information querying and visualization.
   Findings/Achievements: Human, Social, Cultural and Behavior (HSCB) analysis; predictive HSCB modeling.
   Limitations: The methodology works effectively for political events but can be explored further on a variety of datasets; HSCB analysis is done in a textual database, which can increase computation time.

3. Building a Data Warehouse for Twitter Stream Exploration [9]
   Objectives: To demonstrate the power of data warehouse technology for data stream analysis.
   Findings/Achievements: Knowledge discovery using a multi-dimensional schema for the Twitter data stream.
   Limitations: Relies heavily on topics already defined by Twitter, so hidden knowledge discovery is very limited.

4. OLAPing social media: the case of Twitter [10]
   Objectives: Multidimensional modeling of the Twitter data stream for text and opinion mining.
   Findings/Achievements: Semantic analysis on multi-dimensional Twitter stream data; event detection.
   Limitations: For opinion mining, the targeted APIs have a daily limit, which can be a hurdle for the mining process.

5. A text cube approach to human, social and cultural behavior in the Twitter stream [11]
   Objectives: Multidimensional modeling and visualization of Twitter stream data for HSCB study.
   Findings/Achievements: Human, social and cultural behavior study; text cube visualization on a heat map.
   Limitations: The methodology works effectively for political events and can be explored for its effectiveness on a variety of datasets; HSCB analysis is done in a textual database, which can increase computation time.

6. A Behaviors Analysis Approach to Identifying Tweets from Crisis Regions [12]
   Objectives: Identify whether a tweet is coming from a crisis region.
   Findings/Achievements: Behavioral analysis using tweet contents.
   Limitations: Requires rules to define the crisis behavior; a visualization capability would suit the topic.

7. Outlier Detection in Stream Data by Clustering Method [13]
   Objectives: An algorithm to detect outliers in data streams.
   Findings/Achievements: Outlier detection in data streams using clustering; accuracy better than other algorithms.
   Limitations: No experiment data is available to see the methodology in place; the comparison with other approaches is unknown.

8. Modeling and OLAPing social media: the case of Twitter [14]
   Objectives: Multidimensional modeling of the Twitter data stream for information retrieval.
   Findings/Achievements: Development of a constellation schema for multidimensional modeling of Twitter data.
   Limitations: The exploration works effectively, but techniques like topic exploration and automated schema generation would be handy if integrated.

9. Cassandra and SQL Database Comparison for Near Real-Time Twitter Data Warehouse [15]
   Objectives: Comparison of Cassandra and SQL databases for data storage and multidimensional analysis of streaming data.
   Findings/Achievements: Cassandra has better storage ability for streaming data; Cassandra has lower query performance for data retrieval.
   Limitations: The analysis power of the approach is not known, as the focus is on time analysis; it would be appropriate to compare the approach with conventional OLAP and other data mining techniques.

10. Twitter Data Clustering and Visualization [16]
    Objectives: Identification of topics from the Twitter data stream; detection of the intensity of Twitter activity at a location.
    Findings/Achievements: Topic clustering; visualization of tweets in density and 3-D maps.
    Limitations: No pruning of topics is done, so garbage topics are expected; the visualization remains overly technical for business analysts.

11. Clustering Geo-Tagged Tweets for Advanced Big Data Analytics [17]
    Objectives: Group similar trips based upon geo-location to analyze popular tours.
    Findings/Achievements: Grouping of tweets based upon geo-location; visualization ability for geo-tagged tweets.
    Limitations: Highly focused on geo-location, which is not publicly available most of the time.

12. OLAP Analysis of Multidimensional Tweet Streams for Supporting Advanced Analytics [18]
    Objectives: Integration of FFCA with OLAP; a microblog summarization algorithm to reduce the tweet data to a subset.
    Findings/Achievements: Integration of FFCA with OLAP; usage of microblog summarization for data reduction.
    Limitations: It is not known whether the data discarded in the process has an impact on the analysis; a comparison with the full data could have served as a comparative study.

13. BigDimETL with NoSQL Database [19]
    Objectives: Usage of a NoSQL database for a Twitter data stream data warehouse.
    Findings/Achievements: A tool called BigDimETL with the ability to create a NoSQL database for data warehouse creation; usage of MapReduce functions to load data into the data warehouse.
    Limitations: A comparison with similar approaches could be done to see the effectiveness of using NoSQL rather than traditional relational databases.

14. Tools and Approaches for Topic Detection from Twitter Streams: Survey [20]
    Objectives: Comparison of techniques for topic detection from the Twitter data stream.
    Findings/Achievements: SFM and BNgram provide better precision; CSS performs better in terms of topic and term recall.
    Limitations: The approach requires technical knowledge of Matlab; the F1 measure could be integrated for comparison purposes.

15. Detection of Suicide-Related Posts in Twitter Data Streams [21]
    Objectives: Sudden change detection in a user's online behavior.
    Findings/Achievements: Combined natural language processing methods with textual and behavioral features for user behavior detection; experiments conducted on datasets and compared with machine learning classifiers.
    Limitations: Behavioral features differ between individuals, so the approach could be improved by testing different behavioral features for different persons instead of fixed behaviors.


Table 2: Technological summary of techniques used in mining the Twitter data stream

1. Towards an on-line analysis of tweets processing [7]
   Database storage: PostgreSQL
   Experiment dataset/size: 1,801,310 tweets
   Visualization: No visualization ability
   Technologies/APIs/libraries/algorithms: MeSH library, TF-IDF adaptative measure, PostgreSQL

2. SocialCube: A text cube framework for analyzing social media data [8]
   Database storage: Data warehouse (star schema)
   Experiment dataset/size: Egyptian revolution, 64,000 tweets
   Visualization: Visualization through ABMiner
   Technologies/APIs/libraries/algorithms: JSON, ABMiner, LibSVM, RepTree, IBK, OLAP

3. Building a Data Warehouse for Twitter Stream Exploration [9]
   Database storage: Data warehouse (X-DFM-based schema)
   Experiment dataset/size: Indonesia earthquake 2012, 86,000 tweets
   Visualization: Time series charts, pie charts
   Technologies/APIs/libraries/algorithms: Microsoft SQL Server Analysis Services

4. OLAPing social media: the case of Twitter [10]
   Database storage: XML-based database (BaseX)
   Experiment dataset/size: Euro 2012 final, 0.5 million tweets
   Visualization: Basic column and pie charts
   Technologies/APIs/libraries/algorithms: AlchemyAPI, OpenCalais, XML, BaseX

5. A text cube approach to human, social and cultural behavior in the Twitter stream [11]
   Database storage: Data warehouse (star schema)
   Experiment dataset/size: Washington D.C. dataset, ~0.5 million tweets; Egypt revolt, unknown size
   Visualization: Visualization using a heat map
   Technologies/APIs/libraries/algorithms: OLAP, Twitter Streaming API, XML, LibSVM, REPTree, IBK

6. A Behaviors Analysis Approach to Identifying Tweets from Crisis Regions [12]
   Database storage: Not used
   Experiment dataset/size: Arizona wildfires (size unknown)
   Visualization: No visualization ability
   Technologies/APIs/libraries/algorithms: Naïve Bayes, Random Forest classifier, Wilcoxon signed-rank test

7. Outlier Detection in Stream Data by Clustering Method [13]
   Database storage: Not known
   Experiment dataset/size: Not known
   Visualization: Not known
   Technologies/APIs/libraries/algorithms: Not known

8. Modeling and OLAPing social media: the case of Twitter [14]
   Database storage: OLAP (constellation schema)
   Experiment dataset/size: 72,000 tweets
   Visualization: Custom charting component (column and pie chart graphs)
   Technologies/APIs/libraries/algorithms: Java, Oracle, JSON

9. Cassandra and SQL Database Comparison for Near Real-Time Twitter Data Warehouse [15]
   Database storage: Cassandra, MySQL, PostgreSQL
   Experiment dataset/size: 10,000 tweets
   Visualization: Hashtag counts shown over time using the jQuery Highcharts component
   Technologies/APIs/libraries/algorithms: Cassandra, PHP, jQuery charts, MySQL, PostgreSQL, JSON

10. Twitter Data Clustering and Visualization [16]
    Database storage: Not known
    Experiment dataset/size: 250,000 tweets from London; 15,000 tweets from Brussels
    Visualization: Heat map using Google Maps; 3-D map using HoloVizio
    Technologies/APIs/libraries/algorithms: Python, Matlab, C++, Apache

11. Clustering Geo-Tagged Tweets for Advanced Big Data Analytics [17]
    Database storage: PostgreSQL
    Experiment dataset/size: 9,127 tweets from Bergamo, Italy
    Visualization: Visualization on GeoServer as per the OGC standard via Web Map Service
    Technologies/APIs/libraries/algorithms: Not known

12. OLAP Analysis of Multidimensional Tweet Streams for Supporting Advanced Analytics [18]
    Database storage: Data warehouse (DFM-based schema)
    Experiment dataset/size: 2015 Italian election campaign (unknown data size)
    Visualization: No visualization
    Technologies/APIs/libraries/algorithms: Not known

13. BigDimETL with NoSQL Database [19]
    Database storage: NoSQL database
    Experiment dataset/size: Not clear
    Visualization: No visualization
    Technologies/APIs/libraries/algorithms: NoSQL, Hadoop, MapReduce, XML, JSON, CSV

14. Tools and Approaches for Topic Detection from Twitter Streams: Survey [20]
    Database storage: Not known
    Experiment dataset/size: FA Cup (74,00 tweets); US elections (620,000 tweets); Super Tuesday (230,000 tweets)
    Visualization: Matrix visualization using TMG
    Technologies/APIs/libraries/algorithms: Text to Matrix Generator (TMG) Matlab tool

15. Detection of Suicide-Related Posts in Twitter Data Streams [21]
    Database storage: Not mentioned
    Experiment dataset/size: Dataset 1: 5,446 tweets; Dataset 2: 11,000 tweets
    Visualization: Small graphs
    Technologies/APIs/libraries/algorithms: Not known


3. Some approaches neglect the meta tags while saving the data stream in the underlying storage. The meta tags carry detailed information related to the tweet text, for example the location, timestamp, and the numbers of retweets and likes, which can be considered for better analysis.

4. The approaches that use a data warehouse to store aggregated data have no mechanism to decide the dimensions and facts at run time. Such pruning could improve storage capacity and analysis computation time, and would allow a more targeted and meaningful analysis.

5. All of these techniques lack a one-window analysis framework. Different technologies and components are used in isolation to perform certain operations; for example, the data reading component is separate from the analysis component in most of the approaches, and in some studies the visualization component also works separately from the analysis component. A one-window framework would not only make the operations smooth but would also reduce computation time.

CONCLUSION

In this research study, the literature related to mining the Twitter data stream has been reviewed. It is found that approaches using file systems to store the data streams have greater computation time. Moreover, there is a need for a real-time data retrieval process to enable real-time analysis. Some approaches do not make use of the meta tags associated with tweets, and the approaches that do use this information do not prune the tags to ensure that only the required ones are stored and analyzed, which leads to storage and computation issues. Finally, there is a need for a one-window analysis framework in which all operations, namely data retrieval, storage, analysis and visualization, are possible, instead of working with different components in isolation.

REFERENCES

[1] X. Wu et al., “Data mining with big data,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 1, pp. 97–107, 2014.

[2] O. Maimon and L. Rokach, “Introduction to Knowledge Discovery and Data Mining,” in Data Mining and Knowledge Discovery Handbook, Boston, MA: Springer US, 2009, pp. 1–15.

[3] M. Usman, “Multi-level mining of association rules from warehouse schema,” Kuwait J. Sci., vol. 44, no. 1, 2017.


[4] M. Usman and M. Usman, “Multi-level mining and visualization of informative association rules,” J. Inf. Sci. Eng., vol. 32, no. 4, pp. 1061–1078, 2016.

[5] M. Usman and M. Usman, Predictive Analysis on Large Data for Actionable Knowledge: Emerging Research and Opportunities. IGI Global, 2018.

[6] A. Goritz et al., Analyzing Twitter Data: Advantages and Challenges in the Study of UN Climate Negotiations. SAGE Publications Ltd, 2019.

[7] S. Bringay et al., “Towards an on-line analysis of tweets processing,” in Int. Conf. Database and Expert Systems Applications, 2011, pp. 154–161.

[8] X. Liu et al., “SocialCube: A text cube framework for analyzing social media data,” in Int. Conf. Social Informatics, 2012, pp. 252–259.

[9] N. U. Rehman et al., “Building a data warehouse for Twitter stream exploration,” in IEEE/ACM Int. Conf. Advances in Social Networks Analysis and Mining, 2012, pp. 1341–1348.

[10] N. U. Rehman et al., “OLAPing social media: The case of Twitter,” in IEEE/ACM Int. Conf. Advances in Social Networks Analysis and Mining (ASONAM 2013), 2013, pp. 1139–1146.

[11] X. Liu et al., “A text cube approach to human, social and cultural behavior in the Twitter stream,” in Int. Conf. Social Computing, Behavioral-Cultural Modeling, and Prediction, 2013, pp. 321–330.

[12] S. Kumar et al., “A behavior analytics approach to identifying tweets from crisis regions,” in Proc. 25th ACM Conf. Hypertext and Social Media, 2014, pp. 255–260.

[13] H. M. Koupaie et al., “Outlier detection in stream data by clustering method,” Int. J. Adv. Comput. Sci. Inf. Technol., vol. 2, no. 3, pp. 25–34, 2013.

[14] M. Ben Kraiem et al., “Modeling and OLAPing social media: the case of Twitter,” Soc. Netw. Anal. Min., vol. 5, no. 1, p. 47, 2015.

[15] M. R. Murazza and A. Nurwidyantoro, “Cassandra and SQL database comparison for near real-time Twitter data warehouse,” in Int. Seminar Intelligent Technology and Its Applications (ISITIA), 2016, pp. 195–200.

[16] A. Sechelea et al., “Twitter data clustering and visualization,” in 23rd Int. Conf. Telecommunications (ICT), 2016, pp. 1–5.


[17] G. Bordogna et al., “Clustering geo-tagged tweets for advanced big data analytics,” in IEEE Int. Congr. Big Data (BigData Congress), 2016, pp. 42–51.

[18] A. Cuzzocrea et al., “OLAP analysis of multidimensional tweet streams for supporting advanced analytics,” in Proc. 31st Annual ACM Symposium on Applied Computing, 2016, pp. 992–999.

[19] H. Mallek et al., “BigDimETL with NoSQL database,” Procedia Comput. Sci., vol. 126, pp. 798–807, Jan. 2018.

[20] R. Ibrahim et al., “Tools and approaches for topic detection from Twitter streams: survey,” Knowl. Inf. Syst., vol. 54, no. 3, pp. 511–539, 2018.

[21] M. J. Vioulès et al., “Detection of suicide-related posts in Twitter data streams,” IBM J. Res. Dev., vol. 62, no. 1, pp. 7:1–7:12, 2018.


PJCIS (2018), Vol. 3, No. 1 : 41-50 Proposed Migration Approach


Proposed Migration Approach to Implement IPv6 Network over IPv4

JAWAD HUSSAIN AWAN*1, RAZA HUSSAIN SHAH1, SHAHZAD AHMED MEMON1, MUHAMMAD JURIAL SANGI2 AND RAHAT ALI KHAN1

1Institute of Information and Communication Technology, University of Sindh, Jamshoro
2Mechanical Department, Mehran University of Engineering & Technology, Jamshoro

*Corresponding Author’s email: [email protected]

Abstract

The modern field of edge computing is playing a vital role in the development of cloud and distributed computing services; therefore, fast and reliable access is necessary for such systems, services and tools. The current study aims to implement an Internet Protocol version 6 (IPv6) network. It is observed that while upgrading or transforming a network from Internet Protocol version 4 (IPv4) to IPv6, the network becomes hybrid due to the co-deployment of IPv4 and IPv6. Further, this research supports and enhances network compatibility between IPv4 and IPv6. Dual stack and tunneling mechanisms are also discussed in this study. Dual stack is one of the most successful mechanisms: it works in both IPv4 and IPv6 networks and defines a compatible platform and mechanism to run them in a single mode. It is observed that tunneling is a reliable and protected mechanism in comparison with other transition methods; although it is not appropriate for home users, it is suitable for data centres. Moreover, pools of IPv6 addresses have been successfully deployed at five departments in a Campus Area Network (CAN). The authors also discuss a design and specification model, which includes the mechanism of transformation from IPv4 to IPv6 and IPv6 addressing schemes. This paper proposes the Dual Stack Transition Mechanism (DSTM) and 6to4 tunneling as mechanisms for the IPv4-to-IPv6 transition.

Keywords: IPv4, IPv6, Cisco, Dual Stack, Tunneling, Networking

INTRODUCTION

Internet connectivity is as old as communication [1]. With the passage of time, communication has widened by providing vigorous, interoperable and reliable performance. The evolution of technology demands upgradation, advancement and development in communication systems. In this way, IPv6 has become a vital protocol for communication as compared to IPv4. This protocol has various advanced features such as Quality of Service (QoS); however, it has introduced security issues such as inherited IPv4 attacks and transition threats, as discussed in [2]. The digital wireless communication age has evolved very rapidly in terms of scale and services, and its growth has now reached its limit, as there are no more IPv4 addresses in the world. Therefore, to solve this problem, the IETF (Internet Engineering Task Force) designed the latest extended version of the Internet Protocol with more address space, called IPv6. It is also called IPng (Internet Protocol Next Generation). IPv6


is an enhanced successor of the IPv4 network layer protocol and resolves the problems of IPv4. However, IPv6 faces a backward-compatibility issue with IPv4. These issues led the existing internet (which is based on IPv4) to switch completely to Internet2 (based on IPv6). Hence, it has become crucial for every organization to implement the new generation of the Internet Protocol. Since 2010, public and private networks have started to shift to Internet2. Correspondingly, the CAN for a smart university is experiencing the same global technology shift and therefore needs to hook up with Internet2 by implementing IPv6. In this context, this research aims to provide solutions to the IP address exhaustion and global technology-shift problems (the IPv4-to-IPv6 change). It also aims to help achieve better network infrastructure management, with more reliability and more security. Transformation, upgradation and evolution of technology are complex, because they require a higher budget and skilled personnel to maintain, supervise and control, or else training for existing employees. The whole process deploys various techniques, approaches and tools for communication. Further, a few mechanisms, such as DSTM, support co-existence between IPv4 and IPv6, while others, like 6to4 point-to-point tunneling, allow IPv6 packets over existing IPv4 networks. This research covers both transition techniques. The evolution and improvement of technology demands better Internet address solutions and thus supports the deployment of IPv6 in the cyber age. The initial step is the deployment of IPv6, an advanced version of IPv4, and the transition of the internet itself, which is a huge task for a team to build and maintain across the world; the whole process takes several years [1]. Therefore, a co-existence mechanism is proposed as a meaningful choice for internet deployment.
The main aim of this research is to achieve the following: deployment of IPv6 on the CAN, provision of IPv4 and IPv6 co-existence on the CAN, and provision of IPv6-to-IPv6 communication over IPv4 networks. Smartphones also demand better QoS, while these devices lack the capability to change from voice-centric to multimedia-centric services. Software Defined Networking (SDN) is a dynamic and flexible architecture that tackles routing and handoff problems, and its unique nature has attracted researchers' attention for enhancing QoS in mobile computing [3].

BACKGROUND

The IP protocol is the essential communication protocol used to connect different nodes in a system. IPv6 is a widespread solution for Internet access, and serves an ever-expanding pool of supplementary users, devices and traffic services [4]. More than 200 million modern Internet services are offered every year, and those services are used by approximately 7.6 billion Internet users; all of them run over the IP protocol [5,6].

IPV4 ADDRESS EXHAUSTION

In 2011, IANA (Internet Assigned Numbers Authority) allocated the last IPv4 address blocks to the RIRs (Regional Internet Registries). Since then, no further IPv4 addresses have been available for any RIR: the unallocated pool of IPv4 addresses is completely exhausted. This final allocation, carried out on 3rd February 2011, was a critical point for the Internet; the remaining IPv4 addresses were handed out from IANA's fundamental pool, after which IANA was unable to provide new addresses to any of the five RIRs. APNIC (the Asia-Pacific Network Information Centre) exhausted its broadly usable IPv4 address pool in 2011. To mitigate this IP address exhaustion, the IETF planned to restructure IPv4 addressing and designed a new addressing style termed Variable Length Subnet Mask (VLSM).

VARIABLE LENGTH SUBNET MASK (VLSM)

VLSM assigns more than one subnet mask within a single IP network, producing subnets of different lengths at every subnetting level. While VLSM relieved IP address exhaustion, it created another problem: the increased number of subnets overflows the global routing tables of core routers on backbone networks. To solve this issue in turn, another technique was created: Classless Inter-Domain Routing (CIDR).
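The idea can be illustrated with a short sketch. Python's standard ipaddress module (an illustrative choice, not a tool used in this research, with example addresses) carves one IPv4 block into subnets of different mask lengths, which is the essence of VLSM:

```python
import ipaddress

block = ipaddress.ip_network("192.168.10.0/24")  # hypothetical example block

# Fixed-length subnetting would give every segment the same size; VLSM
# instead matches each mask to the segment's actual host count.
lan = ipaddress.ip_network("192.168.10.0/25")     # 126 usable hosts
lab = ipaddress.ip_network("192.168.10.128/26")   # 62 usable hosts
link = ipaddress.ip_network("192.168.10.192/30")  # 2 usable hosts (router link)

for net in (lan, lab, link):
    assert net.subnet_of(block)  # all three fit inside the original /24
    print(net, "usable hosts:", net.num_addresses - 2)
```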

CLASSLESS INTER-DOMAIN ROUTING (CIDR)

CIDR, also called supernetting, is an effective mechanism to counter IPv4 exhaustion as well as routing table overflow. Its main advantage is to permit route aggregation, or route summarization, of multiple subnets under a single supernet, which significantly decreases the routing information held at the fundamental level of the backbone. CIDR is an efficient technique to impede both the flood of IP address allocations and routing table overflow [7]. Internet systems developed in 1994 and 1995 lacked CIDR.
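Route aggregation itself is easy to demonstrate. In the same illustrative Python sketch (example addresses, not from the study), four contiguous /24 routes collapse into a single /22 supernet, shrinking the routing table from four entries to one:

```python
import ipaddress

# Four contiguous /24 networks advertised separately would occupy
# four routing-table entries.
routes = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]

# CIDR supernetting summarizes them into one aggregate route.
summary = list(ipaddress.collapse_addresses(routes))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```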

NETWORK ADDRESS TRANSLATION (NAT)

NAT is an efficient and fundamental routing function whose job is to translate network addresses from one network to another. Port Address Translation (PAT) has particular importance within NAT technology, because PAT can translate various private IP addresses into a single public one. However, NAT breaks IP's end-to-end model, and application compatibility, security problems and address-space collision are emerging challenges for it [8]. Renumbering, or twice-NAT, is a solution to the above issues; however, these approaches are expensive and increase the complexity of NAT.
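A toy model makes the PAT idea concrete. The sketch below (a hypothetical PatTable class with example addresses and an arbitrary port range, not code from the paper) multiplexes many private (address, port) flows onto one public IP by rewriting the source port, and keeps the reverse mapping needed to deliver replies:

```python
PUBLIC_IP = "203.0.113.1"  # hypothetical public address of the NAT gateway

class PatTable:
    """Minimal Port Address Translation table: many private flows, one IP."""

    def __init__(self):
        self.next_port = 40000  # arbitrary start of the public port range
        self.out = {}           # (private_ip, private_port) -> public_port
        self.back = {}          # public_port -> (private_ip, private_port)

    def translate_out(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out:  # allocate a fresh public port per flow
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out[key]

    def translate_in(self, public_port):
        return self.back[public_port]  # route the reply to the right host

nat = PatTable()
print(nat.translate_out("192.168.0.5", 5000))  # ('203.0.113.1', 40000)
print(nat.translate_out("192.168.0.7", 5000))  # ('203.0.113.1', 40001)
print(nat.translate_in(40001))                 # ('192.168.0.7', 5000)
```

This also makes NAT's end-to-end breakage visible: the remote peer only ever sees the shared public address, so inbound connections cannot name a private host without an existing table entry.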

Problem Statement

The present international share of IPv6 remains low during its development, implementation and promotion phases. The research community has urged the need to understand the reasons for the slow adoption of IPv6; it is also unclear which factors can drive migration from IPv4 to IPv6. To address these issues, the authors propose a design and specification model that includes the mechanism of transformation from IPv4 to IPv6, IPv6 addressing schemes, the Dual Stack Transition Mechanism (DSTM) and the 6 to 4 tunneling transition mechanism.


Proposed Approach to Migrate from IPv4 to IPv6

We propose a design approach for adopting IPv6 in place of IPv4 to improve communication traffic among networks. This section covers tools and techniques, specification and design, and IPv6 addressing schemes.

Tools and Techniques

This section comprises the tools, Cisco IOS, and the IPv4-to-IPv6 transition techniques, with sub-sections on developing an IPv6-based network to solve IPv4-related issues.

Tools

The University of Sindh Campus Network provides dynamic host addressing, Domain Name Service, a web server and some other services. These network services are used as supporting tools for this research.

CISCO IOS

Cisco Internetwork Operating System (IOS) is the network operating system used on Cisco Systems network routers and switches [9]. IOS is a complete package of routing, switching and internetworking functions, all integrated into a real-time operating system.

The specification for the proposed research requires a Cisco Systems platform running IOS Software major release 12.3, release number 12.3(26), on the 2501-2525 platform, with the IP Plus feature set and image name c2500-is-l.123.26.bin. A minimum of 16 MB of Dynamic Random Access Memory (DRAM) and Flash is required, while the feature set used in the proposed research includes: Authentication, Authorization and Accounting (AAA), Access Control Lists (ACL), Address Resolution Protocol (ARP), Asynchronous Transfer Mode (ATM), Border Gateway Protocol (BGP), Cisco Discovery Protocol (CDP), Dynamic Host Configuration Protocol (DHCP), Domain Name Servers (DNS), Enhanced Interior Gateway Routing Protocol (EIGRP), Frame Relay, File Transfer Protocol (FTP), Hyper Text Transfer Protocol (HTTP) web server, HTTP security, IP Service Level Agreements (IP SLAs), Internet Protocol Security (IPSec), Internet Protocol version 6 (IPv6), Integrated Services Digital Network (ISDN), Layer 2 Forwarding with fast switching, Quality of Service (QoS), Network Time Protocol (NTP), Next Hop Resolution Protocol (NHRP), Open Shortest Path First (OSPF), Password Authentication Protocol (PAP), Protocol-Independent Multicast (PIM) versions 1 and 2, Policy-Based Routing (PBR), Point-to-Point Protocol (PPP), RADIUS, Response Time Reporter (RTR), Routing Information Protocol (RIP), Resource Reservation Protocol (RSVP), Real-Time Transport Protocol (RTP), Simple Network Management Protocol (SNMP), Spanning Tree Protocol (STP), Virtual Private Dial-up Network (VPDN) and X.25 [10].


IPv4-TO-IPv6 TRANSITION TECHNIQUES

IPv6 is the advanced, successive version of IPv4, but it is not backward-compatible with IPv4. Backward compatibility would have allowed a gradual transition; in its absence, IPv6 must be treated as a functionally rich alternative to IPv4 rather than a drop-in replacement, even though recent devices and networking tools deploy IPv6 in communication networks. IPv4 and IPv6 are divergent, dissimilar communication protocols.

As a consequence of this incompatibility, and the absence of an automated translator between the two protocols, maintaining inclusive any-to-any connectivity during the transition requires each device to run both the IPv4 and IPv6 protocol stacks. This approach makes the device "bilingual": it mediates between IPv4 and IPv6 to provide effective communication. The whole process is termed dual-stack transition. In this research, the following two mechanisms are proposed for the IPv4-to-IPv6 transition:

1. DSTM
2. 6 to 4 Tunneling

DSTM

DSTM is a basic, fundamental migration strategy. This approach allows tools and equipment to communicate in both IPv4 and IPv6 modes. Dual stacking upgrades the network devices and programs once, which makes it easy to use and deploy; however, all connected hosts and devices must be upgraded [11]. Normally, communication prefers the newer protocol version, which improves Internet access. The implementation of DSTM is shown in Figure 1 and Figure 2.

6 to 4 Tunneling

6 to 4 tunneling is another interesting and supportive mechanism for carrying IPv6 data over an existing IPv4 network. It makes it practical to run IPv6 subnets while still allowing all networks to communicate with each other. The mechanism is not complicated, but it requires a Wide Area Network (WAN) [12]. In this approach, host devices sit on an IPv6 network behind a connected router; when that router communicates with another router, the IPv6 traffic is tunneled inside IPv4, as shown in Figure 1, where the Physics and Chemistry departments are taken as an example to illustrate the implementation, while the complete architecture is illustrated in Figure 2.
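The defining trick of 6to4 is that a site's IPv6 prefix is derived mechanically from its public IPv4 address: the reserved 2002::/16 prefix followed by the 32-bit IPv4 address yields the site's /48. A small sketch (Python as an illustrative language; the address is a documentation example, not one from the paper):

```python
import ipaddress

def six_to_four_prefix(ipv4: str) -> str:
    """Derive a site's 6to4 /48: the 2002::/16 prefix with the 32-bit
    IPv4 address embedded in bits 16-47 of the IPv6 prefix."""
    a, b, c, d = (int(x) for x in ipv4.split("."))
    net = ipaddress.ip_network(f"2002:{a:02x}{b:02x}:{c:02x}{d:02x}::/48")
    return net.compressed

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```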


Figure 1: IPv6 to IPv4 Tunneling

Figure 2: Overall Network Map of CAN


SPECIFICATION & DESIGN

This section includes the CISCO three-layer network model (core layer, distribution layer and access layer), as explained below:

The CISCO Three-Layer Network Model

CISCO is a networking company that defined a three-layered network model. This model is widely preferred by network professionals for the design, implementation and maintenance of a scalable, consistent, inexpensive network. The core layer, distribution layer and access layer are its three main layers, as shown in Figure 2 [13]. These layers are logical rather than physical, and each performs a specific function; a physical implementation may place a device's functions in a single layer or across two. "Keep local traffic local" has become a famous networking principle due to the merit of this concept.

The Core Layer

The Core layer is the central part and soul of the network. This layer is responsible for transporting hefty amounts of traffic both efficiently and quickly; its main purpose is to switch traffic rapidly, while the distribution layer processes user data and forwards requests to the core when required. Fault tolerance is a particular challenge at this layer. The specifications of the core layer are:

1. Design a high-reliability core.
2. Routing information for Virtual Local Area Networks (VLANs), access lists and packet filtration are handled here.
3. Use protocols with lower convergence times.

The Distribution Layer

The Distribution layer is the middleware, and the most important point, between the access layer and the core layer. Its key role is to provide network services such as routing, filtering and WAN/CAN access. It determines how packets move between networks and can reach the core when required; network-wide policies may be implemented at this layer.

The Distribution layer includes the following mechanisms to achieve reliable results:

1. Implementation of tools such as access lists, packet filtration and queuing.
2. Adoption of security and network policies such as firewalls and address translation.
3. Redistribution of routes between static and dynamic routing protocols.
4. Routing of data between VLANs.


The Access Layer

This layer controls user and workgroup access to network resources and is also referred to as the desktop layer. Network resources are defined and provided at the local level so that users can access them easily. Its functions include:

1. Continued access control and policy enforcement from the distribution layer.
2. Segmentation into separate collision domains.
3. Connection of workgroups to the distribution layer.

IPV6 ADDRESSING SCHEME

In this section, multiple CAN scenarios are considered, using results from the University of Sindh as a sample. Every department is allocated a separate pool of IPv6 addresses, with one pool for the Network Operation Centre (NOC). The address schemes below were allocated to different departments for testing purposes; there is therefore no fixed limit for the start and end of a scheme, and the tested addressing schemes are listed in Table 1. For practical use, an IPv6 chart is available [14].

Table 1: Address Allocation Details of Different IPv6 Pools

Location   Pool no.  Network Prefix    Length  Starting IPv6     Ending IPv6
IICT       11        2001:DB8:0:11::   64      2001:DB8:0:11::   2001:DB8:0:11:FFFF:FFFF:FFFF:FFFF
IMCS       22        2001:DB8:0:22::   64      2001:DB8:0:22::   2001:DB8:0:22:FFFF:FFFF:FFFF:FFFF
Chemistry  33        2001:DB8:0:33::   64      2001:DB8:0:33::   2001:DB8:0:33:FFFF:FFFF:FFFF:FFFF
Physics    44        2001:DB8:0:44::   64      2001:DB8:0:44::   2001:DB8:0:44:FFFF:FFFF:FFFF:FFFF
NOC        100       2001:DB8:0:100::  64      2001:DB8:0:100::  2001:DB8:0:100:FFFF:FFFF:FFFF:FFFF
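The Starting and Ending columns of Table 1 follow directly from the /64 prefix length: each pool spans 2^64 interface addresses, bounded by the network's first and last addresses. A quick illustrative check with Python's ipaddress module (not a tool used in the study) for two of the pools:

```python
import ipaddress

# Two of the department pools from Table 1.
pools = {"IICT": "2001:DB8:0:11::/64", "Physics": "2001:DB8:0:44::/64"}

for name, prefix in pools.items():
    net = ipaddress.ip_network(prefix)
    # net[0] and net[-1] reproduce the Starting and Ending IPv6 columns
    # (ipaddress prints them in lowercase canonical form).
    print(name, net[0], net[-1])
```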

CONCLUSION

IPv6 is an extended, successive version of the Internet Protocol. The rapid growth of Internet technology made it evident that far more addresses would be required to connect devices than the IPv4 address space offers. This advancement introduced modern features and functionality into network technology to overcome the limitations of IPv4. The combined arrangement of CIDR and NAT supports the network by delaying the collapse of the IPv4 address space, although the NAT mechanism breaks the end-to-end model of the Internet Protocol and therefore has limitations for some protocols. In the migration stage of technology, a transition mechanism is one of the best solutions, as it provides a compatible platform to run an IPv6 network over the same infrastructure as IPv4 networks; other mechanisms and approaches have been developed and designed as required by organizations. This paper has discussed two mechanisms: Dual Stack and 6 to 4 Tunneling. These mechanisms have distinct features in their defined infrastructures. Dual Stack transition is the most preferable mechanism in networking and communication, and is appropriate for Internet Service Providers (ISPs), home users and enterprise network users. Tunneling is reliable and protected in comparison with other transition methods; it is not appropriate for home users but suits data centres. In conclusion, the transition methods resolve the issues of future network development and should be selected according to infrastructure requirements. With the evolution of technology, communication has widened and has encountered interoperability and reliability issues; IPv6, rather than IPv4, is used to address them. This continuing research can attract the attention of researchers and academicians toward providing QoS, security and flexible networks.

ACKNOWLEDGMENT

We are thankful to the Director, Institute of Information & Communication Technology, University of Sindh, Jamshoro, Pakistan for providing the Information Technology laboratory and digital library access facility to accomplish this research work.

REFERENCES

[1] Huston, G., "A Rough Guide to Address Exhaustion," The Internet Protocol Journal, vol. 14, no. 1, pp. 2-11, Indiana, USA, 2011 [Online]. Available: https://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-51/141-address.html

[2] Internet World Statistics, "Top 20 countries with the highest number of internet users," Miniwatts Marketing Group, Sayre, Africa, 2017.

[3] Amoss, J. J., and Minoli, D., Handbook of IPv4 to IPv6 Transition: Methodologies for Institutional and Corporate Networks, New York: Auerbach, 2007.

[4] Bruno, A., and Ciesielczyk, J. M., "Method and system for gateway selection in inter-region communication on IP networks," US Patent US20070011351A1, Jan 11, 2007.

[5] Barayuga, V. J. D., and Yu, W. E. S., "Packet Level TCP Performance of NAT44, NAT64 and IPv6 Using Iperf in the Context of IPv6 Migration," in 5th Int. Conf. IT Convergence and Security (ICITCS), Kuala Lumpur, Malaysia, 2015.

[6] Desmeules, R., Cisco Self-Study: Implementing Cisco IPv6 Networks (IPv6), Cisco Press, Indiana, USA, 2003.

[7] Fairhurst, G., and Jones, T., "Transport Features of the User Datagram Protocol (UDP)," Internet Society and The Internet Engineering Task Force, University of Aberdeen, Aberdeen, UK, 2018.


[8] Tahir, H. M., et al., "Implementation of IPv4 Over IPv6 using Dual Stack Transition Mechanism (DSTM) on 6iNet," in 2nd Int. Conf. Information Communication Technologies, Damascus, Syria, 2006.

[9] Al-Azzawy, D. S., and Neamah, M. R., "Comparative Analysis Study on Network Performance of VoIP and Data Traffic Pattern using Queuing Algorithms in Wired Network Environment: An Implementation using OMNeT++ Simulator," Ind. J. Sci. Technol., vol. 10, no. 36, September 2017.

[10] McMillan, T., Cisco Networking Essentials, Wiley, Canada, August 2015.

[11] RIPE Network Coordination Centre, "IPv6 chart," RIPE NCC, Amsterdam, 2015.

[12] Tsetsi, E., and Rains, S. A., "Smartphone Internet access and use: Extending the digital divide and usage gap," Mobile Media Commun., vol. 5, no. 3, pp. 239-255, London, UK, 2017.

[13] Shiranzaei, A., and Khan, R. Z., "IPv6 Security Issues—A Systematic Review," in Advances in Intelligent Systems and Computing, D. Lobiyal et al., Eds., vol. 638, pp. 41-49, Springer, Singapore, 2018.

[14] Yap, K. L., and Chong, Y. W., "Software-Defined Networking Techniques to Improve Mobile Network Connectivity: Technical Review," IETE Technical Review, pp. 1-13, London, UK, 2017.


PJCIS (2018), Vol. 3, No. 1: 51-64 Real Time Face Recognition


Real Time Face Recognition and Detection for Industrial Application

MUHAMMAD USMAN GHANI KHAN1,2*, ALI FAROOQ1, OMER IRSHAD1,2

1Al-Khawarizmi Institute of Computer Sciences
2Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan

*Corresponding author's e-mail: [email protected]

Abstract

In the recent era, face recognition has become one of the most important applications of image understanding and analysis, as is evident from recent research. Computer vision, pattern recognition and biometrics are emerging fields for researchers, with a particular focus on human recognition. Face recognition is an important factor in problems such as robotics, security and attendance systems. This paper concentrates on the industrial utility of computer vision techniques, namely a face detection and recognition module. The present study mainly focuses on a Real Time Face Recognition System (RTFRS) and its applications. This scenario provides the opportunity to treat the face as a biometric identifier, i.e. information that identifies a person or other biological organism, and to examine how this identifier can serve different services.

Keywords: Real time face recognition; Face detection; Computer vision technology; Pattern recognition; Artificial intelligence

INTRODUCTION

Humans have recognized each other by characteristics such as the eyes, face, fingerprints, voice and gait since ancient times. Since the emergence of computer science, replicating human perception has been one of the fastest-growing tasks, with researchers and scientists working to recreate human intelligence; this generation is known as the generation of Artificial Intelligence. A lot of work has been done to enable human interaction with electronic devices and to make those devices sensitive enough to respond back to humans. This area of artificially intelligent systems allows us to create computer-aided security systems [1]. Biometric systems are basically automated systems for detecting a human's presence, on the basis of behavior and physiological traits [2]. As described earlier, this paper is divided into two parts: first face detection and recognition, and then its application.

The real time face detection and recognition [3] module takes a raw image as input; face detection is then performed using a Haar cascade classifier [4], and the extracted facial coordinates are sent to the pre-processing module, where the orientation of the image is processed. Finally it is sent to the face recognition module, which is linked with the database and returns the identity of the sample, as shown in Figure 1. This methodology is explained in detail in the forthcoming sections.

Figure 1: Face Detection and Recognition System

Literature Survey

Wiskott et al. [5] presented a human face recognition system using labeled graph comparison. The system produced graphs of concise facial descriptions in which descriptive fiducial points (eyes, mouth, nose) were represented by groups of wavelet components. A bunch graph was constructed from sample graph images, and recognition was done through straightforward comparison. The system was tested on the FERET database along with the self-generated Bochum database and achieved satisfactory results.

Ahonen et al. [6] proposed a novel face recognition approach in which images were split into small patches that were then used to compute Local Binary Patterns (LBP) describing texture information. Recognition was achieved using a nearest-neighbour classifier on the extracted features. The system claimed 79% accuracy, more than state-of-the-art approaches such as PCA, BIC and EBGM on the FERET database.
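The basic LBP operator is simple enough to sketch in a few lines (NumPy here is purely illustrative, not the implementation of [6]): each of the 8 neighbours of a pixel is thresholded against the centre value and the resulting bits are read as an 8-bit code.

```python
import numpy as np

def lbp_pixel(patch):
    """LBP code for the centre of a 3x3 patch: threshold the 8 neighbours
    against the centre value and pack the bits clockwise from top-left."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbors))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_pixel(patch))  # 241: the bright left/bottom neighbours set bits 0,4,5,6,7
```

A histogram of these codes over each image patch then gives the texture descriptor used for matching.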

Zhu and Ramanan [7] used pose and landmark estimation to implement a face detection system. They used tree-structured models, which are more convenient to optimize than dense graph structures. On a self-collected, annotated dataset they achieved 76.7% accuracy in detecting frontal (-45 to 45 degrees) and partially turned faces.


Yang et al. [8] presented a two-dimensional Principal Component Analysis (2D-PCA) model for image representation. The model works directly on two-dimensional image matrices for feature extraction: a covariance matrix is established directly from the actual image matrices, and its eigenvectors are extracted for feature computation. The proposed system was evaluated on three different face datasets (ORL, AR and Yale), and it was claimed that face recognition throughput was higher with 2D-PCA than with PCA.

Wright et al. [9] presented an algorithm for recognizing potential objects in a scene image. The algorithm asserts that feature selection can be improved by exploiting sparsity in the facial identification problem. Unusual representations such as resized photographs and random projections were extracted by choosing an accurate threshold computed on the basis of sparse representation theory. The framework handles occluded regions and corruption quite efficiently; experiments on publicly accessible databases achieved 95% accuracy.

He et al. [10] proposed a face recognition method based on appearance features employing Locality Preserving Projections (LPP), naming it the Laplacian-face approach. They generated a face subspace from face images for further analysis; LPP keeps local features unchanged and obtains a face subspace that deciphers the manifold structure. A comparison among the Laplacian-face, Fisherface and Eigenface methods on three face datasets showed better accuracy and lower recognition errors.

Tan and Triggs [11] proposed a combination of methodologies for reliable face recognition under unconstrained illumination conditions. They combined illumination normalization, filter-based feature computation, multiple-feature merging, texture-oriented facial features and distance-transform-based matching. In particular, they introduced local ternary patterns (LTP), principal component analysis (PCA) for feature computation, and Gabor-wavelet-based local representation cues, showing that any feature set alone is less accurate than the combination. The resulting system achieved state-of-the-art results on three distinct datasets, reaching a facial verification rate of 88.1%.

Neto et al. [12] used a Kinect sensor to acquire RGB-D images; Histograms of Oriented Gradients (HOG) and Principal Component Analysis (PCA) were then applied for face recognition, combined with a K-NN classifier using the Euclidean distance metric, and finally 3-D audio was rendered at the detected position. Using a Kinect-based dataset, the developed system achieved 94.26% accuracy.

Kumar et al. [13] detected faces from a live camera, extracted features using OpenCV and passed them to a neural network to train the model. After summarization, the features were stored in the database. The developed system was controlled by a cell phone, and 90.0% accuracy was achieved.


Rehman et al. [14] captured an image, applied preprocessing to it and employed a Bayesian classifier for face-skin detection. Histograms were calculated using the Gray Level Co-occurrence Matrix (GLCM), and features were extracted with Haralick descriptors and Local Binary Patterns (LBP); 85.39% accuracy in ethnicity classification was achieved.

Khan et al. [15] fed the captured image into a framework, applied pre-processing and then extracted facial features and facial landmark points. The optical flow of these feature points was tracked and passed to a classifier to detect the facial expression. Using a self-generated dataset, 93.3% accuracy was achieved.

Ding and Tao [17] recognized faces even from blurred video streams. The face was first detected, and its image was then restored to the original dimensions using the Trunk-Branch Ensemble CNN (TBE-CNN) model. The video-based face recognition (VFR) dataset was used, and 94.96% accuracy was achieved.

Methodology

A series of steps is followed in real time face recognition and detection. The system takes as input a simple raw image of any type (.png, .jpg). Haar classification is applied to the image to detect the eyes and mouth, returning a reduced image of the sample. This reduced image is then passed to the pre-processing module, where rescaling, resizing and bit reduction (conversion to gray-scale) are performed. Afterwards the image is passed to the face recognition module, where different algorithms are employed (e.g. Eigenfaces, Fisherfaces, LBPH and CNN), and the sample is identified, as shown in Figure 2.

Figure 2: System Implementation


Face Identification using Haar Classification: Viola and Jones proposed the method of Haar-like features [13]. It sums up the intensities of neighbouring rectangular areas at a specific location and uses the variation between the sums to categorically divide the image into sub-sections. Rectangular features of an image are calculated via an intermediate representation called the integral image. For face detection, the local area of the eyes is darker than the rest of the face, and the same is true for the lips [5]. If the original image is f(x, y), then the integral image fI(x, y) can be computed as shown below (Equation 1):

fI(x, y) = Σ_{x' ≤ x, y' ≤ y} f(x', y')      (1)
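Equation 1 is what makes Haar features cheap to evaluate: once the integral image is built (two cumulative sums), the sum over any rectangle costs only a few lookups. A minimal NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def integral_image(img):
    """f_I(x, y): sum of all pixels at or above-left of (x, y),
    computed as a cumulative sum along each axis."""
    return img.cumsum(axis=0).cumsum(axis=1)

img = np.array([[1, 2],
                [3, 4]])
ii = integral_image(img)
print(ii)          # [[ 1  3]
                   #  [ 4 10]]
print(ii[-1, -1])  # 10: the sum of the whole image in a single lookup
```

A Haar feature is then the difference between the sums of adjacent rectangles, each obtained from four such lookups.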

Face Recognition Module: The prerequisite for face recognition is face detection. The detected face is sent to the face recognition model and its identity is matched against the database; if the sample is identified, its result is shown, otherwise the sample is ignored and the system waits for a new one, as shown in Figure 3. As discussed earlier, Haar cascades are applied for face detection. For validation, the following points are followed in order:

1. The ROI of the image is set for face detection.
2. Eyes and mouth are detected using the eye cascade and mouth cascade.
3. Any mouth detected in the upper portion of the face is discarded.
4. If more than one mouth is detected, only the largest is kept.
5. Any eye detected in the lower portion of the face is discarded.
6. If more than two eyes are detected, the pair with the largest distance between them along the x-axis (the left-most and right-most eye rectangles) is kept, and the other eye rectangles are discarded.
7. At this point, all mouth rectangles beyond one and all eye rectangles beyond two have been discarded. If fewer than two eyes or no mouth remain, the face is discarded.
8. The above steps are repeated for all detected faces.
9. After that, the weight patterns or eigenfaces are updated (both optional).
10. If the same face has appeared more than x times, its features are calculated and it is added to the face space (optional).
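The geometric rules above can be expressed as a small filter. The sketch below is hypothetical: it assumes each detection arrives as an (x, y, w, h) rectangle in face-ROI coordinates with y growing downward, as OpenCV cascades return them.

```python
# Hypothetical sketch of validation rules 3-7 above; names and the
# rectangle convention (x, y, w, h) are illustrative assumptions.

def validate_face(face_h, eyes, mouths):
    """Apply the eye/mouth consistency checks; return (eyes, mouth) or None."""
    mid = face_h / 2
    # Rules 3 and 5: mouths must lie in the lower half, eyes in the upper half.
    mouths = [m for m in mouths if m[1] >= mid]
    eyes = [e for e in eyes if e[1] < mid]
    # Rule 4: keep only the largest mouth.
    mouths = sorted(mouths, key=lambda m: m[2] * m[3], reverse=True)[:1]
    # Rule 6: keep the two eyes farthest apart along the x-axis.
    if len(eyes) > 2:
        eyes = [min(eyes, key=lambda e: e[0]), max(eyes, key=lambda e: e[0])]
    # Rule 7: a valid face needs exactly two eyes and one mouth.
    if len(eyes) == 2 and len(mouths) == 1:
        return eyes, mouths[0]
    return None
```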


PJCIS (2018), Vol. 3, No. 1: 51-64 Real Time Face Recognition


Figure 3: Face Detection Module

Eigen Faces Algorithm: A face is mainly composed of the following feature set: eyes, nose, hair and lips. In information-theoretic terms, we need to extract the pertinent information in a face image and then judge it against the local database we hold [18]. The face recognition method consists of the following initialization steps:

1. The set of face images for training is acquired.
2. The eigenfaces are computed on the basis of the training images; only the M images corresponding to the largest eigenvalues are kept. These representative images define the face space. As new faces are experienced, the eigenfaces can be updated or recalculated.


3. The corresponding distribution in M-dimensional weight space is computed for every known individual by projecting their face images onto the face space. A set of weights for the input frame is calculated by projecting it onto each of the M eigenfaces.
4. Whether the input image contains a face at all (known or unknown) is determined by checking whether the image lies sufficiently close to the face space.
5. The weight pattern is classified to determine whether it is a face, and whether it belongs to a known or an unknown person.
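Steps 1-3 above amount to PCA over the training images. A minimal numpy sketch, using the small n x n matrix trick rather than the authors' implementation:

```python
import numpy as np

def eigenfaces(train, M):
    """train: (n_images, n_pixels). Returns the mean face and the top-M
    eigenfaces, via eigenvectors of the small n x n matrix A A^T."""
    mean = train.mean(axis=0)
    A = train - mean                        # centered images, shape (n, p)
    vals, vecs = np.linalg.eigh(A @ A.T)    # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1][:M]      # keep the M largest
    faces = (A.T @ vecs[:, order]).T        # map back to pixel space, (M, p)
    faces /= np.linalg.norm(faces, axis=1, keepdims=True)
    return mean, faces

def project(mean, faces, image):
    """Weight vector of an image in face space (step 3)."""
    return faces @ (image - mean)
```

Recognition then reduces to nearest-neighbour comparison of weight vectors in the M-dimensional face space.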

Fisher Faces Algorithm (FLD): The Fisher Linear Discriminant, also well known as Linear Discriminant Analysis (LDA) [19], separates different classes with a large between-class scatter and a small within-class scatter [18], resulting in very compact and well-defined clusters. FLD underlies many face recognition methods, such as eigenfaces [20], [21], discriminant analysis [22] and a robust coding scheme [23]. Most techniques use PCA to reduce the dimensionality of the high-dimensional image space of the original input images; an FLD transformation then extracts the most discriminant feature (MDF) space for classification. Its drawback is that it needs a large training set for proper generalization: face recognition datasets are generally large overall but contain few images per class, and artificial generation of data is one way out.
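For intuition, the two-class FLD direction can be computed directly from the scatter matrices. This is a generic illustration of the criterion, not code from the paper:

```python
import numpy as np

def fld_direction(X1, X2):
    """Two-class Fisher discriminant: the direction w that maximizes
    between-class scatter relative to within-class scatter,
    given in closed form by w = Sw^-1 (m1 - m2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the two class scatter matrices.
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1) +
          np.cov(X2, rowvar=False) * (len(X2) - 1))
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```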

Local Binary Patterns Histograms (LBPH): On the whole, Fisher-faces and eigenfaces use a mathematical elaboration of the most prominent features of the training set, whereas LBPH examines each face in the training data independently. In LBPH every image is analysed separately, while eigenfaces computes features over the complete dataset at once. The LBPH algorithm is somewhat simpler: every image in the dataset is characterized locally, and when a new unknown image is given, the same analysis is performed on it and the results are compared against the dataset. This is achieved by localizing the image patterns at every location of the image.
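The local patterns that LBPH builds on can be sketched with the basic 3x3 LBP operator: each pixel becomes an 8-bit code of neighbour-versus-centre comparisons, and the image is summarized as a histogram of those codes. A generic numpy illustration, not the authors' implementation:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 LBP: compare the 8 neighbours of each pixel against the
    centre, pack the results into an 8-bit code, and return the normalized
    256-bin histogram of codes over the image."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

In practice LBPH computes such histograms per grid cell and concatenates them, so matching compares local texture region by region.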

Convolutional Neural Network: The face recognition system includes local image sampling, passing these samples to self-organizing maps (SOM) for dimensionality reduction, and then feeding the reduced set of feature maps to a convolutional neural network for training [24]. Convolutional network architectures make the explicit assumption that the inputs are images. CNNs take advantage of this over ordinary neural networks: since the input consists of images, the architecture can be constrained in a more sensible way. In particular, unlike a regular neural network, the layers of a convolutional neural network have neurons arranged in three dimensions: height, width and depth, where depth can be taken as the number of stacked layers in a given network. The proposed convolutional neural network architecture comprises five layers, applied repeatedly during training:

I. INPUT: raw pixels of the images.
II. CONV: output from neurons connected to local regions of the input.
III. RELU: element-wise activation function, e.g. thresholding at zero.


IV. POOL: down-sampling along the spatial dimensions.
V. FC (Fully Connected): computes class scores over the number of classes.
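The five layer types can be sketched as one numpy forward pass. Loops are kept deliberately simple for clarity; this is an illustration of the layer semantics, not the trained network from the paper:

```python
import numpy as np

def conv2d(x, k):
    """CONV: valid cross-correlation of a 2-D input with one kernel."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def relu(x):
    """RELU: element-wise thresholding at zero."""
    return np.maximum(x, 0)

def max_pool(x, s=2):
    """POOL: s x s max down-sampling along the spatial dimensions."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def fc_scores(x, W, b):
    """FC: class scores from the flattened feature map."""
    return W @ x.ravel() + b
```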

Our face recognition system based on convolutional neural networks performs the following steps.

1. The face images are taken as input for face detection and validation, as discussed earlier.
2. They are resized to 256 x 256.
3. A 5 x 5 window with a shift of 4 is passed over them, giving 64 x 64, i.e. a total of 4096, 5 x 5 matrices.
4. These 5 x 5 matrices are then flattened before passing through the SOM, so a three-dimensional array of 64 x 64 x 25 is generated for every image.
5. Each row of 25 samples is assigned a label by the SOM; the image is thus reduced from 256 x 256 to 64 x 64.
6. The results are then passed to the convolutional neural network for training.
7. As a comparison against dimensionality reduction, the CNN is also trained on the images directly, i.e. without dimensionality reduction.
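The window sampling in step 3 can be sketched as follows. The paper does not state its border handling, so the edge padding here is an assumption (a 5-pixel window at stride 4 overruns a 256-pixel axis by one pixel):

```python
import numpy as np

def sample_windows(img, win=5, step=4):
    """Slide a win x win window with the given step and flatten each patch.
    For a 256 x 256 image this yields a (64, 64, 25) array, matching
    steps 3-4 above; edge padding is an assumption, not from the paper."""
    n = img.shape[0] // step                 # 256 // 4 = 64 windows per axis
    need = (n - 1) * step + win              # last window start + window size
    pad = need - img.shape[0]                # 1 extra pixel for 256 -> 257
    padded = np.pad(img, ((0, pad), (0, pad)), mode="edge")
    out = np.empty((n, n, win * win))
    for i in range(n):
        for j in range(n):
            y, x = i * step, j * step
            out[i, j] = padded[y:y + win, x:x + win].ravel()
    return out
```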

For implementation, an online attendance system was created using LBPH as described below. The graphical user interface includes two Windows Forms: Admin and Client.

The Admin module provides an interface for initial configuration and captures images from either of the two IP cameras (front and rear) to obtain samples of a person. The interface of the admin module is shown in Figure 4. These samples are then used to train the system for face recognition. The name and designation of a person are also required when entering samples into the database. Another part of the admin module is reporting: the application can generate attendance reports, including customized reports based on name, date and designation.

The Client interface shows the visual feed of the front and rear cameras. Every 6th frame of the live video is processed for face detection. The interface of the client module is shown in Figure 5. If a face is detected, the face recognition API is called on the image. The captured image is resized and grayscaled before being sent for recognition, to obtain better results. To improve accuracy, attendance is marked only if the same person is recognized in 5-6 consecutive frames. The system generates different types of reports.
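The every-6th-frame and consecutive-recognition rules might be implemented as below; the frame source, the `recognize()` callback and the exact streak threshold are illustrative assumptions, not the system's actual code:

```python
# Hypothetical sketch of the client-side marking rule: process every 6th
# frame, and mark attendance only once the same person has been recognized
# in 5 consecutive processed frames.

FRAME_SKIP = 6
STREAK_NEEDED = 5

def mark_attendance(frames, recognize):
    """frames: iterable of frames; recognize: frame -> person id or None."""
    marked, streak, last = set(), 0, None
    for i, frame in enumerate(frames):
        if i % FRAME_SKIP:
            continue                          # only every 6th frame is processed
        person = recognize(frame)
        if person is not None and person == last:
            streak += 1                       # same person again: extend streak
        else:
            streak = 1 if person is not None else 0
        last = person
        if person is not None and streak >= STREAK_NEEDED and person not in marked:
            marked.add(person)
    return marked
```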


Figure 4: Admin Interface

Figure 5: Client Interface


Attendance Report: The attendance report is generated using Crystal Reports, as shown in Figure 6. Crystal Reports uses formulas to display results based on a query; here it joins two database tables and displays each person's name against the login and logout details. Multiple filters, provided in the admin interface, can be applied to the reports: one can retrieve the report of all persons from the start date until today (all entries in the database), or filter by person name, designation and date. Reports can be printed as well as saved for future reference.

Figure 6: Attendance Report

Database: An Amazon SQL Server database is currently used by the software to maintain the attendance record, personal data and image paths, as well as the initial configuration of the software. The architecture of the database is shown in Figure 7. The database is located in cloud storage, so an internet connection is needed to use the software. Twenty GB of space is allocated for this database on the Amazon server.

Figure 7: Data schema


Working: For initial configuration, the admin module has to be run. The path for storing images and training data is selected. After the initial configuration, the admin module's interface appears. The <Capture> button starts the live view from the camera. When a person's face is detected, a red rectangle is drawn around it. The <Take Picture> button captures the image of that person. After entering the name and designation of the person, the <Submit> button is clicked to submit the record. In the same way, 5 samples may be taken. The <Train> button is then clicked to train the system with the captured images. Next, the client module is run: two camera views, one front and one rear, appear on the interface for recognition. The source paths of these cameras are stored in the database and can be edited later as required.

RESULTS AND DISCUSSIONS

SOM and CNN are computationally expensive algorithms because they require iterative training; hidden layers, convolutions and the other computations demand more hardware resources as well as time. The training time of the CNN with 55 sample images, 11 images per subject at 256 x 256 resolution, was around 30 minutes on an Intel Core i5 processor, with the CNN restricted to 20 epochs. The accuracy of the CNN was 91 percent with 44 images for training and 11 for testing. Increasing the number of samples to 200 per subject for 14 subjects gave a total of 2800 images at the same 256 x 256 resolution, and the CNN training time grew from 30 minutes to 30 hours (Table 1). Although the CNN uses more resources than Eigen, Fisher and LBPH, its results are outstanding in comparison. A GPU, with its dedicated memory built for heavy computation, would make a huge difference in the training time of the system. Table 2 shows that the CNN gives the best precision, accuracy and F-score, while recall depends on the false negative samples; due to the limited data, we have not yet been able to improve the recall value. The confusion matrix for evaluating the system is shown in Table 3.

Table 1: Details of methodologies

No. of Images | Resolution | Methodology | Training Time (min) | Images/Subject | Accuracy (%)
1400 | 256 x 256 | Eigen   | 10              | 200 | 85 (119 out of 140 correct)
1400 | 256 x 256 | Fisher  | 15              | 200 | 77 (108 out of 140 correct)
1400 | 256 x 256 | LBPH    | 10              | 200 | 85 (119 out of 140 correct)
55   | 256 x 256 | CNN     | 30              | 11  | 91 (10 out of 11 correct)
55   | 256 x 256 | SOM+CNN | 3600 (60 hrs)   | 11  | 54 (6 out of 11 correct)
2800 | 256 x 256 | CNN     | 1800 (30 hrs)   | 200 | 99 (555 out of 560 correct)
2800 | 256 x 256 | SOM+CNN | In Progress     | 200 | In Progress


Table 2: Measure of Relevance

Methodology  | Precision | Recall | Accuracy | F Score
CNN          | 99.10     | 100    | 99       | 99.55
Eigen Faces  | 85        | 100    | 85       | 91.89
Fisher Faces | 77.14     | 100    | 77.14    | 87.09
LBPH Faces   | 85        | 100    | 85       | 91.89

Table 3: Confusion Matrix

N = 560    | Predict No | Predict Yes
Actual No  | 3          | 2
Actual Yes | 1          | 555

CONCLUSION

Eigenfaces are not very successful in a live environment, because variation in the angle of face images produces a large change in their accuracy, and Fisher's accuracy is not high enough to deploy live. As per our observation, LBPH performs better in a live environment than Fisher and Eigen, because it works on the locality of the sample and then calculates its probability. The CNN uses more hardware resources for training, since as a deep learning method it processes the data through layers over many epochs, but its accuracy is far better than Eigen, LBPH and Fisher. The CNN is the better approach in terms of accuracy and prediction: prediction time over 560 images was 360 seconds, i.e. around half a second per image.

FUTURE WORK

The current module uses LBPH; as we obtain better and more accurate results we will move our interface to CNN, since deep learning appears to be a promising way to obtain better and faster results. Moreover, our dataset has some flaws: we are as yet unable to generate false positives or ground truthing for enhancing our data.

REFERENCES

[1] A. F. et al., “3D Face Recognition in an Ambient Intelligence Environment Scenario,” in Face Recognition, I-Tech Education and Publishing, 2012.

[2] R. Bhatia, “Biometrics and Face Recognition Techniques,” Int. J. Adv. Res. Comput. Sci. Softw. Eng., vol. 3, no. 5, pp. 93–99, 2013.

[3] W.-Y. Zhao et al., “Face Recognition: A Literature Survey,” ACM Comput. Surv., vol. 35, pp. 399–458, 2003.

[4] P. Ian Wilson and D. John Fernandez, “Facial feature detection using Haar classifiers,” J. Comput. Sci. Coll., vol. 21, 2006.

[5] L. Wiskott et al., “Face recognition by elastic bunch graph matching,” in Int. Conf. Comput. Anal. Images Patterns, 1997, pp. 456–463.

[6] T. Ahonen et al., “Face recognition with local binary patterns,” in Eur. Conf. Comput. Vis., 2004, pp. 469–481.

[7] X. Zhu and D. Ramanan, “Face detection, pose estimation, and landmark localization in the wild,” in IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 2879–2886.

[8] J. Yang et al., “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 1, pp. 131–137, 2004.

[9] J. Wright et al., “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, 2009.

[10] X. He et al., “Face recognition using Laplacianfaces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 328–340, 2005.

[11] X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” IEEE Trans. Image Process., vol. 19, no. 6, pp. 1635–1650, 2010.

[12] L. B. Neto et al., “A Kinect-based wearable face recognition system to aid visually impaired users,” IEEE Trans. Human-Machine Syst., vol. 47, no. 1, pp. 52–64, 2017.

[13] P. M. Kumar et al., “Intelligent face recognition and navigation system using neural learning for smart security in Internet of Things,” Cluster Comput., pp. 1–12, 2017.

[14] A. Rehman et al., “Modified Texture Features from Histogram and Gray Level Co-occurrence Matrix of Facial Data for Ethnicity Detection,” in 5th Int. Multi-Topic ICT Conf., 2018, pp. 1–6.

[15] G. Khan et al., “Geometric positions and optical flow based emotion detection using MLP and reduced dimensions,” IET Image Process., vol. 13, no. 4, pp. 634–643, Mar. 2019.

[16] C. Ding and D. Tao, “Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 4, pp. 1002–1014, Apr. 2018.

[17] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proc. 2001 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), 2001, vol. 1, pp. 511–518.

[18] C. Liu and H. Wechsler, “Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition,” IEEE Trans. Image Process., vol. 11, no. 4, pp. 467–476, 2002.

[19] R. A. Fisher, “The use of Multiple Measurements in Taxonomic Problems,” Ann. Eugen., vol. 7, no. 2, pp. 179–188, Sep. 1936.

[20] P. N. Belhumeur et al., “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul. 1997.

[21] D. L. Swets and J. J. Weng, “Using discriminant eigenfeatures for image retrieval,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 8, pp. 831–836, Aug. 1996.

[22] K. Etemad and R. Chellappa, “Discriminant analysis for recognition of human face images,” J. Opt. Soc. Am. A, vol. 14, no. 8, p. 1724, Aug. 1997.

[23] C. Liu and H. Wechsler, “Robust coding schemes for indexing and retrieval from large face databases,” IEEE Trans. Image Process., vol. 9, no. 1, pp. 132–137, 2000.

[24] S. Lawrence et al., “Face recognition: A convolutional neural-network approach,” IEEE Trans. Neural Networks, vol. 8, no. 1, pp. 98–113, 1997.


Guidelines For Authors

Pakistan Journal of Computer Science and Information Systems (PJCIS) is an official journal of PASTIC meant for students and professionals of Computer Science & Engineering, Information & Communication Technologies (ICTs), Information Systems, Library and Information Science. This biannual open access journal is aimed at publishing high quality papers on theoretical developments as well as practical applications in all above cited fields. The journal also aims to publish new attempts on emerging topics/areas, original research papers, reviews and short communications.

Manuscript format: Manuscripts should be in English, typed using font Times New Roman and double spaced throughout, with 0.5 inch indention and margins of at least 1.0 inch, in Microsoft Word format (doc or docx) or LaTeX. Manuscripts should be compiled in the following order: Abstract, Keywords, Introduction, Materials and Methods, Results, Discussion, Conclusion, Recommendations, Acknowledgement and References. Follow the format guidelines provided in the following table:

Type                       | Characteristics                                      | Letter case
Title                      | Font size 16, bold, single line spacing, centered    | Sentence case
Author names               | Font size 12, bold, centered, single line spacing    | Uppercase
Institutional affiliations | Font size 10, centered, single line spacing          | Sentence case
All headings               | Font size 16, bold, centralized and numbered         | Uppercase
Body text                  | Font size 12 and justified                           | Sentence case
Captions (Table/Figures)   | Font size 12 and bold                                | Sentence case

Title: This should contain the title of the article, capitalizing the initial letter of each main word. Avoid abbreviations in the title of the manuscript.

Author's Name: The title should be followed by complete author names, e-mail addresses, landline, fax and cell numbers. In case of more than one author with different institutions, use numbering in superscripts (1, 2, 3 etc.) to separate the authors' affiliations. Do not use any other symbol such as *, † or ‡. For multiple authors, use a comma as separator.

Author's Affiliation: Mark the affiliations with the respective numbering in superscripts (1, 2, 3 etc.) used in the authors field. PJCIS encourages the listing of authors' Open Researcher and Contributor Identification (ORCID).

Corresponding author: One author should be identified as the corresponding author by putting an asterisk after the name as superscript.

Abstracts: The abstract must be comprehensive and self-explanatory, stating brief methodology, important findings and the conclusion of the study. It should not exceed 300 words.

Keywords: Three to eight keywords depicting the article should be provided below the abstract, separated by commas.

Introduction: The introduction should contain all the background information a reader needs to understand the rest of the author's paper. This means that all important concepts should be explained and all important terms defined. The introduction must contain a comprehensive literature review on the problem to be addressed, along with a concise statement of the problem, objectives and valid hypotheses to be tested in the study.

Material and Methods: An adequate account of the details of the procedures involved should be provided in a concise manner.

Results & Discussion: Results should be clear, concise and presented in a logical sequence in the text, along with tables, figures and other illustrations. If necessary, subheadings can be used in this section. Discussion should be logical, and results must be discussed in the light of previous relevant studies justifying the findings. If necessary, it may be split into separate "Results" and "Discussion" sections.

Recommendations: The research gap may be identified and recommendations for future research should be given here.

Acknowledgement: In a brief statement, acknowledge assistance provided by people, institutions and financing.

Abbreviations: Abbreviations used should be elaborated at least the first time in the text, inside parentheses, e.g., VPN (Virtual Private Network).

Tables/Figures: Captions must have descriptive headings, should be understandable without reference to the text and should include numbers [e.g., Figure 1: Summary of quantitative data analysis]. Put a citation at the end of the table/figure or in the text if it has been copied from already published work. Photographic prints must be of high resolution. Figures will appear black & white in the printed version; however, they will appear coloured in the full text PDF version available on the website. Figures and tables should be placed exactly where they are to appear within the text. Figures not correctly sized will be returned to the author for reformatting.

References: All the references must be complete and accurate. When referring to a reference in the text of the document, put the number of the reference in square brackets, e.g., [1]. All references in the bibliography should be listed in citation order of the authors. Always try to give the URL or DOI in the case of electronic/digital sources, along with the access date. References should follow IEEE style; details can be downloaded from the Journal's home page: http://www.pastic.gov.pk/pjcis.aspx

Publication Ethics Policy: For submission to the Pakistan Journal of Computer Science and Information Systems, please follow the publication ethics listed below:

- The research work included in the manuscript is original.
- All authors have made significant contribution to the conception, design, execution, analysis or interpretation of the reported study.
- All authors have read the manuscript and agree to its submission to the Pakistan Journal of Computer Science and Information Systems.
- The manuscript has not been submitted or published elsewhere in any form, except in the form of dissertation, abstract or oral or poster presentation.
- All the data has been extracted from the original research study and no part has been copied from elsewhere. All copied material has been properly cited.

Journal Review Policy: PJCIS follows a double blind peer review policy. All manuscripts are initially evaluated by the internal editorial committee of PASTIC. Manuscripts of low quality which do not meet the journal's scope or have a similarity index > 20% are rejected and returned to the authors. Initially accepted manuscripts are forwarded to two reviewers (one local and one foreign). The maximum time limit for review is six weeks. After submission of the reviewers' reports, the final decision on the manuscript is made by the chief editor and communicated to the corresponding author through email. All dates, i.e., submission and acceptance, will be clearly printed on the manuscript.

Submission policy: Before submission, make sure that all authors have read the publication ethics policy for PJCIS and agree to follow it. There are no submission or processing charges for publishing in the Pakistan Journal of Computer Science and Information Systems. Manuscripts complete in all respects should be submitted to [email protected] or [email protected]

Editorial Office: Correspondence should be made at the following address:

Editor in Chief
Prof. Dr Muhammad Akram Shaikh
Director General
Pakistan Scientific & Technological Information Centre (PASTIC)
Quaid-e-Azam University (QAU) Campus
Islamabad

Please send all inquiries to [email protected]


ISSN Print: 2519-5395    ISSN Online: 2519-5404