
DESIGN OF A FRAMEWORK FOR EXTRACTION

OF DEEP WEB INFORMATION

A Thesis submitted to

Shobhit University

For the award of Doctor of Philosophy

In Computer Engineering

By

Dilip Kumar Sharma(Regd No. SU/SCE&IT/Ph.D./PT/09/02)

Under the supervision of

Prof. (Dr.) A. K. Sharma

Professor & Chairman Dept of Computer Engineering

YMCA University of Science & Technology, Faridabad

Faculty of Electronics, Informatics & Computer Engineering

Shobhit University, Meerut (India)-250110

2012


CERTIFICATE OF THE SUPERVISOR

This is to certify that the thesis entitled “Design of a Framework for

Extraction of Deep Web Information” submitted by Mr. Dilip Kumar

Sharma for the award of Degree of Doctor of Philosophy in the Faculty of

Electronics, Informatics & Computer Engineering at Shobhit University,

Modipuram, Meerut is a record of authentic work carried out by him under

my supervision.

The matter embodied in this thesis is the original work of the candidate and

has not been submitted for the award of any other degree or diploma.

It is further certified that he has worked with me for the required period in the

Department of Computer Engineering at YMCA University of Science

and Technology, Faridabad, India.

PROF. (DR.) A. K. SHARMA

Professor & Chairman

Dept of Computer Engineering

YMCA University of Science & Technology, Faridabad (India)

(SUPERVISOR)


CANDIDATE’S DECLARATION

I, hereby, declare that the work presented in this thesis entitled “Design of a

Framework for Extraction of Deep Web Information” in fulfilment of the

requirements for the award of Degree of Doctor of Philosophy, submitted in

the Faculty of Electronics, Informatics & Computer Engineering at

Shobhit University, Modipuram, Meerut is an authentic record of my own

research work under the supervision of Prof.(Dr.) A.K. Sharma, Professor

& Chairman, Dept of Computer Engineering, YMCA University of

Science & Technology, Faridabad, India.

I also declare that the work embodied in the present thesis

(i) is my original work and has not been copied from any Journal/thesis/book,

and

(ii) has not been submitted by me for any other Degree.

DILIP KUMAR SHARMA


ACKNOWLEDGEMENT

I am thankful to the Almighty for making things possible at the right time

always.

“Gratitude is the least of virtues

Ingratitude is the worst of vices” …………….. Shakespeare

My heart is filled with gratitude, for I owe it all to my revered supervisor and a number of other persons. Words often fail to express one's inner feeling of indebtedness to one's benefactors. Nevertheless, it gives me immense pleasure to express my sincere gratitude to my esteemed supervisor Prof. (Dr.) A.K. Sharma, Professor & Chairman, Dept of Computer Engineering, YMCA University of Science & Technology, Faridabad (India). I am deeply influenced by his way of guidance during my entire research work. He is the kind of person who is always ready to help and guide researchers and fellows day and night. I wish to acknowledge his generosity in terms of candid interaction, learned guidance, valuable suggestions and inspiring advice, without which the completion of this thesis work would have been a wild goose chase.

I would also like to thank Prof. R. P. Agrawal, an eminent personality in the field of Microelectronics and Vice Chancellor & Dean of the Faculty of Electronics, Informatics & Computer Engineering, Shobhit University, Meerut, for all the generous support he extended.

I am also grateful to Prof. (Dr.) Vinay Kumar Pathak, Vice Chancellor,

Uttarakhand Open University, Haldwani, my M. Tech (CSE) supervisor, who

generously extended his help and support throughout my research work.


I am also thankful to Dr. Komal Bhatia, Associate Professor, Dr. Ashutosh

Dixit, Associate Professor, Dr. Neelam Duhan, Assistant Professor, Dr. Jyoti, Assistant Professor, and Dr. Anuradha, Assistant Professor, and other faculty

members of Department of Computer Engineering, YMCA University of

Science & Technology, Faridabad and Dr. Divakar Yadav, Assistant

Professor, JP Institute of Information Technology, Noida for their kind

support and help during my research and thesis work.

At the same time I would like to express my sincere thanks to Shri N.D.

Agrawal, Chancellor, Mr. Neeraj Agrawal, Treasurer, Mr. Vivek Agrawal,

Member-Executive Council, Prof. Jai Prakash, Vice Chancellor, Prof. Anil

Kumar Gupta, Pro-Vice Chancellor, Prof. Anoop Kumar Gupta, Director,

IAH, Mr. Ashok Kumar Singh, Registrar, Prof. Krishna Kant, Head, Dept of

CEA, Prof. Charul Bhatnagar, Professor, Dept of CEA, Dr. Ashwani

Upadhyay, Associate Professor, IBM, Mr. Abhay Chaturvedi, Associate

Professor, Dept of ECE, GLA University, Mathura, for their continuous and

praiseworthy support. They were always there in hard times to motivate, help

and support me.

Lastly, I express my gratitude from the depth of my heart to my parents, wife, brother and sister-in-law, in-laws, friends and colleagues for their continual

support, feedback and suggestions.

Finally, I wish to thank all who directly or indirectly helped me during the entire course of my thesis work.

DILIP KUMAR SHARMA
Associate Professor
Dept of Computer Engineering & Applications, GLA University, Mathura


DEDICATION

Dedicated

To

My parents, wife & kids


ABSTRACT

The World Wide Web (WWW) is a huge repository of hyperlinked documents containing useful information. From the user's point of view, the WWW can be broadly classified into two types, i.e. the surface web and the deep web. The surface web consists of static hyperlinked web pages that can be crawled and indexed by general search engines. On the other hand, the deep web refers to dynamic web pages that can be accessed only through specific query interfaces. A web crawler is a program specialized in downloading web contents. Conventional web crawlers can easily index, search and analyze the surface web of interlinked HTML pages, but they have limitations in fetching data from the deep web due to its query interfaces. To access the deep web, a user must request information from a particular database through a query interface.

Extraction of deep web information is quite challenging, particularly due to some of its inherent characteristics, such as non-interlinked documents, which is why it is generally inaccessible to most general search engine crawlers. Deep web pages are updated very frequently, the size of the deep web is so large that it cannot be measured accurately, and its structure is unconnected and self-organized.

The working of a conventional crawler mainly depends on the hyperlinks of the Publicly Indexable Web to discover and download pages. Due to the lack of links pointing to deep web pages, current search engines are unable to index the Deep Web pages.

Due to the above specialized features of the deep web, searching and crawling of the deep web cannot be done efficiently by general search engines, so there is a requirement to develop a novel framework for facilitating extraction of deep web information. There are certain issues and challenges in crawling and searching, such as proper identification of entry points among the searchable forms, and it is also a big challenge to identify the relevant domain among the large number of domains available on the World Wide Web. A deep web crawler should not affect the normal functioning of websites, and it should be able to access hidden databases by automatically parsing, processing and interacting with form-based search interfaces. A deep web crawler should also be able to


access the websites which are equipped with client-side scripting languages and session

maintenance mechanisms.

In this work, a novel framework for efficient extraction of deep web information is proposed with the aim of resolving the issues and challenges associated with the retrieval of deep web information. A Query Intensive Interface Information Extraction Protocol (QIIIEP), i.e. a protocol for deep web information retrieval, is also proposed for deep web crawling. Development of the proposed framework requires the parallel development of various supporting modules, such as a ranking algorithm for query words and a cross-site communication technique for extraction of user-submitted query words. A novel ranking algorithm for the query words stored in the QIIIEP server is presented, and cross-site communication is implemented on QIIIEP. Finally, a QIIIEP based hidden web crawler and a novel architecture for a deep web crawler are proposed, which have the features of both privatized search and general search for the deep web data hidden behind HTML forms. The results of the proposed framework are analyzed and found to be as per expectations.


TABLE OF CONTENTS

Certificate of the Supervisor ii

Candidate’s Declaration iii

Acknowledgement iv

Dedication vi

Abstract vii

Table of Contents ix

List of Figures xiii

List of Tables xvi

List of Abbreviations xviii

CHAPTER NO. CHAPTER'S NAME PAGE NO.

CHAPTER 1: INTRODUCTION 1

1.1 INTRODUCTION TO WEB 1

1.2 DEEP WEB (Hidden or Invisible Web) 1

1.3 MOTIVATION 2

1.4 ISSUES & CHALLENGES OF DEEP WEB CRAWLING 3

1.5 ORGANIZATION OF THESIS 5

CHAPTER 2: WEB INFORMATION RETRIEVAL: A REVIEW 7

2.1 INTERNET 7

2.2 INFORMATION RETRIEVAL 8

2.3 WORLD WIDE WEB 9

2.4 WEB SEARCH AND WEB CRAWLING 9

2.5 PUBLIC INDEXABLE WEB V/S DEEP WEB 10

2.6 WEB CRAWLER 12

2.6.1 ROBOTS.TXT PROTOCOL 12

2.6.2 TYPES OF WEB CRAWLER 13

2.7 CURRENT DEEP WEB INFORMATION

RETRIEVAL FRAMEWORK 21

2.8 FEW PROPOSED PROTOCOL FOR DEEP

WEB CRAWLING PROCESS 39


2.9 COMPARATIVE STUDY OF DIFFERENT

SEARCH ENGINES 43

2.10 EXISTING DEEP WEB CRAWLERS 46

2.11 SUMMARY OF VARIOUS DEEP

WEB CRAWLER ARCHITECTURES 52

2.12 COMPARISON OF THE VARIOUS DEEP

WEB CRAWLER ARCHITECTURES 53

CHAPTER 3: QUERY WORD RANKING ALGORITHMS AND CROSS-SITE

COMMUNICATION: A REVIEW 57

3.1 INTRODUCTION 57

3.1.1 WEB MINING 58

3.1.2 EXISTING IMPORTANT WEB PAGE

RANKING ALGORITHMS 58

3.1.3 SUMMARY OF VARIOUS WEB PAGE

RANKING ALGORITHMS 64

3.1.4 COMPARISON OF VARIOUS WEB PAGE

RANKING ALGORITHMS 66

3.2 WEBSITE ANALYTICS TOOLS 68

3.2.1 NEED FOR WEB ANALYTICS TOOLS 69

3.2.2 COMPARISON OF DIFFERENT EXISTING

WEBSITE ANALYTICS TOOLS 70

3.3 QUERY WORDS RANKING ALGORITHM 71

3.3.1 DENSITY MEASURE TECHNIQUE 71

3.3.2 TERM WEIGHTING TECHNIQUE 72

3.3.3 ADAPTIVE TECHNIQUE 73

3.3.4 GENERIC ALGORITHM BASED TECHNIQUE 73

3.4 CROSS SITE COMMUNICATION 74

3.4.1 CLIENT-SIDE MASHUPS 75

3.4.2 SERVER-SIDE MASHUPS 76

3.4.3 VARIOUS EXISTING TECHNOLOGIES FOR

CROSS-SITE COMMUNICATION 76

3.4.4 COMPARISON OF DIFFERENT TECHNIQUES

FOR CROSS-SITE COMMUNICATION 81


3.5 PRIVATIZED INFORMATION RETRIEVAL 82

3.5.1 ORACLE ULTRA SEARCH CRAWLER 83

3.5.2 IBM LOTUS EXTENDED SEARCH TECH. 83

3.5.3 FORM BASED AUTHENTICATION PROCESS 84

3.5.4 KERBEROS AUTHENTICATION MECHANISM 84

3.5.5 DYNAMIC URL TAB TECHNIQUE 84

CHAPTER 4: QUERY INTENSIVE INTERFACE INFORMATION EXTRACTION

PROTOCOL (QIIIEP) 86

4.1 INTRODUCTION 86

4.1.1 SPECIFICATION OF INFORMATION

EXCHANGE BETWEEN QIIIEP & CRAWLER 87

4.1.2 ARCHITECTURE AND WORKING

STEPS OF QIIIEP 89

4.1.3 IMPLEMENTATION ISSUES OF QIIIEP 92

4.2 CROSS SITE COMMUNICATION ON QIIIEP 92

4.2.1 IMPLEMENTATION APPROACH 92

4.2.2 SECURITY MEASURE 95

4.2.3 IMPLEMENTATION RESULTS 95

4.3 RANKING ALGORITHM OF QUERY WORDS

STORED IN QIIIEP SERVER 97

4.3.1 QUERY WORD RANKING ALGORITHM 97

4.3.2 EXPERIMENTAL RESULTS 103

CHAPTER 5: A FRAMEWORK FOR PRIVATIZED INFORMATION

RETRIEVAL IN DEEP WEB SCENARIO 109

5.1 INTRODUCTION 109

5.2 FRAMEWORK FOR PRIVATIZED INFORMATION

RETRIEVAL 110

5.2.1 USER AUTHENTICATION TO

DOMAIN MAPPER 111

5.2.2 DEPLOYMENT OF AGENT FOR

AUTHENTICATED CRAWLING 112


5.3 IMPLEMENTATION OF THE PROPOSED

FRAMEWORK 112

5.4 IMPLEMENTATION RESULTS 115

CHAPTER 6: A NOVEL ARCHITECTURE FOR DEEP WEB CRAWLER 118

6.1 INTRODUCTION 118

6.2 A QIIIEP BASED DOMAIN SPECIFIC HIDDEN WEB

CRAWLER 118

6.3 ARCHITECTURE OF PROPOSED DEEP WEB

CRAWLER (DWC) MECHANISM 122

6.3.1 FEATURES OF THE PROPOSED

ARCHITECTURE 126

6.3.2 BENEFITS ACCRUED THROUGH PROPOSED

ARCHITECTURE 129

CHAPTER 7: IMPLEMENTATION & RESULTS 131

7.1 INTRODUCTION 131

7.2 EXPERIMENTAL SETUP 132

7.2.1 GRAPHICAL USER INTERFACE 132

7.2.2 DATA SET 136

7.3 PERFORMANCE METRICS 137

7.4 EXPERIMENT RESULTS 138

7.5 COMPARISON OF ALL DOMAINS 156

7.6 COMPARISON OF PROPOSED DEEP WEB CRAWLER

“DWC” WITH EXISTING HIDDEN WEB CRAWLER 157

CHAPTER 8: CONCLUSION & FUTURE SCOPE OF WORK 159

8.1 CONCLUSION 159

8.2 FUTURE SCOPE OF WORK 161

REFERENCES 163

APPENDIX A 176

APPENDIX B 179


LIST OF FIGURES

FIGURE NO. FIGURE DESCRIPTION PAGE NO.

Fig. 1.1: Query Interface Forms from penguinbook.com, naukri.com and cars.com 2

Fig. 2.1: General Architecture of Internet 7

Fig. 2.2: General Process of Information Retrieval 8

Fig. 2.3: General Architecture of WWW 9

Fig. 2.4: Cyclic Architecture for Web Search Engine 10

Fig. 2.5: General Architecture of the Focused Crawler 14

Fig. 2.6: General Architecture of a Parallel Crawler 15

Fig. 2.7: The Mapping Manager 16

Fig. 2.8: The Crawl Manager 17

Fig. 2.9: The Crawl Worker 18

Fig. 2.10: General Architecture of Mobile Web Crawling 19

Fig. 2.11: General Architecture of Distributed Web Crawler 20

Fig. 2.12: Query or Credentials required for Contents Extraction 22

Fig. 2.13: Contemporary Deep Web Content Extraction and Indexing Framework 23

Fig. 2.14: Different types of Search Interface 24

Fig. 2.15: Variation of Results Count versus Query Words for Different Search

Engines 44

Fig. 2.16: Snapshot Showing Results by Different Surface Web Search Engines in respect to Query Word 45

Fig. 2.17: Snapshot Showing Results by Different Deep Web Search Engines in respect to Query Word 45

Fig. 2.18: A Single Attribute Search Interface 47

Fig 2.19: A Multi-Attributes Search Interface 47

Fig 3.1: Working of Web Search Engine 57

Fig 3.2: Classification of Web Mining 58

Fig 3.3: Illustration of Back Links 59

Fig 3.4: Illustration of Hub and Authorities 60

Fig 3.5: Illustration of HITS Process 60

Fig 3.6: General Framework for Web Analytics Process 69

Fig 3.7: Client-Side Mashups 75

Fig 3.8: Server-Side Mashups 76


Fig 4.1: Basic Architecture of QIIIEP 90

Fig 4.2: Auto Query Word Extraction Module 93

Fig 4.3: Flow Graph of Query Word Selection Algorithm 99

Fig 4.4: Counts of Keywords Found on Surface Web 104

Fig 4.5: Computed Chi-Square values for each Group for each Context Word under

Test for Case-1 106

Fig 4.6: Computed Chi-Square values for each Group for each Context Word under

Test for Case-2 107

Fig 5.1: Framework for Privatized Information Retrieval 110

Fig 5.2: Form for Receiving user’s Credentials on Search Engine 110

Fig 5.3: Framework for User Authentication to Domain Mapper 111

Fig 5.4: Framework for Agent for Authenticated Crawling 112

Fig 5.5: Authentication by Search Engine in Private Area 115

Fig 6.1: Domain Specific Hidden Web Crawler 119

Fig 6.2: Architecture of Proposed Deep Web Crawler Mechanism 123

Fig 7.1: Homepage of Deep Web Search Engine 132

Fig 7.2 : Login and Registration Page 133

Fig 7.3: Gmail Credentials Submission Form 133

Fig 7.4: Yahoo Credentials Submission Form 134

Fig 7.5 : Administrator Interface for Deep Web Crawler 134

Fig 7.6: Authenticated Crawling Management Interface 135

Fig 7.7: Option Associated with Crawling of each URL 135

Fig 7.8: MYSQL Database for Crawler in same Domain 136

Fig 7.9: Search Interface of Book Domain: www.borders.com 139

Fig 7.10: Search Interface of Book Domain: www.bookshare.org 139

Fig 7.11: Search Interface of Book Domain: Textbooks.com 140

Fig 7.12: Search Interface of Book Domain: www.ecampus.com 140

Fig 7.13: Search Interface of Book Domain: www.textbooksrus.com 141

Fig 7.14: Search Interface of Book Domain: www.bookfinder.com 141

Fig 7.15: Result Page on www.borders.com 142

Fig 7.16: Result Page on www.bookshare.org 142

Fig 7.17: Result Page on www.ecampus.com 143

Fig 7.18: Result Page on www.textbooks.com 143

Fig 7.19: Result Page on www.textbooksrus.com 144


Fig 7.20: Result Page on www.bookfinder.com 144

Fig 7.21: Result Page on www.ecampus.com 145

Fig 7.22: Result Page on www.textbooks.com 145

Fig 7.23: Comparison of Success with No. of Query Words at Different Domain 147

Fig 7.24: Graph Showing Performance Measures of Crawler for Book Domain:

Site 1 148

Fig 7.25: Graph Showing Performance Measures of Crawler for Book Domain:

Site 2 148

Fig 7.26: Graph Showing Performance Measures of Crawler for Book Domain:

Site 3 149

Fig 7.27: Graph Showing Performance Measures of Crawler for Book Domain:

Site 4 149

Fig 7.28: Graph Showing Performance Measures of Crawler for Book Domain:

Site 5 150

Fig 7.29: Graph Showing Performance Measures of Crawler for Job Domain:

Site 1 151

Fig 7.30: Graph Showing Performance Measures of Crawler for Job Domain:

Site 2 151

Fig 7.31: Graph Showing Performance Measures of Crawler for Job Domain:

Site 3 152

Fig 7.32: Graph Showing Performance Measures of Crawler for Job Domain:

Site 4 152

Fig 7.33: Graph Showing Performance Measures of Crawler for Job Domain:

Site 5 153

Fig 7.34: Graph Showing Performance Measures of Crawler for Auto Domain:

Site 1 154

Fig 7.35: Graph Showing Performance Measures of Crawler for Auto Domain:

Site 2 154

Fig 7.36: Graph Showing Performance Measures of Crawler for Auto Domain:

Site 3 155

Fig 7.37: Graph Showing Performance Measures of Crawler for Auto Domain:

Site 4 155

Fig 7.38: Graph Showing Performance Measures of Crawler for Auto Domain:

Site 5 156


LIST OF TABLES

TABLE NO. TABLE DETAILS PAGE NO.

Table 2.1: Comparison between PIW and Deep web 11

Table 2.2: Summary of various Techniques for Detection

of deep web search interface 28

Table 2.3: Summary of Different Schema Mapping Techniques 32

Table 2.4: Summary of Different Protocols in the Context of

Deep Web Information Extraction 43

Table 2.5: Query Words versus Results Counts for Surface Web Search Engines 44

Table 2.6: Query Words versus Results Counts for Deep Web Search Engines 44

Table 2.7: Summary of various Deep Web Crawler Architectures 52

Table 2.8: Comparison of the various Deep Web Crawler Architectures 54

Table 2.9: Comparison of the various Deep Web Crawler Architectures 55

Table 3.1: Summary of various Web Page Ranking Algorithms 65

Table 3.2: Comparison of various Web Page Ranking Algorithms 66

Table 3.3: Comparison of various Web Page Ranking Algorithms 67

Table 3.4: Comparison of Different existing Web Analytics Tools 70

Table 3.5: Comparison of Different Available Technologies for

Cross-site Communication 82

Table 4.1: Stored Query Words-1 96

Table 4.2: Stored Query Words-2 96

Table 4.3: Clustering of Context Words, Label and Query Words 100

Table 4.4: Number of Web Counts versus Query Words for Case 1 104

Table 4.5: Chi- Square value Matrix 105

Table 4.6: Number of Web Counts versus Query Words for Case 2 106

Table 4.7: Computed Chi-Square value for Case-2 107

Table 5.1: Number of Links returned against a Query Word for User 1 116

Table 5.2: Number of Links returned against a Query Word for User 2 116

Table 5.3: Number of Links returned against a Query Word for User 3 116

Table 6.1: Brief Output of the various Modules 125

Table 7.1: Form Identification Statistics 146

Table 7.2: Query Words and Content Extraction Statistics 146

Table 7.3: Parameters and Values Statistics 147


Table 7.4: Precision, Recall and F-measure for Book, Job & Auto Domains 156

Table 7.5: Comparison of Proposed Deep Web Crawler “DWC” with existing

Hidden Web Crawlers 157


LIST OF ABBREVIATIONS

C_PROC Multiple Crawling Processes

CQL Contextual Query Language

DARPA Defense Advanced Research Projects Agency

DOM Document Object Model

DWC Deep Web Crawler

DWSE Deep Web Search Engine

FBA Form Based Authentication

FM F-Measure

HITS Hypertext Induced Topic Selection

HLP Host-List Protocol

IBM International Business Machines

IMAP Internet Message Access Protocol

IR Information Retrieval

ISODATA Iterative Self Organizing Data Analysis

LVS Label Value Set

MEP Minimum Executing Pattern

MrCoM Multiple-Regression Cost Model

PARCAHYD Parallel Crawler using Augmented Hypertext Documents

PHITS Probabilistic Analogue of the HITS

PHP Hypertext Preprocessor

PLQL ProLearn Query Language

PR Precision

QIIIEP Query Intensive Interface Information Extraction Protocol

RE Recall

REST Representational State Transfer

RSS Really Simple Syndication

SRU Search/Retrieval via URL

URL Uniform Resource Locator

WAMP Windows, Apache, MySQL, PHP

WCM Web Content Mining

WLRank Weighted Links Rank

WPR Weighted Page Rank Algorithm


WSM Web Structure Mining

WUM Web Usage Mining

WWW World Wide Web

XDM Cross-Document Messaging

XDR XDomainRequest

XSS Cross-Site Scripting


CHAPTER 1

INTRODUCTION

1.1 INTRODUCTION TO WEB

The World Wide Web (WWW) [1][2] is a huge repository of hyperlinked documents containing useful information. In recent years, the exponential growth of information technology has made a large amount of information available through the WWW. Searching the WWW for useful and relevant information has become more challenging as the size of the Web continues to grow.

The WWW can be broadly divided into two types, i.e. the surface web and the deep web, according to the depth of their data. The contents of the deep web are dynamically generated by a web server and returned to the user in response to an online query. A web crawler is a program specialized in downloading web contents. Conventional web crawlers can easily index, search and analyze the surface web of interlinked HTML pages, but they have limitations in fetching data from the deep web. To access the deep web, a user must request information from a particular database through a search interface [3].

1.2 DEEP WEB (Hidden or Invisible Web)

A large amount of information on the web today is available only through search interfaces

(forms). Hence a lot of data remains invisible to users. For example, if a user wants to search for information about some train route, then in order to get the required information, he/she must go to a railway reservation site and fill in the details in a search form. As a result, he/she gets a dynamically generated page consisting of the details of the available trains.

Thus, the part of the web that is reachable only through search interfaces is called the deep web. Search engines therefore cannot discover and index the information contained in these sites, which lies hidden behind search interfaces. However, according to recent studies [1][2][3][4][5][6], the content provided by many deep web sites is often of very high quality in terms of relevance and is therefore very valuable to many users. Three sample query interface forms from web sites of three different domains, i.e. book, job and auto, are shown in Fig. 1.1.


Fig. 1.1: Query Interface Forms: (a) penguinbook.com (b) naukri.com (c) cars.com
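To make the notion of a dynamically generated page concrete, the following minimal sketch (in Python) submits a single query word through such a search interface and retrieves the resulting page. The endpoint and the field name "title" are hypothetical placeholders for illustration only and are not taken from the sites in Fig. 1.1.

import urllib.parse
import urllib.request

# Hypothetical search interface of a book-domain site; the endpoint and
# field name are assumptions made purely for illustration.
SEARCH_URL = "http://www.example-bookstore.com/search"

def fetch_result_page(query_word: str) -> str:
    """Submit one query word through the form's GET interface and return
    the dynamically generated result page (HTML)."""
    params = urllib.parse.urlencode({"title": query_word})
    with urllib.request.urlopen(SEARCH_URL + "?" + params, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    # The page below exists only as a response to this query; it has no
    # static URL that a conventional crawler could discover by link following.
    html = fetch_result_page("data mining")
    print(html[:200])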

A brief discussion of specific characteristics of deep Web is given as follows:

BROAD AND RELEVANT

Deep web information is very large in size and highly relevant compared to the surface web. The Hidden Web is the largest growing category of new

information on the Internet. Hidden Web sites tend to be narrower than conventional

surface sites [3]. More than half of the Hidden Web content resides in topic-specific

databases and most of the Hidden Web sites are publicly accessible without any fees or

subscriptions.

HIGHER QUALITY

It has been found that deep web sites contain better quality information as compared to surface web sites. The quality of the retrieved documents is measured in terms of a computational linguistic score that estimates their relevancy to the user's requirement. It has also been found that the quality of content increases with an increase in the number of Web sites being searched simultaneously [3].

1.3 MOTIVATION

Owing to the following features, the crawling of the hidden web assumes significance for the collection of highly distilled, relevant information for storage at the search engine site.

1. The first motivation is that the deep web is inaccessible to most search engine

crawlers, which means that many users are unaware of its rich content. It is difficult

to index the deep web using traditional methods [7][8].


2. Web pages are updated very frequently. It is a general fact that larger pages change more frequently and are more extensible than smaller pages.

3. The size of the web is so huge [3] that it cannot be measured accurately. The Deep Web contains a very large and wide range of freely accessible databases that are not searched or indexed by search engines.

4. The deep web is self-organized in the sense that different web pages are dynamically generated for different purposes. Anyone can post a webpage on the Web, and there are no standards or policies for structuring and formatting web pages. In addition, there is no standard editorial review process to find errors, falsehoods and invalid statements. Due to the unconnected structure of the deep web, new search techniques must be developed to retrieve this information [9][10].

1.4 ISSUES AND CHALLENGES OF DEEP WEB CRAWLING

Web crawlers are programs that traverse the Web and automatically download pages for search engines. The working of a conventional crawler mainly depends on the hyperlinks of the Publicly Indexable Web to discover and download pages. Due to the lack of links pointing to deep web pages, current search engines are unable to index the Deep Web pages. The major issues and challenges in designing an efficient deep web crawler to crawl deep web information are given as follows:

Identification of entry points- Identification of a proper entry point among the searchable forms is a major issue for crawling hidden web contents [4][5][11]. If entry to the hidden web pages is possible only by querying a search form, two challenges arise, i.e. understanding and modelling the query interface, and handling these query interfaces with significant queries.

The first issue is resolved by element-level mapping against the global knowledge base of the Query Intensive Interface Information Extraction Protocol (QIIIEP) server, using the form id produced by the auto form id generator. To resolve the second issue, in the proposed architecture the query word selection takes place according to context identification as well as verification of the web page crawled by the crawler. This feature also facilitates the identification of new query words.


Identification of the domains- It is a big challenge to access relevant information through the hidden web by identifying a relevant domain among the large number of domains present on the World Wide Web.

To resolve this issue, similar domains are identified according to context and query interface and categorized at the element level, so that a large number of deep web domains can be crawled with the proposed architecture.

Social Responsibility - A crawler should not affect the normal functioning of a website during the crawling process by overburdening it. Sometimes a crawler with a high throughput rate may result in denial of service.

The proposed deep web crawler uses a QIIIEP server to fetch query words that are sorted according to the domain context, so only the high-rank query words are used to extract the deep web data, which reduces the overall burden on the server.

Interaction with the Search Interfaces- An efficient hidden web crawler [4] should automatically parse, process and interact with form-based search interfaces. A general PIW crawler only submits a request for the URL to be crawled, whereas a hidden web crawler also supplies input in the form of search queries in addition to the request for the URL.

The results show that the QIIIEP server knowledge base is continuously updated for label-value mapping through cross-site extraction of user-submitted query words, webmaster-submitted query words and QIIIEP internal interoperable data mapping.

Handling of form-based queries - Many websites present a query form to users for further access to their hidden database. A general web crawler cannot access these hidden databases due to its inability to process such query forms.

To handle this type of challenge, an auto-guided mechanism for crawling the deep web is presented, which uses a form-id-based label-value matching mechanism that correlates the relevant labels with the QIIIEP knowledge base, as sketched below.
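A minimal sketch of this idea is given below, assuming a hypothetical form page URL and using a small dictionary as a stand-in for the QIIIEP knowledge base of label-to-query-word mappings; it illustrates only the label matching and submission step, not the actual QIIIEP implementation.

from html.parser import HTMLParser
import urllib.parse
import urllib.request

# Hypothetical stand-in for the QIIIEP knowledge base: form labels mapped
# to ranked query words for the identified domain.
KNOWLEDGE_BASE = {"title": ["java", "networks"], "author": ["tanenbaum"]}

class FormExtractor(HTMLParser):
    """Collect the form action URL and the names of its text inputs."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.action is None:
            self.action = attrs.get("action", "")
        elif tag == "input" and attrs.get("type", "text") == "text":
            if attrs.get("name"):
                self.fields.append(attrs["name"])

def fill_and_submit(form_page_url: str) -> str:
    """Parse the search form, match its field labels against the knowledge
    base, and submit one query; return the dynamically generated result page."""
    with urllib.request.urlopen(form_page_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    parser = FormExtractor()
    parser.feed(page)
    data = {}
    for name in parser.fields:
        words = KNOWLEDGE_BASE.get(name.lower())
        if words:                      # label matched in the knowledge base
            data[name] = words[0]      # use the highest-ranked query word
    action = urllib.parse.urljoin(form_page_url, parser.action or "")
    body = urllib.parse.urlencode(data).encode()
    with urllib.request.urlopen(action, data=body, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")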

Client-side scripting languages and session maintenance mechanisms- Many websites

are equipped with client-side scripting languages and session maintenance mechanisms. A

general web crawler cannot access these types of pages due to its inability to interact with such web pages [12].

To handle these types of pages, authenticated crawling is implemented, which monitors the session and crawls the authenticated contents.


Finally, it can be concluded that the Deep Web is a large repository of high quality and relevant information, but there is a need to develop a framework for extraction of Deep Web information with the help of an efficient deep web crawler compatible with the current web scenario.

1.5 ORGANIZATION OF THESIS

The following is an outline of the contents of this thesis.

Chapter 1 explores theoretical aspects of deep web and challenges associated with

the efficient crawling of the deep web.

Chapter 2 reviews literature specific to our proposed work. This chapter contains the

general architecture of different types of web crawlers. The purpose of this review is

to find out the strengths and limitations of current web crawler techniques.

Chapter 3 reviews query word ranking algorithms and cross-site communication techniques to find out their strengths and limitations. Ranking of query words and cross-site communication are important steps in web crawling.

In Chapter 4, the Query Intensive Interface Information Extraction Protocol (QIIIEP) is proposed for deep web crawling, and the implementation techniques for cross-site communication on QIIIEP are presented. A novel ranking algorithm for the query words stored in the QIIIEP server is also presented.

Chapter 5 presents a framework for privatized information retrieval in deep web

scenario. Privatized information retrieval in deep web scenario refers to providing

precise information by searching facility for an authenticated user.

Chapter 6 presents “A QIIIEP Based Domain Specific Hidden Web Crawler”, and another novel architecture for a deep web crawler, “DWC”, is also proposed. “DWC” has the features of both privatized search and general search for the deep web data that is hidden behind HTML forms.

In Chapter 7, the various important issues identified in Chapters 2 and 3 are addressed and resolved. The implementation of the deep web crawler “DWC” is provided, and the results obtained thereof are also presented.

Finally, Chapter 8 summarizes our contributions and provides guidelines for the future scope of work in this area.

The bibliography includes references to publications in this area.


In Appendix A and Appendix B, material about PHP, MySQL & the Apache Web Server and the publications related to this thesis are provided.

A technical review of existing web information retrieval techniques is given in the next chapter.


CHAPTER 2

WEB INFORMATION RETRIEVAL: A REVIEW

2.1 INTERNET

The Internet is a global system of interconnected computer networks that use the standard

Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of

networks that consists of millions of private, public, academic, corporate and government

networks, of local to global scope, that are linked by a broad array of electronic, wireless

and optical networking technologies. The Internet carries a vast range of information

resources and services, such as the inter-linked hypertext documents of the Web and the

infrastructure to support email. The origin of Internet is traced back to a project sponsored

by the U.S. called ARPANET an initiative of Defense Advanced Research Projects Agency

(DARPA) in 1969. This was developed as system for researchers and defence contractors to

share information over the network [13]. Fig. 2.1 depicts the general architecture of

Internet.

Fig. 2.1: General Architecture of Internet

The Internet provides a number of services and resources. Some of them are E-Mail, Telnet,

file transfer protocol, Web, Mosaic, and Netscape. The Internet mail service allows users to

send and receive e-mail reliably to/from other users on the Internet, and to/from other e-mail systems. Telnet permits an Internet user's computer to connect to an Internet host on

the other side of the world. Essentially, Telnet enables two programs to work cooperatively

over the Internet. The FTP allows an Internet user to download and upload files. The

Internet services provide a number of client/server facilities, which are multimedia

information-retrieval technologies. A client program allows a user's computer to connect to

another Internet host to ask for the help of a server program. Popular client/server facilities

of the Internet are Usenet, IRC, Gopher etc.

2.2 INFORMATION RETRIEVAL

An Information Retrieval (IR) system retrieves information about a subject from a

collection of data objects. This is different from Data Retrieval, which in the context of

documents consists mainly of determining which documents of a collection contain the

keywords of a user query. IR deals with satisfying the specific need of a user. The IR system

must somehow ‘interpret’ the contents of the information items (documents) in a collection

and rank them according to a degree of relevance to the user query. This ‘interpretation’ of

document content involves extracting syntactic and semantic information from the

document text [1].

There are three techniques of information retrieval i.e. free browsing, text-based retrieval

and content-based retrieval. In free browsing, a client browses through a set and searches

for the required information. In text-based retrieval, full text of text-based documents and

metadata of multimedia documents is indexed. In content-based retrieval, a client searches

multimedia information in terms of the actual content of image, audio or video. Some

content features include colour, text, size, shape, and pitch [14]. Fig. 2.2 depicts the general

process of information retrieval.

Fig. 2.2: General Process of Information Retrieval

Initially, for every multimedia document in the database, specific feature information is extracted, indexed and stored in the database. When a user submits a query, the feature information of the query is calculated as a vector. Finally, the system compares the resemblance between the feature vectors of the query and the multimedia data and returns the best matching results. If the user is not satisfied with the retrieved results, he can refine the search by selecting the results most significant to the search query and repeating the search with the new information.
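As an illustration of the comparison step, the sketch below represents documents and the query as term-frequency feature vectors and uses cosine similarity as the resemblance measure; the choice of features and of the similarity measure is an assumption made for illustration only.

import math
from collections import Counter

def feature_vector(text: str) -> Counter:
    """A very simple feature extractor: term frequencies of the words."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Resemblance between two feature vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list) -> list:
    """Rank indexed documents by their resemblance to the query vector."""
    q = feature_vector(query)
    scored = [(cosine_similarity(q, feature_vector(d)), d) for d in documents]
    return sorted(scored, reverse=True)

# Example: the best-matching document is returned first.
print(retrieve("deep web crawler", ["surface web index", "crawler for the deep web"]))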

2.3 WORLD WIDE WEB

The World-Wide Web was developed to be a pool of human knowledge and human culture,

which would allow collaborators of remote sites to share their ideas and all aspects of a

common project. It is a system of interlinked hypertext documents accessed via the Internet.

With a web browser, one can view web pages that may contain text, images, animation,

videos and other multimedia and navigate between them via hyperlinks. Using concepts

from earlier hypertext systems, English engineer and computer scientist Sir Tim Berners-

Lee, now the Director of the World Wide Web Consortium, wrote a proposal for the World

Wide Web [15]. Fig. 2.3 depicts the general architecture of WWW.

Fig. 2.3: General Architecture of WWW

2.4 WEB SEARCH AND WEB CRAWLING

The broad design of a search engine generally consists of three activities: crawler, indexer

and search modules [16]. The cyclic architecture of a search engine is shown in Fig. 2.4, which illustrates how different components can use the information generated by the other components.


Fig. 2.4: Cyclic Architecture for Web Search Engine

It may be noted that the Web crawler is the first stage; it downloads Web documents, which are indexed by the indexer for later use by the search module, with feedback from the other stages. Besides simple indexing, the indexer module helps the Web crawler by providing information about the ranking of web pages, making the crawler more selective in downloading high-ranking web pages. The search process, through log file analysis, optimizes both the indexer and the crawler by providing information about the ‘active set’ of web pages which are actually seen and sought by users. Finally, the Web crawler can provide on-demand crawling services for search engines.

2.5 PUBLIC INDEXABLE WEB V/S DEEP WEB

Surface data is easily available on the web, and surface web pages can be easily indexed by a conventional search engine. Current-day crawlers retrieve content only from the publicly indexable Web, i.e., the set of web pages reachable purely by following hypertext links, ignoring search forms and pages that require authorization or prior registration. Web searching has accordingly been classified into surface crawling and deep web crawling: the content accessible through hyperlinks forms the surface web, while the hidden content forms the deep web. The surface web is the content on the World Wide Web that is available to the general public and for indexing by a search engine. It is also called the "crawlable web" or "public Web", and links to pages on the surface Web are displayed on search engine results pages. Descriptive text on the pages, as well as meta tags hidden within the Web pages, identifies each page's contents for the various search engines [3][4][5][6][11].


Deep web crawling is the need of the hour. Surface crawling is unable to provide the user

with all the data he demands because of the hidden nature of certain information. The deep

web pages do not exist until they are created dynamically as the result of a specific search.

Deep Web sources store their content in searchable databases that only produce results

dynamically in response to a direct request.

The hidden web crawler deals with the crawling of deep web content which remains

uncovered by the surface crawler. Other terms that are often used in place of Web crawler

are ants, automatic indexers, bots, Web spiders, Web robots etc. The hidden Web is

particularly important, as organizations with large amounts of high-quality information (e.g., the Census Bureau, Patents and Trademarks Office, news media companies) are

placing their content online. This is typically achieved by building a Web query front-end

to the database using standard HTML form elements. As a result, the content from these

databases is accessible only through dynamically generated pages, delivered in response to

user queries.

A number of recent studies [3][4][5][6] have noted that a tremendous amount of content on

the Web is dynamic. This dynamism takes a number of different forms. For instance, web

pages can be dynamically generated, i.e., a server-side program creates a page after the

request for the page is received from a client. Similarly, pages can be dynamic because they

include code that executes on the client machine to retrieve content from remote servers.

Web crawlers specialize in downloading, analyzing and indexing web content from the surface web. The content of the surface web consists of interlinked HTML pages, but Web crawlers have limitations if the data is hidden behind a query interface. The response to hidden web data depends on the querying party’s context, which is needed to engage in dialogue and negotiate for the information. A tabular comparison between the PIW and the deep web is summarised in Table 2.1.

Table 2.1: Comparison between PIW and Deep Web

PUBLIC INDEXABLE WEB:
1) The set of web pages reachable by following hypertext links, ignoring search forms and pages that require authorization, forms the PIW.
2) It covers lesser subject areas.
3) Data sources are mostly unstructured.
4) Ex- Google.

DEEP WEB:
1) Deep Web sources store their content in searchable databases that only produce results dynamically in response to a direct request.
2) It deals with e-commerce as well as non-commerce ones.
3) Data sources are structured.
4) Ex- Deepbot.


2.6 WEB CRAWLER

A Web crawler [4][6][11] is a computer program that traverses the World Wide Web in a methodical, automated and orderly manner. Web crawling is an important method for collecting data from the rapidly expanding Internet. A vast number of web pages are continually being added every day, and information is constantly changing. General-purpose web crawlers collect and process the entire contents of the Web in a centralized location, so that it can be indexed in advance and used to respond to many user queries. In the early stage, when the Web was still not very large, a simple or random crawling method was enough to index the whole web. The existing crawlers have limited capability in the sense that they are designed to have large coverage with a negligible refresh cycle. Even if a crawler has good coverage and a fast refresh rate, it may lack in terms of ranking function or advanced query processing power. Hence there is a need to devise a high performance crawler.
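As a point of reference for the crawler types discussed in the following subsections, the minimal sketch below shows the basic loop that every crawler builds on: take a URL from the frontier, download the page, extract hyperlinks and append the unseen ones to the frontier. The seed URL and page limit are illustrative assumptions.

import re
import urllib.parse
import urllib.request
from collections import deque

LINK_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

def crawl(seed_url: str, max_pages: int = 10):
    """Minimal breadth-first crawler: frontier queue plus a visited set."""
    frontier = deque([seed_url])
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue                      # unreachable or undecodable content
        visited.add(url)
        for link in LINK_RE.findall(page):
            absolute = urllib.parse.urljoin(url, link)
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)
    return visited

if __name__ == "__main__":
    print(crawl("http://example.com/"))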

2.6.1 ROBOTS.TXT PROTOCOL

Robots are programs that traverse many pages on the World Wide Web by recursively retrieving linked pages. They are used by search engines to index web content, and spammers use them to scan for email addresses. On many occasions these robots have visited WWW servers where they were not welcome, for a variety of reasons; at times these reasons were robot specific, for example certain robots retrieved the same files repeatedly [17].

The /robots.txt file is used to give instructions about the site to web robots with the help of “The Robots Exclusion Protocol”. A file is created on the server that specifies an access policy for robots, and by this means robots are excluded from a server. This file should be accessible via HTTP at the local URL "/robots.txt". This location was chosen because it can be easily implemented on any existing web server and a robot can find the access policy with only a single document retrieval. The drawback of such an approach is that only a server administrator can maintain such a list.

Suppose a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks http://www.example.com/robots.txt and finds, for example:
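A minimal robots.txt containing the two directives discussed below would read as follows (reconstructed example):

User-agent: *
Disallow: /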

The "User-agent: *" means that this section applies to all robots.


The "Disallow: /" tells the robot that it should not visit any pages on the site.

There are two important considerations while using “/robots.txt”:

Robots can ignore /robots.txt, especially malware robots that scan the web for security vulnerabilities.

The /robots.txt file is a publicly available file. Anyone can see what sections of your

server you don't want robots to use.

The format and semantics of the "/robots.txt" file are as follows:

The file consists of one or more records separated by one or more blank lines. Each record

contains lines of the form "<field>:<optionalspace><value><optionalspace>". The field

name is case insensitive. The record starts with one or more User-agent lines, followed by

one or more Disallow lines. Unrecognized headers are ignored.
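For illustration, a crawler written in Python can honour this access policy through the robots-exclusion support in the standard library; the sketch below retrieves the policy once and checks whether a given URL may be fetched (the URLs are the illustrative ones used above).

import urllib.robotparser

# Illustrative site; a polite crawler performs this check before every fetch.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()                 # single document retrieval of the access policy

if rp.can_fetch("*", "http://www.example.com/welcome.html"):
    print("allowed to crawl")
else:
    print("disallowed by /robots.txt")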

There are different types of web crawlers [2] available in the literature. A few important types of web crawlers are discussed here.

2.6.2 TYPES OF WEB CRAWLER

A. Simple Crawler

Developed by Brian Pinkerton in 1994 [18], this was a single-process information crawler that was initially made for his desktop and later extended to the Internet. It had a simple structure, and hence had the limitations of being slow and of limited efficiency.

B. Focused Web Crawler

This concept was introduced by Menczer in 1998 and refined by Soumen Chakrabarti et al. in 1999. Focused crawlers concentrate on specific topics and retrieve only the relevant results; they are also called topical crawlers. They avoid the unnecessary task of downloading all the information from the web and instead download only the relevant web pages, i.e. those associated with the predefined topics, ignoring the rest. This saves time, although such crawlers are limited to their field. A general architecture of a focused crawler is shown in Fig. 2.5. Examples: Healthline, Codase, etc. [19].


Fig. 2.5: General Architecture of the Focused Crawler

As shown in Fig. 2.5, the key components of a focused crawler are a document classifier to

check whether a visited document is related to the topic of interest or not, and a document

distiller to identify the best URLs for the crawl frontier. For both components, the quality of the training data is of utmost importance to keep the crawl on focus, and it is also the main bottleneck for the effective performance of the focused crawler.
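A minimal sketch of this best-first behaviour is given below; a simple keyword-overlap score stands in for the document classifier, and a priority queue ordered by that score plays the role of the distiller's frontier ordering. The topic terms, threshold and limits are illustrative assumptions, not part of any particular focused crawler.

import heapq
import re
import urllib.parse
import urllib.request

TOPIC = {"deep", "web", "crawler", "database", "query"}   # topic of interest
LINK_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

def relevance(page: str) -> float:
    """Stand-in for the document classifier: fraction of topic terms present."""
    words = set(page.lower().split())
    return len(TOPIC & words) / len(TOPIC)

def focused_crawl(seed: str, max_pages: int = 10, threshold: float = 0.2):
    """Best-first crawl: the frontier is kept ordered by the relevance score
    of the page on which each URL was discovered."""
    frontier = [(-1.0, seed)]            # max-heap via negated priority
    visited, kept = set(), []
    while frontier and len(visited) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue
        score = relevance(page)
        if score >= threshold:
            kept.append(url)             # on-topic page, keep it for indexing
            for link in LINK_RE.findall(page):
                heapq.heappush(frontier, (-score, urllib.parse.urljoin(url, link)))
    return kept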

C. Parallel Web Crawler

Initially proposed by Junghoo Cho in 2002, this approach relies on parallelizing the crawling process, which is done by multi-threading. A parallel crawler has multiple crawling processes, and each process performs the basic tasks that a single-process crawler conducts: it downloads web pages from the WWW, stores the pages locally, extracts URLs from the downloaded pages and follows links. Some of the extracted links may be sent to other processes, depending on how the processes split the download task. These processes may be distributed either on the same local network or at geographically distant locations. This approach offers faster downloading and is less time consuming, but it requires more bandwidth and computational power for parallel processing. It parallelizes the crawling process and uses multi-threading, which reduces the downloading time. For example, Googlebot employs parallel, single-threaded processes [20][21]. A general architecture of a parallel crawler is shown in Fig. 2.6.


Fig. 2.6: General Architecture of a Parallel Crawler

It consists of multiple crawling processes (C_PROCs). Each C_PROC performs the basic tasks that a single-process crawler conducts: it downloads pages from the Web, stores the pages locally, extracts URLs from them and follows links. The C_PROCs performing these tasks may be distributed either on the same local network or at geographically distant locations. When all C_PROCs run on the same local network and communicate through a high-speed interconnect (such as a Local Area Network), it is called an intra-site parallel crawler, whereas when the C_PROCs run at geographically distant locations connected by the Internet (or a wide area network), it is called a distributed crawler. For example, one C_PROC may run in India, crawling all pages that belong to India, and another C_PROC may run in England, crawling all England pages.
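The sketch below illustrates the basic idea under simplified assumptions, using threads to stand in for the C_PROCs and a hash of the host name to split the download task so that each site is owned by exactly one process; it is an illustration only, not the architecture of Fig. 2.6 itself.

import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def assign(url: str, n_procs: int) -> int:
    """Split the download task: each host is owned by exactly one C_PROC."""
    host = urllib.parse.urlparse(url).netloc
    return hash(host) % n_procs

def c_proc(proc_id: int, urls: list) -> list:
    """One crawling process: download its share of pages and report sizes."""
    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.append((proc_id, url, len(resp.read())))
        except Exception:
            results.append((proc_id, url, 0))
    return results

def parallel_crawl(seed_urls: list, n_procs: int = 3):
    """Partition the seed URLs among the crawling processes and run them."""
    partitions = [[] for _ in range(n_procs)]
    for url in seed_urls:
        partitions[assign(url, n_procs)].append(url)
    with ThreadPoolExecutor(max_workers=n_procs) as pool:
        futures = [pool.submit(c_proc, i, part) for i, part in enumerate(partitions)]
        return [r for f in futures for r in f.result()]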

D. PARCAHYD (Parallel Crawler using Augmented Hypertext Documents)

Parallelisation of the web crawling system is very important from the point of view of downloading documents in a reasonable amount of time. PARCAHYD [22] focuses on providing parallelisation at three levels: the document, the mapper and the crawl worker level, and is designed to be a scalable parallel crawler.

At the first stage, the document retrieval system is divided into two parts: the crawling system and the augmented hypertext documents system. The augmented hypertext documents provide a separate TOL for each document to be downloaded by the crawling process.


At the second stage, the crawling system has been divided into two parts: Mapping Process

and Crawling Process. The Mapping process resolves IP addresses for a URL and Crawling

Process downloads and processes documents.

Fig. 2.7: The Mapping Manager

a) The mapping process

The mapping process, shown in Fig. 2.7, consists of the following functional components:

URL-IP Queue: It consists of a queue of unique seed URL-IP pairs. The IP part may or may

not be blank. It acts as an input to the Mapping Manager.

Database: It contains a database of downloaded documents and their URL-IP pairs.

Resolved URL-Queue: It stores URLs which have been resolved for their IP addresses and

acts as an input to the Crawl Manager.

URL Dispatcher: This component reads the database of URLs and fills the URL-IP Queue.

It may also get initiated by the user who provides a seed URL in the beginning.

DNS Resolver: The name of the server must be translated into an IP address before the

crawler can communicate with the server. The internet offers a service that translates

domain names to corresponding IP addresses and the software that does this job is called

the Domain Name System (DNS). The DNS Resolver uses this service to resolve the IP address for a URL and returns it to the calling URL Mapper. It then updates the database with the resolved IP address of the URL.


Mapping Manager: After receiving the signal Something to Map, it creates multiple worker threads called URL Mappers. It extracts URL-IP pairs from the URL-IP Queue and assembles a set of such pairs called the URL-IP set.

URL Mapper: This component gets a URL-IP set as input from the Mapping Manager. It examines each URL-IP pair and, if the IP is blank, the URL is sent to the DNS Resolver. After the URL has been resolved for its IP, it is stored in the Resolved URL Queue.

b) The crawling process

Crawl Manager: This component (see Fig. 2.8) waits for the signal something to crawl. It creates multiple worker threads named Crawl Workers.

Crawl Worker: It maintains two queues: MainQ and LocalQ. The set of URLs received from the Crawl Manager is stored in MainQ (see Fig. 2.9). It downloads the documents.

Document and URL Buffer: It is basically a buffer between the Crawl Workers and the update database process.

Fig. 2.8: The Crawl Manager

It consists of the following three queues:

1. External Links queue (ExternLQ): This queue stores the external links.

2. Document-URL queue (DocURLQ): This queue stores the documents along with their

URLs.

3. Bad-URL queue (BadURLQ): This queue stores all the resolved Bad-URLs.


Fig. 2.9: The Crawl Worker

When the information contained in a document changes very frequently, a crawler downloads the document as often as possible and updates it in its database so that fresh information can be maintained for the potential users. Now, for maintaining fresh information at the search engine side, the crawler needs to re-crawl only the .TVI file, which ensures minimization of network traffic. Having brought the .TVI file to the search engine side, the changes stored in it can be updated in its corresponding main document in a transaction processing manner.

E. Mobile Web Crawlers

J. Hammer and J. Fiedler in [23] initiated the concept of Mobile Crawling by proposing the use of mobile agents as the crawling units. In Mobile Crawling, mobile agents are dispatched to remote Web servers for local crawling and processing of Web documents. After crawling a specific Web server, they dispatch themselves either back to the search engine machine or to the next Web server for further crawling. The crawlers based on this approach are called Mobile Crawlers. The goals of the Mobile Crawling System are (a) to minimize network utilization, (b) to keep up with document changes by performing on-site monitoring, (c) to avoid unnecessary overloading of the Web servers by employing time realization, and (d) to be upgradeable at run time. A general architecture of the mobile web crawling process is shown in Fig. 2.10.


Fig. 2.10: General Architecture of Mobile Web Crawling

Initially, the crawler obtains a list of target locations from the crawler manager. These addresses are referred to as seed URLs since they indicate the beginning of the crawling process. In addition, the crawler manager also uploads the crawling strategy into the crawler in the form of a program. This program tells the crawler which pages are considered relevant and should be collected; it also generates the crawler's path through the Web site. Before the actual crawling begins, the crawler must migrate to a specific remote site using one of the seed URLs as the target address. After the crawler successfully migrates to the remote host, pages are retrieved and analyzed recursively using traditional crawling techniques. When the crawler finishes, it either returns to the crawler manager (crawler home) or, in case the list of seed URLs is not empty, it migrates to the next Web site on the list and continues. Once the migrating crawler has successfully migrated back to its home, all pages retrieved by the crawler are transferred to the search engine via the crawler manager.

The Mobile Crawling System outperforms traditional crawling by delegating the crawling task to the mobile agent units and offers the following advantages:

• It eliminates the enormous processing bottleneck at the search engine site. Traditional crawling requires that additional machines be plugged in to increase processing power, a method that comes at great expense and cannot keep up with the current Web expansion rate and document update frequency.

• It reduces the use of the Internet infrastructure and consequently relieves the network bottleneck by an order of magnitude. By crawling and processing Web documents at their source, transmission of data such as HTTP header information, useless HTML comments and JavaScript code is avoided.

• It employs time realization: mobile crawlers do not operate on their hosts during peak time, and the administrator can define the permissible time spans for the crawlers to execute.

The driving force behind this approach is the utilization of mobile agents that migrate from the search engine to the Web servers and remain there to crawl and process data locally on the remote site, which saves a lot of network resources.

F. Distributed Web Crawlers

Distributed web crawling [24] is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. It uses distributed computing to reduce the load on any single server by dividing the work among different sub-servers; the main focus is on distributing the computational resources and the bandwidth across the different computers and networks. YaCy, a P2P web search engine with distributed crawling, is an example. A general architecture of a distributed web crawler is shown in Fig. 2.11.

Fig. 2.11: General Architecture of Distributed Web Crawler

The architecture of the distributed crawler has been partitioned into two major components, i.e. the crawling system and the application. The crawling system itself consists of several specialized components, in particular a crawl manager, one or more downloaders and one or more DNS resolvers. All of these components, plus the crawling application, can run on different machines (and operating systems) and can be replicated to increase system performance. The crawl manager is responsible for receiving the URL input stream from the applications and forwarding it to the available downloaders and DNS resolvers while enforcing rules about robot exclusion and crawl speed. A downloader is a high-performance asynchronous HTTP client capable of downloading hundreds of web pages in parallel, while a DNS resolver is an optimized stub DNS resolver that forwards queries to local DNS servers.
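A minimal sketch of such an asynchronous downloader is given below; it assumes the third-party aiohttp library, and the URL list and timeout values are illustrative.

# Minimal sketch of a high-performance asynchronous downloader, assuming the
# third-party aiohttp library; the URL list and timeout are illustrative.
# Many pages are fetched concurrently within a single event loop.
import asyncio
import aiohttp

async def fetch(session, url):
    try:
        async with session.get(url) as resp:
            return url, await resp.text()
    except Exception:
        return url, None              # failed downloads are reported as None

async def download_all(urls):
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))

if __name__ == "__main__":
    urls = ["https://example.com/", "https://example.org/"]
    for url, body in asyncio.run(download_all(urls)):
        print(url, "ok" if body else "failed")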

G. Hidden / Deep Web Crawler

ICT provides various tools, and one among them is the web wrapper used for deep web access. However, using these tools requires deep knowledge on the developer's part, since web wrappers are site-specific solutions that depend on the structure of the web site; moreover, a web wrapper requires constant maintenance in order to keep up with changes on the web sites it accesses. The deep web / hidden web refers to World Wide Web content that is not part of the surface Web and is therefore not indexed by surface web search engines. Deep web pages are not indexed by search engines, and access to them typically requires registration or signing up; examples include Yahoo Subscriptions, LexiBot etc. [3][4][6].

2.7 CURRENT DEEP WEB INFORMATION RETRIEVAL FRAMEWORK

An extensive survey of deep web information retrieval systems was carried out in this work [6]. The versatile use of the internet brought a remarkable revolution in the history of technological advancement. The number of accessible web pages has grown from zero in 1990 to more than 1.6 billion in 2009. The web is like a perennial stream of knowledge: the more we dig, the more thirst can be quenched. Surface data is easily available on the web, and surface web pages can be easily indexed through a conventional search engine. But the hidden, invisible and non-indexable contents cannot be retrieved through conventional methods. The size of the deep web is estimated to be thousands of times larger than the surface web. The deep web consists of large databases of useful information such as audio, video, images, documents, presentations and various other types of media. Today people rely heavily on the internet for numerous applications such as flight and train reservations, finding out about a new product, or locating new places and jobs. They can evaluate the search results and decide which of the bits or scraps reached by the search engine is most promising [25].

Unlike the surface web, deep web information is stored in searchable databases. These databases produce results dynamically after processing the user request [3]. Deep web information extraction first uses the two regularities of domain knowledge and interface similarity to assign the tasks proposed by users, and chooses the most effective set of sites to visit by ontology inspection. The conventional search engine has limitations in indexing deep web pages, so an efficient algorithm is required to search and index them [11]. Fig. 2.12 shows the interface in information extraction in the form of a search form or login form.

Fig. 2.12: Query or Credentials Required for Contents Extraction

After exhaustive analysis of existing deep web information retrieval processes [4][5][11][26], a deep web information retrieval framework is developed in which the different tasks in deep web crawling are identified, arranged and aggregated in a sequential manner. This framework is useful for understanding the entire working of deep web crawling mechanisms, and it also enables the researcher to find out the limitations of present web crawling mechanisms in searching the deep web. The taxonomical steps of the developed framework can be classified into the following four major parts.

•Query Interface Analysis: Initially the crawler requests a web server to fetch a page; thereafter, it parses and processes the form to build an internal representation of the web page based on the model developed.

•Values Allotment: It provides an appropriate value to each and every input element present on the search interface by using different combinations of keywords, which are allocated by using string matching algorithms after analyzing the form labels with a knowledge base.

•Response Analysis and Navigation: The crawler analyzes the response web pages received and thereby checks whether the submission has yielded valid search results. The crawler uses this feedback to tune the values assigned, and iteratively crawls the hypertext links received in the response web pages up to some pre-specified depth.

•Relevance Ranking: Relevance ranking refers to the order in which the search engine should return the URLs produced in response to a user's query, i.e. displaying the more relevant pages to the user on a priority basis. In fact, the deep web's working is entirely different from the traditional web: in the deep web there are none of the 'href' tags that link to content, and hence no chain of links to be followed. Moreover, hidden web pages are dynamically generated, and therefore in the deep web retrieval process the quality of a page cannot be predicted from its references. Thus, there is a need to develop a framework that can provide high quality deep web contents. The above steps are represented in Fig. 2.13, which demonstrates the flow of control over the deep web contents extraction mechanism.

Fig. 2.13: Contemporary Deep Web Content Extraction and Indexing Framework

An exhaustive literature analysis of the above taxonomically classified steps is given below.

2.7.1 QUERY INTERFACE ANALYSIS

In query interface analysis, a request to fetch a web page from a web server is made by a crawler. After completion of the fetching process, an internal representation of the web page is produced after parsing and processing of HTML forms based on the developed model. Further, the query interface analysis can be broken into several modules, namely detection of the hidden web search interface, search form schema matching and domain ontology identification. Among these modules, the detection of the hidden web search interface is the first and foremost step towards deep web information retrieval. As expected, a human user can easily identify a deep web search interface, but understanding a deep web search interface through an automatic technique without human intervention is a challenging task [3][6][26][27][28]. Fig. 2.14 depicts the different types of search interfaces.

Fig. 2.14: Different Types of Search Interface

Query interface analysis can be taxonomically partitioned into the following steps.

A. Random Forest Algorithm Based Approach

One of the prominent works for detection of the deep web search interface has been carried out by Leo Breiman (2001) [29] in the form of the random forest algorithm, which detects the deep web search interface by using a model based on decision tree classification. A random forest model can be defined as a collection of decision trees. A decision tree can be generated by bootstrap processing of the training data, and various classification trees can be generated through the random forest algorithm. To classify a new object from its input vector, the sample vector is passed to every tree defined in the algorithm. Every tree gives a classification decision, and the most voted classification is chosen by using all of the classification results of the individual trees.

The advantages of the random forest algorithm are that it exhibits a substantial performance improvement over single tree classifiers, and injecting the right kind of randomness makes accurate classifiers and regressors. The disadvantage of this algorithm is that it may select unimportant and noisy features of the training data, resulting in bad classification results.
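A minimal sketch of this idea, assuming scikit-learn's RandomForestClassifier, is given below; the hand-crafted form features (counts of text and password inputs, presence of a "search" keyword) and the toy training labels are illustrative assumptions rather than the features used in the original work.

# Minimal sketch of random-forest classification of HTML forms as searchable
# vs. non-searchable, assuming scikit-learn; the feature vectors and labels
# below are purely illustrative.
from sklearn.ensemble import RandomForestClassifier

# Each row: [text_inputs, password_inputs, has_search_keyword]
X_train = [
    [1, 0, 1],   # a keyword search box            -> searchable
    [2, 0, 1],   # search form with two fields     -> searchable
    [2, 1, 0],   # login form                      -> not searchable
    [5, 0, 0],   # registration form               -> not searchable
]
y_train = [1, 1, 0, 0]   # 1 = deep web search interface, 0 = other form

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_train, y_train)

# Each tree votes; the forest returns the majority class.
print(forest.predict([[1, 0, 1], [3, 1, 0]]))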

B. Task-Specific, Human-Assisted Based Approach

One of the deep web crawler architectures was proposed by Sriram Raghavan and Hector Garcia-Molina (2001) [4]. In this work, a task-specific, human-assisted approach is used to crawl the hidden web, i.e. the Hidden Web Exposer (HiWE). There are two basic problems related to deep web search: first, the volume of the hidden web is very large, and second, there is a need for crawlers that can efficiently handle search interfaces, which are designed mainly for humans.

The advantage of the HiWE architecture is that its application/task-specific approach allows the crawler to concentrate on relevant pages only, and with the human-assisted approach, automatic form filling can be done. Limitations of this architecture are that it is not precise with respect to partially filled forms and that it is not able to identify and respond to simple dependencies between form elements.

C. Hidden Web Agents Based Approach

A technique for collecting hidden web pages for data extraction was proposed by Juliano Palmieri Lage et al. (2002) [30]. In this technique the authors proposed the concept of web wrappers. A web wrapper is a program which extracts unstructured data from web pages. It takes a set of target pages from the web source as input; this set of target pages is automatically generated by an approach called "Spiders", which automatically traverse the web for web pages. Hidden web agents assist the wrappers in dealing with the data available on the hidden web.

The advantage of this technique is that it can access a large number of web sites from diverse domains, and its limitation is that it can access only web sites that follow common navigation patterns. Further modification can be done in this technique to cover more navigation patterns based on these mechanisms.

D. Single Tree Classifiers Based Approach

A technique for automated discovery of search interfaces from a set of HTML forms was proposed by Jared Cope, Nick Craswell and David Hawking (2003) [31]. This work defined a novel technique to automatically detect search interfaces from a group of HTML forms. A decision tree was developed with an algorithm called C4.5 that uses automatically generated features from HTML markup, giving a classification accuracy of about 85% for general web interfaces.

The advantage of this technique is that it can automatically discover the search interface. Its limitation is that it is based on a single tree classification method, and the number of generated features is limited due to the use of a limited data set. As future work, it is suggested that a search engine can be developed using existing methods for the other stages along with the proposed one, with a technique to eliminate false positives.


E. 2P Grammar & Best-Effort Parser Based Approach

A technique for understanding web query interfaces through "best-effort parsing" with hidden syntax was proposed by Zhen Zhang et al. (2004) [32]. This work addresses the problem of understanding web search interfaces by presenting a best-effort parsing framework. It presents a form extractor based on a 2P grammar and a best-effort parser within a language parsing framework. The parser identifies the search interface by continuously producing fresh instances by applying productions until a fixed point is attained, i.e. when no fresh instance can be produced. The best-effort parsing technique minimizes wrong interpretations as much as possible in a very fast manner, and it understands the interface to a large extent.

The advantage of this technique is that it is very simple and consistent, with no priority among preferences, and it can handle missing elements in the form. Its limitation is that establishing a single global grammar is a challenging task.

F. Automatic Query Generation Based Approach

A technique named "siphoning hidden web data through keyword-based interfaces", for retrieval of information from hidden web databases through the generation of a small set of representative keywords used to build queries, was proposed by Luciano Barbosa and Juliana Freire (2004) [33]. This technique is designed to enhance the coverage of the deep web.

The advantage of this technique is that it is a simple and completely automated strategy that can be quite effective in practice, leading to very high coverage of the deep web. Its limitation is that it is not able to achieve this coverage for collections whose search interfaces fix the number of results. Further, the authors have advised that the algorithm can be modified to characterize search interface techniques in a better way so that different notions and levels of security can be achieved.

G. Weighted Feature Selection Algorithm Based Approach

An improved version of the random forest algorithm was proposed by Deng et al. (2008) [34]. In this improved technique, a weighted feature selection algorithm was proposed to generate the decision trees.

The advantage of this improved algorithm is that it minimizes the problem of classifying high-dimensional and sparse search interface data using the ensemble of decision trees. Its disadvantage is that it is highly sensitive to changes in the training data set.


H. Feature Weighted Random Forest Algorithm Based Approach

Further improvement of the random forest algorithm was done by Yunming Ye et al. (2009) [35] by using a feature-weighted random forest algorithm for detection of the hidden web search interface. This work presented a feature-weighted selection process rather than a random selection process.

The advantage of this technique is that it reduces the chances of noisy feature selection. However, its limitation is that only features available in the search forms could be used. A suggested future modification of the random forest algorithm is to investigate more feature weighting methods for the construction of random forests.

I. Naive Bayesian Web Text Classification Algorithm Based Approach

An algorithm named "the naive Bayesian web text classification algorithm" was proposed by Ping Bai and Junqing Li (2009) [36] for automatic and effective classification of web pages with reference to a given machine learning model. In the conventional techniques, category abstracts are produced using inspection by domain experts, either through a semi-automatic method or a manual method. A conventional common Bayesian classifier gives all items equal importance, whereas according to the improved naive Bayesian web text classification algorithm, the items in every title are given higher importance than the others.

The strength of this technique is that the text classification results are very accurate. Further scope for this algorithm lies in making the classification process automatic in an efficient way.
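A minimal sketch of naive Bayesian text classification in which title terms are given extra weight is shown below, assuming scikit-learn; the tiny corpus, the categories and the trick of repeating the title to boost its weight are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of naive Bayesian web page classification in which title
# terms are given extra weight by repeating them, assuming scikit-learn;
# the training corpus and categories are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def weighted_text(title, body, title_weight=3):
    # Repeating the title approximates giving its terms higher importance.
    return (" ".join([title] * title_weight)) + " " + body

docs = [
    weighted_text("cheap flight booking", "book airline tickets online"),
    weighted_text("job vacancies", "apply for engineering jobs and careers"),
]
labels = ["travel", "jobs"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
classifier = MultinomialNB().fit(X, labels)

test = vectorizer.transform([weighted_text("flight deals", "low fare tickets")])
print(classifier.predict(test))   # expected: ['travel']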

J. Domain Ontology Based Approach

An approach for automatic detection and unification of web search query interfaces using domain ontology was proposed by Anuradha and A. K. Sharma (2010) [37]. The technique proposed in this work operates by concentrating the crawler on the given topic based on domain ontology, and it results in pages which contain the domain-specific search form. The strengths of this technique are that the results are produced from multiple sources, human effort is reduced and the results are very accurate with less execution time. Its limitation is that it is domain specific.


2.7.2 SUMMARY OF VARIOUS TECHNIQUES FOR DETECTION OF DEEP WEB SEARCH INTERFACE

A tabular summary is given below in Table 2.2, which summarizes the techniques, strengths and limitations of some of the important detection techniques for deep web search interfaces.

Deep web search interfaces are the entry points for searching deep web information. A deep web crawler should understand and detect the deep web search interface efficiently to facilitate the further process of deep web information retrieval; efficient detection of the deep web search interface may result in significant retrieval of deep web information. In fact, the first and foremost step of deep web information retrieval is the efficient understanding and detection of the deep web search interface.

Table 2.2: Summary of Various Techniques for Detection of Deep Web Search Interface

Authors | Technique | Strengths | Limitations
Leo Breiman (2001) | Forest of regression trees as classifiers | A substantial improvement in performance over single tree classifiers | May include unimportant or noisy features
Sriram Raghavan et al. (2001) | Hidden Web Exposer | An application-specific approach to hidden web crawling | Imprecise in filling the forms
Palmieri Lage et al. (2002) | Hidden web agents | Wide coverage of distinct domains | Restricted to web sites that follow common navigation patterns
Jared Cope et al. (2003) | Single tree classifiers | Automatic discovery of search interfaces; performs well when rules are generated on the same domain | Long rules, large feature space in training samples, overfitting; classification precision is not very satisfying
Zhen Zhang et al. (2004) | 2P grammar and best-effort parser | Very simple and consistent; no priority among preferences; handles missing elements in the form | Critical to establish a single global grammar that the machine can interpret globally
Luciano Barbosa et al. (2004) | Automatic query generation based on a small set of keywords | A simple and completely automated strategy that can be quite effective in practice | A large domain of keywords has to be generated
Deng, X. B. et al. (2008) | Weighted feature selection algorithm | Minimizes the problem of classifying high-dimensional and sparse search interfaces using an ensemble of decision trees | Highly sensitive to changes in the training data set
Ye, Li, Deng et al. (2009) | Feature-weighted selection process | Minimizes the chances of selecting noisy features | No use of contextual information associated with forms
Ping Bai et al. (2009) | Naïve Bayesian algorithm | Text classification results are very accurate | Classification algorithm is not automatic
Anuradha et al. (2010) | Based on domain ontology | Results are produced from multiple sources; reduces human effort; less execution time; high accuracy | Domain specific


2.7.3 SEARCH FORM SCHEMA MAPPING

After detection of the hidden web search interface, the next task is to identify accurate matches, i.e. to find semantic correspondences between the elements of two schemas. Schema extraction of the query interface is one of the prime research challenges for the comparison and analysis of an integrated query interface for the deep web. Many algorithms have been suggested in the last few years, which can be classified into two groups.

I. Heuristic

Heuristic techniques for search form schema mapping are based on guessing relations, which may consider similar labels or graph structures, and can be further taxonomically classified into two groups.

A. Element-level explicit techniques

Explicit techniques use the semantics of labels, e.g. as captured in a precompiled dictionary (Cupid, COMA) or in lexicons (S-Match, CTXmatch); for example, Car is a hypernym for Four Wheeler, and therefore Car can be matched with Four Wheeler.

B. Structure-level explicit techniques

These are based on the taxonomic structure (Anchor-Prompt, NOM); e.g. DEPTT and Department can be found to be an appropriate match.
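A minimal sketch of such element- and structure-level label heuristics is given below, combining a small synonym table (standing in for a precompiled dictionary or lexicon) with plain string similarity for abbreviations such as DEPTT/Department; the labels, synonyms and threshold are illustrative.

# Minimal sketch of heuristic label matching: a small synonym dictionary
# (standing in for a precompiled thesaurus) plus string similarity for
# abbreviations and spelling variants. Labels and synonyms are illustrative.
from difflib import SequenceMatcher

SYNONYMS = {"car": {"four wheeler", "automobile"}}

def labels_match(a, b, threshold=0.6):
    a, b = a.strip().lower(), b.strip().lower()
    if a == b or b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set()):
        return True
    # Fallback: surface similarity catches abbreviations such as DEPTT.
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(labels_match("DEPTT", "Department"))     # True via string similarity
print(labels_match("Car", "Four Wheeler"))     # True via the synonym table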

II. Formal

Formal techniques are based on model-theoretic semantics which are used to justify their

results.

A. Element-level explicit techniques

It uses the OWL properties (NOM); e.g. the sameClassAs constructor explicitly states that one class is equivalent to the other, such as hybrid car = Car or CNG-based car.

B. Structure-level explicit techniques

The approach is to translate the matching problem, namely the two graphs (trees) and the mapping queries, into a propositional formula and then to check it for validity.

C. Modal SAT (S-Match)

The idea is to enhance propositional logic with modal logic operators. The matching problem is therefore translated into a modal logic formula, which is further checked for validity using sound and complete search procedures.

Many automatic or semi-automatic search form schema mapping systems focused on simple 1:1 matching have been proposed, such as the Cupid method [38]. This method proposed a new technique by including a substantial linguistic matching step and biasing matches towards the leaves of a schema. Future work in this regard includes integrating Cupid transparently with an off-the-shelf thesaurus, using schema notations for the linguistic matching, and automatic tuning of the control parameters.

Do and Rahm [39] developed the COMA schema matching system to combine multiple matchers. It uses COMA as a framework to evaluate the effectiveness of different matchers and their combinations for real-world schemas. Future work in this regard is to add other match and combination algorithms in order to improve match quality. The LSD method [40] proposes an initial idea for automatically learning mappings between source schemas and mediated schemas. The work [41] describes LSD as a system that employs and extends current machine learning techniques to semi-automatically find semantic mappings between the source schema and the mediated schema.

The work [42] presents a matching algorithm based on a fixed point computation. It takes two graphs, such as schemas or catalogs, as input and produces as output a mapping between corresponding nodes of the graphs. The work [43] proposed a technique that automatically and dynamically summarizes and organizes web pages for display on a small device such as a PDA. This work proposed eight algorithms for performing label-widget matches, some based on n-gram comparisons and others based on common form layout specifications. Results can be improved by using syntactic and structural feature analysis.

For schema extraction, [44] considers the non-hierarchical structure of query interfaces, assuming that a query interface has a flat set of attributes and that the mapping of fields over the interfaces is 1:1, which neglects the grouping and hierarchical relationships of attributes. As a result, the semantics of a query interface cannot be captured correctly.

Literature [45] proposed a hierarchical model and schema extraction approach which can group the attributes and improve the performance of schema extraction of query interfaces, but it shows the poor clustering capability of the pre-clustering algorithm due to its simple grouping patterns, and the schema extraction algorithm possibly outputs subsets inconsistent with those grouped by the pre-clustering algorithm. Semantic matching is based on two ideas: (i) to discover an alignment by computing semantic relations (e.g. equivalence, more general); (ii) to determine semantic relations by analyzing the meaning (concepts, not labels). Although this work provides good accuracy, it can be improved by investigating interaction to help break ties when the ordering-based strategy does not work. Another improvement can be made by incorporating an automatic interface modelling procedure into the proposed approach.


Most of the proposed search form schema mapping techniques require human involvement and are not suitable for dynamic, large-scale data sets. Other approaches, such as the DCM framework [46] and the MGS framework [47], pursue a correlation mining approach by exploiting the co-occurrence patterns of attributes and propose a new correlation measure, while another [48] hypothesizes that every application field has a hidden generative model and can be viewed as instances generated from models with possible behaviors. There are certain issues in this algorithm that can be improved, such as how to select the appropriate measure to filter out false matchings and how to design a dynamic threshold applicable to all domains. Table 2.3 depicts the summary of different schema mapping techniques.

A new schema extraction algorithm, Extr [49], employs three metrics, (LCA) precision, (LCA) recall and (LCA) F1, to evaluate the performance of schema extraction. The attributes are first pre-clustered into a set P using MPreCluster; then all the subsets in P are clustered once again according to spatial distance; finally, all singleton clusters are merged according to an n-way constrained merging operation using the algorithm Ncluster. The result is a hierarchical clustering H over the attributes of a query interface on the deep web. The experimental results indicate that the proposed algorithm can clearly improve the performance of schema extraction of query interfaces on the deep web and avoid inconsistencies between the subsets produced by the pre-clustering algorithm and those produced by the schema extraction algorithm.

Last but not least, the correlated-clustering framework works in four phases. In the first phase, it finds frequent attributes in the input attribute groups. In the second phase, it discovers groups in which positively correlated attributes form potential attribute groups, according to a positive correlation measure and a defined threshold. In the third phase, it partitions the attributes into concepts and clusters the concepts by calculating the similarity of each pair of concepts. Finally, it ranks the discovered matchings and then uses a greedy matching selection algorithm to select the final matching results.

Komal Kumar Bhatia et al. [50] presented an approach in which mapping is done by using a domain-specific interface mapper (DSIM), where a search interface repository is used for matching purposes; it includes an extensible domain-specific matcher library. The multi-strategy interface matching is done in three steps: parsing, semantic matching and semantic mapping generation. In the first step, the SI parser is used to extract the interface schema. In the second step, each tuple is mapped by fuzzy matching, a domain-specific thesaurus and data type matching. Finally, the SVM generator creates the mapping matrices that are identified by the matching library. DSIM also uses a mapping knowledge base to avoid repetition of mapping effort.

Table 2.3: Summary of Different Schema Mapping Techniques

Name | Techniques | Advantages | Limitations
Artemis | Affinity-based analysis and hierarchical clustering | Effective label extraction technique, with high submission efficiency | Falls into the alignments-as-likeness-clues category
Cupid | Structural schema matching techniques | Emphasizes the name and data type similarities present at the finest level of granularity (leaf level) | Lacks transparent integration of Cupid with an off-the-shelf thesaurus using schema notations for the linguistic matching, and automatic tuning of control parameters
DCM | Hybrid n:m | Identifies and clusters synonym elements by analyzing the co-occurrence of elements; the DCM framework can find complex matchings in many domains | Ineffective against 'noisy' schemas; DCM cannot differentiate frequent attributes from rare attributes; DCM tries to first identify all possible groups and then discover the matching between them
GLUE | Composite n:m | Uses a composite approach, as in LSD, but does not utilize a global schema | Accuracy of the element similarity depends on training
LSD | Corpus-based matching | Provides a new set of machine-learning based matchers for specific types of complex mapping expressions; provides a prediction criterion for a match or mismatch | Performance depends upon training data

Future work may include testing the schema extraction algorithm on real-world data and testing the efficiency of schema matching and schema merging over a variety of query interfaces.

Domain ontology identification

Ontology is a formal specification of a shared conceptualization [51]. This step is required for analyzing the area or specialization of a web page so that, in further steps, the appropriate data set can be efficiently placed in the query part of the page. It can be taxonomically classified into four different groups.

RDF annotations based ontology identification

Deitel et al. [52] presented an approach for learning ontology from resource description

framework (RDF) annotations of web resources. To perform the learning process, a



particular approach of concept formation is adopted, considering the ontology as a concept hierarchy, where each concept is defined in extension by a cluster of resources and in intension by the most specific common description of these resources. A resource description is an RDF subgraph containing all resources reachable from the considered resource through properties. This approach leads to the systematic generation of all possible clusters of descriptions from the whole RDF graph by incrementing the length of the description associated with each particular concept in the source graph.

Metadata annotations based ontology identification

Stojanovic et al. [53] present an approach for the automated migration of data-intensive web sites into the semantic web. They extract light ontologies from resources such as XML Schema or relational database schemas and try to build light ontologies from conceptual database schemas using a mapping process that can form the conceptual backbone for metadata annotations that are automatically created from the database instances.

Table Analysis based ontology identification

The work [54] presents an approach called Table Analysis for Generating Ontologies (TANGO) to generate an ontology based on HTML table analysis. TANGO discovers the constraints, and matches and merges mini-ontologies based on conceptual modelling extraction techniques. TANGO is thus a formalized technique for processing the format and content of tables that aims to incrementally build an appropriate reusable conceptual ontology.

DOM Based ontology identification

Zhiming Cui et al. published research [55] which works in the following steps:

• using the query interface information to generate a mini-ontology model;
• drawing the instances from the intermediate result pages;
• using various sources to generate ontology mappings and merging the ontology.

In the first step it employs a vision-based approach to extract the query interface [56], and in the second step data region discovery is done by employing a DOM parser to generate the DOM parse trees from the result pages. Based on parent length, adjacency and normalized edit distance, extraction of hierarchical data is done by a vision-based page segmentation algorithm. In the final step, merging is done by labelling instance pairs, mined from the result pages, into the domain ontology. They also give an idea of absent attribute annotation for finding absent attributes in the data records. The next generation semantic web framework is required to be able to handle knowledge-level querying and searches. The main area to be focused on for research is concept relation learning, to increase the efficiency of the system.


2.7.4 VALUES ALLOTMENT

Values allotment techniques can be taxonomically classified into the following groups.

A. Integrating the Databases for Values mapping

Integration of the databases with the query interfaces is done in this process. The search form interface brings together the attributes, and this step analyzes appropriate data values using the structural characteristics of the interface and the order of the attributes in the area as far as possible. The integration of query interfaces can provide a unified access channel for users to visit the databases which belong to the same area. For integrating interfaces, the core part is the dynamic query translator, which can translate the user's query into different forms [57][58]. There is also some scope for improvement by using the open directory hierarchy to detect more hypernymy relationships.

B. Fuzzy comprehensive evaluation methods

In this approach, mapping is done by fuzzy comprehensive evaluation [59], which maps the attributes of the form to the data values. First it analyzes a form mapping with a view, and then it selects the optimum matching result. Further scope of work in this area includes finding a model to detect the optimal configuration parameters to produce results with high accuracy.

C. Query Translation Technique

The query translation technique is used to issue queries across different deep web sources, translating queries to sources without prior knowledge. The framework takes a source query form and a target query form as input and outputs a query for the target form. Some methods of concern include the type-based search-driven translation framework, which leverages the "regularities" across the implicit data types of query constraints.

D. Patterns Based

He and Zhang et al. [60] found that query constraints of different concepts often share similar patterns, i.e. the same data type (page title, author name etc.), and encode more generic translational knowledge for each data type. This indicates an explicit declaration of data type that localizes the translatable patterns. Therefore, obtaining translation knowledge for each data type is more generic than identifying translations based on the source of information. For translation purposes it uses an extensible search-driven mechanism which performs type-based translation.


E. Hierarchical relations based

Another approach, published by Hao Liang et al. [61], maps on the basis of three constraints:

i. The same word in different schemas of query forms generally has the same semantics, with or without the same formalizations.

ii. The words of two different forms may have the same meaning; for this purpose, the use of a thesaurus or dictionaries is required. In addition, for dealing with special subjects like computers and electronics, specialized dictionaries related to that subject are required.

iii. There may be hierarchical relations between the words, e.g. X is a hypernym of Y if Y is a kind of X, and conversely Y is a hyponym of X. For example, a bus is a vehicle, which indicates that vehicle is a hypernym of bus; vehicle is also equivalent to automobile, and thus bus is semantically equivalent to automobile.
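A minimal sketch of checking such hypernym/hyponym relations is given below, assuming the NLTK library with its WordNet corpus installed; the example words are taken from the constraint above.

# Minimal sketch of checking hypernym/hyponym relations with WordNet,
# assuming NLTK and its WordNet corpus are installed
# (nltk.download('wordnet') on first use). Words are illustrative.
from nltk.corpus import wordnet as wn

def is_hypernym(general, specific):
    # True if some sense of `general` appears among the ancestors
    # (transitive hypernyms) of some sense of `specific`.
    general_synsets = set(wn.synsets(general))
    for synset in wn.synsets(specific):
        ancestors = set(
            hyper for path in synset.hypernym_paths() for hyper in path
        )
        if general_synsets & ancestors:
            return True
    return False

print(is_hypernym("vehicle", "bus"))   # expected: True (a bus is a vehicle)
print(is_hypernym("bus", "vehicle"))   # expected: False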

Domain ontology is widely used in different areas, and there is scope for a lot of work to be done on building ontologies. One of the issues is to make domain ontology inspection automatic for a particular domain in order to gain information about that domain.

F. Type-based predicate mapping

The type-based predicate mapping method [32] proposed by Z. Zhang focuses on text-type attributes with some constraints. The constraint is restricted to the query condition, for example any, all or exactly. The minimum search space is computed, but the cost of the query is not considered.

G. Cost model based

Another method of query translation, proposed by Fangjiao Jiang, Linlin Jia, Weiyi Meng and Xiaofeng Meng, is based on a cost model for range query translation [62] in deep web data integration. This work proposes a multiple regression cost model based on statistical analysis for global range queries that involve numeric range attributes. It works on the basis of the following concepts:

i. Using a statistics-based approach for translating the range query at the global level after proposing a multiple-regression cost model (MrCoM).

ii. Defining a pre-processing-based stepwise algorithm for selecting significant independent variables into the MrCoM.

iii. Classifying global range queries into three types and proposing different models for each of the three types of global range query.

iv. Carrying out an experimental process to verify the efficiency of the proposed method.


After going through the above process, the conclusion is that MrCoM has good fitness and that the query strategy selection of MrCoM is highly accurate.
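Purely as an illustration of fitting a multiple regression cost model (not the authors' MrCoM itself), the sketch below performs an ordinary least-squares fit with NumPy; the query features (range width, number of sub-queries) and observed costs are made-up numbers.

# Illustrative sketch (not the authors' MrCoM) of fitting a multiple
# regression cost model with least squares, assuming NumPy; the features
# and observed costs are invented for demonstration.
import numpy as np

# Each row: [1 (intercept), range_width, sub_query_count]
X = np.array([
    [1.0, 10.0, 1.0],
    [1.0, 50.0, 2.0],
    [1.0, 100.0, 4.0],
    [1.0, 200.0, 8.0],
])
y = np.array([0.9, 2.1, 4.2, 8.5])       # observed translation costs (s)

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predicted_cost(range_width, sub_queries):
    return coeffs @ np.array([1.0, range_width, sub_queries])

print(coeffs)
print(predicted_cost(150.0, 6.0))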

2.7.5 RESPONSE ANALYSIS AND NAVIGATION

Techniques for response analysis and navigation can be taxonomically categorized into the following parts.

A. Data Extraction Algorithm

Data extraction is another important aspect of deep web research, which involves extracting information from semi-structured or unstructured web pages and saving it as an XML document or a relational model. A lot of work has been done in this field [63]. Additionally, in some works, such as [64] and [65], researchers have paid more attention to the influence of semantic information on the deep web.

Jufeng Yang et al. [66] published work on data extraction in which a web page is converted into a tree; the internal nodes represent the structure of the page, and the leaf nodes preserve the information, which is compared with a configuration tree. Moreover, structure rules are used to extract data from HTML pages, and logical and application rules are applied to correct the extraction results. The model has four layers, among which the access schedule, extraction layer and data cleaner are based on the rules of structure, logic and application. The proposed model was tested on three intelligent systems, i.e. scientific paper retrieval, electronic ticket ordering and resume searching. The results show that the proposed method is robust and feasible.
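A minimal sketch of the first step, converting an HTML page into a tree whose internal nodes carry the tag structure and whose leaf nodes carry the text, is shown below using only the Python standard library; the sample page is illustrative.

# Minimal sketch of converting an HTML page into a tree: internal nodes
# hold the tag structure, leaf nodes hold the text. The sample page is
# illustrative.
from html.parser import HTMLParser

class TreeBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = {"tag": "document", "children": []}
        self.stack = [self.root]
    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)
    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()
    def handle_data(self, data):
        text = data.strip()
        if text:                                   # leaf node with content
            self.stack[-1]["children"].append({"text": text})

builder = TreeBuilder()
builder.feed("<html><body><h1>Results</h1><p>Price: 25 USD</p></body></html>")
print(builder.root)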

B. Iterative Deepening Search

Generally, a dynamic web search interface generates some output when it is given some input. The generation of output from input can be visualized as a graph based on the keyword relationships. After obtaining the interface information about the targeted resource, the primary query keywords are applied to generate new keywords, which can be used for further extraction. Iterative deepening search [67] was proposed by Ahmad Ibrahim et al.; this work is based on probability, iterative deepening search and graph theory. It has two phases: the first phase is about classification or identification of a resource behind the search interface into a certain domain, and in the second phase each resource is queried according to its domain. Even though the deep web contains very large databases, there is a need for an efficient technique for extracting information from the deep web in a relatively short time. Presently, most of the techniques do not work in a real-time domain, and a lot of time is consumed in processing to find the desired result.
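A minimal sketch of iterative deepening over a keyword-relationship graph is given below; the toy graph stands in for keywords harvested from result pages and is purely illustrative.

# Minimal sketch of iterative deepening over a keyword-relationship graph:
# starting from a primary query keyword, related keywords are explored with
# an increasing depth limit. The graph below is an illustrative stand-in.
KEYWORD_GRAPH = {
    "database": ["sql", "schema"],
    "sql": ["query", "join"],
    "schema": ["table"],
    "query": [], "join": [], "table": [],
}

def depth_limited(node, limit, collected):
    collected.add(node)
    if limit == 0:
        return
    for neighbour in KEYWORD_GRAPH.get(node, []):
        depth_limited(neighbour, limit - 1, collected)

def iterative_deepening(seed, max_depth):
    for depth in range(max_depth + 1):
        collected = set()
        depth_limited(seed, depth, collected)
        yield depth, sorted(collected)     # keywords usable as new queries

for depth, keywords in iterative_deepening("database", 2):
    print(depth, keywords)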

C. Object Matching Method

The object matching process has a vital role in the integration of deep web sources. For the integration of database information, a technique was proposed [68] to identify the same object from a variety of sources using well-defined, specific matching rules. This work gives a solution to the merge/purge problem, i.e. the problem of merging data from multiple sources in an effective manner. The sorted neighbourhood method is proposed as the solution to the merge/purge problem, and an alternative technique based on a clustering method is also proposed for comparison with the sorted neighbourhood method.
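A minimal sketch of the sorted neighbourhood idea, sorting records on a key and comparing only records inside a sliding window, is given below; the records, sorting key, window size and similarity test are illustrative assumptions.

# Minimal sketch of the sorted neighbourhood method for the merge/purge
# problem: records are sorted on a key and only records falling inside a
# sliding window are compared. Records, key and similarity test are
# illustrative.
from difflib import SequenceMatcher

records = [
    {"name": "J. Smith", "city": "Delhi"},
    {"name": "John Smith", "city": "Delhi"},
    {"name": "A. Kumar", "city": "Agra"},
    {"name": "Anil Kumar", "city": "Agra"},
]

def similar(a, b, threshold=0.6):
    same_city = a["city"] == b["city"]
    name_score = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return same_city and name_score >= threshold

def sorted_neighbourhood(recs, window=2):
    ordered = sorted(recs, key=lambda r: (r["city"], r["name"]))  # sorting key
    matches = []
    for i, rec in enumerate(ordered):
        for j in range(i + 1, min(i + window, len(ordered))):     # window scan
            if similar(rec, ordered[j]):
                matches.append((rec, ordered[j]))
    return matches

print(sorted_neighbourhood(records))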

D. String transformation Based Method

A technique to compare the same parameters of similar objects was proposed in the literature [69] through string transformation, which is independent of the application domain but uses the application domain to gain knowledge about attribute weights through very little user interaction. There are several future research areas for this algorithm, such as how to minimize the noise in the labels given by the user.

E. Training Based Method

A technique called PROM was defined in [70] to increase the accuracy of matching by using the constraints among the attributes, available through a training procedure or domain experts. It uses objects from different sources having different attributes, segmenting the pages into small semantic blocks defined on the basis of HTML tags. Future work in this regard could be to apply the profilers generated in one matching task to other related matching tasks to see the effect of transferring such knowledge.

F. Block Similarity Based Method

This technique was proposed by Ling, Liu et al. [71]; it segments pages into small semantic blocks based on HTML tags and changes the problem of object matching into a problem of block similarity. The method is based on high-accuracy record and attribute extraction. Due to the limitations of existing information extraction technology, object data extracted from HTML pages is often incomplete.

G. Text Based Method

A new method of object matching was proposed by Zhao et al. [72]; it uses the standard text-based TF/IDF cosine-similarity calculation to compute object similarity, and further extends the framework to a record-level object matching model, an attribute-level object matching model and a hybrid object matching model, which considers structured and unstructured features and multi-level errors in extraction. This work compares the performance of the unstructured, structured and hybrid object matching models and concludes that the hybrid method has superior performance.
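A minimal sketch of the text-based TF/IDF cosine-similarity step is given below, assuming scikit-learn; the two product descriptions and the decision threshold are illustrative.

# Minimal sketch of text-based object matching with TF/IDF and cosine
# similarity, assuming scikit-learn; descriptions and threshold are
# illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

record_a = "Canon EOS 1500D DSLR camera 24.1 MP with 18-55mm lens"
record_b = "Canon EOS 1500D 24.1 megapixel DSLR camera, 18-55 mm lens kit"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([record_a, record_b])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(round(score, 3), "-> same object" if score > 0.5 else "-> different objects")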

2.7.6 RELEVANCE RANKING

Surface web crawlers normally perform page-level ranking, but this does not fulfil the purpose of vertical search. The need for entity-level ranking of deep web resources has initiated a large amount of research in the area of entity-level ranking. Previously, most of the approaches concentrated on ranking structured entities based on the global frequency of relevant documents or web pages. Methods of relevance ranking can be taxonomically categorized into the following parts.

A. Data Warehouse Based Method

Many researchers, such as [73], initiated the use of a web data warehouse to pre-store all of the entities, with the capability of handling structured queries. This work proposed various language models for web object retrieval, such as an unstructured object retrieval model, a structured object retrieval model and a hybrid model having both structured and unstructured features. It concludes that the hybrid model is the superior one in the presence of extraction errors with changing labels.

B. Global Aggregation and Local Scoring Based Method

The literature survey indicates that various search engines are built to focus on a clear indication of the entity type and context pattern in the user request, as illustrated in reference [74]. This work concentrates on the ranking of entities by extracting the underlying theoretical model and producing a probabilistic ranking framework that can smoothly integrate both global and local information in ranking.

One of the latest techniques, named LG-ERM, was proposed by Kou et al. [75] for entity-level ranking; it is based on the global aggregation and local scoring of entities for deep web query purposes. This technique uses a large number of parameters affecting the rank of entities, such as the relationships between the entities, the style information of entities, the uncertainty involved in entity retrieval and the importance of web resources. Unlike traditional approaches, LG-ERM considers more rank-influencing factors, including the uncertainty of entity extraction, the style information of entities and the importance of Web sources, as well as the entity relationships. By combining local scoring and global aggregation in ranking, the query result can be made more accurate and effective in meeting users' needs. The experiments demonstrate the feasibility and effectiveness of the key techniques of LG-ERM.

In query interface analysis, heuristic rule based techniques are the simplest ones but do not even guarantee a solution. IRFA eliminates the problem of classification of high-dimensional and sparse search interface data; future work there is to identify other feature weighting techniques for the generation of random forests. Various techniques of search form schema mapping have been described and can be categorized as heuristic and formal techniques; in this regard, an appropriate measure has to be defined to filter out false matchings. In domain ontology identification, the main area to be focused on for research is concept relation learning, to increase the efficiency of the system. In values allotment there is scope for improvement by using a hierarchy to detect more hypernymy relationships. In fuzzy comprehensive evaluation methods, the future research area includes finding a model to detect the optimal configuration parameters to produce results with high accuracy, whereas the MrCoM model has good fitness with highly accurate query strategy selection. In response analysis and navigation, most of the techniques do not work in a real-time domain, and a large amount of time is consumed in processing to find the desired result. In relevance ranking, one of the latest techniques is LG-ERM, which considers a larger number of ranking-influencing factors such as the uncertainty of entity extraction, the style information of entities and the importance of web sources, as well as entity relationships; by combining the techniques of local scoring and global aggregation in ranking, the query result can be made more accurate and effective. There exists an improper mapping problem in the process of query interface analysis. An over-traffic load problem arises from unmatched semantic queries. Data integration suffers due to the lack of cost models at the global level in the process of proper values allotment. In distributed websites there is a challenge for query optimization and proper identification of similar contents in the process of response analysis and navigation. The ranking of deep web contents is difficult due to the lack of references to other sites.

2.8 A FEW PROPOSED PROTOCOLS FOR THE DEEP WEB CRAWLING PROCESS

Some examples of frameworks designed for extraction of deep web information are given

below.


2.8.1 SEARCH/RETRIEVAL VIA URL

Search/Retrieval via URL (SRU) protocol [76] is a standard XML-focused search protocol

for internet search queries that uses contextual query language (CQL) for representing

queries.

SRU is very flexible. It is XML-based, and the most common implementation is SRU via URL, which uses HTTP GET for message transfer. SRU follows the representational state transfer (REST) style and provides a technique for querying databases by simply submitting URL-based queries, for example:

URL?version=1.1&operation=searchRetrieve&query=dilip&maximumRecords=12

This protocol is only considered useful when the information about the resource is predictable, i.e. the query word is already planned from some source. With reference to our previous discussion, the task of SRU comes under values allotment and data extraction.
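As an illustration only, the following Python sketch issues an SRU searchRetrieve request of the kind shown above; the base URL is a placeholder and the record counting is deliberately simplified.

import requests
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/sru"          # assumed SRU endpoint (placeholder)
params = {
    "version": "1.1",
    "operation": "searchRetrieve",
    "query": "dilip",                        # CQL query
    "maximumRecords": "12",
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# The server replies with an XML searchRetrieveResponse; count the <record> elements.
root = ET.fromstring(response.content)
records = [el for el in root.iter() if el.tag.endswith("record")]
print(f"Received {len(records)} records")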

2.8.2 Z39.50

Z39.50 [77] is an ANSI/NISO standard that specifies a client/server-based protocol for

searching and retrieving information from remote databases. Clients using the Z39.50

protocol can locate and access data in multiple databases. The data is not centralized in any

one location. When a search command is initiated, the search is normally sent

simultaneously in a broadcast mode to multiple databases. The results received back are

then combined into one common set. In a Z39.50 session, the Z39.50 client software that

initiates a request for the user is known as the origin. The Z39.50 server software system

that responds to the origin's request is called the target. This protocol is useful for extracting data from multiple sources simultaneously, but the search phrase must still be defined using some other knowledge source. The task of this protocol can be considered part of our values allotment and data extraction phases.

2.8.3 OPEN ARCHIVES INITIATIVE PROTOCOL FOR METADATA

HARVESTING

The open archives initiative (OAI) protocol [78] for metadata harvesting (OAI-PMH)

provides an interoperability framework based on the harvesting or retrieval of metadata

from any number of widely distributed databases. Through the services of the OAI-PMH,

the disparate databases are linked by a centralized index. The data provider agrees to have

metadata harvested by the service provider. The metadata is then indexed by the harvesting

service provider and linked via pointers to the actual data at the data provider address. This

protocol has two major drawbacks: it does not make its resources accessible via dereferenceable URIs, and it provides only limited means of selective access to metadata. On the other hand, the protocol not only supports proper values allotment for a query but also provides knowledge harvested from different sources, and so it increases accuracy by decreasing the unmatched query load.
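The harvesting loop at the heart of OAI-PMH can be sketched as follows; the repository URL is a placeholder, while the ListRecords verb, the oai_dc metadata prefix and the resumptionToken mechanism are part of the standard protocol.

import requests
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "http://example.org/oai"          # assumed data provider endpoint (placeholder)

def harvest(base_url):
    # Pull metadata records page by page, following resumption tokens.
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params, timeout=60).content)
        for record in root.iter(OAI_NS + "record"):
            yield record
        token = root.find(f".//{OAI_NS}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

if __name__ == "__main__":
    for i, rec in enumerate(harvest(BASE_URL)):
        if i >= 5:
            break
        header = rec.find(f"{OAI_NS}header/{OAI_NS}identifier")
        print(header.text if header is not None else "(no identifier)")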

2.8.4 PROLEARN QUERY LANGUAGE

The ProLearn Query Language (PLQL), developed by Campi et al. [79] within the PROLEARN "Network of Excellence", is a query language for repositories of learning objects. PLQL is

primarily a query interchange format, used by source applications (or PLQL clients) for

querying repositories or PLQL servers. PLQL has been designed with the aim of effectively

supporting search over LOM, MPEG-7 and DC metadata. However, PLQL does not assume

or require these metadata standards. PLQL is based on existing language paradigms like the

contextual query language and aims to minimize the need for introducing new concepts.

Since the relevant metadata standards for learning objects have XML bindings, exact search is expressed through query paths over these hierarchies, borrowing concepts from XPath. Thus, PLQL combines two of the most popular query paradigms, allowing its implementations to reuse existing technology from both fields, i.e. approximate search (using information retrieval engines such as Lucene) and exact search (using XML-based query engines). It is a simple protocol that works as a mediator for transforming data extracted from one object to another, and it is applicable in the phase of querying the resource with predefined query words.

2.8.5 HOST LIST PROTOCOL

The Host-List Protocol (HLP) model [80] is a periodical script designed to provide a way to

inform web search engines about hidden or unknown hosts. The virtual hosting feature of the Apache web server allows one Apache installation to serve several distinct websites, and these virtual hosts are the target of the model. The HLP algorithm extracts the hidden hosts, in the form of virtual hosts, from the Apache web server using an open source PHP script together with an open standard technology, the XML language. It builds a frontier of the extracted hosts and then sends this hosts frontier automatically, through a cron job, to the web search engines that support the protocol via an HTTP request. The hosts frontier is an XML file that lists the virtual hosts extracted from the Apache configuration file "httpd.conf", after verifying the configuration to decide from where in "httpd.conf" the virtual hosts should be extracted. This protocol is designed to reduce the task of identifying virtual hosts on a server; it generates the host list in XML format for the crawler and provides the path for data extraction.
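The general idea can be sketched as below. The actual HLP implementation is a PHP script and its hosts frontier schema is not reproduced here, so the element names used in this Python sketch are assumptions.

import re
import xml.etree.ElementTree as ET

def extract_virtual_hosts(conf_path):
    # Collect ServerName entries from the Apache configuration file.
    hosts = []
    with open(conf_path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            match = re.match(r"\s*ServerName\s+(\S+)", line)
            if match:
                hosts.append(match.group(1))
    return hosts

def build_frontier(hosts):
    # Emit a simple XML hosts frontier; <hosts>/<host> element names are assumptions.
    root = ET.Element("hosts")
    for name in hosts:
        ET.SubElement(root, "host").text = name
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    frontier_xml = build_frontier(extract_virtual_hosts("httpd.conf"))
    print(frontier_xml)   # in HLP this frontier is sent to cooperating search engines by a cron job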

2.8.6 REALLY SIMPLE SYNDICATION

RSS stands for "Really Simple Syndication" [81]. It is a technique to easily distribute a list of headlines, update notices and sometimes content to a wide number of people, and it is used by computer programs that organize those headlines and notices for easy reading. Most people are interested in many websites whose content changes on an unpredictable schedule, and RSS is a convenient technique for notifying them of new and changed content. Notifications of changes to multiple websites are handled easily, and the results presented to the user are well organized and distinct from email. RSS works by having the website author maintain a list of notifications on the website in a standard way; this list of notifications is called an "RSS feed". Producing an RSS feed is very simple, and lakhs of websites, such as the BBC, the New York Times and Reuters, as well as many weblogs, now provide this feature. RSS provides very basic information to do its notification. A feed is made up of a list of items presented in newest to oldest order; each item usually consists of a simple title describing the item along with a more complete description and a link to a web page carrying the actual information being described. This mechanism of extracting content is very effective for sites updating their content daily, but the loophole is again the generation of a proper feed from the combination of the database and the generated page links. RSS falls into the category of values allotment and data extraction.
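A minimal reader for an RSS 2.0 feed, whose items carry a title, a link and a publication date, can be sketched as follows; the feed URL is a placeholder.

import requests
import xml.etree.ElementTree as ET

FEED_URL = "http://example.org/news/rss.xml"   # assumed feed location (placeholder)

def read_feed(url):
    # RSS 2.0 nests <item> elements under <rss>/<channel>.
    root = ET.fromstring(requests.get(url, timeout=30).content)
    for item in root.findall("./channel/item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        )

if __name__ == "__main__":
    for title, link, published in read_feed(FEED_URL):
        print(f"{published}  {title}  ->  {link}")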

2.8.7 SITEMAP PROTOCOL

The sitemaps protocol [82] allows a webmaster to inform search engines about URLs on a

website that are available for crawling. A sitemap is an XML file that lists the URLs for a

website. It allows webmasters to include additional information about each URL such as

when it was last updated, how often it changes and how important it is in relation to other

URLs in the website. This allows search engines to crawl the site more intelligently.

Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol. Sitemaps are particularly advantageous on websites where some contents are not linked from public pages, or where webmasters use rich Ajax or Flash content that is not normally processed by search engines. A sitemap submitted to a crawler helps it to find such hidden content, but it does not replace the existing crawl-based mechanisms that search engines already use to discover URLs. The sitemap protocol comes under the taxonomical categorization of values allotment and data extraction. Table 2.4 depicts the summary of different protocols for deep web information extraction.

Table 2.4: Summary of Different Protocols in the Context of Deep Web Information Extraction

SRU
Technique: Uses the REST protocol; sends encoded query words through an HTTP GET request.
Advantages: Simple XML-based request; independent of the underlying database.
Limitations: Query limited to 256 characters; the responding server must be equipped to analyse the query.

Z39.50
Technique: ANSI/NISO client/server-based protocol.
Advantages: Data can be extracted from multiple locations; access to data in multiple databases; various results are combined into one result set, regardless of their original format.
Limitations: Complex technique; searching is limited to the speed of the slowest server; no updates and support.

OAI-PMH
Technique: XML response over HTTP.
Advantages: Disparate databases are linked by a centralized index; simple HTTP-based request; much faster and independent of the database.
Limitations: Evaluation is restricted to XML; limitations in object access, exchange and transfer; no technique available for the user to know when the metadata was last harvested.

PROLEARN QUERY LANGUAGE
Technique: XML, XQuery and XPath based.
Advantages: Best suited for approximate search and exact search; supports hierarchical metadata structures.
Limitations: Complex to implement; the application development model is not mature.

HOST LIST PROTOCOL
Technique: Apache and XML based.
Advantages: Retrieval of URLs from virtual hosts.
Limitations: Extraction of hidden hosts is limited to the root.

RSS
Technique: XML based.
Advantages: Short bunches of information; real-time updates; feed generation and reading mechanisms are easy to implement.
Limitations: Depends upon the site administrator to generate the feed, because an automatic feed generator will read only anchor tags.

SITEMAP PROTOCOL
Technique: XML based; site administrator's protocol.
Advantages: Easy to generate; simple structure and node hierarchy.
Limitations: The site administrator's involvement is needed; pages hidden by Ajax or Flash must be mentioned explicitly.

2.9 COMPARATIVE STUDY OF DIFFERENT SEARCH ENGINES

We have conducted searches on different search engines. Some of them are surface web search engines, whose crawler frontier retrieves only anchor tags, and the others are deep web search engines, which retrieve deep web information. The results are shown below in Table 2.5 and Table 2.6. Fig. 2.15 depicts the variation of results count versus query words for different search engines. Fig. 2.16 depicts the snapshots showing results returned by different surface web search engines in response to the query words, and Fig. 2.17 depicts the snapshots showing results returned by different deep web search engines in response to the query words.


Table 2.5: Query Words versus Results Counts for Surface Web Search Engines

Query words           Google.com     Yahoo.com      Bing/live.com   Ask.com
cloud computing       34,200,000     1,510,000      15,300,000      13,100,000
global warming        30,200,000     2,800,000      14,100,000      7,582,000
optical modulation    1,800,000      57,800         1,600,000       1,290,000
Walmart               20,800,000     194,000,000    16,300,000      3,340,000
best buy              296,000,000    44,200,000     551,000,000     188,000,000
Dictionary            191,000,000    7,880,000      68,400,000      27,400,000
Astrology             25,300,000     40,400,000     22,600,000      7,790,000
Insurance             363,000,000    41,600,000     262,000,000     49,730,000

Table 2.6: Query Words versus Results Counts for Deep Web Search Engines

Query words           science.gov    deepdyve.com   biznar.com   Worldwidescience.org   completeplanet.com
cloud computing       32,318,709     1,229,954      108,609      52,763                 5000
global warming        14,436,593     563,694        380,240      214,452                3702
optical modulation    1,774,195      1,513,665      116,407      442,284                621
Walmart               5,960,514      87,143         234,180      244                    41
best buy              315,040,211    72,574         2,613,762    111,272                2208
Dictionary            56,975,239     101,835        397,173      122,385                1087
Astrology             14,101,342     87,964         132,384      3,404                  290
Insurance             92,056,043     170,433        2,795,986    291,304                5000

Fig. 2.15: Variation of results count versus query words for different search engines


Fig. 2.16: Snapshots Showing Results from Different Surface Web Search Engines in Response to Query Words

Fig. 2.17: Snapshots Showing Results from Different Deep Web Search Engines in Response to Query Words

From the analysis of these results, it is clear that surface web search engines return a larger number of results than deep web search engines, because at present deep web search engines are less efficient owing to the lack of technological advancement and standards. Results counts for query words related to technical literature are relatively higher for deep web search engines, while results counts for general query words are higher for surface web search engines due to the numerous public postings and search engine optimization work.

2.10 EXISTING DEEP WEB CRAWLERS

The deep web stores its data behind HTML forms. Traditional web crawlers can efficiently crawl the surface web, but they cannot efficiently crawl the deep web. For crawling deep web contents, various specialized deep web crawlers have been proposed in the literature, but they have limited capabilities, and a large volume of deep web data remains undiscovered because of these limitations. In this section, existing deep web crawlers are analyzed to find their advantages and limitations, with particular reference to their capability to crawl the deep web contents efficiently.

2.10.1 APPLICATION/TASK SPECIFIC HUMAN ASSISTED APPROACH

Various crawlers have been proposed in the literature to crawl the deep web. One deep web crawler architecture was proposed by Sriram Raghavan and Hector Garcia-Molina [4]. In this work, a task-specific, human-assisted approach is used for crawling the hidden web. Two basic challenges are associated with deep web search: firstly, the volume of the hidden web is very large and, secondly, there is a need for a user friendly crawler that can handle search interfaces efficiently.

The advantages of the HiWE architecture are that its application/task specific approach allows the crawler to concentrate on relevant pages only, and that automatic form filling can be done with the human assisted approach. Limitations of this architecture are that it is not precise with respect to partially filled forms and that it is not able to identify and respond to simple dependencies between form elements.

2.10.2 FOCUSED CRAWLING WITH AUTOMATIC FORM FILLING

BASED APPROACH

A focused crawler architecture was proposed by Luciano Barbosa and Juliana Freire. This work [5] suggests a strategy that deals with the problem of performing a wide search while avoiding the crawling of irrelevant pages. The best way is to use a focused crawler which only crawls pages that are relevant to a particular topic. It uses three classifiers to focus its search: a page classifier, which classifies pages as belonging to topics; a form classifier, which is used to filter out useless forms; and a link classifier, used to identify links that are likely to lead, in one or more steps, to pages that contain searchable form interfaces.

Advantages of this architecture are that only topic specific forms are gathered and that unproductive searches are avoided through the application of stopping criteria. A limitation of this architecture is that the quality of forms is not ensured, since thousands of forms are retrieved at a time. A further scope of improvement is that such focused crawlers can be used for building domain specific crawlers such as a hidden web database directory.

2.10.3 AUTOMATIC QUERY GENERATION FOR SINGLE ATTRIBUTE DATABASE BASED APPROACH

A novel technique [83] for downloading textual hidden web contents was proposed by Alexandros Ntoulas, Petros Zerfos and Junghoo Cho. There are two basic challenges in implementing a hidden web crawler: firstly, the crawler should understand and model a query interface; secondly, the crawler should generate meaningful queries to issue to that interface. To address these challenges, this work suggests how a crawler can automatically generate queries so that it can discover and download the hidden web pages. It mainly focuses on textual databases that support single-attribute keyword queries (see Fig. 2.18). In contrast, a structured database often contains multi-attribute relational data (e.g., a book on the Amazon web site may have the fields title='Data Structure using C', author='A.K.Sharma' and isbn='9788131755662') and supports multi-attribute search interfaces (see Fig. 2.19).

Fig. 2.18: A Single Attribute Search Interface

Fig. 2.19: A Multi-attributes Search Interface

An advantage of this technique is that queries are generated automatically without any human intervention, so crawling is efficient; its limitation is that it focuses only on single attribute databases. An extension of this work would be to include multi-attribute databases. A simplified sketch of the automatic query generation idea is given below.
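This is only an illustration of the general idea, not the exact query selection policy of [83]; the search_and_download helper is a hypothetical function standing for submitting a keyword to the single-attribute interface and returning the texts of the result pages.

import re
from collections import Counter

def candidate_terms(documents):
    # Term frequencies over the pages downloaded so far.
    counts = Counter()
    for text in documents:
        counts.update(re.findall(r"[a-z]{4,}", text.lower()))
    return counts

def crawl_textual_database(search_and_download, seed="data", max_queries=50):
    # Start from a seed keyword, then repeatedly pick the most frequent
    # not-yet-issued term from the harvested pages as the next query.
    issued, corpus = {seed}, list(search_and_download(seed))
    for _ in range(max_queries - 1):
        ranked = candidate_terms(corpus)
        next_kw = next((t for t, _ in ranked.most_common() if t not in issued), None)
        if next_kw is None:
            break
        issued.add(next_kw)
        corpus.extend(search_and_download(next_kw))   # grow the local copy of the hidden database
    return issued, corpus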

2.10.4 TASK SPECIFIC AND DOMAIN DEFINITION BASED APPROACH

A technique for deep web crawling based on a task-specific approach was proposed by Manuel Alvarez et al. In this approach [84], a set of domain definitions is provided to the crawler, where every domain definition defines a particular data collecting task, and the crawler recognizes the relevant query forms by using these definitions. The functioning of this model is based on a shared list of routes (URLs). The overall crawling process consists of several sub crawling processes which may run on different machines; each crawling process selects a route from the route list, analyzes it and downloads the relevant documents.

The advantage of this algorithm is that the results are very effective for various real-world data collecting tasks. Further scope in this regard is to make the algorithm capable of automatically generating new queries from the results of the previous ones.

2.10.5 FOCUSED CRAWLING BASED APPROACH

A focused crawler named DeepBot [12], used for accessing hidden web content, was proposed by Manuel Álvarez et al. In this work an algorithm based on focused crawling is developed for DeepBot to extract deep web contents. The challenges in crawling deep web contents can be broadly classified into two parts, i.e. crawling the hidden web at the server side and crawling the hidden web at the client side.

The advantages of the DeepBot crawler are that a form may be used in crawling even if it has some fields that do not correspond to any attribute of the domain, and that its accuracy is high when more than one associated text is present in a field. The context of the whole form is also considered in the crawling mechanism, and DeepBot is fully compatible with JavaScript sources. A disadvantage of this type of crawling is that all the attributes of the form should be filled completely and precisely, and problems arise when crawling sources with session mechanisms. As future work, the DeepBot crawler can be modified so that it is able to generate new queries in an automatic fashion.

2.10.6 SAMPLING BASED APPROACH

An approach [85] to deep web crawling by a sampling based technique was proposed by Jianguo Lu et al. One of the major challenges while crawling the deep web is the selection of queries so that most of the data can be retrieved at a low cost. A general method is proposed which maximizes the coverage of the data source while minimizing the communication cost. The strategy behind this technique is to minimize the number of queries issued by maximizing the proportion of unique returns of each query.

An advantage of this technique is that it is a low cost technique that can be used in practical applications; its limitation is that the efficiency of the crawler reduces when the sample and pool sizes are very large. As future work, modifications can be made to lower the overlapping rate so that the maximum number of web pages can be downloaded.
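The essence of this strategy can be illustrated with a greedy selection over a sampled document pool, in which each new query is chosen to cover as many not-yet-retrieved documents as possible. This is only an illustration of the idea, not the authors' algorithm.

def greedy_query_plan(sample_pool, candidate_terms, budget=20):
    """sample_pool: dict doc_id -> set of terms appearing in the document."""
    covered, plan = set(), []
    for _ in range(budget):
        best_term, best_gain = None, 0
        for term in candidate_terms:
            # Number of still-uncovered documents this query would return.
            gain = sum(1 for doc, terms in sample_pool.items()
                       if term in terms and doc not in covered)
            if gain > best_gain:
                best_term, best_gain = term, gain
        if best_term is None:
            break                      # no remaining term adds new documents
        plan.append(best_term)
        covered |= {doc for doc, terms in sample_pool.items() if best_term in terms}
    return plan, covered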

2.10.7 DOMAIN SPECIFIC SEARCH WITH RELEVANCY BASED

APPROACH

An architectural framework [11] of a crawler for locating deep web repositories using learning multi-agent systems was proposed by J. Akilandeswari et al. This work uses a multi-agent web mining system to discover pages from the hidden web. A multi-agent system is useful when there are difficulties in storing a large amount of data in database indices. The proposed system has a variety of information agents interacting with each other to learn about their environment so that they can retrieve the desired information effectively.

Advantages of this framework are that crawling is efficient because the search is concentrated on a specific domain, and that the crawling technique extracts the relevant web forms by using the learning multi-agents, which learn effectively and thereby reduce form retrieval time. A limitation of this framework is that it is not easy to maintain multiple agents. In future, this framework can be extended so that genetic algorithms are implemented in the crawler to perform a broader search and improve the harvest rate.

2.10.8 DOMAIN-SPECIFIC DEEP WEB SOURCES DISCOVERY BASED

APPROACH

A technique [86] of domain-specific deep web sources discovery was proposed by Ying Wang et al. It is difficult to find the right sources and then query over them online in a huge collection of useful databases. Hence, this work presents a new method that imports focused crawling technology to automatically accomplish deep web sources discovery. Firstly, websites are located for domain-specific data sources based on focused crawling. Secondly, it is judged whether the website contains a deep web query interface. Lastly, it is judged whether the deep web query interface is relevant to a given topic. The implementation of focused crawling technology facilitates the identification of deep web query interfaces located in a specific domain and the capture of relevant pages associated with the topic, and this method dramatically reduces the quantity of pages the crawler must visit to crawl the deep web.

The advantage of this technique is that fewer pages need to be crawled, since it applies focused crawling along with a relevancy search of the obtained results with respect to the topic. Its limitation is that it does not take semantics into account, i.e. a particular query result could have several meanings. As future work, this technique can be extended to include semantics while querying.

2.10.9 INPUT VALUES FOR TEXT SEARCH INPUTS BASED APPROACH

A technique [87] for surfacing deep web contents was proposed by Jayant Madhavan et al. Surfacing the deep web is a very complex task because HTML forms can be associated with different languages and different domains. Further, a large number of forms contain text inputs that require the submission of valid input values. For this purpose the authors have proposed a technique for choosing input values for text search inputs that accept keywords.

An advantage of this technique is that it can efficiently navigate the search against various possible input combinations. The limitations, and hence the future work in this regard, are to modify the technique to deal with forms associated with JavaScript and to analyze in more depth the dependencies between the values in the various inputs of a form.

2.10.10 LABEL VALUE SET (LVS) TABLE BASED APPROACH

A framework [26] of a deep web crawler was proposed by Xiang Peisu et al. The proposed framework addresses the actual mechanics of crawling the deep web. This work deals with the problem of crawling a subset of the currently uncrawled dynamic web contents, and it concentrates on extracting contents from the portion of the web which is hidden behind search forms in large searchable databases. The framework presents a model of forms with a form submission facility. One of the important characteristics of this crawler is that it uses a Label Value Set (LVS) table; access and additions to the LVS table are done by the LVS manager, which also works as an interface for different application specific data sources.

An advantage of this deep web crawler is that it uses additional modules such as the LVS table, which help the crawler to model the form; if the crawler uses the LVS table, the number of successful form submissions increases. A limitation of this framework is that if the crawler is unable to extract a label (E), the value of the domain (D) in the LVS is repeated.


2.10.11 MINIMUM EXECUTING PATTERN (MEP) & ADAPTIVE QUERY

BASED APPROACH

A technique for crawling deep web contents through query forms was proposed by Jun Liu et al. The proposed technique [88] is based on the minimum executing pattern (MEP). The query is processed by a deep web adaptive query method, by which the query interface is expanded from a single text box to a MEP set, and a MEP associated with a keyword vector is selected for producing an optimum local query.

Advantages of this technique are that it can handle different web forms very effectively and that its efficiency is very high compared to that of non prior knowledge methods; it also minimizes the problem of "data islands" to some extent. The limitation of this technique is that it does not produce good results for deep web sites which have a limited result set size. Further, boolean logic operators such as AND, OR and NOT cannot be used in queries, which can be a part of the future work.

2.10.12 ITERATIVE SELF ORGANIZING DATA ANALYSIS (ISODATA)

BASED APPROACH

A novel automatic technique for classifying deep web sources was proposed by Huilan Zhao [89]. The classification of deep web sources is a very critical step for integrating large scale deep web data. The proposed technique is based on the iterative self organizing data analysis (ISODATA) technique, which is a hierarchical clustering method. The advantage of this technique is that it allows the user to browse the relevant and valuable information. The method for extracting characteristics of web pages can be further improved to better expose the valuable information.

2.10.13 CONTINUOUSLY UPDATE OR REFRESH THE HIDDEN WEB

REPOSITORY BASED APPROACH

A framework for an incremental hidden web crawler was proposed [90] by Rosy Madaan et al. In a world of rapidly changing information, it is highly desirable to maintain and extract up-to-date information. For this, it is required to verify whether a web page has been changed or not, and the time period between two successive revisits needs to be adjusted based on the probability of updation of the web page. In this work, an architecture is proposed that introduces a technique to continuously update or refresh the hidden web repository.

The advantage of this incremental hidden web crawler is that the fetched information remains updated even if web pages change; its limitation is that an efficient indexing technique is required to maintain the web pages in the repository. In future, a modified search engine architecture based on the incremental hidden web crawler, using some indexing technique, can be designed to index the web pages stored in the repository. A simple sketch of the revisit-interval adjustment is given below.
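The halving and doubling policy used here is an assumption chosen only for illustration; it is not the exact update probability model of [90].

def next_revisit_interval(current_interval_hours, changed,
                          min_hours=1.0, max_hours=24 * 30):
    # Shrink the revisit interval for pages observed to change, widen it otherwise.
    if changed:                                   # page differed from the stored copy
        interval = max(min_hours, current_interval_hours / 2.0)
    else:                                         # page unchanged since the last visit
        interval = min(max_hours, current_interval_hours * 2.0)
    return interval

# Example: a page that keeps changing is revisited more and more frequently.
interval = 24.0
for changed in [True, True, False, True]:
    interval = next_revisit_interval(interval, changed)
    print(f"next revisit in {interval:g} hours")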

2.10.14 REINFORCEMENT LEARNING BASED APPROACH

An efficient deep web crawling technique using reinforcement learning was proposed by Lu Jiang et al. [91]. In this technique, the deep web crawler acts as the agent and the deep web database plays the role of the environment; the crawler identifies the query to be submitted to the deep web database depending upon a Q-value.

The advantage of this technique is that the deep web crawler itself decides which crawling strategy to use on the basis of its own experience. Further, it also permits the use of different characteristics of query keywords. Further scope of this technique is that an open source platform can be developed for deep web crawling.
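The reinforcement learning setting can be illustrated with a simple Q-value update in which the reward of a query is the number of new records it returns; the epsilon-greedy selection and this reward definition are illustrative assumptions rather than the exact formulation of [91], and submit_query is a hypothetical helper.

import random

def choose_query(q_values, epsilon=0.1):
    if random.random() < epsilon:                      # explore a random query
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)             # exploit the best known query

def update_q(q_values, query, reward, alpha=0.5):
    # Exponential moving average of the observed rewards for this query.
    q_values[query] = (1 - alpha) * q_values.get(query, 0.0) + alpha * reward

def crawl(submit_query, candidate_queries, steps=100):
    q_values = {q: 0.0 for q in candidate_queries}
    seen = set()
    for _ in range(steps):
        query = choose_query(q_values)
        results = set(submit_query(query))             # ids of records returned by the form
        reward = len(results - seen)                    # only newly discovered records count
        seen |= results
        update_q(q_values, query, reward)
    return seen, q_values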

2.11 SUMMARY OF VARIOUS DEEP WEB CRAWLER

ARCHITECTURES

From the literature analysis of some of the deep web crawlers, it is concluded that every crawler has some relative advantages and limitations. A tabular summary is given below in Table 2.7, which summarizes the techniques, advantages and limitations of some of the important deep web crawlers.

Table 2.7: Summary of Various Deep Web Crawler Architectures

Raghavan et al., 2001
Technique: Extraction is application/task specific.
Advantages: Extraction of irrelevant pages is minimized.
Limitations: Crawling is not precise due to the possibility of missing some pages.

Barbosa et al., 2005
Technique: Focused crawling with automatic form filling.
Advantages: Crawling is highly relevant, which saves time and resources.
Limitations: Quality of forms is not ensured and form verification is a complex task.

Ntoulas et al., 2005
Technique: Based on automatic query generation.
Advantages: Efficient crawling due to crawler generated queries.
Limitations: Does not cover the frequently used multi-attribute databases.

Alvarez et al., 2006
Technique: Task specific approach; a set of domain definitions is provided to the crawler, where every domain definition defines a particular data collecting task, and the crawler recognizes the relevant query forms by using these definitions.
Advantages: Results are very effective for various real-world data collecting tasks.
Limitations: The algorithm could be modified to automatically generate new queries from the results of the previous ones.

Alvarez et al., 2007
Technique: Focused crawling for extracting deep web contents.
Advantages: Accuracy is high for fields having more than one associated text; the context of the whole form is used; fully compatible with JavaScript sources.
Limitations: The form should be filled precisely and completely; difficulty with sources having session mechanisms.

Lu et al., 2007
Technique: Sampling data from the database.
Advantages: Low cost; efficient in practical applications.
Limitations: Efficiency is lower in the case of large sample and pool sizes.

Akilandeswari et al., 2008
Technique: Use of a multi-agent system on a large database.
Advantages: Time efficient, fault tolerant and easy handling due to multi-agents.
Limitations: Cost may be high due to maintenance of the multi-agents.

Wang et al., 2008
Technique: Focused crawling, with results located in a specific domain.
Advantages: Crawls fewer pages due to the domain specific technique.
Limitations: Sometimes semantics may be wrong due to crawling of useless pages.

Madhavan et al., 2008
Technique: Input values for text search inputs are selected; identification of inputs for a particular type of values.
Advantages: Can efficiently navigate the search against various possible input combinations.
Limitations: The technique could be modified to deal with forms associated with JavaScript, and the dependency between the values in various inputs of a form could be analyzed in more depth.

Peisu et al., 2008
Technique: Proposes a model of forms; form submission with four additional modules and an LVS table.
Advantages: Successful form submissions increase with the use of the LVS table.
Limitations: If the crawler is unable to extract a label (E), the value of the domain (D) in the LVS is repeated.

Liu et al., 2009
Technique: Based on the concept of the minimum executable pattern (MEP).
Advantages: Effective handling of different web forms; higher efficiency than non prior knowledge methods; reduces the problem of "data islands".
Limitations: Results are not good for websites having a limited result set size; boolean logic operators (AND, OR, NOT) cannot be used.

Zhao, 2010
Technique: Based on the iterative self organizing data analysis (ISODATA) technique.
Advantages: Allows the user to browse the relevant and valuable information.
Limitations: The method for extracting characteristics of web pages can be further improved.

Madaan et al., 2010
Technique: Regularly updates the web repository.
Advantages: Fetched web pages are regularly updated.
Limitations: Indexing of the web pages is required.

Jiang et al., 2010
Technique: Based on reinforcement learning; the deep web crawler works as the agent and the deep web database plays the role of the environment; the query to be submitted is identified using a Q-value.
Advantages: The crawler itself decides which crawling strategy to use on the basis of its own experience; permits the use of different characteristics of query keywords.
Limitations: Can be developed further as an open source platform for deep web crawling.

2.12 COMPARISON OF THE VARIOUS DEEP WEB CRAWLER

ARCHITECTURES

From the literature analysis and the relative comparison of the crawlers with reference to efficient deep web crawling, it is observed that each deep web crawler has limitations in efficiently crawling the deep web. Based on this analysis, a comparison [92] of some of the various deep web crawler architectures is shown in Table 2.8 and Table 2.9. The comparison is done on the basis of parameters such as the technique used, need of user support, reflection of changes in web pages, automatic query selection, accuracy of the data fetched, database sampling and focused crawling.

Table 2.8: Comparison of the Various Deep Web Crawler Architectures

Crawling techniques compared: (1) Crawling the hidden web, (2) Searching for hidden web databases, (3) Downloading textual hidden web contents, (4) Domain specific deep web sources discovery, (5) Approach to deep web crawling by sampling, (6) Framework for incremental hidden web crawler, (7) Locating deep web repositories using multi-agent systems.

Technique used: (1) Application/task specific human assisted approach. (2) Focused crawling with automatic form filling. (3) Automatic query generation for single attribute databases. (4) Domain specific relevancy based search. (5) Sampling of data from the database. (6) Frequent updation of the web repository. (7) A multi-agent system is helpful in locating deep web contents.

Need of the user's support: (1) A human interface is needed in form filling. (2) No human interface is needed in form filling. (3) The user monitors the filling process. (4) Does not require the user's help. (5) Not mentioned. (6) Does not require the user's help. (7) Users monitor the process.

Reflection of web page changes: (1) Does not reflect such changes. (2) No concept involved for dealing with them. (3) Does not reflect such changes. (4) Such changes are not incorporated. (5) Does not reflect such changes. (6) Keeps refreshing the repository for such changes. (7) Does not reflect such changes.

Automatic query selection: (1) Such a feature has not been incorporated. (2) Nothing is mentioned about such a feature. (3) Query selection is automatic. (4) Nothing is mentioned about such a feature. (5) Automated query selection is done. (6) Nothing is mentioned about such a feature. (7) Nothing is mentioned about such a feature.

Accuracy of data fetched: (1) Data fetched can be wrong if the web pages change. (2) Data fetched can be wrong if the web pages change. (3) Data fetched can be wrong if the web pages change. (4) Data fetched can be wrong if the web pages change. (5) Data fetched can be wrong if the web pages change. (6) Only correct data is obtained since the repository is refreshed at regular time intervals. (7) Data fetched can be wrong if the web pages change.

Database sampling: (1) Such a feature is not incorporated. (2) Not incorporated. (3) Not incorporated. (4) Not incorporated. (5) The large database is sampled into smaller units. (6) Not incorporated. (7) Not incorporated.

Focused crawling: (1) Focused crawling is not involved although the approach is task specific. (2) Focused crawling is the basis of this work. (3) Nothing is mentioned about focused crawling. (4) Focused crawling is done. (5) Nothing is mentioned about such a concept. (6) Nothing is mentioned about it. (7) Nothing is mentioned about it.


Table 2.9: Comparison of the Various Deep Web Crawler Architectures

Crawling techniques compared: (1) A Framework of Deep Web Crawler, (2) Google's Deep-Web Crawl, (3) Efficient Deep Web Crawling Using Reinforcement Learning, (4) Crawling Deep Web Contents through Query Forms, (5) Study of Deep Web Sources Classification Technology, (6) A Task-Specific Approach for Crawling the Deep Web.

Technique used: (1) Proposes a model of forms with a form submission facility and four additional modules with an LVS table. (2) Input values for text search inputs are selected; identification of inputs for a particular type of values. (3) Based on reinforcement learning; the deep web crawler works as the agent and the deep web database plays the role of the environment; the query to be submitted is identified using a Q-value. (4) Minimum executable pattern (MEP) based adaptive query technique. (5) Based on the iterative self organizing data analysis (ISODATA) technique. (6) Task specific approach; a set of domain definitions is provided to the crawler, where every domain definition defines a particular data collecting task, and the crawler recognizes the relevant query forms by using these definitions.

Need of user's support: (1) Human interaction is not needed for modeling the form. (2) A human interface is needed in form filling. (3) Human interaction is required. (4) Needs user support. (5) No need of user support. (6) No need of user support.

Reflection of web page change: (1) No effect of web page changes on the result. (2) These changes affect the result because it is based on the content of the pages. (3) These changes affect the result, but changes may generate an error. (4) Yes. (5) Yes. (6) Yes.

Automatic query selection: (1) Yes. (2) No. (3) Yes. (4) Yes. (5) –. (6) No.

Accuracy of data fetched: (1) Good. (2) Data can be wrong if the web pages change. (3) High. (4) Average. (5) –. (6) Average.

Database sampling: (1) No. (2) Yes. (3) Yes. (4) Yes. (5) –. (6) Yes.

Focused crawling: (1) Yes. (2) Yes. (3) Yes. (4) Yes. (5) –. (6) It is task based crawling.

A critical look at the available literature indicates that each deep web crawler has certain limitations with reference to its capability of efficiently crawling deep web contents, and that the following issues need to be addressed while designing an efficient deep web crawler.


1. The collection of knowledge base query words must be done in an optimized, effective and inter-operable manner.

2. Effective context identification is needed for recognizing the type of domain.

3. A crawler should not overload any of the web servers that are used for extracting the deep web information.

4. The overall extraction mechanism of deep web crawling must be compatible, flexible, inter-operable and configurable with reference to the existing web server architecture.

5. There must be a procedure to define the number of query words to be used to extract the deep web data with element value assignments; for example, a date wise fetching mechanism can be implemented to limit the number of query words.

6. The architecture of the deep web crawler must be based on global level value assignment; for example, inter-operable element value assignment can be used with a surface search engine, and clustering of query words can be done based on ontology as well as the quantity of keywords.

7. For the sorting process, the automated information extraction from the deep web must be analyzed and resolved through two major activities, i.e. exact context identification and enrichment of the collection in the knowledge base at the search engine.

8. A technique must be implemented on the server to identify deep domain value allotment for search through the GET method of the HTTP protocol, so that the further step of context identification becomes possible.

9. The deep web crawler must be robust against unfavourable conditions. For example, if a resource is removed after the crawling process, it is impossible to establish the exact result pages of the specific query words, or even the existence of a relation between the URL and the query word.

10. The crawler must have functionality enabling quicker detection of failure, error or unavailability of any resource.

In this work the above issues have been addressed; the details of each are given in the later chapters of this thesis.


CHAPTER 3

QUERY WORD RANKING ALGORITHMS AND

CROSS-SITE COMMUNICATION: A REVIEW

3.1 INTRODUCTION

The web is expanding day by day and people generally rely on search engines to explore it. In such a scenario the service provider is expected to provide proper, relevant and quality information to the internet user against the query submitted to the search engine, and it is of course a challenge to do so using the web page contents and the hyperlinks between the web pages. In the subsequent sections, an analysis and comparison of web page ranking algorithms based on various parameters is carried out with a view to finding their advantages and limitations for the ranking of web pages. Based on the analysis of the different web page ranking algorithms, a comparative study is done to find their relative strengths and limitations and thereby the further scope of research in web page ranking algorithms. Fig. 3.1 shows the working of a typical web search engine, including the flow graph for a query searched by a web user.

Fig. 3.1: Working of Web Search Engine

An efficient ranking of query words plays a major role in efficient searching. There are various challenges associated with the ranking of web pages; for example, some web pages are made only for navigation purposes and some pages of the web do not possess the quality of self descriptiveness. For the ranking of web pages, several algorithms have been proposed in the literature [93][94].

3.1.1 WEB MINING

Web mining is the technique of classifying web pages and internet users by taking into consideration the contents of the web pages and the past behaviour of internet users; it thereby helps predict which web pages an internet user will view in the future. Web mining consists of three branches, i.e. web content mining (WCM), web structure mining (WSM) and web usage mining (WUM). WCM is responsible for exploring the proper and relevant information from the contents of the web. WSM is used to find out the relations between different web pages by processing the structure of the web. WUM is responsible for recording the user profile and user behaviour inside the log files of the web. WCM mainly concentrates on the structure of the document, whereas WSM explores the structure of the links inside the hyperlinks between different documents and classifies the pages of the web. The number of out-links, i.e. links from a page, and the number of in-links, i.e. links to a page, are very important parameters in the area of web mining. The popularity of a web page is generally measured by the fact that it is referred to by a large number of other pages, and the importance of a web page may be adjudged by the large number of out-links it contains. Hence WSM has become a very important area of research in the field of web mining [93][94][95][96][97]. Fig. 3.2 shows the general classification of web mining.

Fig. 3.2: Classification of Web Mining

3.1.2 EXISTING IMPORTANT WEB PAGE RANKING ALGORITHMS

Two graph based page ranking algorithms, i.e. Google's PageRank proposed by Brin and Page in 1998 and Kleinberg's hypertext induced topic selection (HITS) algorithm proposed by Kleinberg in 1998, are used successfully and traditionally in the area of web structure mining. Both of these algorithms give equal weight to all links when deciding the rank score.

A. Page Rank Algorithm

The PageRank algorithm is the most commonly used algorithm for ranking web pages. Its working depends upon the link structure of the web pages. The algorithm is based on the concept that if a page receives links from important pages, then the links from this page to other pages are also to be considered important. PageRank considers the back links in deciding the rank score: if the sum of the ranks of the back links of a page is large, then the page is given a large rank [93][98][99]. An example of back links is shown in Fig. 3.3 below: A is the back link of B and C, while B and C are the back links of D.

Fig. 3.3: Illustration of Back Links

A simplified version of PageRank is given by equation 3.1:

PR(u) = Σ_{v ∈ B_u} PR(v) / L(v)                                  (3.1)

where the PageRank value PR(u) for a web page u depends on the PageRank values PR(v) of each web page v in the set B_u (the set of all pages linking to web page u), divided by the number L(v) of links from page v.
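Equation 3.1 can be evaluated iteratively; the following sketch applies one update of the simplified formula, starting from uniform ranks, to the back-link example of Fig. 3.3 (the damping factor used by the full PageRank algorithm is omitted here to match the simplified formula).

def pagerank(out_links, iterations=1):
    # out_links maps each page to the list of pages it links to.
    pages = list(out_links)
    rank = {p: 1.0 / len(pages) for p in pages}          # uniform starting ranks
    for _ in range(iterations):
        new_rank = {p: 0.0 for p in pages}
        for v, targets in out_links.items():
            if not targets:
                continue                                 # dangling pages distribute no rank here
            share = rank[v] / len(targets)               # PR(v) / L(v)
            for u in targets:
                new_rank[u] += share                     # summed over all v in B_u
        rank = new_rank
    return rank

# The back-link example of Fig. 3.3: A links to B and C; B and C link to D.
print(pagerank({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}))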

B. HITS Algorithm

The HITS algorithm ranks a web page by processing the in-links and out-links of the web pages. In this algorithm a web page is named an authority if the page is pointed to by many hyperlinks, and a web page is named a hub if the page points to various hyperlinks. An illustration of hubs and authorities is shown in Fig. 3.4.


Fig. 3.4: Illustration of Hub and Authorities

HITS is, technically, a link based algorithm. In the HITS [100] algorithm, the web pages are first collected by analyzing their textual contents against a given query; after this collection step, the algorithm concentrates on the structure of the web only, neglecting the textual contents. The original HITS algorithm has the following problems:

(i) A high rank value is given to some popular websites that are not highly relevant to the given query.

(ii) Topic drift occurs when a hub covers multiple topics, as equivalent weights are given to all of the out-links of the hub page.

Fig. 3.5 shows an illustration of the HITS process.

Fig. 3.5: Illustration of HITS Process
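The hub and authority scores described above are computed by a mutually reinforcing iteration, sketched below for a small link graph; the normalization step simply keeps the scores bounded.

def hits(out_links, iterations=20):
    # authority(p) = sum of hub scores of pages linking to p;
    # hub(p)       = sum of authority scores of pages p links to.
    pages = list(out_links)
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        auth = {p: sum(hub[v] for v in pages if p in out_links[v]) for p in pages}
        hub = {p: sum(auth[u] for u in out_links[p]) for p in pages}
        for scores in (auth, hub):                      # normalise to keep values bounded
            norm = sum(s * s for s in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

hub, auth = hits({"A": ["B", "C"], "B": ["C"], "C": []})
print(sorted(auth, key=auth.get, reverse=True))   # pages ordered by authority score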

To minimize the problems of the original HITS algorithm, a clever algorithm was proposed [101]. This algorithm is a modification of the standard HITS algorithm: it assigns a weight value to every link depending on the query terms and the endpoints of the link. Anchor text is taken into account in deciding the link weights, and a large hub is broken down into smaller parts so that every hub page concentrates on only one topic.

Another limitation of the standard HITS algorithm is that it assumes equal weights for all the links pointing to a web page, failing to recognize that some links may be more important than others. To resolve this problem, a probabilistic analogue of HITS (PHITS) was proposed in reference [102]. PHITS provides a probabilistic explanation of the term-document relationships and, as claimed by the authors, is able to identify authoritative documents; it gives better results than the original HITS algorithm. Another difference is that PHITS can estimate the probabilities of authorities, whereas the standard HITS algorithm can provide only the scalar magnitude of authority [93].

C. Weighted Page Rank Algorithm

The Weighted PageRank (WPR) algorithm [93] was proposed by Wenpu Xing and Ali Ghorbani as a modification of the original PageRank algorithm. WPR decides the rank score based on the popularity of the pages, taking into consideration the importance of both the in-links and out-links of the pages. The algorithm assigns a higher rank value to the more popular pages and does not divide the rank of a page equally among its out-linked pages; every out-linked page is given a rank value based on its popularity, which is decided by observing its number of in-links and out-links. WPR was simulated using the website of Saint Thomas University, and the simulation results show that the WPR algorithm finds a larger number of relevant pages than the standard PageRank algorithm. As suggested by the authors, the performance of WPR should be tested on different websites, and future work includes calculating the rank score by utilizing more than one level of the reference page list and increasing the number of human users who classify the web pages.
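The in-link and out-link popularity weights on which WPR is based can be sketched as follows; this follows the commonly cited formulation of the weights, and the exact definition is given in [93]. Every page is assumed to appear as a key in the out_links dictionary.

def wpr_weights(out_links):
    # For a link v -> u, the weight is u's share of the in-links (and out-links)
    # among all pages that v points to.
    in_count = {p: 0 for p in out_links}
    for targets in out_links.values():
        for u in targets:
            in_count[u] += 1
    w_in, w_out = {}, {}
    for v, targets in out_links.items():
        total_in = sum(in_count[p] for p in targets) or 1
        total_out = sum(len(out_links[p]) for p in targets) or 1
        for u in targets:
            w_in[(v, u)] = in_count[u] / total_in            # W_in(v, u)
            w_out[(v, u)] = len(out_links[u]) / total_out    # W_out(v, u)
    return w_in, w_out

w_in, w_out = wpr_weights({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []})
print(w_in[("A", "B")], w_out[("A", "B")])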

D. Weighted Links Rank Algorithm

A modification of the standard PageRank algorithm, named weighted links rank (WLRank), was given by Ricardo Baeza-Yates and Emilio Davis [103]. This algorithm assigns a weight value to each link based on three parameters, i.e. the length of the anchor text, the tag in which the link is contained and the relative position of the link in the page. Simulation results show that the results of the search engine are improved by using weighted links. The length of the anchor text seems to be the best attribute in this algorithm, whereas relative position, which revealed that physical position is not always in synchronism with logical position, was not so result oriented. Future work on this algorithm includes tuning the weight factor of every term for further evolution.


E. EigenRumor Algorithm

As the number of blogging sites increases day by day, it is a challenge for service providers to provide good blogs to the users. PageRank and HITS are very promising for assigning rank values to blogs, but some limitations arise if these two algorithms are applied directly to them: the rank scores of blog entries as decided by the PageRank algorithm are often very low, so blog entries cannot be ranked according to their importance. To resolve these limitations, the EigenRumor algorithm [104] was proposed for ranking blogs. This algorithm provides a rank score to every blog by weighting the hub and authority scores of the bloggers, based on an eigenvector calculation.

F. Distance Rank Algorithm

An intelligent ranking algorithm named distance rank was proposed by Ali Mohammad Zareh Bidoki and Nasser Yazdani [105]. It is based on reinforcement learning, and the distance between pages is treated as a punishment factor: the ranking is done on the basis of the shortest logarithmic distance between two pages. The advantage of this algorithm is that it can find high quality pages more quickly by using the distance based solution. The limitation of this algorithm is that the crawler has to perform a large calculation to update the distance vector if a new page is inserted between two pages.

G. Time Rank Algorithm

An algorithm named TimeRank, which improves the rank score by using the visit time of a web page, was proposed by H. Jiang et al. [106]. The authors measured the visit time of the page after applying the original and improved methods of the web page rank algorithm in order to learn the degree of importance to users. This algorithm utilizes the time factor to increase the accuracy of web page ranking. Due to the methodology used, it can be regarded as a combination of content based and link structure based ranking. The results of this algorithm are very satisfactory and in agreement with the theory applied in developing it.

H. TagRank Algorithm

A novel algorithm named TagRank [107], which ranks web pages based on social annotations, was proposed by Shen Jie, Chen Chen, Zhang Hui, Sun Rong-Shuang, Zhu Yan and He Kun. The algorithm calculates the heat of the tags by using the time factor of the new data source tags and the annotation behaviour of web users, and it provides a better authentication method for ranking web pages. The results of this algorithm are very accurate, and it indexes new information resources in a better way. Future work in this direction could utilize the co-occurrence factor of the tags to determine tag weights; the algorithm could also be improved by using semantic relationships among the co-occurring tags.

I. Relation Based Algorithm

Fabrizio Lamberti, Andrea Sanna and Claudio Demartini [108] proposed a relation based algorithm for ranking web pages in a semantic web search engine. Various search engines have been presented that achieve better information extraction by using the relations of the semantic web. This work proposes a relation based page rank algorithm for semantic web search engines that depends on information extracted from the users' queries and from annotated resources. The results are very encouraging with respect to time complexity and accuracy. A further improvement of this algorithm would be greater scalability towards future semantic web repositories.

J. Query Dependent Ranking Algorithm

Lian-Wang Lee, Jung-Yi Jiang, ChunDer Wu and Shie-Jue Lee [109] have presented a query dependent ranking algorithm for search engines. In this approach, a simple similarity measure is used to compute the similarities between queries, and a separate ranking model is built for every training query with its corresponding documents. Whenever a query arrives, documents are extracted and ranked depending on the rank scores calculated by the ranking model, which is a combination of the models of the similar training queries. Experimental results show that the query dependent ranking algorithm performs better than other algorithms.

K. Ranking and Suggestive Algorithm

M. Vojnovic et al. [110] have proposed a ranking and suggestion algorithm for popular items based on user feedback. User feedback is gathered through a set of suggested items, and items are selected depending on the preferences of the user. The aim of this technique is to obtain the correct ranking of the items based on their true, unbiased popularity. The proposed approach offers various techniques for suggesting search queries and can also be used to provide tag suggestions for social tagging systems. Various techniques for ranking and suggesting popular items are studied and their performance is compared; the results demonstrate that randomized update rules and lightweight rules requiring no special configuration provide better accuracy.


L. Comparison and Score Based Algorithm

N. L. Bhamidipati et al. [111] proposed a more general approach in which scoring schemes may be perceived as dissimilar even if they induce identical rankings. A metric is proposed to compare score vectors, and similarity and dissimilarity are measured on the basis of score fusion. Experimental results agree with the underlying theory and demonstrate several applications of the proposed metric.

M. Algorithm for Query Processing in Uncertain Databases

Xiang Lian and Lei Chen [112] proposed an algorithm for ranked query processing in uncertain databases. Uncertain database management arises in areas such as tracking of mobile objects and monitoring of sensor data, and it differs intrinsically from the management of certain data. To overcome the resulting limitations, the authors proposed a novel algorithm. Uncertain data are not exact points; they generally lie within a bounded region. The authors proposed two effective techniques, a probabilistic one and a spatial one, to reduce the PRank search space. Extensive experimental results show that the proposed algorithm is effective and efficient with respect to both the number of PRank candidates to be refined and the wall-clock time.

N. Ranking of Journal based on Page Rank & HITS Algorithm

Su Cheng, Pan YunTao, Yuan JunPeng, Guo Hong, Yu ZhengLu and u ZhiYu [113] used the page rank and HITS algorithms for ranking journals and compared the ISI impact factor with page rank and HITS for this purpose. The advantages, limitations and scope of the various algorithms used for journal ranking are discussed. The impact factor is very popular for ranking journals, but it has the intrinsic limitation that the ranking is based on counting the in-degrees of nodes in the citation network and does not consider the prestige of the journals in which the citations appear. To reduce this limitation, the authors applied the page rank and HITS algorithms to journal ranking. Fundamentally, ranking journals is very similar to ranking web pages, so page rank and HITS can be used for ranking journals as well.

3.1.3 SUMMARY OF VARIOUS WEB PAGE RANKING ALGORITHMS

From the literature analysis of some of the important web page ranking algorithms, it is concluded that each algorithm has its own relative strengths and limitations. A tabular summary [114] is given below in Table 3.1, which summarizes the techniques, advantages and limitations of some important web page ranking algorithms.


Table 3.1: Summary of Various Web Page Ranking Algorithms

S. Brin et al., 1998
Technique: Graph-based algorithm based on the link structure of web pages; back links are considered in the rank calculation.
Advantages: Rank is calculated on the basis of the importance of pages.
Limitations: Results are computed at indexing time, not at query time.

Jon Kleinberg, 1998
Technique: Rank is calculated by computing hub and authority scores of the pages in order of their relevance.
Advantages: Returned pages have high relevancy and importance.
Limitations: Lower efficiency and the problem of topic drift.

Wenpu Xing et al., 2004
Technique: Based on calculating the weight of a page from its outgoing links, incoming links and title tag at search time.
Advantages: Gives higher ranking accuracy because it uses the content of the pages.
Limitations: It is based only on the popularity of the web page.

Ricardo Baeza-Yates et al., 2004
Technique: Ranks pages by assigning different weights based on three attributes: relative position in the page, the tag in which the link is contained, and the length of the anchor text.
Advantages: It has lower efficiency with reference to the precision of the search engine.
Limitations: Relative position was not very effective, indicating that the logical position does not always match the physical position.

Ko Fujimura et al., 2005
Technique: Uses an adjacency matrix constructed from agent-to-object links rather than page-to-page links; three vectors (hub, authority and reputation) are needed to calculate the blog score.
Advantages: Useful for ranking blogs as well as web pages because input and output links are not considered in the algorithm.
Limitations: Specifically suited to blog ranking.

Ali Mohammad Zareh Bidoki et al., 2007
Technique: Based on reinforcement learning, considering the logarithmic distance between pages.
Advantages: The algorithm considers the real user, so pages can be found very quickly with high quality.
Limitations: A large distance-vector calculation is needed if a new page is inserted between two pages.

Hua Jiang et al., 2008
Technique: Visit time is used for ranking; sequential clicks are used to calculate the sequence vector together with a random surfing model.
Advantages: Useful when two pages have the same link structure but different contents.
Limitations: It does not work efficiently when the server log is not present.

Shen Jie et al., 2008
Technique: Based on the analysis of tag heat on social annotation websites.
Advantages: Ranking results are very exact and new information resources are indexed more effectively.
Limitations: The co-occurrence factor of tags, which may influence tag weight, is not considered.

Fabrizio Lamberti et al., 2009
Technique: Ranking of web pages for semantic search engines, using information extracted from user queries and annotated resources.
Advantages: Manages the search pages effectively; the ranking task is less complex.
Limitations: Every page has to be annotated with respect to some ontology, which is a very tough task.

Lian-Wang Lee et al., 2009
Technique: Individual models are generated from training queries; a new query is ranked according to the combined weighted score of these models.
Advantages: Gives results for the user's query as well as for similar queries.
Limitations: A limited number of characteristics are used to calculate the similarity.

Milan Vojnovic et al., 2009
Technique: Suggests popular items for tagging using three randomized algorithms: frequency proportional sampling, move-to-set and frequency move-to-set.
Advantages: Tag popularity is boosted because a large number of tags are suggested by this method.
Limitations: Does not consider alternative user choice models, alternative ranking rules or alternative suggestion rules.

Narayan L. Bhamidipati et al., 2009
Technique: Based on the score fusion technique; used when two pages have the same ranking.
Limitations: Does not consider the case where the score vector is generated from a specific distribution.

Xiang Lian et al., 2010
Technique: Retrieval of moving objects in uncertain databases using PRank (probabilistic ranked query) and J-PRank (probabilistic ranked query on join).
Advantages: It is fast because it uses the R-Tree.
Limitations: Experimental results are promising only for a limited number of parameters, such as wall-clock time and the number of PRank candidates.


Table 3.2: Comparison of Various Web Page Ranking Algorithms

Page Rank
Main technique: Web structure mining.
Methodology: Computes the score for pages at indexing time.
Input parameters: Back links.
Relevancy: Less (the algorithm ranks the pages at indexing time).
Quality of results: Medium.
Importance: High; back links are considered.
Limitation: Results are computed at indexing time and not at query time.

HITS
Main technique: Web structure mining and web content mining.
Methodology: Computes the hub and authority scores of the relevant pages and returns relevant as well as important pages as the result.
Input parameters: Content, back links and forward links.
Relevancy: More (the algorithm uses hyperlinks, which according to Henzinger, 2001 gives good results, and it also considers the content of the page).
Quality of results: Less than Page Rank.
Importance: Moderate; hub and authority scores are utilized.
Limitation: Topic drift and efficiency problems.

Weighted Page Rank
Main technique: Web structure mining.
Methodology: The weight of a web page is calculated on the basis of incoming and outgoing links, and the importance of the page is decided on the basis of this weight.
Input parameters: Back links and forward links.
Relevancy: Less, as ranking is based on the calculation of page weight at indexing time.
Quality of results: Higher than Page Rank.
Importance: High; the pages are sorted according to their importance.
Limitation: Relevancy is ignored.

Eigen Rumor
Main technique: Web content mining.
Methodology: Uses an adjacency matrix constructed from agent-to-object links rather than page-to-page links.
Input parameters: Agent/object.
Relevancy: High for blogs, so it is mainly used for blog ranking.
Quality of results: Higher than Page Rank and HITS.
Importance: High for blog ranking.
Limitation: It is specifically used for blog ranking rather than general web page ranking, unlike Page Rank and HITS.

Web Page Ranking using Link Attributes
Main technique: Web structure mining and web content mining.
Methodology: Gives different weights to web links based on three attributes: relative position in the page, the tag in which the link is contained, and the length of the anchor text.
Input parameters: Content, back links and forward links.
Relevancy: More (it considers the relative position of the pages).
Quality of results: Medium.
Importance: Not specifically quoted.
Limitation: Relative position was not very effective, indicating that the logical position does not always match the physical position.

3.1.4 COMPARISON OF VARIOUS WEB PAGE RANKING ALGORITHMS

Based on the literature analysis, a comparison of various web page ranking algorithms is shown in Table 3.2 and Table 3.3. The comparison [114] is made on the basis of parameters such as the main technique used, methodology, input parameters, relevancy, quality of results, importance and limitations.


Table 3.3: Comparison of Various Web Page Ranking Algorithms

Distance Rank
Main technique: Web structure mining.
Methodology: Based on reinforcement learning, considering the logarithmic distance between pages.
Input parameters: Forward links.
Relevancy: Moderate, due to the use of hyperlinks.
Quality of results: High.
Importance: High; it is based on the distance between pages.
Limitation: If a new page is inserted between two pages, the crawler must perform a large calculation to recompute the distance vector.

Time Rank
Main technique: Web usage mining.
Methodology: The visiting time is added to the computed score of the original page rank of the page.
Input parameters: Original page rank and server log.
Relevancy: High, due to the updating of the original rank according to the visit time.
Quality of results: Moderate.
Importance: High; the most recently visited pages are considered.
Limitation: Important pages may be ignored, because the algorithm increases the rank of pages that remain open for a long time.

Tag Rank
Main technique: Web content mining.
Methodology: Visit time is used for ranking; sequential clicks are used to calculate the sequence vector together with a random surfing model.
Input parameters: Popular tags and related bookmarks.
Relevancy: Less, as it uses the keywords entered by the user and matches them with the page title.
Quality of results: Less.
Importance: High for social sites.
Limitation: It is a comparison-based approach, so it requires more sites as input.

Relation Based Page Rank
Main technique: Web structure mining.
Methodology: A semantic search engine takes keywords into account and returns a page only if the keywords are present within the page and are related to the associated concept, as described in the relational note associated with each page.
Input parameters: Keywords.
Relevancy: High, as it is a keyword-based algorithm and only returns results if the keywords entered by the user match the page.
Quality of results: High.
Importance: High; keyword-based searching.
Limitation: Every page has to be annotated with respect to some ontology, which is a very tough task.

Query Dependent Ranking
Main technique: Web content mining.
Methodology: The ranking model is constructed by combining the results of similar training queries.
Input parameters: Training queries.
Relevancy: High (the model is constructed from the training queries).
Quality of results: High.
Importance: High, because it gives results for the user's query as well as for similar queries.
Limitation: A limited number of characteristics are used to calculate the similarity.

Based on the algorithm used, a ranking algorithm assigns a definite rank to the resultant web pages. A typical search engine should use web page ranking techniques based on the specific needs of its users. After an exhaustive analysis of the web page ranking algorithms against parameters such as methodology, input parameters, relevancy of results and importance of results, it is concluded that the existing techniques have limitations, particularly in terms of response time, accuracy of results, importance of results and relevancy of results. An efficient web page ranking algorithm should meet these challenges while remaining compatible with global standards of web technology.

In the next part of this chapter, a technical review of various web analytics tools is presented.

3.2 WEBSITE ANALYTICS TOOLS

In today's scenario, the internet is evolving from simple presentation towards social connectivity. Website analytics is the process of measurement, collection, analysis and reporting of internet data for the purpose of understanding and optimizing web use. Website analytics is used not only for estimating website traffic but also for commercial applications. The results of website analytics can also be very helpful to companies for advertising their products. Website analytics collects and analyzes information such as the number of visitors and page views of websites, which can be further utilized for various purposes.

Web analytics tools have conventionally been used to help website owners learn more about their clients' online behaviour in order to improve website structural design and online marketing actions. Most of today's web analytics solutions offer a range of analytical and statistical data, ranging from fundamental traffic reporting to individual-level data that can be linked with price, response and profit data. Understanding web navigational data is a crucial task for web analysts, as it can influence the website upgrading process. There are two broad categories of website analytics, namely off-site and on-site analytics. Off-site website analytics refers to web measurement and analysis of a website irrespective of whether one owns or maintains that website; it includes measurement of a website's prospective audience (opportunity), share of voice (visibility) and buzz (comments) happening on the internet as a whole. On-site website analytics measures a visitor's journey once on the website and requires the use of drivers and conversions; one illustration of this process is determining which landing pages encourage visitors to make a purchase. On-site website analytics thus measures the performance of the website in a commercial context [115][116][117].


The administrator user first registers himself with the website analytics service. The statistical website analytics service then provides a one-line script code to the administrator, who adds it to the body of all of his web pages; this script helps calculate the statistics using Ajax and related technologies. Cross-site communication takes place whenever a user visits the administrator's website: the code pasted into the website is executed by the website analytics tool, and the calculated statistics are stored in a database and reported to the administrator from time to time. Statistical website analytics thus computes statistics of the monitored website, such as entry and exit pages, location of the user, colour depth and IP address; these are sent to the administrator by email, or the administrator can log in to his account to see the statistics of the website. Fig. 3.6 shows the general framework for the web analytics process.

Fig. 3.6: General Framework for Web Analytics Process

3.2.1 NEED FOR WEBSITE ANALYTICS TOOLS

The current need of any successful web portal is to identify visitor preferences, and for this purpose various kinds of information are required, such as the most popular pages, the entry pages of the portal, the exit pages of the portal, the URL from which visitors arrive at the website, the search keywords through which they came from a search engine, the pages viewed by a visitor in one session, the duration of the visit per page, the number of revisiting users and their visit durations, the visitors' geographical location (country/state/city/ISP), and the visitor's browser, system and IP address [118][119].

First of all, the administrator of a website registers for the web analytics service in order to analyze the statistics of the website. This is a normal registration and authentication mechanism through which every client gets a separate profile. After that, the client provides the details of the project for which statistics are required. Finally, the client downloads the JavaScript service code, which must be embedded in the website.


Now, for every unique instance of a web page fetched by a visitor with a JavaScript-enabled browser, the service code is executed and sends information about the visit to the service engine, which performs the following functions (a minimal illustrative snippet is sketched after the list):

i. Counts the number of visits to a particular page, to compute the popular pages.
ii. Notes the referring website from which the visitor arrived, to compute the entry pages.
iii. Notes the hit on the external link through which the visitor exits, to compute the exit pages.
iv. Notes the country from which the visitor entered the website.
v. Extracts keywords from the search engine referrer URL.
vi. Computes the time difference between the initial page request and the next link visit, to calculate the visit duration.
vii. Uses client-side cookies to identify re-visits by a user.
viii. Uses IP-to-location databases to find the geographical information of visitors.
ix. Sends scheduled suggestion mails to the client to enhance visitor satisfaction.
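As an illustration of steps (i) to (vii), a minimal client-side "service code" snippet is sketched below in TypeScript. The endpoint URL, cookie name and field names are hypothetical and not those of any particular tool; real services wrap equivalent logic in the one-line script code mentioned above. The IP-to-location lookup (step viii) and keyword extraction from the referrer (step v) are performed on the server side, so they do not appear in the client snippet.

    // Minimal sketch of an analytics service code snippet (hypothetical endpoint and fields).
    // It reports the visited page, the referrer, the screen colour depth and a first-visit
    // flag derived from a client-side cookie.
    function reportVisit(): void {
      const endpoint = "https://stats.example-analytics.com/collect"; // hypothetical service engine URL
      const firstVisit = document.cookie.indexOf("wa_uid=") === -1;
      if (firstVisit) {
        // Cookie used later to recognise re-visits (step vii).
        document.cookie = "wa_uid=" + Date.now() + "; path=/; max-age=31536000";
      }
      const params = new URLSearchParams({
        page: location.href,              // step i: which page was viewed
        ref: document.referrer,           // steps ii and v: referring site / search engine URL
        depth: String(screen.colorDepth), // colour depth reported in the statistics
        newVisitor: String(firstVisit),   // step vii: re-visit identification
        ts: String(Date.now())            // used server-side for visit-duration estimates (step vi)
      });
      // An image request is a common beacon technique; the service engine logs the query string.
      new Image().src = endpoint + "?" + params.toString();
    }
    reportVisit();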

3.2.2 COMPARISON OF DIFFERENT EXISTING WEBSITE ANALYTICS

TOOLS

A comparison of different existing website analytics tools is presented in Table 3.4.

Table 3.4: Comparison of Different Existing Web Analytics Tools

Statcounter.com
Code language: JavaScript and PHP.
Security: Reliable invisible web tracker.
Proprietary: Detailed web statistics; free.
Language: Statcounter is available in more than 30 languages.
Visibility: Invisible tracking; no advertisements on the website.

www.phpmyvisites.us
Code language: PHP.
Security: phpMyVisites has maximum protection against intrusions and external attacks.
Proprietary: Free and open source; statistics can be filtered by year, month, week, day or hour.
Language: phpMyVisites is available in more than 30 languages.
Visibility: Invisible tracking; no advertisements on the website.

2enetworx.com
Code language: ASP.NET.
Security: More powerful than a simple page counter.
Proprietary: No code is available.
Language: 2enetworx.com is available only in English.
Visibility: Invisible tracking is not possible.

Google Analytics
Code language: Unknown.
Security: Complete statistics.
Proprietary: Many filters and a clean interface; free.
Language: Multiple languages.
Visibility: Invisible tracking; no advertisements on the website.

Thefreecountry.com
Code language: Java.
Security: Highly reliable due to the inbuilt security features of an object-oriented language.
Proprietary: Periodically delivers the details, but no code is available.
Language: Thefreecountry.com is available only in English.
Visibility: Invisible tracking is not possible.


Marketers want to know whether their websites are attracting visitors and whether their investment is paying off. With web analytics, one can identify website trends and understand how visitors interact with the website, and one can identify the navigational chokepoints that prevent visitors from completing their conversion goals. All vendors are improving their web analytics capabilities with extended functionality. This accelerates competition among pure-play (dedicated) web analytics vendors, customer relationship management vendors, enterprise software companies, and newcomers that may enter the market through acquisitions. The challenge for vendors is to accurately and usefully address and analyze not only web data in greater depth but also multichannel activity in real time. The huge amount of data to be processed drives web analysts to demand more visual and explorative systems to gain deeper insight into their data.

In the next part of this chapter, some of the important algorithmic models for ranking query words are presented.

3.3 QUERY WORDS RANKING ALGORITHM

Ranking of query words plays a very significant role in providing authentic and useful information to the web user, and the same concept applies to the deep web. The deep web is the part of the World Wide Web where information is generated on the basis of a supplied query. The deep web [3] has higher complexity than traditional hyperlinked web pages, as reflected by the larger number of ontology modelling languages. Bharat and Broder [121] proposed a technique for submitting random queries to a set of search engines in order to estimate the relative size and overlap of their indices. Barbosa and Freire [33] analyzed a technique for generating multi-keyword queries; a very large fraction of the document collection is returned by such queries. This work analyzes different existing algorithms for ranking query words. An elaborate analysis of the concept of query word ranking is carried out so as to arrive at an improved algorithm with enhanced efficiency that conforms to global standards. One of the complex tasks in ranking query words is to determine the relevance of query words to the query interface. Some of the important algorithmic models for ranking query words are discussed below.

3.3.1 DENSITY MEASURE TECHNIQUE

A user tries to find a certain degree of detail in the representation of knowledge related to a concept while searching for that concept. This process may involve finding the details of the specification of the concept to be searched, for example the number of siblings. The density measure technique considers all of these specifications in its searching process. The density measure is intended to approximate the representational density, or information content, of classes and, on this basis, the level of knowledge detail. The technique has limitations with respect to the numbers of siblings, subclasses, super-classes and direct relations. The number of instances is not included in the measure, because it may skew the results towards heavily populated ontologies, which may not truly reflect the quality of the schema itself [122][123][124]. The advantage of the density measure is that it improves the quality of the information content related to the concept being searched. Its limitation is that, in some cases, irrelevant words obtain a good density measure in the document.

3.3.2 TERM WEIGHTING TECHNIQUE

Many techniques for term weighting have been proposed in the literature [125][126][127][128][129][130]. Some term weighting techniques are built on probabilistic models and therefore rely heavily on the accurate estimation of various probabilities. Others are developed using the vector space model and are generally the outcome of large-scale experiments with the system. In both families of techniques, three main parameters are used in the final term weight formulation: document length, document frequency and term frequency. Two examples of term-weighting-based document scores are given below [130].

Okapi weighting based document score [127]:

    score(D, Q) = sum over terms t common to Q and D of
                  ln((N - df + 0.5) / (df + 0.5))
                  x ((k1 + 1) tf) / (k1 ((1 - b) + b dl / avdl) + tf)
                  x ((k3 + 1) qtf) / (k3 + qtf)                              (3.2)

where k1 (between 1.0 and 2.0), b (usually 0.75) and k3 (between 0 and 1000) are constants.

Pivoted normalization weighting based document score [128]:

    score(D, Q) = sum over terms t common to Q and D of
                  ((1 + ln(1 + ln(tf))) / ((1 - s) + s dl / avdl))
                  x qtf x ln((N + 1) / df)                                   (3.3)

where s is a constant (usually 0.20).

In both formulas:
tf is the term's frequency in the document,
qtf is the term's frequency in the query,
N is the total number of documents in the collection,
df is the number of documents that contain the term,
dl is the document length (in bytes), and
avdl is the average document length.

An advantage of this technique is that it is not distorted by repeated words in a document and it provides a better weighting matrix. Its limitations are that lengthy documents obtain high scores simply because they contain a large number of words, including words with multiple repetitions, and that the accuracy of the method depends heavily on the estimation of the probabilities.
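To make equation (3.2) concrete, the following TypeScript sketch computes an Okapi-style document score from precomputed statistics. The data structures and default constants are illustrative assumptions, not part of any cited system.

    // Sketch of the Okapi-style document score of equation (3.2).
    // Inputs are precomputed corpus statistics; constants follow the ranges quoted above.
    interface CorpusStats {
      N: number;                 // total number of documents in the collection
      avdl: number;              // average document length
      df: Map<string, number>;   // number of documents containing each term
    }

    function okapiScore(
      queryTf: Map<string, number>,    // qtf: term frequency in the query
      docTf: Map<string, number>,      // tf: term frequency in the document
      dl: number,                      // document length
      stats: CorpusStats,
      k1 = 1.2, b = 0.75, k3 = 1000    // constants: k1 in 1.0-2.0, b about 0.75, k3 in 0-1000
    ): number {
      let score = 0;
      for (const [term, qtf] of queryTf) {
        const tf = docTf.get(term);
        const df = stats.df.get(term);
        if (!tf || !df) continue;      // only terms shared by query and document contribute
        const idf = Math.log((stats.N - df + 0.5) / (df + 0.5));
        const tfPart = ((k1 + 1) * tf) / (k1 * ((1 - b) + b * dl / stats.avdl) + tf);
        const qtfPart = ((k3 + 1) * qtf) / (k3 + qtf);
        score += idf * tfPart * qtfPart;
      }
      return score;
    }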

3.3.3 ADAPTIVE TECHNIQUE

The adaptive technique [83][131] identifies the keyword that is most likely to return a large number of documents by analyzing the documents returned for the previous queries submitted to the deep web database. By repeating this process, the most important query words can be extracted. To select the most significant next query, the process estimates the number of new documents that would be downloaded if query qi were issued next. Assuming that the queries issued so far are q1 to qi-1, this estimate is obtained for each potential next query qi by counting how many times qi appears in the pages already downloaded for q1 to qi-1; in effect, only P(qi) has to be estimated. The efficiency of a candidate query is then calculated by the following relation:

Efficiency(qi) = Pnew(qi) / Cost(qi)                                         (3.4)

Here, Pnew(qi) represents the number of new documents returned for qi (pages that have not been returned by previous queries), and Cost(qi) is the cost of issuing the query qi. The advantage of the adaptive technique is that it extracts new terms and keywords from the downloaded contents and obtains a large number of documents using a smaller number of queries. Its limitation is that it identifies keywords related to the topic of the website only in response to that specific topic.
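A minimal sketch of this adaptive selection step, applying equation (3.4), is given below. The representations are simplifying assumptions (documents as term sets, uniform query cost, and a crude estimate of Pnew from the fraction of already-downloaded pages that do not contain the candidate word).

    // Sketch of adaptive query selection using Efficiency(qi) = Pnew(qi) / Cost(qi) (equation 3.4).
    // Pages already downloaded for q1 ... qi-1 are represented as sets of terms.
    function nextQuery(
      candidates: string[],                  // potential next query words qi
      downloaded: Set<string>[],             // term sets of pages returned by earlier queries
      cost: (q: string) => number = () => 1  // uniform cost assumption
    ): string | undefined {
      let best: string | undefined;
      let bestEfficiency = -Infinity;
      for (const q of candidates) {
        // Count how many downloaded pages already contain q; pages that do NOT contain it
        // are a rough proxy for the new documents q is expected to add.
        const containing = downloaded.filter(page => page.has(q)).length;
        const pNew = 1 - containing / Math.max(downloaded.length, 1);
        const efficiency = pNew / cost(q);
        if (efficiency > bestEfficiency) { bestEfficiency = efficiency; best = q; }
      }
      return best;
    }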

3.3.4 GENERIC ALGORITHM BASED TECHNIQUE

The generic algorithm ranks query words based on the analysis of a generic document corpus collected, say, from the web. It calculates the generic frequency distribution of each keyword. Based on this generic distribution, results are obtained by issuing the most frequent keyword to the deep web database, then continuing with the second-most frequent keyword, and repeating this process until all download resources are exhausted. The idea behind this process is that keywords that are frequent in a generic corpus will also be frequent in the deep web database and will therefore return many matching documents [83][131]. The advantage of this technique is that keywords are selected in order of decreasing frequency in the corpus. Its limitation is that the performance of the generic algorithm degrades compared to the adaptive technique in the case of collections on a specific topic.
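A short sketch of the generic ordering is shown below; the corpus is assumed to be available as an array of term sets, which is an illustrative simplification.

    // Sketch of the generic technique: order candidate keywords by their frequency
    // in a generic corpus and issue them to the deep web database in that order.
    function keywordsByGenericFrequency(corpus: Set<string>[]): string[] {
      const freq = new Map<string, number>();
      for (const doc of corpus) {
        for (const term of doc) freq.set(term, (freq.get(term) ?? 0) + 1);
      }
      return [...freq.entries()]
        .sort((a, b) => b[1] - a[1])   // most frequent keyword first
        .map(([term]) => term);
    }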

From the literature review, it can be concluded that ranking can be improved by including newer and additional parameters for ranking query words. In the next section, the technological implementations of the different available cross-site communication techniques are analyzed, along with their relative advantages, limitations and security concerns.

3.4 CROSS SITE COMMUNICATION

In today's scenario, the web is expanding exponentially and the internet is evolving from presentation towards social connectivity. The web is in a phase of interoperable, extensive data diffusion that necessitates proper information dissemination. Cross-site communication is a technique for enabling communication between different sites using various client-side or server-side methods. The purpose of this analysis is to find an appropriate solution for extracting query words from web pages so that the query words can be efficiently stored on the web server [27].

In recent times, the mashup concept has emerged as a core technology of the Web 2.0 and social software movement. In the past few years, web mashups have become a popular technique for creating a new generation of customizable web applications. A mashup is a technique for creating web applications that combine content from multiple sources, strengthening the creativity and functionality of web applications by adding value according to the needs of the application. Mashups are implemented through cross-site communication, but the browser security designed around cross-site communication is often traded off against this functionality. The World Wide Web can be a hostile place, and cross-site scripting is one of its most common problems: it permits a user to fetch data from another website, which can result in a data breach. All the techniques available for cross-site communication depend on the browser's implementation, and no common standard has been established for them.

There are two main types of architectures for implementing cross-site communication, namely server-side and client-side architectures. The server-side mechanism, illustrated in Fig. 3.8, integrates data from different sites by requesting content from each site at the server side and returning the composed page on request; for example, mapping APIs such as Google Maps and Yahoo Maps are mainly based on server-side integration [132]. In the client-side architecture, illustrated in Fig. 3.7, the browser acts as the medium for direct communication between the consumer and services such as the Facebook API. The client-side cross-site communication technique is more useful when the webmaster of the third-party website does not want to modify server-side scripts or when the site is static in nature [133].

3.4.1 CLIENT-SIDE MASHUPS

In a client-side mashup, the content or service aggregation takes place at the client, which is normally a web browser; the client must therefore handle the data returned by the other website, whatever its size and format. Data and code from multiple content providers are merged and mixed within the same page, as shown in Fig. 3.7. A client-side mashup is easy to implement and uses a JavaScript library provided by the originating site to facilitate the use of its service. It can perform better because service requests and responses travel directly between the browser and the mashup server, and it can be implemented without any specific plug-in support. It also reduces the processing load on the server, because requests and responses are not passed through a server-side proxy; there is no need to modify server-side scripts; it does not depend on an individual server failure; and it allows the use of service-specific load balancing techniques. On the other hand, a client-side mashup has no buffer to protect it from problems associated with other sites, browsers can only issue one or two simultaneous XMLHttpRequests, and secure implementation is difficult because scripts execute directly on the client side [134][135].

Fig. 3.7: Client-Side Mashups


3.4.2 SERVER-SIDE MASHUPS

In a server-side mashup, all requests from the client go to the server, which works as a proxy to make calls to the other websites; the work of the web application client is thus transferred to the server, as shown in Fig. 3.8. Server-side mashups have both advantages and limitations. Only small chunks of data are sent to the client, and the server can transform the data returned by a service into a different format. The proxy acts as a buffer that shields the client from problems associated with other sites; the mashup server can cache data returned by a service; and it can manipulate the returned data or combine it with other content before returning it to the client. Security requirements can be handled much more easily from server to server using secure protocols. However, pre-extraction and the requirement of complete trust in the mashup server by the client are inherent limitations of server-side mashups; an explicit implementation of the server-side proxy is required; and responses are delayed because communication involves two network hops [136][137].

Fig. 3.8: Server-Side Mashups

As stated, the motive of this study is to fetch query words from different sites and store them on the knowledge base server after processing. The server-side solution requires modification of the server-side scripts, which is a complex process, so client-side solutions are preferred. Various techniques for client-side cross-site communication are illustrated in the next section.

3.4.3 VARIOUS EXISTING TECHNOLOGIES FOR CROSS-SITE COMMUNICATION

The main purpose of this work is to find an efficient, safe and secure cross-site mechanism for the knowledge base server; therefore, the different existing technologies for cross-site communication are technically analyzed and implemented in order to determine their relative strengths and limitations.

A. Cross-document Messaging (XDM)

XDM is a powerful cross-site communication feature that allows documents from different origins to exchange messages in the browser. XDM is suited to semi-trusted environments because it permits software developers to decide whether to accept a message from a site or to reject it if its contents are insecure or unanticipated. It provides a simple implementation of bi-directional cross-document communication. The advantages of XDM are high performance and reliability, because developers can concentrate on the task rather than on the technology. The technique also removes the need to embed third-party scripts in the page, lessening the chance of potential information disclosure vulnerabilities such as the disclosure of sensitive data [138][139].
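A minimal cross-document messaging sketch using the standard window.postMessage API is given below; the iFrame id and origins are placeholders.

    // Sketch of cross-document messaging between a page and an embedded iFrame.
    // The origins are placeholders; a real page must check event.origin as shown.
    const frame = document.getElementById("partner") as HTMLIFrameElement;

    // Sender: deliver a message only to the intended origin.
    frame.contentWindow?.postMessage({ queryWord: "laptop" }, "https://partner.example.org");

    // Receiver (in the embedded document): accept messages only from trusted origins.
    window.addEventListener("message", (event: MessageEvent) => {
      if (event.origin !== "https://host.example.com") return; // reject unanticipated content
      console.log("received", event.data);
    });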

B. Window.name Property

The window.name transport is a technique for secure cross-site, browser-based data transfer and can be utilized for creating secure mashups with untrusted sources. It works by loading a cross-site HTML file in an iFrame; that HTML file sets window.name to the string content to be delivered, and the requester then reads window.name as the response. The requested document cannot access the requester's environment, such as cookies, JavaScript variables or the document object model. The technique supports GET/POST requests and error handling like JSONP, requires only one iFrame and one local file, and does not require any plug-ins (like Flash) or alternative technologies (like Java). The only drawback is that iFrames have some security issues, because a frame can change the source location of other frames, although this is not a very serious issue [140][141].

C. IFrame

An inline frame [136][142][143][144] places another HTML document in a frame inside an HTML document. An inline frame can act as the "target" frame for links defined in other elements. JavaScript in an iFrame whose source is the same site as the host can navigate the parent's document object model structure and its properties; conversely, the parent frame's JavaScript can navigate the iFrame's elements and see all of its contents. When the source of the iFrame is a different site from that of the parent frame, the host cannot see the iFrame's contents and vice versa. The host can change the source of the iFrame, but it is not notified whether the content has loaded. The drawbacks of this technique are, firstly, that there is no acknowledgement of whether the iFrame successfully received the data and, secondly, that there is no way to identify that the content has loaded completely, so the sender does not know when it is safe to issue a new request. The data packet carried in the URL can be about 40 KB in Firefox and 4 KB in Internet Explorer. Content can come from any source, which creates a security vulnerability, and global state cannot be managed by the iFrame because every time the page is reloaded the iFrame goes through every state of loading a page.

D. Flash Plug-in

Using the Flash plug-in can be a very simple way to accomplish cross-site communication. Cross-domain access is enabled by placing the policy file "crossdomain.xml" on the server [136][143][145][146], and Security.loadPolicyFile(path) is used to load policies from a non-root location. Communication of .swf files with different sites is allowed through this security feature. The flXHR API can be used to replace native XHR and provides powerful additional features for cross-site communication based on server policy authorization, such as robust error callback handling, timeouts and easy configuration, integration and adaptation [147].

E. Document.domain Proxy

The document.domain property specifies the server's domain name and can be set to a super-domain in all common browsers. Two documents with the same super-domain and transport protocol can access each other's contents by altering the document.domain property [140][143][148]. The document.domain proxy works fairly well for cross-sub-domain intranet mashups, but it depends on the browser and browser settings, so it becomes complicated when a mashup integrates data from more than one provider. A further drawback of this technique is that the application requires the full domain name for loading.
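The technique can be illustrated as follows with placeholder sub-domains; both documents relax their domain to the common super-domain before touching each other's DOM.

    // Sketch of the document.domain proxy (placeholder sub-domains).
    // Page loaded from http://a.example.com containing an iFrame from http://b.example.com:
    document.domain = "example.com";   // executed in the parent page
    // ... and the same statement is executed inside the iFrame's document:
    // document.domain = "example.com";
    // After both documents relax to the common super-domain (same protocol assumed),
    // the parent may read the iFrame's DOM:
    const inner = (document.getElementById("sub") as HTMLIFrameElement).contentDocument;
    console.log(inner?.title);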

F. Microsoft Silverlight

The Microsoft Silverlight [149] framework is similar to Adobe Flash and is used to integrate multimedia, graphics, animation and interactivity into a web application. It is integrated into the browser through a plug-in and enables the creation of sophisticated rich web applications. Access is controlled by "clientaccesspolicy.xml" or "crossdomain.xml". Microsoft Silverlight implements security measures for cross-site communication, uses threads for making requests, and also supports socket-based cross-site communication.

G. JSONP

JSONP [150][151] is an abbreviation of "JSON with padding". It is a JSON extension in which a prefix is used to specify an input argument of the call. It is now used by many Web 2.0 applications, such as Dojo toolkit applications, Google toolkit applications and web services. Further extensions of this protocol have been proposed that consider additional input arguments, as in the JSONP support of the S3DB web services. The server is expected to return JSON data, but with a slight twist: it has to prepend the callback function name, which is passed as part of the URL. Because JSONP relies on script tags and calls that are open to the world, it is not suitable for handling sensitive data. Script tags from remote sites allow those sites to inject any content into a website, so if a remote site has vulnerabilities that allow JavaScript injection, the original site may also be affected. JSONP is easy to implement, but the SCRIPT tag request is GET-only, which limits the URL length; there is no error handling capability like that of XHR; and the JavaScript response is executed directly, which can be dangerous.
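The JSONP pattern can be sketched as below. The service URL is a placeholder, and the callback parameter name (callback=) is a common convention assumed here rather than a property of any specific API.

    // Sketch of a JSONP request: a script tag fetches cross-site data and the server
    // wraps its JSON reply in the named callback, e.g. handleWords({...}).
    function jsonp(url: string, cbName: string): void {
      const script = document.createElement("script");
      (window as any)[cbName] = (data: unknown) => {
        console.log("cross-site data", data);   // the response is executed directly as script
        delete (window as any)[cbName];          // clean up the global callback
        script.remove();
      };
      // GET only: the whole request is carried in the script URL.
      script.src = url + (url.includes("?") ? "&" : "?") + "callback=" + cbName;
      document.head.appendChild(script);
    }

    jsonp("https://words.example.org/suggest?q=book", "handleWords"); // hypothetical service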

H. JSONRequest

JSONRequest is a proposed client-side mechanism, intended to be implemented as a new browser service, that allows two-way data exchange with any JSON data server while providing complete security in the exchange. Browser makers would have to implement this mechanism in their products in order to enable the next advance in web application development. JSONRequest is a global JavaScript object providing three methods: post, get and cancel. It can only send and receive JSON-encoded data, and the content type in both directions is application/jsonrequest. JSONRequest is secure because it does not send any authentication information or cookies to third parties, and its JSON parser provides a safe alternative to the eval() method, which can allow arbitrary script functions to execute [152].

I. Fragment Identifier

A fragment identifier [153][154] is a short string appended to a URL that identifies a portion of an HTML document. In HTML applications, http://www.qiiiep.org/serve.html#words refers to the element whose id attribute has the value words (i.e. id="words") in the document identified by the URI http://www.qiiiep.org/serve.html, which is typically the location of the document. A fragment identifier thus designates a secondary resource within the primary resource. Polling the target iFrame or an onload event handler can be used to detect changes in the fragment; in Internet Explorer 8, the HTML 5 "hashchange" event is implemented to identify such changes. The deficiency of this technique is that the URL length limits the request/response length.
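A sketch of fragment-identifier messaging between a host page and an iFrame is given below; the iFrame id and message are placeholders, and modern browsers expose the change through the hashchange event mentioned above.

    // Sketch of cross-site messaging through the URL fragment.
    const target = document.getElementById("peer") as HTMLIFrameElement;

    // Sender (host page): change only the fragment, so the iFrame is not reloaded.
    function sendFragment(message: string): void {
      target.src = "http://www.qiiiep.org/serve.html#" + encodeURIComponent(message);
    }

    // Receiver (inside serve.html): watch its own fragment for changes.
    window.addEventListener("hashchange", () => {
      const words = decodeURIComponent(location.hash.slice(1)); // drop the leading '#'
      console.log("received words:", words);
    });

    sendFragment("deep web crawler"); // request/response length is limited by URL length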

J. Google Gears

Gears is a plug-in that extends the browser to create a richer platform for web applications and can be used on any site. A number of web applications currently make use of Gears, including two Google products, Google Docs and Google Reader; in addition, rememberthemilk.com and zoho.com have used Gears since its launch. Gears uses workers, which act as threads for retrieving information from a URL, and it provides a secure way of communicating between different sites; the createWorkerFromUrl API can be used for that purpose [136][155].

K. XDomainRequest (XDR)

XDR allows a page to connect anonymously to any server and exchange data by using the XDomainRequest object, which requires explicit acknowledgement to allow cross-site calls between the client script and the server; cross-site requests therefore require mutual approval between the web page and the server. A page communicates with another domain by initiating a cross-site request: it creates an XDomainRequest (XDR) object from the window object and opens a connection to a URL. When the request is sent, Internet Explorer 8 includes an XDomainRequest HTTP request header, and in response the server sends an XDomainRequestAllowed response header. Data strings are transmitted by the send method for further processing after the connection is initiated [151][156]. There is no need to merge scripting from third parties and no need for frames or proxying; XDR also supports relative path naming and restricts access to HTTP and HTTPS destinations.
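A sketch of XDomainRequest usage is shown below; the target URL is a placeholder, and the object is only available in Internet Explorer 8/9, so it is accessed here through an untyped window reference.

    // Sketch of an XDomainRequest call (Internet Explorer 8/9 only; placeholder URL).
    // The server must answer with the appropriate cross-site response header.
    const xdr = new (window as any).XDomainRequest();
    xdr.open("GET", "http://data.example.org/words");   // only GET and POST are supported
    xdr.onload = () => {
      console.log("response:", xdr.responseText);       // no cookies or credentials are sent
    };
    xdr.onerror = () => {
      console.log("cross-site request refused or failed");
    };
    xdr.send();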

L. CSSHttpRequest

CSSHttpRequest is a mechanism that can be used to make cross-site requests. It is invoked through the CSSHttpRequest.get(url, callback) function. Data is serialized into CSS @import rules and encoded in 2 KB chunks appended after the URI; the response is then decoded and returned to the callback function as a string [157]. This works because CSS, like JavaScript includes and JSONP, is not subject to the same-origin policy. CSSHttpRequest is limited to making GET requests only, but, unlike JSONP, untrusted third-party JavaScript cannot be executed in the context of the calling page.

M. Mozilla Signed Scripts

Creating signed scripts for Netscape and Mozilla browsers involves acquiring a digital certificate from www.thawte.com or www.verisign.com. The signing tool packages the scripts into a .jar file and then signs this file. The signature on the file guarantees that the owner of the certificate is the author of the file. Users trust signed scripts because, if a script does something malicious, they can take legal action against the signing party. When a Netscape or Mozilla browser encounters a script signed in a .jar file, it checks the signature and allows the script to execute in an extended, privileged environment [158]. Signed scripts have a few main restrictions, including limited browser accessibility and security warnings. The limitation of this technology is that only Firefox or other browsers based on the Mozilla framework can use it, but it can serve as an alternative technique after the client's browser has been detected.

N. WebSocket

HTML 5 WebSocket represents the next advancement in HTTP communications. The HTML 5 WebSocket specification defines a single-socket, bi-directional communication channel for sending and receiving information between the browser and the server. It thus avoids the connection and portability issues of other techniques and provides a more efficient solution than Ajax polling. At present, HTML 5 WebSocket is the leading means of facilitating full-duplex, real-time data exchange on the web. WebSocket traverses firewalls and routers easily, is compatible with binary data, and allows duly approved cross-site data exchange with cookie-based authentication [159].
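A minimal WebSocket exchange is sketched below; the endpoint and message format are placeholders.

    // Sketch of a full-duplex WebSocket exchange (placeholder endpoint).
    const socket = new WebSocket("wss://words.example.org/feed");

    socket.addEventListener("open", () => {
      // Single socket, bi-directional: the client can send at any time.
      socket.send(JSON.stringify({ subscribe: "query-words" }));
    });

    socket.addEventListener("message", (event: MessageEvent) => {
      console.log("pushed from server:", event.data);  // no polling is needed
    });

    socket.addEventListener("close", () => console.log("connection closed"));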

3.4.4 COMPARISON OF DIFFERENT TECHNIQUES FOR CROSS-SITE

COMMUNICATION

Table 3.5 below compares [160] the different technologies for cross-site communication. Fourteen technologies are compared against parameters such as cross-browser compatibility, enforcement of cross-site browser security, capability to receive HTTP status codes (error checking), support for HTTP GET and POST, HTML version compatibility, plug-in requirements, cookie access and data origin identification.

After a critical analysis of the available techniques for implementing client-side cross-site communication, it is concluded that two techniques, iFrame and JSONP, are suitable for implementing query word submission to the knowledge base server, because the basic requirement of knowledge base server query word posting is to send content and not to receive a response unless the knowledge base server site certifies it, so there is no risk of vulnerability. However, certain issues still need to be addressed for cross-site communication; these issues are addressed in this work, and the details are given in the later chapters of this thesis.


Table 3.5: Comparison of Different Available Technologies for Cross Site Communication

Each technique is compared on eight parameters: cross-browser compatibility, enforcement of cross-site browser security, capability to receive HTTP status codes (error check), support for HTTP GET and POST, HTML version compatibility, plug-in requirement, cookie access and capability to identify the data origin.

XDM
Cross-browser compatible: No. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: Yes.

Window.name property
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): No. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: Yes. Identifies data origin: No.

iFrame
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): No. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: No.

Flash Plug-in
Cross-browser compatible: Yes. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: Yes. Cookies access: Own cookies. Identifies data origin: Yes.

Document.domain proxy
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: Yes. Identifies data origin: Yes.

Microsoft Silverlight
Cross-browser compatible: Yes. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: Yes. Cookies access: Yes. Identifies data origin: Yes.

JSONP
Cross-browser compatible: Yes (server dependent). Cross-site security enforced: No. HTTP status codes (error check): No. HTTP GET and POST: No. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: Yes.

JSONRequest
Cross-browser compatible: No. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: Yes.

Fragment Identifier
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: Yes.

Google Gears
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: Yes. Identifies data origin: Yes.

XDR
Cross-browser compatible: No (IE 8, FF 3.5). Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: Yes.

CSSHttpRequest
Cross-browser compatible: Yes. Cross-site security enforced: No. HTTP status codes (error check): No. HTTP GET and POST: GET only. HTML version: HTML 4. Plug-in required: No. Cookies access: No. Identifies data origin: No.

Mozilla Signed Scripts
Cross-browser compatible: No. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 4. Plug-in required: No. Cookies access: Yes. Identifies data origin: Yes.

WebSocket
Cross-browser compatible: No. Cross-site security enforced: Yes. HTTP status codes (error check): Yes. HTTP GET and POST: Yes. HTML version: HTML 5. Plug-in required: No. Cookies access: Yes. Identifies data origin: Yes.

In the next part of this chapter, some of the important algorithmic models for privatized information retrieval are presented.

3.5 PRIVATIZED INFORMATION RETRIEVAL

The web is expanding day by day, and an enormous amount of data is now stored and available on it. The number of people who use websites to store private information through the various types of personal accounts available on different websites is also increasing. In today's scenario, websites are not usually reached directly through their web addresses; instead, search engines are used more and more for referring to and exploring websites. Current search engines have limitations in exploring privatized information, particularly from the deep web, with the facility of combining search results from various sources. Very few attempts have been made to provide the facility of combining search results from different sources for privatized information retrieval, particularly in the deep web scenario.

3.5.1 ORACLE ULTRA SEARCH CRAWLER

Oracle Ultra Search is built on Oracle Text technology and the Oracle database. It provides uniform search and location capability over different repositories, such as Oracle databases, other compliant databases, HTML documents served by a web server, IMAP mail servers and files on disk. Oracle Ultra Search includes a crawler that collects documents and can be scheduled for the websites to be searched. Documents are stored in the repositories, and an index is built from the collected information; the index is stored behind the firewall in a specific Oracle database. Oracle Ultra Search also provides an API for building content management solutions. It consists of the following three main components:

Oracle Ultra Search crawler
Oracle Ultra Search backend
Oracle Ultra Search middle layer

The Oracle Ultra Search crawler is a Java application whose operation is handled by an Oracle server. During its activation period, the crawler generates various processor threads, which extract documents from the different data sources and arrange them into an index using Oracle Text; this index is used for subsequent querying. The data sources from which documents are extracted may be database tables, files, websites and groups of server portal pages. The Oracle Ultra Search backend can be divided into two parts, Oracle Text and the Oracle Ultra Search repository; Oracle Text both indexes the text and provides search capability for the indexing and querying process. The Oracle Ultra Search middle layer consists of the API, the administration tools for Oracle Ultra Search and the query application, and it works as an interface between the Oracle Ultra Search crawler and the backend [161].

3.5.2 IBM LOTUS EXTENDED SEARCH TECHNOLOGY

IBM Lotus Extended Search is a scalable, server-based technology that makes a similar attempt in this area. It has the capability to search multiple content and data sources in parallel, and the results are returned against a query in the form of a web application or Notes. This technology, formerly known as Domino Extended Search and released in 1997, was used to extend the capability of Domino search to various back-end data sources such as Oracle, DB2 and MS SQL Server databases. IBM Lotus Extended Search consists of various interconnected components and handles several functions related to a client's search request, such as verification of data security, ranking of results, the required number of search results, interpretation of the source language, server scaling and load balancing [162].

3.5.3 FORM BASED AUTHENTICATION PROCESS

Authenticated crawling is the major issue in privatized information retrieval in the deep web scenario. One technique for authenticated crawling is based on form based authentication (FBA), in which the authentication information is submitted by the user by filling in an HTML form. It is an authentication mechanism for crawling that uses a web form to enter the authentication information. There is no single universal standard for FBA; various conventions exist. Some techniques send the authentication information to the server and receive a cookie in response, whereas others are more complex and involve redirects to various pages or require the user to obtain a cookie before supplying the authentication information to the server [163].

3.5.4 KERBEROS AUTHENTICATION MECHANISM

The Kerberos authentication mechanism gives enhanced security over NTLM and is the default authentication mechanism for MS Office SharePoint Server 2007 web applications. With this technology, the crawler is not able to crawl sites authenticated by the Kerberos mechanism if the sites are configured on a non-standard port; any port other than SSL port 443 (https) and TCP port 80 (http) is a non-standard port. The crawling process is affected by the polling order of web applications: the crawler starts by polling the default zone, and if the default zone is authenticated by Kerberos and does not use SSL port 443 or TCP port 80, the crawler does not authenticate using the subsequent zones in the polling order. In that case no content of the web application is crawled, which means that the content is neither indexed nor returned in the search results for a web query [164].

3.5.5 DYNAMIC URL TAB TECHNIQUE

Dynamic URL tab [165] is a technique developed for authenticated websites that allows a site owner to supply Yahoo search with information about the dynamic parameters that appear on the site. URL parameters can be used within a private site for various jobs, such as:

changing the content;
tracking the user session through session IDs;
tracking the sources that provide referrals to the private pages and private sites, through a source tracker;
controlling the print format, through a format modifier.

From the literature survey, it is concluded that very few attempts have been made at privatized information retrieval with the facility to combine search results from different sources stored and distributed on different websites.

A detailed discussion of the query intensive interface information extraction protocol technique is given in the next chapter.


CHAPTER 4

QUERY INTENSIVE INTERFACE INFORMATION

EXTRACTION PROTOCOL

4.1 INTRODUCTION

A large part of the Web is “hidden” behind search forms and is accessible only by submitting a set of keywords, or queries, to those forms. The collection of such web sites is termed the Hidden Web or the Deep Web. General search engines normally cannot download and index the information hidden behind search interfaces. In fact, searching the deep web is a difficult process because each source has a unique method of access, leading to the issue of how best to equip a crawler with the necessary input values for use in constructing search queries. To overcome these issues, an open framework based Query Intensive Interface Information Extraction Protocol (QIIIEP) for the deep web retrieval process is proposed [27].

It supports the current trends in the field of deep web information retrieval, which consists of four steps, i.e. query interface analysis, values allotment, response analysis & navigation, and relevance ranking. The proposed protocol reduces the complexity of all these activities of deep web data querying except relevance ranking. Novel mechanisms such as auto query word extraction and an auto form unification procedure have been developed. The protocol offers great advantages in deep web crawling without overburdening the requesting server.

Conventional deep web crawling procedures result in heavy communication and processing loads and in procedural complexity, because they apply either schema matching or improper ontology based querying. This makes it difficult to crawl the entire contents of the deep web. Therefore, in QIIIEP, the trade-off between correct query response and communication load is resolved by generating a knowledge base at the QIIIEP server. Further, the protocol can realize a flexible and highly efficient data extraction mechanism once a QIIIEP server is deployed on a deep web domain. It enables not only one-stop information retrieval but also provides an auto authentication mechanism for the supplied domain.


4.1.1 SPECIFICATION OF INFORMATION EXCHANGE BETWEEN QIIIEP AND CRAWLER

The task of identifying form elements is implemented using a parser, which extracts the different form elements from the HTML code after fetching it from the web server; after querying the QIIIEP server, the crawler submits the associated query words to the form elements in an iterative manner. The specification of QIIIEP, which is based on request-response semantics, is as follows:

1. Request format

Requests must be submitted using the GET or POST method of HTTP; the QIIIEP server supports both methods. A request can contain one or many key=value pairs, e.g. do=RequestType (where RequestType is a type of request such as listquerywords), depending on the request type.
Examples of GET requests:
http://qiiiep.org/qiiep?do=listquerwords
http://qiiiep.org/qiiep?do=listquerwords&url=formurl&formid=37723d89ce0b05185c3e052af8e4b6d6
http://qiiiep.org/qiiep?do=listquerwords&url=formurl&formid=37723d89ce0b05185c3e052af8e4b6d6&querywordrang=100-200
http://qiiiep.org/qiiep?do=listquerwords&url=formurl&formid=37723d89ce0b05185c3e052af8e4b6d6&daterang=2009/5/1-2009/6/1
http://qiiiep.org/qiiep?do=listquerwords&url=formurl&formid=37723d89ce0b05185c3e052af8e4b6d6&noofqueryword=150

2. Response format

Responses are formatted as HTTP responses. The content type must be text/xml. HTTP-based status codes, as distinguished from QIIIEP errors, such as 302 (redirect) and 503 (service not available), may be returned. The response must be well-formed XML with markup as follows (a minimal request/response sketch is given after the element list):
1. XML declaration (<?xml version="1.0" encoding="UTF-8" ?>)
2. Elements:
i. responseDate (UTC datetime)
ii. request (the request that generated this response)
iii. a) error code (in case of an error or exception condition)
     b) query words with the name of the QIIIEP request
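A minimal client side sketch of the above request/response exchange is given below. It is illustrative only: the endpoint is taken from the GET examples above, the responseDate and error element names follow the element list, and the exact XML layout of the returned query words (one queryword element per word) is an assumption, since the specification does not fix it completely.

import requests
import xml.etree.ElementTree as ET

QIIIEP_ENDPOINT = "http://qiiiep.org/qiiep"

def list_query_words(form_url, form_id):
    # Build a listquerywords request as key=value pairs, as in the GET examples above.
    params = {"do": "listquerwords", "url": form_url, "formid": form_id}
    response = requests.get(QIIIEP_ENDPOINT, params=params)
    response.raise_for_status()                    # surfaces HTTP-level codes such as 302 or 503
    root = ET.fromstring(response.text)            # the body must be well-formed XML
    error = root.find("error")                     # error code element, if an exception occurred
    if error is not None:
        raise RuntimeError(error.get("code", "QIIIEP error"))
    response_date = root.findtext("responseDate")  # UTC datetime of the response
    words = [w.text for w in root.iter("queryword")]   # assumed element name for query words
    return response_date, words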


Records

A record is the metadata of a resource in a specific format. A record has three parts: a

header and metadata, both of which are mandatory, and an optional about statement. Each

of these is made up of various components as set out below.

header (mandatory)

datestamp (mandatory: 1 only)

level value elements (optional: 0, 1 or more)

status attribute for deleted item

remaining items

Identifier

It must contain the signature of the QIIIEP server, provided by qiiiep.org, so that the requesting party can identify the QIIIEP server's database specialization and protocol version.

Datestamps

A datestamp is the date of last modification of a knowledge base record. The datestamp is a mandatory characteristic of every item. It has two possible levels of granularity: YYYY-MM-DD or YYYY-MM-DDThh:mm:ssZ. The function of the datestamp is to provide information about query words newly stored in the database, using the daterang argument. Its application is in incremental update mechanisms. It gives either the date of creation, of last modification, or of deletion.

Level value elements

It represents a form element name and the query words associated with it. If the form has more than one input element, the response may contain many level value elements, depending upon the form.

Description

The knowledge base must be able to disseminate further arbitrary information, such as the storing procedure. Any returned metadata must comply with an XML namespace specification.
Returned element set: Title, Format Identifier, Storing procedure (1, 2, 3), Date, Language, Context of domain

Flow control

For any request type there is a chance that a huge list of entries is returned. QIIIEP therefore supports partitioning; the QIIIEP configuration makes the decisions on partitioning: whether to partition and how.
The response to a partitioned request includes:
an incomplete list
a resumption token, with an expiration date
the size of the complete list
a cursor (optional)
Each response includes the next (which may be the last) section of the list and a resumption token; the resumption token is empty if the last section of the list is enclosed. A sketch of harvesting such a partitioned list is given below.
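The sketch below shows how a crawler might consume such a partitioned list. The parameter name resumptiontoken and the element names are assumptions, since the protocol text names the concept but not its exact encoding.

import requests
import xml.etree.ElementTree as ET

QIIIEP_ENDPOINT = "http://qiiiep.org/qiiep"

def harvest_all_query_words(form_url, form_id):
    # Follow resumption tokens until the (assumed) resumptionToken element is empty.
    words, token = [], None
    while True:
        params = {"do": "listquerwords", "url": form_url, "formid": form_id}
        if token:
            params["resumptiontoken"] = token      # assumed parameter name
        root = ET.fromstring(requests.get(QIIIEP_ENDPOINT, params=params).text)
        words.extend(w.text for w in root.iter("queryword"))
        token = root.findtext("resumptionToken")   # empty or absent on the last section
        if not token:
            return words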

Errors and exceptions

The knowledge base must indicate QIIIEP errors by including one or more error elements.

The defined error identifiers are:

idDoesNotExist

badArgument

noQuerywords

noRecordsMatch

Request types

There are two request types:

Identify: It returns Y if query words are found, else N.

Listquerywords: It returns the list of query words.

4.1.2 ARCHITECTURE AND WORKING STEPS OF QIIIEP

QIIIEP is an application-level protocol for semantic ontology based query word composition, identification and retrieval systems. The protocol employs request/response semantics and works on port 55555 of an HTTP server. It generates responses encoded in XML. The basic architecture of QIIIEP is shown in Fig. 4.1.

A module wise description of the basic architecture of QIIIEP is given below.

A. HTTP Server

It is a regular http server on which the web site is deployed.

B. Auto form id generation module

This module parses every form of a web site and merges the form id with the QIIIEP server's query word list, so that at request time the crawler is able to properly identify the search interface of the site when sending its request for keywords to the QIIIEP server.
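The thesis does not prescribe how a form id is computed; the 40 character hexadecimal ids in the GET examples of Section 4.1.1 suggest a hash over the form markup. A minimal sketch under that assumption is given below; it is illustrative only and not the actual QIIIEP implementation.

import hashlib

def form_id(form_html):
    # Assumed scheme: SHA-1 over whitespace-normalized form markup, giving a
    # 40 character hexadecimal id like the ones shown in the GET examples.
    normalized = " ".join(form_html.split()).lower()
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

# e.g. form_id('<form action="/search"><input name="product"></form>')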


C. Auto query word extraction module

This module extracts the query words supplied by users of the site so that the QIIIEP server can extend its query word list by exploiting the advantage of human curiosity in finding relevant information on that domain.

Fig. 4.1: Basic Architecture of QIIIEP

D. Query word ranking module

When a normal user visits a hidden web site and fills in the form for searching, the supplied keywords are transferred to the QIIIEP server through cross-site communication, and the QIIIEP server updates its query word list and ranks the words by applying a ranking algorithm. This module is responsible for providing the best matching query words to the form filling process and for filtering out the less relevant keywords.

E. User Authentication to domain mapper

This module of the crawler is responsible for mapping the login credentials provided by users to the crawler with the information provider domain. The main benefit of using this mapper is to overcome the hindrance of information retrieval between a result link and the information itself. The crawler uses the mapping information to allow the specific person to receive the information contents directly from the domain through an automatic login procedure, removing the extra step of a separate login for the user.


F. Agent for authenticated crawling

This module works for those sites where the information is hidden behind an authentication form. It stores the authentication credentials of every domain, provided by the domain administrator, in the knowledge base of the QIIIEP server. At the time of crawling, it automatically supplies the authentication information to the domain for downloading the hidden contents. These contents are used only for indexing keywords and are not made available by the search service as cache, because of privacy issues.

G. Query word to URL relation manager

This module stores each and every query word associated with a specific form element by creating a reference to the domain path, so that at the time of link generation in response to a search, the query word can be mapped to provide the contents by sending it in a POST request to the domain.

The working steps of this protocol (see Fig. 4.1) are given below; a crawler-side sketch of steps 1-6 follows the QIIIEP() algorithm.
1. Request a page from the web server.
2. Analyse the form to identify the search interface. The search interface must include a "rel tag" describing the QIIIEP server address and the form id.
3. Send a request to the QIIIEP server to get the semantic query word list, defined by the site administrator or the QIIIEP auto query word extractor, to correlate with the form fields.
4. Receive the information (entries of the form) from the QIIIEP server.
5. Submit the filled form, with the received query words placed in it, to the HTTP server.
6. Get the response page from the HTTP server.
7. The QIIIEP server works independently; its algorithm is given below.

Algorithm: QIIIEP()
do
{
    Monitor the interface
    {
        1.1 If a query is submitted, extract the query words, store the results in the QIIIEP
            database for analysis and get the response generated by the deep web site.
        1.2 Store the result into the QIIIEP server.
        1.3 Merge the form ids into the forms at the time of form generation.
        1.4 Fetch the form id to query word table relationship from the QIIIEP server.
    }
} forever
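The crawler side steps 1-6 above can be sketched as follows. This is a minimal sketch under stated assumptions: the layout of the rel tag (server address followed by form id), the queryword element name and the form submission details are hypothetical, since the protocol text does not fix them.

import requests
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup

def crawl_deep_page(page_url):
    # Step 1: request the page from the web server.
    html = requests.get(page_url).text
    # Step 2: analyse the form; the rel tag is assumed to hold "<qiiiep-server-url> <form-id>".
    form = BeautifulSoup(html, "html.parser").find("form", attrs={"rel": True})
    rel = form["rel"]
    rel = " ".join(rel) if isinstance(rel, list) else rel
    qiiiep_server, formid = rel.split()
    # Steps 3 and 4: fetch the semantic query word list for this form from the QIIIEP server.
    xml_text = requests.get(qiiiep_server,
                            params={"do": "listquerwords", "url": page_url,
                                    "formid": formid}).text
    words = [w.text for w in ET.fromstring(xml_text).iter("queryword")]
    # Steps 5 and 6: submit the filled form once per query word and collect the response pages.
    field = form.find("input", attrs={"name": True})["name"]
    action = requests.compat.urljoin(page_url, form.get("action", page_url))
    return [requests.post(action, data={field: word}).text for word in words]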


4.1.3 IMPLEMENTATION ISSUES OF QIIIEP

The working mechanism of the framework is not only very simple to implement but also imposes few amendments to the existing infrastructure. All the functionality is implemented over an HTTP server. The crawler sends its request for keywords as an XML encoded document consisting of URL and form-id pairs. The QIIIEP server also responds to this request with an XML encoded document over a secure network. In fact, the system has inherited the request-response XML document formulation from OAI-PMH. The QIIIEP server can be deployed on the target server or on a separate server as well.

The web is in a phase of interoperative, extensive data diffusion, necessitating proper information dissemination, and the QIIIEP protocol fulfills this need by way of cross-site communication, a technique that provides communication between different sites using various client or server side methods. A detailed discussion of cross-site communication for the QIIIEP protocol is given in the following section.

4.2 CROSS SITE COMMUNICATION ON QIIIEP

In this work, the different aspects of various cross-site communication techniques have been analyzed with reference to the QIIIEP protocol, together with their relative advantages, limitations and security concerns, with the purpose of finding an appropriate solution for query word extraction from web pages leading to efficient storage of the query words on the QIIIEP server [27]. iFrame and JSONP are the two techniques identified in Chapter 3, because the basic requirement for QIIIEP query word posting is to send content and not to receive a response unless the QIIIEP site certifies it, so the risk of vulnerability is minimized.

4.2.1 IMPLEMENTATION APPROACH

Implementation of QIIIEP is based on cross-site communication [160], which is a technique to provide communication between different sites using various client or server side methods. One of the QIIIEP modules for query word extraction, which extracts the query words at the time of form submission by the user, is based on a proper implementation of cross-site communication. This cross-site communication must be on the client side, because server-side cross-site communication requires changes to the server-side code deployed on the server. However, server-side solutions result in considerable degradation of the browsing experience, as data propagation consumes extra time, while client-side solutions can pose a security threat. So there is a need for an efficient client side solution that does not degrade performance and is secure enough.

The auto query word extraction module extracts the query words supplied by users of a site so that the QIIIEP server can extend its query word list by exploiting the advantage of human curiosity in finding relevant information on that site. It is composed of two sub modules, as described below.

A. Script provider

In order to establish QIIIEP cross-site communication, it is necessary to embed a script in the query interface of the third party web page, so that the supplied query word can be received at the time the user submits the form. This module generates such a script and embeds it in the query interface at the time of form design.

B. Query word extractor

The query word submitted on a form interface by the user is forwarded to the target server and to the QIIIEP server as well, through the cross-site communication mechanism. After extraction and analysis, the words are stored in the knowledge base by the auto query word extractor module. The words in a user query are stored on the QIIIEP server only when the user enables the option of sharing keywords with the QIIIEP server. The block diagram of the auto query word extraction module is shown in Fig. 4.2. It uses cross-site communication for extraction of query words from a third party's web form interface.

Fig. 4.2: Auto Query Word Extraction Module

Module wise working of auto query word extractor module is given below.

a. Query word fetcher

It retrieves the words and strings entered by the user at the time of manual form submission. When the query words are posted to the originating server by a user, a simultaneous request also posts the query words to the QIIIEP server.


b. Query word analyzer

It analyzes the retrieved query words, adds the form id, element id and other properties like time stamp, IP address and submitting URL, and stores them in the QIIIEP knowledge base.

The designed system provides an effective way to extract query words by using a dynamic script element in the web page, where a JavaScript file is loaded from the QIIIEP server by pointing to the URL of the QIIIEP server. When the script is executed, it sends the query words back to the QIIIEP server; under the same-origin policy the browser does not prevent dynamic script insertion but rather treats the script as if it were loaded from the site that provides the web page.

JSONP Based

The document object model (DOM) allows an element to be created dynamically on a page, and this can be used to create a SCRIPT tag. After a source is set, the browser tries to load the script file from that source. At the time loadContent is called, the source JavaScript file is specified as the source, and this file is used to transfer the query words to the QIIIEP server. JSONP is an efficient cross-site communication method that bypasses the same-origin policy boundaries. JSONP gives the remote side arbitrary control over the page content. The limitation of this solution is that only a GET request (one of the methods specified in the HTTP protocol for retrieving data from a server by querying) can be used with JSONP. In the scenario of QIIIEP query word submission, all the query words of the form must be encoded in the GET request and posted through JSONP.

iFrame based

As an alternative solution, forms can use the POST method (one of the methods specified in the HTTP protocol for sending data to the server) and JavaScript to submit forms. A form can be generated dynamically with its action attribute set to any URL (including pages on other sites) and with all query parameters encoded as hidden <input> fields. The frame size is so small that it is not visible on the page, and when the user submits the form, a JavaScript function postdata() automatically submits the form to the QIIIEP server. Since this approach does not get any response data from the frame, it is truly a "fire and forget" approach. The first part is the crossDomainPost() JS function. This is the function called from the script to initiate a POST request. The function accepts the following two arguments:
writer_url – the URL of the script that will generate the form.


parameters – query parameters for the POST request, formatted like this : {param1 :

'value1', param2 : 'value2', ...}

As a result, the final script contains the following three components:
1. A JavaScript function that creates the iFrame.
2. A script that runs inside the frame to generate and submit the form.
3. The PHP script that saves the key/value pairs on the QIIIEP server (an illustrative sketch of such a collection endpoint is given at the end of this subsection).
The HTML script tag is the last frontier of unrestricted access for browser-based applications. Depending on the viewpoint, it either increases security risk or is a tool to make rich clients even richer. The features of the JSONP and iFrame implementations are used with a script for posting the query words to the QIIIEP server.
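The thesis implements the server side receiver as a PHP script (component 3 above). The sketch below is an equivalent, purely illustrative collection endpoint written in Python with Flask; the endpoint path, parameter names and table columns are assumptions and not part of the actual implementation.

from flask import Flask, request
import sqlite3, time

app = Flask(__name__)

@app.route("/collect", methods=["GET", "POST"])    # GET for JSONP, POST for the hidden iFrame
def collect_query_words():
    data = request.values                          # merged query string and form fields
    with sqlite3.connect("qiiiep.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS querywords "
                   "(formid TEXT, element TEXT, word TEXT, ip TEXT, ts REAL)")
        db.execute("INSERT INTO querywords VALUES (?,?,?,?,?)",
                   (data.get("formid"), data.get("element"), data.get("word"),
                    request.remote_addr, time.time()))
    return ""                                      # fire and forget: the posting page ignores the response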

4.2.2 SECURITY MEASURES

Cross-site scripting (XSS) is a kind of web security deficiency, generally found in web applications, that enables malicious attackers to insert client-side script into web pages viewed by other users. The QIIIEP server is deployed on premium hardware and uses SSL for secure transmission, so it is authentic and there is no chance of embedding any malicious script on any third party web page [136] [138] [166].

4.2.3 IMPLEMENTATION RESULTS

The experiment was conducted on a machine having an Intel Core 2 Duo T5870 @ 2.0 GHz with 1 GB of RAM running Windows 7. Tests were performed using a WAMP server equipped with PHP v. 5.2.6 and MySQL v. 5.0.51b. All of the tests were performed on Firefox 3.5.4 with JRE (Sun) 1.6 and Shockwave Flash 10. To minimize measurement errors, all tests were performed multiple times. Both technologies work fine in a configuration in which two forms are used: the first form has three input elements, i.e. PID, Price and Weight, while the second form has Name, Age and Gender. When a form is submitted to the originating server, a request is simultaneously forwarded to the QIIIEP server to store the query words entered by the user. The stored words are shown in Table 4.1 and Table 4.2.

Cross-site communication is very necessary for any web application like QIIIEP, but security is a prime concern to avoid data breaches due to unsecured cross-site communication. Various existing, well established cross-site communication technologies for QIIIEP are therefore analyzed with special reference to their security features.

Table 4.1: Stored Words in Database - Case 1
Table 4.2: Stored Words in Database - Case 2

After analyzing the client-side solutions, including JSONP, iFrames, usage of the browser window object and Internet Explorer 8's XDomainRequest, it is concluded that both JSONP and iFrames can prove to be very powerful techniques for cross-site communication for QIIIEP. The limitations of JSONP are that it has no error handling capability for JSONP calls and that it supports only GET requests and explicit cancellation, without the possibility of restarting a request. With iFrames, error handling is quite complex and the response is not traceable. However, both solutions are satisfactory for QIIIEP, as it does not need an explicit response to the posted request and there is no chance of malicious code intruding in a reply, due to the authenticated process at the QIIIEP server. After simulation with pre-composed test queries on the QIIIEP server, it was found that the results meet the particular requirement of sending query words to the QIIIEP server.

Ranking of query words has a very significant role in providing authentic and useful information to the web user. QIIIEP is a protocol for crawling deep web contents for a search engine, where information is generated on the basis of the supplied query. A query word ranking algorithm is used to rank the query words stored on the QIIIEP server so that fewer POST requests have to be generated for crawling the deep web, while still extracting authentic information against a specific query interface. In the following section, a novel technique for ranking query words is presented with reference to the QIIIEP server.


4.3 RANKING ALGORITHM OF QUERY WORDS STORED IN

QIIIEP SERVER

In the literature survey it was observed that ranking of query words requires further parameters such as the density, position, ordinality and proximity of the search term. In this section, a novel ranking technique is presented with special reference to the QIIIEP server.

4.3.1 QUERY WORD RANKING ALGORITHM

In this algorithm [120], a ranking based on the relevance of the context of the web page with respect to the keyword supplied by the user is proposed. It assigns a numerical weight to each query word with the purpose of indicating its relative importance within the set. For the initial grouping of query words, the context of the words with reference to the web page is used. Based on the category of a word (say Ci), the word is placed in a group. A single query word is randomly selected from the group. The priority of a group is finalized by assigning a rank to each group based on the following relevance criteria. The relevance is computed in three steps:
(1) Root words are created for the query terms.
(2) The context is identified.
(3) Relevance weighting is conducted for a number of parameters [167].
A randomized probabilistic selection algorithm is used for selecting the query word after analyzing the exact search term and the stemmed search term. A random number is used as an arbitrary input to channelize the algorithm's behaviour towards producing the desired result. The priority identification is assimilated with various quality measures, which have been contemplated in order to improve the results. All these measures tend to affect the ranking results in real time.

Let Cij denote the number of query words in the ith category at the jth position. Then

Σi Σj Cij = n                                                    (4.1)

where n is the number of words in a given query.
The initial selection Si can be defined as:

Si = Rand(x, (p ∈ Ci))                                           (4.2)

where x is the seed and p is an element in category Ci.


The next selection Si+1 can be defined as:

Si+1 = Rand(x, (p ∈ Ci))                                         (4.3)

where Ci is the category having the highest priority derived from the context seed.
The algorithm for randomized probabilistic selection and priority generation is given below:

Algorithm: R_P_SR(W, L)
Input:
    W // List of query words
    L // Link of the query interface
Assumptions:
    1. A query q is a single word of the English language.
    2. The QIIIEP server must have information about the query interface, and the contents of the web document must be in HTML.
Output:
    A list of words arranged by priority
1. Identify the context of the query interface through link L.
2. Divide the list of query words into groups based on the categories generated from the context derived from the query interface link.
3. Select a random population Si from the groups containing Cij query words, using equation 4.2.
4. Evaluate the fitness f(x) of each query word p in the population by analyzing the results generated from querying the deep web server.
5. Repeat the following steps to assign a priority to each group until every word has been used for querying:
    5.1 Select the groups that produce better results and assign them a higher priority according to their fitness (the better the fitness, the better the chances of being selected).
    5.2 Select more query words from the high priority groups; also randomly select groups with low priority and select a few query words from them.
    5.3 Evaluate the fitness f(x) of each query word p in the population by querying and analyzing the result.


A flow graph of the algorithm, divided into three sub processes, i.e. root-word stemming generation, context identification & clustering, and fitness evaluation, is given in Fig. 4.3.

Fig. 4.3: Flow Graph of Query Word Selection Algorithm

A. Root-Words Stemming Generation

Words are converted to their base or root form through the stemming technique. In a very fundamental way, it finds the singular form of a search term given in plural form and vice-versa. This process can be implemented by eliminating "s" or "es" from English search words. In most situations search terms are stemmed before the actual search starts. Frequently it is necessary to state whether an exact search or a stemmed search is to be executed. There are various techniques based on linguistic analysis for extracting the root form of a word; examples are the Porter algorithm and the trivial algorithm, which are basically based on affix removal. Since deep web interfaces generally have singular entry labels, stemming of root words becomes important, as it reduces the redundancy of words having different modifications (such as -s, -es, -ing suffixes) in the QIIIEP server. Moreover, it helps in the formation of good clusters and enhances searching efficiency, because there are fewer words to be searched in the QIIIEP server after root word stemming.

B. Context identification and clustering

This method exploits techniques of context identification. A word's context is mainly made up of the text surrounding the given words, hyperlinked documents, document titles and so on. The characteristics of the context are very important because they help to measure the relevance of the search term. Term selection is one of the methods of extracting the terms that can be used as the context for the query interface. In order to quantify the importance of a term t in a surface web page collection, a weight is assigned to the term. After the weight of a term is calculated, the words are stored according to their weights in a context table. By experimenting on the test domain, context words having high weight, like shop, market, category, purchase and online, were collected. In the next step, the co-occurrence of a given term t with a stored query word is found by using a surface web search engine. Context identification helps in finding the relevance of the search term in the document, and clustering helps in making groups of relevant words. As an example, assume a set of words is stored in the query word knowledge base for a specific element having the label Product, i.e. printer, scanner, mp3 player, speaker and monitor. In this case, the following three word groups (context words, label and query words) are required (see Table 4.3).

Table 4.3: Clustering of Context Words, Label and Query Words

Context words   Label     Query words
Shop            Product   Printer
Market                    Scanner
Category                  mp3 player
Purchase                  Speaker
Online                    Monitor

C. Fitness evaluation

The occurrence of the search term within the search results is analyzed, and weights are consequently assigned for different factors such as the density, position, ordinality and proximity of the search term. The occurrence of stemmed terms and exact terms is also taken into consideration. Relative weights can be assigned to different result fields, with higher weights assigned to collections of higher importance and to recent results. The priority of the groups is assigned for ranking the groups according to their relative weight.

1. Density of search term -

The density of a keyword is measured by counting the occurrences of a particular keyword in a given page, indicating the probability of finding content relevant to the given query.
The algorithm for measuring the density of the search term is given below.

Algorithm DST(k, q)          // k = keywords of the page, q = query words
{
    String str <- k.data
    String token[ ] <- parse(str, #)
    String qstr <- q.data
    String qtoken[ ] <- parse(qstr, #)
    c <- 0, j <- 0
    while (qtoken[j] != NULL)
        i <- 0
        while (token[i] != NULL)
            if (token[i] = qtoken[j])
                c <- c + 1
            i <- i + 1
        end while
        j <- j + 1
    end while
    return c
}

2. Position of the search term –

The position of a search term refers to where the search term is located within a particular field. Special consideration is given if a search term is located at the first word position, at the last word position, or at a relative position between the first and last words.
The algorithm for measuring the position of the search term is given below.

Algorithm PST(k, q)
{
    String str <- k.data
    String token[ ] <- parse(str, #)
    String qstr <- q.data
    String qtoken[ ] <- parse(qstr, #)
    c <- 0, j <- 0
    while (qtoken[j] != NULL)
        i <- 0
        while (token[i] != NULL)
            if (token[i] = qtoken[j])
                pos[j][c] <- i
                c <- c + 1
            i <- i + 1
        end while
        j <- j + 1
        c <- 0
    end while
    return pos[ ][ ]
}

3. Ordinality of the search term -

The ordinality of the search term refers to whether the search terms appear in the same order as given in the search expression or not. Ordinality can play an important role in searching the query words, and a higher weight should be given to terms occurring in the same order. Similarly, ordinality related to multiple occurrences can also be an important factor.
The algorithm for measuring the ordinality of the search term is given below.

Algorithm OST(k, q)
{
    j <- 0, p <- 0
    String str <- k.data
    String qstr <- q.data
    String qtoken[ ] <- parse(qstr, #)
    while (str != NULL)
        while (qtoken[j] != NULL)
            y <- str.indexof(qtoken[j])
            if (qtoken[j+1] != NULL)
                z <- str.indexof(qtoken[j+1])
                if (z > y)
                    ord[p] <- true
                else
                    ord[p] <- false
                p <- p + 1
            end if
            j <- j + 1
            str <- str.substring(y + qtoken[j].length + 1)
        end while
    end while
    return ord
}

4. Proximity of the search term -

The proximity of the search terms refers to how closely the search terms are related to each other. While estimating search term proximity, the search terms within the query expression and the distance between recurring search terms are considered. The proximity of search terms can play a significant role in searching the query words. The algorithm for measuring the proximity of the search term is given below.

Algorithm XST(k, q)
{
    j <- 0, p <- 0
    String str <- k.data
    String qstr <- q.data
    String qtoken[ ] <- parse(qstr, #)
    while (str != NULL)
        while (qtoken[j] != NULL)
            y <- str.indexof(qtoken[j])
            if (qtoken[j+1] != NULL)
                z <- str.indexof(qtoken[j+1])
                diff[p] <- z - y
                p <- p + 1
            end if
            j <- j + 1
            str <- str.substring(y + qtoken[j].length + 1)
        end while
    end while
    return diff
}

These parameters provide the relevance ranking information that is used to evaluate the fitness function f(x), as sketched below.
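A minimal sketch of how the four factor scores might be combined into f(x) is given below. The weights and the linear combination are illustrative assumptions only; the thesis does not fix the exact combination formula.

def fitness(density, positions, ordinality, proximity,
            w_density=0.4, w_position=0.2, w_order=0.2, w_proximity=0.2):
    # Combine the DST, PST, OST and XST measurements into a single score f(x).
    # Earlier positions are considered better, so they are mapped to (0, 1].
    position_score = sum(1.0 / (1 + p) for row in positions for p in row)
    order_score = sum(ordinality) / len(ordinality) if ordinality else 0.0
    # Smaller gaps between successive terms mean higher proximity.
    proximity_score = sum(1.0 / (1 + abs(d)) for d in proximity) if proximity else 0.0
    return (w_density * density + w_position * position_score +
            w_order * order_score + w_proximity * proximity_score)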

4.3.2 EXPERIMENTAL RESULTS

The experiment was conducted on a machine having an Intel Core 2 Duo T5870 @ 2.0 GHz with 3 GB of RAM running Windows 7. Tests were performed using a WAMP server equipped with PHP v. 5.2.6 and MySQL v. 5.0.51b. All of the tests were performed on Firefox 3.5.4 with JRE (Sun) 1.6 and Shockwave Flash 10. The web sites http://www.isuvidha.com and http://shopping.indiatimes.com are used as a test bed. On the http://shopping.indiatimes.com/ site there is a product listing, which is searched by supplying different keywords according to preference. 50 different keywords are used for filling a single element of the form, and these query words have been collected in the knowledge base of the QIIIEP server. There is one text element in the search form in both case 1 and case 2, i.e. product and search respectively. The main interest lies in selecting appropriate keywords through a proper ranking score for product by matching the context with isuvidha.com.

The context words are stored according to their weights in the context table. Tests were carried out on the sites http://isuvidha.com and http://shopping.indiatimes.com/ with a view to collecting context words having high weight. Consequently, the context words "shop", "market", "category", "purchase", "online" and "computers and peripherals", "mobile and accessories", "electronics", "movies" were collected. Further, the performance of the randomized probabilistic selection algorithm was tested for ranking of keywords. A set of words stored on the QIIIEP server is first clustered into groups so that similar words are in the same cluster. This is done by querying each pair of words to a search engine, which results in a co-occurrence matrix. Web counts of the query words are shown in Table 4.4 for case 1.


Case 1 Example:

Site name: http://isuvidha.com

Context of the site is:

1. Shop

2. Market

3. Category

4. Purchase

5. Online

Suppose the query words are:

Printer, scanner, mp3 player, speaker, monitor

Form element label: Product

Table 4.4: Number of Web Counts versus Query Words for Case 1

Query Words                    Web Count
shop product scanner           368,000
shop product mp3 player        7,000,000
shop product speaker           9,430,000
shop product monitor           8,090,000
market product printer         8,040,000
market product scanner         289,000
market product mp3 player      286,000
market product speaker         8,360,000
market product monitor         10,600,000
category product printer       65,500,000
category product scanner       3,490,000
category product mp3 player    30,900,000
category product speaker       32,300,000
category product monitor       45,600,000
purchase product printer       39,500,000
purchase product scanner       270,000
purchase product mp3 player    7,690,000
purchase product speaker       19,300,000
purchase product monitor       12,900,000
online product printer         12,400,000
online product scanner         359,000
online product mp3 player      7,030,000
online product speaker         17,300,000
online product monitor         21,600,000

For clustering the words, two algorithms, i.e. PMI and the chi-square algorithm, can be used. Here the chi-square algorithm is used, because the clusters of words it produces are more relevant than those of the PMI method. Here the web count values are divided by one thousand (see Fig. 4.4: Counts of Keywords Found on the Surface Web).

Fig. 4.4: Counts of Keyword Found on Surface Web


In this algorithm, the chi-square (χ²) value is used. For computing clusters of words, the chi-square value of a pair consisting of a context word w1 and a query word w2 is calculated as follows. Denote the web count of pages containing both w1 and w2 as a, the remaining count in the row of w1 as b', the remaining count in the column of w2 as c', the count of all the other cells as d', and the total count as N'. Thereby, the expected frequency of (w1, w2) is (a + b')(a + c')/N'. Eventually, where W represents the given set of words, the chi-square value of the pair (w1, w2) within the word list W is calculated as follows [168][169]:

χ²(w1, w2) = N' (a d' − b' c')² / ((a + b')(c' + d')(a + c')(b' + d'))        (4.4)

where, for example, with w1 = shop product and w2 = printer and W the set of all words:
b' = the sum of the web counts of all words in the row of shop product, except w2 = printer
c' = the sum of the web counts of all words in the column of printer, except w1 = shop product
d' = the sum of the web counts of all words, except those in the row of shop product and those in the column of printer
N' = the sum of the web counts of all words
A computational sketch of this pairwise chi-square follows.
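For illustration, a minimal Python sketch of this pairwise chi-square computation over a web count matrix is given below. The few counts shown are copied from Table 4.4 only to make the sketch runnable; because the matrix is a subset of the full table, the resulting values differ from those in Table 4.5.

def chi_square(counts, row, col):
    # counts: dict mapping context word -> dict mapping query word -> web count.
    a = counts[row][col]
    b = sum(v for c, v in counts[row].items() if c != col)            # rest of the row (b')
    c = sum(r[col] for k, r in counts.items() if k != row)            # rest of the column (c')
    d = sum(v for k, r in counts.items() for cc, v in r.items()
            if k != row and cc != col)                                # everything else (d')
    n = a + b + c + d                                                 # grand total (N')
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

web_counts = {
    "shop product":   {"scanner": 368000, "mp3 player": 7000000, "speaker": 9430000},
    "market product": {"scanner": 289000, "mp3 player": 286000,  "speaker": 8360000},
    "online product": {"scanner": 359000, "mp3 player": 7030000, "speaker": 17300000},
}
print(chi_square(web_counts, "shop product", "scanner"))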

The computed chi-square values for case 1 are given in Table 4.5.

Table 4.5: Chi Square Value Matrix

Context Words / Query Words   Printer    Scanner    mp3 player   Speaker    Monitor
shop product                  736000     13000      1152000      348000     183000
market product                572000     10000      4137000      930000     2357000
category product              155000     1326000    3248000      4225000    32000
purchase product              8330000    688000     1561000      103000     5113000
online product                6510000    235000     228000       1710000    4143000

The groups formed by applying a high chi-square threshold to the context words are given below. Equation (4.4) was applied to the data given in Table 4.4 to obtain the high values of the various terms, as given in Table 4.5. The results are illustrated in Fig. 4.5. Taking the high chi-square values to form the group for each context word, the groups can be formed as:
G1: shop product - mp3 player, printer
G2: market product - mp3 player, monitor


G3: category product - speaker, mp3 player, scanner
G4: purchase product - printer, monitor, mp3 player
G5: online product - printer, monitor, speaker

Fig. 4.5: Computed Chi-Square Values for Each Group for Each Context Word under Test for Case-1

Case 2 Example:

Site name: http://shopping.indiatimes.com/

Context of the site is:

1. computers and peripherals

2. mobile and accessories

3. electronics

4. movies

Suppose the query words are:

mouse, maachis, nokia, Bluetooth, radio

Form element label: search

Web counts of the query words are shown in Table 4.6 for case 2.

Table 4.6: Number of Web Counts versus Query Words for Case 2

Context Words / Query Words          Mouse        Maachis   Nokia        Bluetooth    Radio
computers and peripherals search     272,000      41        1,800,000    260,000      283,000
mobile and accessories search        67,900,000   22,100    94,600,000   85,900,000   325,000,000
electronics search                   38,900,000   12,700    7,920,000    11,800,000   76,100,000
movies search                        895,000      20,700    76,100,000   31,900,000   416,000,000


The computed chi-square values for case 2 are given in Table 4.7.

Table 4.7: Computed Chi-Square Values for Case 2

Context Words / Query Words          Mouse       Maachis   Nokia      Bluetooth    Radio
computers and peripherals search     9000        1000      6181000    0.0          3582000
mobile and accessories search        12925000    1000      3086000    22742000     42858000
electronics search                   76881000    7000.0    9226000    493000       6309000
movies search                        83992000    1000      7000       19059000     69966000

Equation (4.4) was applied to the data given in Table 4.6 to obtain the high values of the various terms, as given in Table 4.7. The results are illustrated in Fig. 4.6. Taking the high chi-square values to form the group for each context word, the groups can be formed as:
G1: computers and peripherals search - nokia, radio
G2: mobile and accessories search - nokia, Bluetooth, radio, mouse
G3: electronics search - mouse, Bluetooth, radio
G4: movies search - mouse, radio

Fig. 4.6: Computed Chi-Square Values for Each Group for Each Context Word under Test for Case-2

The major advantage of grouping keywords is that it minimizes useless query posting and decreases the load on, as well as the bandwidth consumed from, the deep web server. The next part of the algorithm randomly selects query words from the groups to post actual queries to the deep web. The returned results are analyzed to find out the relevance of that group's keywords to the deep web query interface. If the returned results are relevant, the group gets a high priority.


A sophisticated ranking algorithm is required for selecting the appropriate query word from the QIIIEP server for posting in order to crawl a deep web resource. A ranking algorithm for the deep web must consider as many parameters as possible. The main parameters considered in this algorithm are stemming, the context of the query interface, and the density, position, ordinality and proximity of the search term. The algorithm assigns relative weights based on the relationship between the search term, the context of the page and the frequency of simultaneous occurrence of words on the surface web. In this algorithm, the weight assigning process depends heavily upon the occurrence of the context word, the query label and simultaneous keywords on the surface web. By including all these factors, ranking can be improved to an optimum value for deep web search, as reflected in the experimental results of the algorithm.

In the next part of the chapter, the framework for privatized information retrieval in the deep web scenario is presented.


CHAPTER 5

A FRAMEWORK FOR PRIVATIZED INFORMATION

RETRIEVAL IN DEEP WEB SCENARIO

5.1 INTRODUCTION

In today's scenario the web works as a repository for sharing and storing information. Generally, personal information is delivered or distributed on different platforms such as social networking sites, email providers and blogging sites, whereas the service providers do not have adequate internal searching facilities; therefore, it is tough for an individual to search personal contents on these different sites. For instance, Internet users employ the Internet for different needs and store their privatized information in their private areas on the web. Searching for specific information in a private area depends on site-specific search functionality, which is sometimes inadequate for retrieving such information, especially on deep web sites.

Most individuals use more than one email service, and their information is stored and distributed on different websites. There are possibilities of saving time and resources if an authenticated user is provided with combined search results about his/her privatized information from the different sources stored and distributed on different websites. The literature indicates that current web search engines are inadequate in the sense that there is almost no technique available to combine, in response to a query word submitted to a search engine, search results from private areas stored and distributed on different websites.

In this work [170], a framework is proposed for privatized information retrieval in the deep web scenario for QIIIEP, with the facility of combining search results about privatized information from different sources distributed on different websites. The functioning of the proposed framework is divided into two parts: the mechanism of the first part is based on "authenticated crawling" and the mechanism of the second part is based on "user authentication to domain mapping". From the literature survey, it was identified that very little work has been reported on privatized information retrieval, and that the existing work is still immature with respect to combining search results from different sources stored and distributed on different websites. Hence, in this work, an efficient technique that combines privatized search results from different sources stored and distributed on different websites is proposed, together with mechanisms for security and user authentication.

5.2 FRAMEWORK FOR PRIVATIZED INFORMATION RETRIEVAL

The framework of the proposed mechanism is shown in Fig. 5.1.

Fig. 5.1: Framework for Privatized Information Retrieval

The working of the various components of the proposed framework is given below.
First of all, users supply to the deep web search engine (DWSE) the credentials of their own private areas distributed on different sites, as shown in Fig. 5.2.

Fig. 5.2: Form for Receiving User’s Credentials on Search Engine

These credentials are then used to crawl keywords from the private areas of the user by authenticating on the different sites. When the user provides a keyword after logging on to the search engine, it matches the keyword with the crawled private keywords of the user and provides the corresponding connection links. If the user clicks on any of the returned links, the search engine authenticates itself on the specific site and returns the original page in real time, thereby eliminating manual authentication.

As shown in Fig. 5.1, every user who wants to use web searching for privatized information from the net must be a registered user of the DWSE. After authentication, users supply their credentials, which the search engine uses for authentication when entering information from the different private sites at the time of crawling. Since the crawler is now equipped with the necessary information to crawl the privatized information, it can act as the authenticated user for the purpose of passing the login information to the different sites and retrieving the entire content on behalf of the user. After analyzing the content, the crawler stores some private keywords pertaining to the user for future retrieval of information. Thereafter, whenever the user submits a query word from the search interface, the deep web search engine matches the keywords not only with public data but also with the private keywords and returns the related list of links. When the user clicks on any of the links, the DWSE generates the result page by traversing the different links belonging to the user's private content distributed over different service providers. In fact, the DWSE authenticates itself on behalf of the user, in real time, on the specific site on which the content related to the keyword matched, and provides the original contents directly to the user. Thus the DWSE works as a tunnel between the user and the HTTP server.

5.2.1 USER AUTHENTICATION TO DOMAIN MAPPER

This module of the crawler is responsible for mapping the login credentials, provided by the user to the crawler, with the information provider domain. The main benefit of using this mapper is to overcome the hindrance in information retrieval: the crawler uses the mapping information to allow the specific person to receive the information contents directly from the domain through an automatic login procedure, eliminating the step of a separate login for the user.

Fig. 5.3: Framework for User Authentication to Domain Mapper


As shown in Fig. 5.3, the user first provides their credentials to the deep web search engine. When the user clicks on any link returned by the search engine against the search, the DWSE virtually logs in on behalf of the user to the user's private site and provides the original contents from that site, working as a proxy between the user and the site.

5.2.2 DEPLOYMENT OF AGENT FOR AUTHENTICATED CRAWLING

This agent works (see Fig. 5.4) when the information on a site is hidden behind an authentication form. It stores the authentication credentials for every domain, provided by the individual user, in its knowledge base situated at the QIIIEP server / search engine. At the time of crawling, it automatically authenticates itself on the domain for crawling the hidden contents. The crawler extracts and stores keywords from the contents and makes them available only to the privatized search service, because of privacy issues.

Fig. 5.4: Framework for Agent for Authenticated Crawling

5.3 IMPLEMENTATION OF THE PROPOSED FRAMEWORK

The implementation of the proposed framework can be divided into two parts. In the first part, virtual authentication is done, and in the second part, the crawler acts as a tunnel between the user and the HTTP server. Both the POST and GET methods of the HTTP protocol are considered, with session management, in the authentication process.

5.3.1 IMPLEMENTATION OF USER AUTHENTICATION TO DOMAIN

MAPPER

The working of the user authentication to domain mapper consists of three major steps.
1. The user logs in on the search engine, all search terms are evaluated against the keywords retrieved from the private areas, and the links are generated accordingly. The ranking of the links is based on the search engine's own ranking algorithm, overridden by the private area keyword ranking.
2. When the user clicks on a link, the search engine authenticates itself, acting as a virtual user, on the service provider's site. In this process, real time authentication is performed with the above-mentioned procedure, similar to the agent for authenticated crawling.
3. The mapper is responsible for generating the appropriate results to be visualized by the user. This is done by extracting the contents from the user's private area and sending them to the user's screen. In this process, the search engine works as a virtual tunnel between the user and the user's site.

The user authentication to domain mapper is designed to provide exactly the information wanted by the user. Its working is divided into three steps.
1. It provides the list of links from the different private areas to the user during the search process.
2. When the user clicks on a link, it authenticates the search engine on the private area site by using the user's credentials.
3. After authentication, the crawler works as a proxy between the user and the private area site.
The authentication can be done by a POST or GET request, depending upon the implementation of the user's private area site. The implementation of this module is based on POST simulation using HttpWebRequest. The format of the request is:
HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
The code fragment is given below.

Dim requestStream As Stream
Try
    ' Write the simulated POST body (the user's credentials) to the request stream.
    requestStream = httpWebRequest2.GetRequestStream()
    requestStream.Write(bBuffer, 0, iBytesRead)
    requestStream.Close()
    ' Read the server's response and hand it over to the crawler on success.
    Dim webResponse2 As WebResponse = httpWebRequest2.GetResponse()
    Dim stream2 As Stream = webResponse2.GetResponseStream()
    Dim reader2 As StreamReader = New StreamReader(stream2)
    Dim responce As String = reader2.ReadToEnd()
    If responce <> "" Then
        Activatecrawler(url, bBuffer, responce)
    End If
    webResponse2.Close()
    httpWebRequest2 = Nothing
    webResponse2 = Nothing
Catch ex As Exception
    ' Authentication failure is detected later by parsing the response data.
End Try

It may be noted that successful authentication is identified by parsing the response data. This module finds the best matching link in its database for the keyword selected by the ranking algorithm. It then iterates over the private area until the page containing the query word is found. At this moment, the search engine starts working as a tunnel between the user and the authenticated site, removing the manual login process as well as the user's manual iteration process.

5.3.2 IMPLEMENTATION OF AGENT FOR AUTHENTICATED

CRAWLING

The agent for authenticated crawling is implemented by identifying the session management mechanism of the server and automating the authentication process. Some sites use a login page which simply sends the authentication information to the server by a POST request and returns cookies. Other mechanisms are more complex: they perform redirections to other pages and require that the requesting party obtain a cookie before sending the authentication information to the server.

The code of a simple HTML web form that is used to send authentication information to a server (an HTML login form) is given below:

<form name="loginform" action="logincheck" method="POST">
  Username: <input type="text" name="uname"> <br>
  Password: <input type="password" name="passw"> <br>
  <input type="submit" value="Login"><br>
  <input type="reset" value="Reset">
</form>

In a simple authentication mechanism, this form sends the username and password to the server; after matching them with the stored username and password, the server returns either a login-failed page or a cookie (or a set of cookies) to the requesting party. Some servers, however, use a more complex authentication process. They redirect the user, along with cookies, to a specific web form for entering the credentials, and in some conditions more than one redirect occurs. The user then submits the authentication form after entering the login credentials, and the browser sends the credentials and the cookie to the action URL on the server. A single cookie or multiple cookies are returned to the web browser by the authentication server. The server then redirects the browser to the requested page (See Fig. 5.5).

Fig. 5.5: Authentication by Search Engine in Private Area

The mechanism of the framework is given as follows. The agent accesses the root of the web server and obtains a session cookie. The crawler then analyzes the form for its method and action attributes. The crawler then sends a GET or POST request to the form action with pre-supplied credentials. The HTTP server matches the incoming data against its database for validation. On a successful response, the agent stores the cookies generated from the request. While crawling pages from the private area, the crawler attaches these cookies to the page request for each new link crawled.
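A minimal sketch of this agent, written against the same HttpWebRequest API used above, is given below. It assumes a simple POST-based login form with fields named uname and passw (as in the login form shown earlier); the module name, the helper names and the "login failed" test are illustrative assumptions rather than the exact implementation.

Imports System.Net
Imports System.IO
Imports System.Text

Module AuthenticatedCrawlAgent
    ' One cookie container keeps the session cookies across all requests of the agent.
    Private ReadOnly cookies As New CookieContainer()

    ' Posts the stored credentials to the form action URL and keeps the returned cookies.
    Public Function Authenticate(ByVal loginUrl As String, ByVal uname As String, ByVal passw As String) As Boolean
        Dim request As HttpWebRequest = CType(WebRequest.Create(loginUrl), HttpWebRequest)
        request.Method = "POST"
        request.ContentType = "application/x-www-form-urlencoded"
        request.CookieContainer = cookies
        Dim body As Byte() = Encoding.UTF8.GetBytes("uname=" & Uri.EscapeDataString(uname) & "&passw=" & Uri.EscapeDataString(passw))
        Using stream As Stream = request.GetRequestStream()
            stream.Write(body, 0, body.Length)
        End Using
        Using response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
            Using reader As New StreamReader(response.GetResponseStream())
                ' Success is identified by parsing the response page, as noted above.
                Return Not reader.ReadToEnd().Contains("login failed")
            End Using
        End Using
    End Function

    ' Fetches a private-area page, attaching the stored session cookies to the request.
    Public Function FetchPage(ByVal url As String) As String
        Dim request As HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
        request.CookieContainer = cookies
        Using reader As New StreamReader(request.GetResponse().GetResponseStream())
            Return reader.ReadToEnd()
        End Using
    End Function
End Module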

5.4 IMPLEMENTATION RESULTS

The experiment has been conducted on a machine having an Intel Core 2 Duo T5870 @ 2.0 GHz with 1 GB of RAM, running the Windows 7 OS. Tests were performed using a WAMP server equipped with PHP v. 5.2.6 and MySQL v. 5.0.51b, together with Microsoft Visual Basic 2008 (.NET 3.5). All of the tests were performed on Firefox 3.5.4 and were repeated multiple times to minimize measurement errors.

Authentication is tested on some of the email service providers, namely www.gmail.com, www.rediffmail.com and www.yahoo.co.in, with the help of three different username-password pairs. The results returned thereof are tabulated in Tables 5.1, 5.2 and 5.3 respectively.

Table 5.1: Number of Links Returned against a Query Word for User 1

Table 5.2: Number of Links Returned against a Query Word for User 2

Table 5.3: Number of Links Returned against a Query Word for User 3

It may be observed from the above tables that most of the time users get some links in response to the query, but in a few cases no link is returned, either because the user has no account on that specific site or because no keyword matching the search string was found in the user's private keyword database. A proper error-handling procedure skips the content-extraction call if authentication fails, which eliminates the need for CAPTCHA submission after multiple authentication failures. It may be noted that there is almost no security threat in the proposed framework because it is based on a single sign-on at the search engine; it searches multiple private areas through keywords and uses streamlined cookie and session management at the search engine.


The proposed framework has been tested on three different sites, i.e. gmail.com, rediffmail.com and yahoo.com. It was found that the DWSE successfully authenticated on these sites on behalf of the users with the authentication procedure, and the fetched contents were stored in the private keyword database. The limitation of the proposed framework is that it stores only keywords and not the entire content at the search engine, so the results depend upon the identification of keywords in the contents of pages at the time of crawling from the private area. On the other hand, this peculiarity preserves the privacy of the user. A simple ranking algorithm is used for link ranking, based on keyword density and updating time.
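As an illustration of such a ranking scheme, the following sketch combines keyword density with the recency of the last update. The weighting constants, the 30-day half-life and the parameter names (keywordCount, totalWords, lastUpdated) are assumptions made for this sketch only.

Imports System

Module LinkRanker
    ' Illustrative score: keyword density weighted against how recently the page was updated.
    Public Function Score(ByVal keywordCount As Integer, ByVal totalWords As Integer, ByVal lastUpdated As DateTime) As Double
        If totalWords = 0 Then Return 0.0
        Dim density As Double = keywordCount / CDbl(totalWords)
        ' Freshness decays with the age of the page in days (assumed half-life of 30 days).
        Dim ageDays As Double = (DateTime.Now - lastUpdated).TotalDays
        Dim freshness As Double = Math.Pow(0.5, ageDays / 30.0)
        ' Equal weighting of density and freshness is an assumption of this sketch.
        Return 0.5 * density + 0.5 * freshness
    End Function
End Module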

In the next chapter a novel architecture of deep web crawler is described.


CHAPTER 6

A NOVEL ARCHITECTURE FOR DEEP WEB CRAWLER

6.1 INTRODUCTION

The World Wide Web consists of interlinked documents and resources that are easily crawled by a PIW crawler, known as a surface web crawler. Two different types of forms exist on a website: the authentication form and the searchable interface. In fact, for efficient crawling of hidden web data, the discovery of searchable HTML forms is a very important step. In this work, a QIIIEP based domain specific hidden web crawler has been designed (See Fig. 6.1). The main focus is to extract the dynamic hidden web contents by automatic filling of the forms of the target sites.

6.2 A QIIIEP BASED DOMAIN SPECIFIC HIDDEN WEB CRAWLER

There are various challenges associated with the discovery of searchable interfaces, such as designing an algorithm that can identify the domain of the databases and filling the forms automatically to extract the most useful information for the users. Moreover, the crawler should not require human intervention to crawl the hidden web information.

In this work [171], the initial links are first extracted using already existing techniques and the crawler then focuses on these links. The crawler discovers the search interfaces for various domains, which accelerates the process of finding the hidden search interfaces very efficiently. The crawler maintains information for each domain, so the users can get the proper and relevant information they want. The work of the site administrator is only to provide the initial seed set. This initial seed set is taken into consideration by the crawler for crawling of the relevant links. Crawled relevant links are saved in a database, which keeps the form-related pages in a form repository. The framework of the proposed mechanism is shown in Fig. 6.1.


Fig. 6.1: Domain Specific Hidden Web Crawler

The working of the various components of the framework is given below.

1. Multithreaded Web Page Downloader

It is used to download the web content of the URLs. The module is designed to be multithreaded so that a separate thread can be generated for each URL distinctly (a sketch of such a downloader is given after this list).

2. Web Page Analyzer

The web page analyzer is used to parse the contents of the web page. It analyzes the web

page for searchable interfaces.

3. Form id Manager

This module extracts the forms, correlates the various domains and helps the QIIIEP server in maintaining its knowledge base.


4. Form Submitter

This module sends the filled form to the http server to inquire for further information.

5. Query Word Extractor

This module extracts meaningful queries and stores each and every query word associated

with specific element of the form.

6. Crawl Frontier

Crawl frontier contains all the links which are yet to be fetched from the HTTP server or

the links obtained after URL filter. It starts with a list of URLs to visit, called the seeds.

7. Link Extractor

The link extractor extracts the links (hyperlinks) from the text file for further retrieval from the HTTP server. The extraction is restricted to links identified by the web page analyzer as likely to lead, in one or more steps, to pages that contain searchable form interfaces.

8. QIIIEP Server

The QIIIEP is an application-level protocol that performs semantic ontology based query word composition. Basically, it maintains a knowledge base which is built with the help of the query word extractor module and the form manager. This knowledge base is useful for the auto-filling of the forms while submitting the query. Whenever a user fires a new query, it is automatically saved into this knowledge base.

9. Link Ranker

It is required to rank the links accordingly so that more information is gathered from high

ranking links. This is based on the link ranking algorithm.

10. Link Indexer

This module plays an important role in the indexing of the generated keywords to the

content database. Indexer collects, parses, and stores data to facilitate fast and accurate

information retrieval.

11. Content Database

The content database stores all the generated links and keywords. When the user puts any query into the user interface, the indexes are matched with the corresponding links and the information is displayed to the user for further processing.

12. Form DB

The form database module is used to store the forms.

13. URL Repository

The URL repository module is used for temporary storage of links.


14. Link Composer

This module composes the links when the user issues a query. On querying, the link composer extracts links from the database, and the links are then presented to the user through the user interface.

15. Interface Generator

Interface generator is used to give the view of the contents stored in the content database

after the search is completed.
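A minimal sketch of the multithreaded web page downloader referred to in component 1 above is given here. It spawns one worker thread per URL and collects the downloaded HTML; the dictionary-based result collection and the silent skipping of failed downloads are simplifications assumed for this sketch.

Imports System.Net
Imports System.Threading
Imports System.Collections.Generic

Module MultithreadedDownloader
    Private ReadOnly results As New Dictionary(Of String, String)
    Private ReadOnly resultsLock As New Object()

    ' Downloads every URL in the list, using one worker thread per URL, and returns url -> html.
    Public Function DownloadAll(ByVal urls As List(Of String)) As Dictionary(Of String, String)
        Dim workers As New List(Of Thread)
        For Each url As String In urls
            Dim worker As New Thread(AddressOf DownloadOne)
            workers.Add(worker)
            worker.Start(url)               ' each URL is handled by its own thread
        Next
        For Each worker As Thread In workers
            worker.Join()                   ' wait for all downloads to finish
        Next
        Return results
    End Function

    Private Sub DownloadOne(ByVal state As Object)
        Dim url As String = CStr(state)
        Try
            Using client As New WebClient()
                Dim html As String = client.DownloadString(url)
                SyncLock resultsLock
                    results(url) = html     ' store the downloaded content per URL
                End SyncLock
            End Using
        Catch ex As WebException
            ' Failed downloads are simply skipped in this sketch.
        End Try
    End Sub
End Module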

The working steps of the proposed crawler are given below.

Step 1. A web page is fetched by the crawler after requesting it from the web server.

2. The fetched web page is analyzed and parsed to find the relevant links.

3. The web page analyzer analyzes the links for proper selection and filtering.

4. Filtered links are sent to the crawl frontier. The crawl frontier further selects a link and sends it to the web page fetcher.

5. The crawler analyzes the form, containing the form id, to identify the search interface.

6. If no form is associated with the required domain then the fetched web page is indexed and saved in the database.

7. The form analyzer analyzes, extracts and saves the form in the form database. Simultaneously the relevant query word is fetched and saved in the knowledge base of the QIIIEP server.

8. Form is taken from the database and filled with the query words produced by

QIIIEP server.

9. Crawler forwards the filled form to the http server by placing the received query

words.

10. Crawler starts crawling the contents produced by that query word.

11. Finally extracted pages are saved in the database of the search engine.

The working steps of user interface are given below:

Step 1. User enters the query to search.

2. The validation of query words takes place.

3. The link composer then fetches links from the database.

4. The content is then produced to the user with the help of interface generator.


An enhanced architecture for a deep web crawler, "DWC", was also proposed, with features of both privatized search and general search for the deep web data hidden behind HTML forms. A detailed discussion of DWC is given in the following section.

6.3 ARCHITECTURE OF PROPOSED DEEP WEB CRAWLER (DWC)

MECHANISM

The architecture of the proposed novel deep web crawler mechanism is shown in Fig. 6.2. A module-wise description of the proposed architecture is given below.

1. Page Fetcher

The page fetcher fetches pages from the HTTP server and sends them to the page analyzer, which checks whether each page is the required, appropriate page based on the topic of search and the kind of form the page contains.

2. Page Analyzer/Parser Module

The page analyzer/parser is used to parse the contents of the web page; texts and links are extracted. The extraction of links and text is done on the basis of the concept of a page classifier. A form classifier is used to arrange the pages topic-wise. It also filters out useless forms and identifies links that are likely to lead to pages that contain searchable form interfaces.

3. Form id Generator

This module helps to implement QIIIEP (query intensive interface information extraction protocol) on the current architecture of a website. It parses every form of that web site and merges the form ID with the QIIIEP server query word list so that, at the time of a crawling request, the search interface is properly identified for sending the keyword request to the QIIIEP server.

4. Form Submitter

After the form has been filled, the form submitter sends the request back to the HTTP server.

5. Crawl Frontier

The crawl frontier contains all the links which are yet to be fetched from the HTTP server, i.e. the links obtained after the URL filter. It takes a seed URL to start the procedure, processes that page, retrieves all the forms, and adds them, suitably rearranged, to the list of URLs. The list of those URLs is called the crawl frontier.


Fig. 6.2: Architecture of Proposed Deep Web Crawler Mechanism

6. Link Extractor

The link extractor extracts the links (hyperlinks) from the text file for further retrieval from the HTTP server. The extraction is restricted to links identified by the page analyzer/parser as likely to lead, in one or more steps, to pages that contain searchable form interfaces. This dramatically reduces the number of pages the crawler needs to crawl in the deep web: fewer pages need to be crawled since it applies focused crawling and checks the relevancy of the obtained results to the topic, hence limiting the extraction to relevant links.

7. QIIIEP Server

The QIIIEP is an application-level protocol for semantic ontology based query word

composition, identification and retrieval systems. It is based on request/response semantics.

This specification defines the protocol referred to as QIIIEP 1.0, and the QIIIEP server works on this protocol. QIIIEP reduces complexity by using pre-information about the form and its elements from the QIIIEP server. The knowledge base is either generated by the auto query word extractor or provided by the site administrator.

8. Link Ranker

This module is responsible for providing the best matching query word assignment to the form-filling process and for reducing the overloading caused by less relevant queries to the domain. The links are ranked accordingly so that more information is gathered from each link. This is based on the link ranking algorithm.

9. Link Indexer

This module plays an important role in the indexing of the generated keywords to the

content database. Indexer collects, parses, and stores data to facilitate fast and accurate

information retrieval. It maps the keywords to URL for the fast access and retrieval.

10. Content Database

The content database stores all the generated links and keywords. When the user puts any query into the user interface, the index is matched with the corresponding links and the information is displayed to the user for further processing.

11. Agent for Authenticated Crawling

This module works when the information in a site is hidden behind the authentication form.

It stores authentication credentials of every domain in its knowledge base situated at

crawler, which is provided by the individual user. At the time of crawling, it automatically

authenticates itself on the domain to crawl the hidden web contents. The crawler extracts and stores keywords from the contents and makes them available to the privatized search service so that privacy is maintained.

12. Query Word to URL Relation Manager

This module generates meaningful queries to be issued to the query interface. It stores every query word associated with a specific element of the form by creating a reference to the domain path, so that at the time of link generation the query word can be mapped to provide the contents by sending the query word in a POST request to the domain.


14. Searching Agent

It provides the search interface through which the user places the query. This involves searching the keywords and other information stored in the content database after the whole process of authenticated crawling.

15. Link Composer

This module takes the reference from the Query word to URL Manager for the purpose of

form submission.

16. Interface Generator

Interface generator is used to give the view of the contents stored in the content database

after the search is completed. For example, the interface generator shows the list of relevant

links indexed and ranked by link ranker module and link indexer module respectively.

17. Link Event Analyzer

Link event analyzer analyzes the link which is activated by the user so that it could forward

the request to display the page on the requested URL.

18. User Authentication to Domain Mapping Module

It is responsible for mapping the login credentials provided by the user to the crawler onto the information provider domain. The main benefit of using this mapper is to overcome the hindrance between the result link and the information itself. The crawler uses the mapping information to allow the specific person to receive information contents directly from the domain through an automatic login procedure, eliminating the step of a separate login for the user.

Brief output results of the various modules of the deep web crawler "DWC" are shown in Table 6.1.

Table 6.1: Brief Output of the Various Modules

Module | Output
1. Page Fetcher | The downloaded HTML content
2. Page Analyzer | Selected pages which contain forms
3. Form Id Manager | A list of search interfaces (forms)
4. QIIIEP Server | Provides query words to the forms wherever possible and gives the result to the response analyzer
5. Form Submitter | Sends the filled form to the HTTP server
6. Link Extractor | A list of extracted links that are relevant
7. Link Ranker Module | A list of ranked links
8. Link Composer Module | Composed list of links using the query word to URL relation manager
9. Interface Generator | Shows the list of relevant links ranked by the rank module and link indexer
10. Link Filter | Fresh links are shown


6.3.1 FEATURES OF THE PROPOSED ARCHITECTURE

Features of the proposed architecture for deep web crawler are given below:

1. This proposed architecture crawls the deep web if the administrator of that site

follows the framework of QIIIEP based on focused crawling.

2. It removes the complex task of query interface identification and values allotment as

the huge majority of deep web query interfaces on the web are html forms.

3. The deep web query interface is classified according to different users.

4. Dynamic discovery of deep web sources is carried out as per the user's query.

5. It provides automatic filling of the search query forms.

6. Auto form ID generation & auto query extraction modules are used by QIIIEP

server to extend its knowledge.

7. Privatized search plus general domain search are the two features which are based

on the overall protocol.

8. Authenticated crawling is provided for privatized search.

9. Forms are classified with various levels depending on the administrator.

10. Different domains will have their content links and indexes in different databases.

11. Implicit authentication takes place when registered users click on a link at domain

site.

12. The final content page, which was crawled by a POST or GET request, is opened after the link is clicked.

13. Only the meta data of private area is stored so there is no privacy issue for crawling

authenticated private content.

14. A huge list of query words is generated through the cross-site user query submission module, but the ranking algorithm chooses the most appropriate query word for a specific query interface to reduce bandwidth wastage.

The overall working steps of the proposed architecture for deep web crawler are given

below:

Step 1. The crawler requests the web server to fetch a page.

2. a). After the page is fetched, it is analyzed and parsed for the relevant contents (links and text).

b). The page is sent to the query word to URL relation manager.


c). If the authentication of administrator credentials is required then the links are

sent to the agent for authenticated crawling.

3. After being analyzed by the page analyzer/parser, links are selected and filtered.

4. Filtered URLs are sent to the crawl frontier, which again chooses a link and sends it to the page fetcher.

5. Now, crawler analyzes the form to identify the search interface. The form must

include the form id.

6. Then the form id is used to send the query word request to the QIIIEP (query intensive interface information extraction protocol) server, where extraction takes place to correlate the form fields.

7. Now, the server replies to the crawler about each entry to that form.

8. Crawler sends the filled form by placing the received query words to the HTTP

server.

9. The crawler crawls the contents generated by that query word.

10. Finally, fetched pages are ranked, indexed and stored in the search engine's database.

The user interacts with the user interface through the following steps:

Step 1. The user enters the search query.

2. The query words are validated and the link composer fetches the links from the database according to the query word.

3. Now links are searched from the content database.

4. The content is then provided to the user with the help of interface generator.

5. The link chosen by the user diverts the user to user authentication through the domain mapper, so that the user can retrieve the authenticated contents without an explicit login.

6. The query word submitted to the URL relation manager is used in the POST or GET request to generate the specific page.

7. The link opens the website in a browser.

The algorithmic details of the major modules of DWC are given below.


The algorithm for the simple web crawler is given below:

Algorithm: Simple_ web_crawler()

{

Input initial URL = seed.

Step 1. Maintain a queue Q = {u}.

2. While (Q != NULL)

3. Pop an element from Q (using FIFO).

4. Process that URL and fetch all the relevant URLs.

5. Assign a unique index to each page visited.

6. For each relevant URL fetched (URL1, URL2, URL3, ...)

7. If (URL1 is not indexed && URL1 does not belong to Q)

8. Add URL1 to Q.

9. End

}
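A runnable sketch of this simple crawler is given below. It treats every absolute href found in a page as a relevant URL and stops after a fixed page limit; both of these are illustrative assumptions, and the regular expression is only a rough link extractor.

Imports System.Net
Imports System.Collections.Generic
Imports System.Text.RegularExpressions

Module SimpleWebCrawlerSketch
    ' Breadth-first crawl starting from the seed URL, following the algorithm above.
    Public Function Crawl(ByVal seed As String, ByVal maxPages As Integer) As Dictionary(Of String, Integer)
        Dim queue As New Queue(Of String)
        Dim index As New Dictionary(Of String, Integer)     ' URL -> unique page index
        queue.Enqueue(seed)
        Using client As New WebClient()
            While queue.Count > 0 AndAlso index.Count < maxPages
                Dim url As String = queue.Dequeue()          ' pop an element using FIFO
                If index.ContainsKey(url) Then Continue While
                Dim html As String = ""
                Try
                    html = client.DownloadString(url)        ' process that URL
                Catch ex As WebException
                    Continue While                           ' skip unreachable pages
                End Try
                index(url) = index.Count                     ' assign a unique index per page
                ' Fetch candidate URLs from href attributes (illustrative relevance test).
                For Each m As Match In Regex.Matches(html, "href=""(http[^""]+)""")
                    Dim found As String = m.Groups(1).Value
                    If Not index.ContainsKey(found) AndAlso Not queue.Contains(found) Then
                        queue.Enqueue(found)                 ' add the new URL to Q
                    End If
                Next
            End While
        End Using
        Return index
    End Function
End Module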

The algorithm for the deep web data extraction module is given below:

Algorithm: Deep_ web_ data_ extraction()

{

Step1. Extract web page by using initial seed URL or crawl frontier.

2. Analyze for contents and form.

3. Extract query URL from page and store in crawl frontier after applying

filter.

4. If the page contains a form:

5. Request query word for all the elements in form from QIIIEP server.

6. Submit and extract the deep web contents.

7. Manage query word to URL relation.

8. Rank and index the page and store in content database.

9. Go to step 1

}
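Step 6 of this algorithm (submitting the form and extracting the deep web contents) can be sketched with a single POST request. The action URL, the element name and the assumption that one text element is filled per request are placeholders for illustration; the actual query word would come from the QIIIEP knowledge base, which is not shown here.

Imports System.Net
Imports System.Text
Imports System.Collections.Specialized

Module FormSubmitterSketch
    ' Fills one text element of a search form with a query word, posts it,
    ' and returns the HTML of the resulting deep web page.
    Public Function SubmitQuery(ByVal actionUrl As String, ByVal elementName As String, ByVal queryWord As String) As String
        Using client As New WebClient()
            Dim fields As New NameValueCollection()
            fields.Add(elementName, queryWord)      ' value supplied by the QIIIEP knowledge base
            Dim raw As Byte() = client.UploadValues(actionUrl, "POST", fields)
            Return Encoding.UTF8.GetString(raw)
        End Using
    End Function
End Module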

The algorithm for search and result link generator module is given below:

Algorithm: Search_ and_ result_link_ generator()

{

Step1: User input in the form of query.


2: If this is a valid query then go to step 3.

3: Search the content database as follows:

3(a): An efficient query is generated.

3(b): Check the indexed words and the corresponding links.

3(c): If the query words match then

select the appropriate links and go to step 4

else:

go to the interface generator, display "keywords did not match" and stop.

4: If found, the interface generator will display the results.

5: The link event analyzer will invoke user authentication to domain mapping if the site requires login.

6: If the user's credentials are found authenticated on the website, open the specific page by using the query word to URL relation manager

else:

display the login form of that web site.

7: If the web site does not require login,

8: just open the deep web content by using the query word to URL relation manager.

}

6.3.2 BENEFITS ACCRUED THROUGH PROPOSED ARCHITECTURE

1. Since multithreaded downloading is used, multiple threads run simultaneously, which reduces the chance of a page miss. The URL filter module makes the crawling very precise by eliminating duplicate links.

2. It involves multi-attribute database accessing.

3. Form verification can be done very easily with the help of the form manager, which extracts the attributes of the different domains accordingly.

4. The form filling can be done very precisely and the user does not need to fill the complete form.

5. Useless pages if crawled cannot affect the semantics of the deep web information to

be extracted.

6. The system is efficient in terms of time and cost owing to the use of multithreaded

downloading technique and automatic form filling by the QIIIEP server.


7. The web page forwarding technique used while submitting the form minimizes the chance of error: if the action attribute does not start with "http://", the relative link part is appended to the web site address and the request is forwarded to the related page.

The designed crawler was implemented and a test run was carried out on different domains. The results obtained thereof are discussed in the next chapter.


CHAPTER 7

IMPLEMENTATION & RESULTS

7.1 INTRODUCTION

In chapter 2 and chapter 3, it was identified that the following issues were needed to be

addressed while designing an effective deep web crawler to extract the deep web

information contents.

1. Query words that are collected in knowledge base must be done in optimized,

effective and inter-operative manner.

2. Effective context identification for recognizing the type of domain.

3. For the purpose of extracting the deep web information, the crawler should not overload any of the web servers.

4. Overall extraction mechanism of deep web crawling must be compatible, flexible,

inter-operative and configurable with reference to existing web server architecture.

5. There must be a procedure to define the number of query words to be used to extract the deep web data with element value assignments. For example, a date-wise fetching mechanism can be implemented to limit the number of query words.

6. Architecture of deep web crawler must be based on global level value assignment,

so that inter-operative element value assignment can be used with it. However, for

element value assignment, surface search engine and clustering of query word can

be done based on ontology as well as quantity of keywords found on surface search

engine.

7. For sorting and ranking, the query word information extracted from the deep web must be analyzed and resolved through two major activities, i.e. exact context identification and a ranking procedure, so that enrichment of the collection in the knowledge base at the search engine can be carried out.

8. The deep web crawler must be robust against unfavorable conditions. For example, if a resource is removed after the crawling process, it is impossible to reproduce the exact result pages of the specific query words, even if an existing record of the relation between URL and query word is available.


9. The crawler must have functionality to enable quick detection of failure, error or unavailability of any resource.

In this work, we have provided a solution for efficient and automatic crawling of the deep

web data. The implementation details and performance metrics are discussed in the next

section.

7.2 EXPERIMENTAL SETUP

The experiment was conducted on a machine having an Intel Core 2 Duo T5870 @ 2.0 GHz with 3 GB of RAM, running the Windows 7 OS. Tests were performed using a WAMP server equipped with PHP v. 5.2.6 and MySQL v. 5.0.51b, together with Microsoft Visual Basic 2008 (.NET 3.5) and the NetBeans IDE. All the tests were performed on Firefox 3.5.4 and were repeated multiple times to minimize measurement errors.

7.2.1 GRAPHICAL USER INTERFACE

Fig. 7.1 shows the homepage of the deep web search engine based on the QIIIEP protocol implemented during this work. It has one text input box for writing the query and a submit button, which posts the data supplied by the user to the search engine.

Fig. 7.1: Homepage of Deep Web Search Engine

Fig. 7.2 shows the login form and the registration form where users can register and log in for searching their private areas such as Gmail, Rediff and Yahoo. This interface contains a member login form and a registration form. The member login form is for previously registered users and the registration form is for new users who want to register on this site. The user gets a username and password after registering on this interface.

Fig. 7.2 : Login and Registration Page

The test has been carried out on three email service providers, i.e. gmail.com, rediffmail.com and yahoo.co.in, and the interfaces have been designed accordingly.

The form shown in Fig. 7.3 is used for submitting the user's credentials for a Gmail account. These credentials are used for collecting metadata from the user's email account.

Fig. 7.3: Gmail Credentials Submission Form


The form shown in Fig. 7.4 is used for submitting the user's credentials for a Yahoo mail account. The crawler uses these credentials to crawl mails from the user's Yahoo mailbox.

Fig. 7.4: Yahoo Credentials Submission Form

Fig. 7.5 shows the site name, the number of indexed URLs, the date of the last index and the option to modify different properties related to each URL.

Fig. 7.5: Administrator Interface for Deep Web Crawler

The interface shown in Fig. 7.6 reflects the authenticated crawling functionalities of the crawler, where the administrator can manage crawling of users' private areas.


Fig. 7.6: Authenticated Crawling Management Interface

Fig. 7.7 shows the options associated with crawling of each URL. Features such as reindexing, browsing of crawled pages, deletion of crawled pages and statistics of crawled pages can be managed.

Fig. 7.7: Options Associated with Crawling of each URL

The database shown in Fig. 7.8 is used for storing the data of the crawler, such as keywords, sites and URLs.


Fig. 7.8: MYSQL Database for Crawler

7.2.2 DATA SET

For experimental evaluation of the proposed work, the following three domains have been

considered:

1. Book

2. Job

3. Auto

Input Set

Administrative Supplied data

Crawling seed:

Book www.borders.com

www.bookshare.org

www.textbooks.com

www.ecampus.com

www.textbooksrus.com

www.bookfinder.com


Job www.timesjob.com

www.monsterindia.com

www.freshersworld.com

www.naukri.com

www.jobsahead.com

Auto www.allworldautomotive.com

www.cars.com

www.carazoo.com

www.autos.com

www.carwale.com

User Supplied Data

User credentials for authenticated crawling

User query set for information retrieval

7.3 PERFORMANCE METRICS

To measure the performance of the proposed deep web crawler "DWC", three metrics, i.e. Precision, Recall and F-measure, are defined [50]. The objectives of the performance metrics are given below.

1. Analytical data based on the number of sites visited which reflects the reach of the

crawler.

2. Assessment of the different forms which show the ratio of relevant forms processed.

3. Calculation of the performance metrics to check the overall efficiency of the deep

web crawler.

For defining these performance metrics, three terms are first defined:

CDWP = the number of correctly crawled deep web pages

WDWP = the number of wrongly crawled deep web pages

NCDWP = the number of deep web pages that are not crawled by the deep web crawler


Various performance metrics are now defined as follows:

1. Precision (PR) is defined as the fraction of correctly crawled deep web pages over all the web pages crawled by the deep web crawler. Mathematically, Precision is defined as

PR = CDWP / (CDWP + WDWP) (7.1)

2. Recall (RE) is defined as the fraction of deep web pages correctly crawled by the deep web crawler over all the deep web pages as given by domain experts. The Recall of the proposed system is given by the expression below:

RE = CDWP / (CDWP + NCDWP) (7.2)

3. F-measure (FM) is a function of both Precision and Recall and is given by

FM = (2 × PR × RE) / (PR + RE) (7.3)

where Precision (PR) and Recall (RE) are weighted equally.
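As a worked illustration with assumed counts rather than measured values, suppose CDWP = 80, WDWP = 20 and NCDWP = 10. Then PR = 80/(80 + 20) = 0.80, RE = 80/(80 + 10) ≈ 0.89 and FM = (2 × 0.80 × 0.89)/(0.80 + 0.89) ≈ 0.84, i.e. a precision of 80%, a recall of about 89% and an F-measure of about 84%.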

7.4 EXPERIMENT RESULTS

The simulation of the proposed architecture of the deep web crawler and its results are analyzed with the help of the performance metrics Precision, Recall and F-measure. Three domains, i.e. book, auto and job, have been tested for this analysis. A detailed analysis of the book domain is presented in this section. Six commercial implementations of the book domain, i.e. www.borders.com, www.bookshare.org, www.textbooks.com, www.ecampus.com, www.textbooksrus.com and www.bookfinder.com, have been analyzed for extracting the content of the deep web by querying through the search interface. The QIIIEP server has been used for fetching query words related to the domain for level value assignment. The GET or POST request method is identified from the form's method and action attributes and stored in the query word to URL relation manager. The method implemented at localhost is named fetchqueryinterface, and its query argument sdd stands for search deep domain:

http://localhost/fetchqueryinterface?sdd=borders.com

The six downloaded search interfaces pertaining to the Book domain are shown in Fig. 7.9 to Fig. 7.14.


Fig. 7.9: Search Interface of Book domain: www.borders.com

Fig. 7.10: Search Interface of Book domain: www.bookshare.org


Fig. 7.11: Search Interface of Book domain: www.textbooks.com

Fig. 7.12: Search Interface of Book domain: www.ecampus.com


Fig. 7.13: Search Interface of Book domain: www.textbooksrus.com

Fig. 7.14: Search Interface of Book domain: www.bookfinder.com

The performance of the proposed work depends on the number of correctly identified hidden web pages. Some of the correctly identified hidden web pages are shown in Figs. 7.15 to 7.20, and a few rejected pages are shown in Figs. 7.21 and 7.22.


Fig. 7.15: Result Page on www.borders.com

Fig. 7.16: Result Page on www.bookshare.org


Fig. 7.17: Result Page on www.ecampus.com

Fig. 7.18: Result Page on www.textbooks.com


Fig. 7.19: Result Page on www.textbooksrus.com

Fig. 7.20: Result Page on www.bookfinder.com


Fig. 7.21: Result Page on www.ecampus.com

Fig. 7.22: Result Page on www.textbooks.com

It may be noted that most of the result pages returned after submission of queries are relevant, whereas a few of the pages are irrelevant. For example, successful context extraction takes place at borders.com with the query word "data structure", while in a few cases nil or improper results appear, for example for "operating system A K Sharma" at textbooksearch.com.

The rel attribute is used in the HTML form to identify the form id, e.g. rel=fc4597a57dOcfcef9aa82162cab8e080. The request method, which may be either GET or POST, is specified in the form and is parsed through string matching. Users submit query words through the cross-site submission mechanism. Webmaster-submitted query words and inter-operative query words stored in the knowledge base at the QIIIEP server/search engine can be used to fill element values for iterative submission through the POST or GET method. The results are analyzed to check whether the extracted content is appropriate or not, and new query words are constructed accordingly. An open source community web-based browser is used to simulate real requests by composing the header according to the specifications.
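The string-matching step described above can be sketched with a few regular expressions. The sample form tag below is hypothetical; only the use of the rel, method and action attributes follows from the text.

Imports System
Imports System.Text.RegularExpressions

Module FormTagParserSketch
    ' Extracts the rel-based form id, the request method and the action URL from a form tag.
    Public Sub Demo()
        Dim formTag As String = "<form rel=""fc4597a57dOcfcef9aa82162cab8e080"" method=""POST"" action=""/search"">"
        Dim formId As String = Regex.Match(formTag, "rel=""([^""]+)""", RegexOptions.IgnoreCase).Groups(1).Value
        Dim method As String = Regex.Match(formTag, "method=""([^""]+)""", RegexOptions.IgnoreCase).Groups(1).Value
        Dim action As String = Regex.Match(formTag, "action=""([^""]+)""", RegexOptions.IgnoreCase).Groups(1).Value
        Console.WriteLine("form id = " & formId & ", method = " & method.ToUpper() & ", action = " & action)
    End Sub
End Module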

Table 7.1 shows the form identification statistics and Table 7.2 shows the query words and

content extraction statistics.

Table 7.1: Form Identification Statistics

Domain | Query Forms | Context Identification
Auto | 5 | 5
Book | 8 | 8
Job | 12 | 12

Table 7.2: Query Words and Content Extraction Statistics

Domain | Form Ids | Query Words Received | Successful Content Extraction
Auto | 37723d89ceOb05185c3e052af8e4b6d6, 55c64ad2fDd6a6ef4388b33b54123868, ff3d37f39e7067263529419c49326173, 1709a96edeacd9d394781601e1b49d23, 55c64ad2fDd6a6ef4388b33c54123890 | 34 | 30
Book | fc4597a57dOcfcef9aa82162cab8e080, eObftfd062a28403d966261Obe421eOd, 8b555f930bfD1516abc059ba5f9c384e, 29c037c8d8d2349cebdc9b5ea4c5659e, cfac0899526cbbd51c0bab4f98ae5080, 4087fa5a026afcaa153f2684372a4ea6, cf538f6efa30c062d93d0d3213d07075, eOcftfd062a28403d966261Obe421eOe | 56 | 42
Job | clc1841bfd2c4554235183bcOde2e836, de54cd34b8a2ebcd2b852f410331df4a, 200d21fl6a9b95df5eaa746a02a67fld, aa502d15a3135934a964640ddd7abecO, 61fab2718c890a61bcf87772dc66dfa2, 40cacb611ba4ecc555ca1b1b9dc5a5d1, ae4285f5b7ef199d4e19b7c96ae26836, 3dc1e4c3bc5b2ae63520627ea44df7fd, 7112f39dfcf02fb8d032a18945479ac2, bb2d1a99fc70551dab323d042deb6843, 383c88976a0c386d0eb92a1ef67545ed, 67c76add7110dbe02Obb401a4672565f | 131 | 96


The graph shown in Fig. 7.23 is plotted between received query words and successful content extractions for the different domains. It can be concluded from the graph that content extractions are satisfactorily close to the number of query words received for all three domains. A content extraction is counted as successful when the relevant page contains more than five query words and the content is at least four hundred words long.

Fig. 7.23: Comparison of Success with Number of Query Words at Different Domains
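The success criterion just described can be expressed directly in code. Interpreting "more than five query words" as more than five distinct received query words appearing in the page, and splitting on whitespace to count words, are assumptions of this sketch.

Imports System

Module ExtractionCheckerSketch
    ' A page counts as a successful extraction if it contains more than five of the
    ' received query words and its content is at least four hundred words long.
    Public Function IsSuccessfulExtraction(ByVal content As String, ByVal queryWords As String()) As Boolean
        Dim separators() As Char = New Char() {" "c, ControlChars.Tab, ControlChars.Lf, ControlChars.Cr}
        Dim words As String() = content.Split(separators, StringSplitOptions.RemoveEmptyEntries)
        If words.Length < 400 Then Return False
        Dim matched As Integer = 0
        For Each q As String In queryWords
            If content.IndexOf(q, StringComparison.OrdinalIgnoreCase) >= 0 Then
                matched += 1
            End If
        Next
        Return matched > 5
    End Function
End Module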

Table 7.3 shows the default values for some of the parameters that are used for

experiments.

Table 7.3: Parameters and Values Statistics

Parameter | Default value
No. of sites visited | 15
No. of links stored | 71870
No. of forms/elements encountered | 55/114

As indicated in Table 7.3, the crawler encountered 55 forms during the crawling of the 15 sites, of which 2 were ignored because the form id manager did not find any associated id corresponding to those sites.

7.4.1 BOOK DOMAIN ANALYSIS OF DEEP WEB CRAWLER “DWC”

To evaluate the performance of "DWC", a large number of queries was fired for each domain. The resultant pages are graphically analyzed for each type of domain. Each graph shows the number of query words received on the X-axis and the accuracy (%) on the Y-axis. Three curves, for precision, recall and F-measure, are drawn for each domain to measure the accuracy. The results obtained from "DWC" for the book domain are discussed below.

For the Book domain, about 1400 received query words were fired and the response pages were analyzed. It may be noted from Fig. 7.24 that as the number of received query words increases for Site 1 (www.borders.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 73%, 75% and 74.03% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 97%, 89% and 92% respectively.

Fig. 7.24: Graph Showing Performance Measures of Crawler for Book Domain: Site 1

It may be noted from Fig. 7.25 that as the number of received query words increases for Site 2 (www.bookshare.org), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 67%, 70% and 68% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 93%, 85% and 88% respectively.

Fig. 7.25: Graph Showing Performance Measures of Crawler for Book Domain: Site 2

It may be noted from Fig. 7.26 that as the number of received query words increases for Site 3 (www.textbooks.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 73%, 75% and 73.9% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 92%, 90% and 90.9% respectively.

Fig. 7.26: Graph Showing Performance Measures of Crawler for Book Domain: Site 3

It may be noted from Fig. 7.27 that as the number of received query words increases for Site 4 (www.ecampus.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 70%, 72% and 70.9% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 93%, 87% and 89% respectively.

Fig. 7.27: Graph Showing Performance Measures of Crawler for Book Domain: Site 4

It may be noted from Fig. 7.28 that as the number of received query words increases for Site 5 (www.textbooksrus.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 78%, 76% and 76.8% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 94%, 92% and 92% respectively.

Fig. 7.28: Graph Showing Performance Measures of Crawler for Book Domain: Site 5

The above graphs show the performance measures of the proposed deep web crawler "DWC" for different quantities of query words received from the QIIIEP server and used to fetch content from the deep web sites. From the analysis of these graphs it can easily be seen that, as the number of query words received for a specific domain increases, the precision, recall and F-measure of information extraction continuously increase. So it can be concluded that when enough knowledge has been collected at the QIIIEP server, the results clearly improve.

7.4.2 JOB DOMAIN ANALYSIS OF DEEP WEB CRAWLER “DWC”

To evaluate the performance of "DWC", a large number of queries was fired for each domain. The resultant pages are graphically analyzed for each type of domain. Each graph shows the number of query words received on the X-axis and the accuracy (%) on the Y-axis. Three curves, for precision, recall and F-measure, are drawn for each domain to measure the accuracy. The results obtained from "DWC" for the job domain are discussed below.

For the Job domain, about 1400 received query words were fired and the response pages were analyzed. It may be noted from Fig. 7.29 that as the number of received query words increases for Site 1 (www.timesjob.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 92%, 69% and 85% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 97%, 85% and 85% respectively.

Fig. 7.29: Graph Showing Performance Measures of Crawler for Job Domain: Site 1

It may be noted from Fig. 7.30 that as the number of received query words increases for Site 2 (www.monsterindia.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 80%, 70% and 75% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 96%, 84% and 83% respectively.

Fig. 7.30: Graph Showing Performance Measures of Crawler for Job Domain: Site 2

It may be noted from Fig. 7.31 that as the number of received query words increases for Site 3 (www.freshersworld.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 85%, 75% and 80% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 96%, 80% and 91% respectively.

Fig. 7.31: Graph Showing Performance Measures of Crawler for Job Domain: Site 3

It may be noted from Fig. 7.32 that as the number of received query words increases for Site 4 (www.naukri.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 90%, 69% and 78.1% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 95%, 81% and 88% respectively.

Fig. 7.32: Graph Showing Performance Measures of Crawler for Job Domain: Site 4


It may be noted from Fig. 7.33 that as the number of received query words increases for Site 5 (www.jobsahead.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 93%, 77% and 82% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 97%, 79% and 87% respectively.

Fig. 7.33: Graph Showing Performance Measures of Crawler for Job Domain: Site 5

The above graphs show the performance measures of the proposed deep web crawler "DWC" for different quantities of query words received from the QIIIEP server and used to fetch content from the deep web sites. From the analysis of these graphs it can easily be seen that, as the number of query words received for a specific domain increases, the precision, recall and F-measure of information extraction continuously increase. So it can be concluded that when enough knowledge has been collected at the QIIIEP server, the results clearly improve.

7.4.3 AUTO DOMAIN ANALYSIS OF DEEP WEB CRAWLER “DWC”

To evaluate the performance of "DWC", a large number of queries was fired for each domain. The resultant pages are graphically analyzed for each type of domain. Each graph shows the number of query words received on the X-axis and the accuracy (%) on the Y-axis. Three curves, for precision, recall and F-measure, are drawn for each domain to measure the accuracy. The results obtained from "DWC" for the auto domain are discussed below.


For the Auto domain, about 1400 received query words were fired and the response pages were analyzed. It may be noted from Fig. 7.34 that as the number of received query words increases for Site 1 (www.allworldautomotive.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 86%, 65% and 79% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 93%, 73% and 83% respectively.

Fig. 7.34: Graph Showing Performance Measures of Crawler for Auto Domain: Site 1

It may be noted from Fig. 7.35 that as the number of received query words increases for Site 2 (www.cars.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 80%, 67% and 75% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 95%, 78% and 86% respectively.

Fig. 7.35: Graph Showing Performance Measures of Crawler for Auto Domain: Site 2


It may be noted from Fig. 7.36 that as the number of received query words increases for Site 3 (www.carazoo.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 80%, 72% and 75% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 89%, 81% and 84% respectively.

Fig. 7.36: Graph Showing Performance Measures of Crawler for Auto Domain: Site 3

It may be noted from Fig. 7.37 that as the number of received query words increases for Site 4 (www.autos.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 82%, 69% and 74% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 91%, 75% and 85% respectively.

Fig. 7.37: Graph Showing Performance Measures of Crawler for Auto Domain: Site 4


It may be noted from Fig. 7.38 that as the number of received query words increases for Site 5 (www.carwale.com), the values of Precision, Recall and F-measure increase. When about 200 received query words were fired, the values of Precision, Recall and F-measure were 82%, 65% and 76% respectively. As the number of query words reached 1400, the values of Precision, Recall and F-measure reached 92%, 70% and 81% respectively.

Fig. 7.38: Graph Showing Performance Measures of Crawler for Auto Domain: Site 5

The above graphs show the performance measures of the proposed deep web crawler "DWC" for different quantities of query words received from the QIIIEP server and used to fetch content from the deep web sites. From the analysis of these graphs it can easily be seen that, as the number of query words received for a specific domain increases, the precision, recall and F-measure of information extraction continuously increase. So it can be concluded that when enough knowledge has been collected at the QIIIEP server, the results clearly improve.

7.5 COMPARISON OF ALL DOMAINS

The precision, recall and F-measure are stable across the different domains, as shown in Table 7.4, indicating that when enough data has been collected for the specific "domain context", the precision, recall and F-measure increase and reach a certain threshold level.

Table 7.4: Precision, Recall and F-measure for Book, Job & Auto Domains

Domain | Precision (%) | Recall (%) | F-measure (%)

BOOK | 86 | 83 | 84

JOB | 92 | 76 | 82

AUTO | 89 | 72 | 80


After analyzing the simulation results for each domain, it was concluded that the calculated values of the performance metrics, namely precision, recall and F-measure, are very consistent, which implies that the proposed architecture works both efficiently and consistently.

7.6 COMPARISON OF PROPOSED DEEP WEB CRAWLER “DWC”

WITH EXISTING HIDDEN WEB CRAWLER

In this work, a novel architecture of the deep web crawler "DWC" was proposed that crawls the hidden web information with high precision. It is compared with two existing hidden web crawler frameworks, i.e. "Crawling the Hidden Web" [4] and "A Framework of Deep Web Crawler" [26]. The comparative results are shown in the tabular summary in Table 7.5. The comparison is done on the basis of parameters such as technique used, need of the user's support, crawler mode, search form collection, collection of query words, automatic forwarding to destination, form filling, automatic query selection, global level value assignments, harvest ratio, precision, recall, F-measure, extensibility, scalability, bandwidth consumption and authenticated crawling.

Table 7.5: Comparison of Proposed Deep Web Crawler “DWC” with existing Hidden Web Crawlers

Characteristics | Deep Web Crawler [26] | Crawling the Hidden Web [4] | Proposed Deep Web Crawler "DWC"

Technique Used | Proposes a model of form submission with four additional modules and an LVS table. | Application/task-specific, human-assisted approach: analyzes the form, classifies it and fills the form. | Fills the form using form id and level value allotment through the context identifier.

Need of the user's support | Human interaction is not needed for modelling the form. | Human interaction is needed in form filling. | Automatic query word extraction, ranking and filling.

Crawler Mode | - | - | Threaded

Search Form Collection | Does not download forms. | Does not download forms. | Downloads the form for analysis but stores only the form id.

Collection of query words | - | - | Uses a cross-site communication technique to fetch user-supplied query words, which are most useful for collecting valuable keywords.

Automatic Forwarding to Destination | - | - | In a hidden web crawler it is difficult to reproduce for the user the same result page extracted by the crawler, due to the inefficiency of the query-word-to-URL relation manager; this work implements the concept to remove this hindrance between the user's query result page and the actual result page.

Form Filling | Not fully automatic | Not fully automatic | Fully automatic form filling with the help of the QIIIEP knowledge base.

Automatic query selection | Yes | N/A | Yes

Global Level Value Assignments | - | - | Context-based selection of query words by analyzing a feedback loop (including the look of the results); priority query word graph based on a clustering technique.

Precision | Low | Low | High

Recall | Low | Low | High

F-measure | Low | Low | High

Extensible | - | - | Yes

Scalable | - | - | Yes

Bandwidth Consumption | Uses large bandwidth | Uses large bandwidth | Lower bandwidth used due to proper identification of form types.

Authenticated Crawling | Not implemented | Not implemented | Domain specific authentication and crawling is implemented.

Note: '-' indicates that the characteristic is not claimed.

From the comparison, it may be observed that the proposed architecture is better than "Crawling the Hidden Web" [4] and "A Framework of Deep Web Crawler" [26] in terms of collection of query words, automatic forwarding to destination, form filling, automatic query selection, global level value assignment with domain-specific context evolution, and authenticated crawling. It may also be noted that the precision, recall and F-measure for the proposed work are higher as compared to the other existing works. Moreover, the proposed work is also extensible and scalable, and consumes less network bandwidth.

Precisely, the proposed architecture accrues the following benefits:

1. The proposed architecture deals with the overall search strategy of hidden query interfaces, covering both privatized (i.e. authenticated) search and general search for deep web data hidden behind HTML forms.

2. The proposed architecture performs better in terms of the number of links crawled.

3. Since the crawler searches with global value label allotment for each form, the overall search time has not only been reduced but the amount of quality information has also increased.

4. Pre-determination of the domain context provides effective results, with more than 40% effective links with forms extracted in every loop while generating links for the next crawl in the crawl frontier.

The conclusion and future scope of work are discussed in the next chapter.


CHAPTER 8

CONCLUSION & FUTURE SCOPE OF WORK

8.1 CONCLUSION

The deep web has a very large volume of quality information compared to the surface web. Extraction of deep web information can be highly fruitful for both general and specific users. Traditional web crawlers have limitations in crawling deep web information, so some web crawlers are specially designed for crawling the deep web; even so, a very large amount of deep web information remains unexplored due to inefficient crawling of the deep web.

In this work, a literature survey and analysis of some of the important deep web crawlers were carried out to identify their advantages and limitations. A comparative analysis of the existing deep web crawlers was also carried out on the basis of various parameters, and it was concluded that a new deep web crawler architecture was required for efficient searching of deep web information, one that minimizes the limitations of the existing deep web crawlers while incorporating their strengths.

The proposed QIIIEP-based architecture possesses the features of existing deep web crawlers but tries to minimize their limitations. Experimental results reflect that it is efficient for both privatized and general search of deep web information hidden behind HTML forms. Since the crawler employs appropriate domain-specific keywords to crawl the information hidden behind query interfaces, the quality of information is better, as established by the experimental results.

Further, the major challenges involved in developing a crawler for extraction of deep web information have been addressed and resolved. The following challenges have been resolved with the help of the proposed deep web crawler "DWC" architecture:

Identification of entry points - Identification of a proper entry point among the searchable forms is a major issue in crawling hidden web content [4], [5], [11].


This issue is resolved by element-level mapping with the global knowledge base of the QIIIEP server, through the form id generated by the auto form id generator.
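For illustration only, a form id of this kind could be derived from the form's action URL and its field names; the helper below is a hypothetical sketch and does not reproduce the actual auto form id generator of the DWC:

<?php
// Hypothetical sketch: derive a stable form id from a form's action URL and
// its input field names, so that form elements can be mapped against the
// QIIIEP knowledge base. Names and hashing scheme are assumptions.
function generateFormId($actionUrl, array $fieldNames) {
    sort($fieldNames);                                  // order-independent signature
    $signature = strtolower($actionUrl) . '|' . implode(',', $fieldNames);
    return md5($signature);                             // compact, repeatable identifier
}
// Example: a book-domain search form with two input fields.
echo generateFormId('http://example.com/search.php', array('title', 'author'));
?>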

Restricted Entry Points - If entry to the hidden web pages is restricted to querying a search form, two challenges arise: understanding and modelling the query interface, and handling these query interfaces with significant queries.

To resolve this issue in the proposed architecture, query word selection takes place according to context identification as well as verification of the web page crawled by the crawler. This feature also facilitates the identification of new query words.
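A minimal sketch of such context-driven query word selection is given below; the knowledge-base layout and scoring shown here are assumptions made for illustration and are not the exact DWC data structures:

<?php
// Hypothetical sketch: select the highest-scoring query words for a given
// domain context from a QIIIEP-style knowledge base (word => context score).
function selectQueryWords(array $knowledgeBase, $domainContext, $limit = 10) {
    $candidates = isset($knowledgeBase[$domainContext])
        ? $knowledgeBase[$domainContext] : array();
    arsort($candidates);                                // highest context score first
    return array_slice(array_keys($candidates), 0, $limit);
}
$kb = array(
    'book' => array('novel' => 0.92, 'isbn' => 0.88, 'author' => 0.75),
    'auto' => array('sedan' => 0.81, 'mileage' => 0.64),
);
print_r(selectQueryWords($kb, 'book', 2));              // e.g. novel, isbn
?>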

Identification of the domains - Accessing relevant information through the hidden web by identifying a relevant domain among the large number of domains present on the World Wide Web is a big challenge.

To resolve this issue, similar domains are identified according to context and query interfaces and categorized at the element level, so that a large number of deep web domains can be crawled with the proposed architecture.

Social Responsibility - A crawler should not affect the normal functioning of a website during the crawling process by overburdening it. A crawler with a high throughput rate may sometimes result in denial of service.

The proposed deep web crawler uses a QIIIEP server to fetch query words, which are sorted according to domain context, so only the high-ranking query words are used to extract the deep web data; this reduces the overall burden on the server.
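On the crawler side this can be complemented by a simple politeness policy; the sketch below is an assumed helper (not part of the DWC implementation) that enforces a minimum delay between successive requests to the same host:

<?php
// Hypothetical politeness sketch: keep consecutive requests to the same host
// at least $minDelaySeconds apart, so crawling does not overburden the site.
$lastRequestTime = array();
function politeFetch($url, $minDelaySeconds = 2) {
    global $lastRequestTime;
    $host = parse_url($url, PHP_URL_HOST);
    if (isset($lastRequestTime[$host])) {
        $elapsed = microtime(true) - $lastRequestTime[$host];
        if ($elapsed < $minDelaySeconds) {
            usleep((int) (($minDelaySeconds - $elapsed) * 1000000));
        }
    }
    $lastRequestTime[$host] = microtime(true);
    return file_get_contents($url);                     // the actual fetch
}
?>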

Interaction with the Search Interfaces - An efficient hidden web crawler should automatically parse, process and interact with form-based search interfaces. A general PIW crawler only submits a request for the URL to be crawled, whereas a hidden web crawler also supplies input in the form of search queries in addition to the request for the URL [4].


The results show that the QIIIEP server knowledge base is continuously updated for level value mapping through user-submitted cross-site query word extraction, webmaster-submitted query words and QIIIEP internal interoperable data mapping.

Handling of form-based queries - Many websites present query forms to the users for further access to their hidden databases. A general web crawler cannot access these hidden databases due to its inability to process these query forms.

To handle these challenges, an auto-guided mechanism for crawling the deep web is presented, which uses a form-id-based level value matching mechanism that correlates the relevant level with the QIIIEP knowledge base.

Client-side scripting languages and session maintenance mechanisms - Many websites are equipped with client-side scripting languages and session maintenance mechanisms. A general web crawler cannot access these types of pages due to its inability to interact with such web pages [12].

To handle these types of pages, authenticated crawling was implemented, which monitors sessions and crawls authenticated content.
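As an illustration of session-aware crawling, the PHP/cURL sketch below logs in once, stores the session cookie in a cookie jar and then requests an authenticated page; the login URL, field names and credentials are placeholders, and the actual DWC session-monitoring logic is not reproduced here:

<?php
// Hypothetical sketch of authenticated crawling with a session cookie jar.
$ch = curl_init();
curl_setopt($ch, CURLOPT_COOKIEJAR,  'session_cookies.txt');   // store the session cookie
curl_setopt($ch, CURLOPT_COOKIEFILE, 'session_cookies.txt');   // reuse it on later requests
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

// 1. Submit the (placeholder) login form to establish a session.
curl_setopt($ch, CURLOPT_URL, 'http://example.com/login.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'username=crawler&password=secret');
curl_exec($ch);

// 2. Crawl an authenticated page using the stored session cookie.
curl_setopt($ch, CURLOPT_URL, 'http://example.com/members/list.php');
curl_setopt($ch, CURLOPT_POST, false);
$page = curl_exec($ch);
curl_close($ch);
?>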

The Deep Web Crawler "DWC" was developed on a machine having an Intel Core 2 Duo T5870 @ 2.0 GHz with 3 GB of RAM, running the Windows 7 operating system. Tests were performed using a WAMP server equipped with PHP v5.2.6 and MySQL v5.0.51b, Microsoft Visual Basic 2008 (.NET 3.5) and the NetBeans IDE. All tests were performed on Firefox 3.5.4 and were repeated multiple times to minimize measurement errors. The high values of precision, recall and F-measure for the various tests conducted on the Deep Web Crawler "DWC" show that its crawling of the deep web is efficient.

8.2 FUTURE SCOPE OF WORK

Some of the issues that can be further explored or extended are given as follows:

Implementation of distributed deep web crawler

As the size of the deep web is very large and continuously growing, deep web crawling can be made more effective by employing distributed crawling.


Updating of deep web pages

Deep web information is frequently updated. Therefore, the deep web crawler should be made more effective by addressing this issue through an incremental deep web crawling technique.

Indexing of the deep web pages

Search engines maintain large-scale inverted indices. Thus, further work can be done in the direction of efficient maintenance of indices for the deep web.

Extension of the coverage of crawlable forms

As the deep web exists for a variety of domains, the range of crawlable forms can be extended further to make deep web crawling more efficient.

Improvement of content processing and storage mechanism

As deep web information is very large, further improvement can be made in deep web crawling by employing improved deep web data processing and storage mechanisms.


REFERENCES

[1] Ricardo Baeza-Yates and Berthier Ribeiro-Neto, “Modern Information Retrieval”, Pearson Education, ISBN: 9788131709771, 2007.

[2] Dilip Kumar Sharma and A. K. Sharma, “Search Engine: A Backbone for

Information Extraction in ICT Scenario”, In International Journal of ICTHD, USA , Vol. 3,

No. 2, pp. 38-51, 2011.

[3] Michael K. Bergman, “The deep web: Surfacing hidden value”, Journal of

Electronic Publishing,7(1),2001, Retrieved from “http://www.press.umich.edu/jep/07-

01/bergman.html”.

[4] S. Raghavan and H. Garcia-Molina, “Crawling the Hidden Web”, In Proceedings of

the 27th International Conference on Very Large Data Bases, Roma, Italy, pp. 129–138,

2001.

[5] Luciano Barbosa and Juliana Freire, “Searching for Hidden-Web Databases”, In

Proc. of Web Database, 2005.

[6] Dilip Kumar Sharma and A. K. Sharma, “Deep Web Information Retrieval Process:

A Technical Survey”, In International Journal of Information Technology & Web

Engineering, USA, Vol 5, No. 1, pp.1-22, 2010.

[7] Sergei Brin and Lawrence Page, “The anatomy of a large-scale hypertextual Web

search engine”, Computer Networks and ISDN Systems, 30(1–7):107–117, 1998.

[8] Ricardo Baeza-Yates, “Challenges in the interaction of information retrieval and

natural language processing” In Proceedings of 5th international conference on

Computational Linguistics and Intelligent Text Processing (CICLing), Vol. 2945 LNCS,

pp. 445–456. Springer, 2004.

[9] T. M. Ghanem and W. G. Aref, “Databases deepen the Web”, Computer, 37(1):116

–117, 2004.

[10] R. Kumar et al., “Stochastic models for the web graph”, In Proceedings of the 41st

Annual Symposium on Foundations of Computer Science, pp. 57–65, IEEE CS Press,

2000.

[11] J. Akilandeswari and N. P. Gopalan, “An Architectural Framework of a Crawler for

Locating Deep Web Repositories Using Learning Multi-Agent Systems”, In Proceedings of

the 2008 Third International Conference on Internet & Web Applications and Services,

PP.558-562, 2008.


[12] M. Alvarez et al., “DeepBot: A Focused Crawler for Accessing Hidden Web

Content”, In Proceedings of DEECS2007, San Diego CA, pp.18-25, 2007.

[13] Dipesh Bhattarai, “A Survey on the emerging Internet Architecture” Retrieved from

http://bit.csc.lsu.edu/~dipesh/website/downloads/Project-Report-Final.pdf.

[14] Gary Wan and Zao Liu,“Content-Based Information Retrieval and Digital

Libraries”, pp.41-47,

http://www.ala.org/lita/ital/sites/ala.org.lita.ital/files/content/27/1/wan.pdf

[15] World Wide Web, Retrieved from http://www.w3.org/Consortium/.

[16] Carlos Castillo, “Effective Web Crawling”, PhD Thesis, Dept. of Computer

Science, University of Chile, November 2004.

[17] The Robots Exclusion Protocol, http://www.robotstxt.org/robotstxt.html.

[18] Brian Pinkerton, “WebCrawler: Finding What People Want”, PhD Thesis,

University of Washington, 2000.

[19] S. Chakrabarti, M. Berg, and B. Dom, “Focused Crawling: A New Approach to

Topic-Specific Web Resource Discovery”, In Proceedings of the eighth international

conference on World Wide Web, 1999.

[20] D. Yadav, A.K. Sharma, and J.P. Gupta, "Parallel Crawler Architecture and Web

Page Change Detection Techniques". In WSEAS Transaction on Computers, Issue 7, Vol

7, PP. 929-941, ISSN: 1109-2750, 2008.

[21] Junghoo Cho and Hector Garcia-Molina, “Parallel Crawlers”, In WWW2002, May

7–11, 2002, Honolulu, Hawaii, USA.

[22] A. K. Sharma, J. P. Gupta, and D. P. Agarwal, “PARCAHYD: An Architecture of a

Parallel Crawler based on Augmented Hypertext Documents”, In International Journal of

Advancements in Technology, Vol. 1, No. 2 , pp.270-283, 2010.

[23] J. Hammer and J. Fiedler, “Using Mobile Crawlers to Search the Web Efficiently”,

In International Journal of Computer and Information Science, pp.36-58, 2000.

[24] P. Boldi, B. Codenott, M. Santini, and S. Vigna, "UbiCrawler: A Scalable Fully

Distributed Web Crawler”, In Software: Practice & Experience, Vol. 34, No. 8, 2004.

[25] J., Galler, S. A. Chun, and Y. J. An, “Toward the Semantic Deep Web”, In the IEEE

Computer, pp. 95-97, 2008.

[26] X. Peisu, T. Ke, and H. Qinzhen, “A Framework of Deep Web Crawler”, In

Proceedings of the 27th Chinese Control Conference, Kunming,Yunnan, China, 2008.


[27] Dilip Kumar Sharma, and A.K. Sharma, “Query Intensive Interface Information

Extraction Protocol for Deep Web”, In Proceedings of IEEE International Conference on

Intelligent Agent & Multi-Agent Systems, pp.1-5, IEEE Explorer, 2009.

[28] Dilip Kumar Sharma, and A.K. Sharma, “Analysis of Techniques for Detection of

Deep Web Search Interface”, In CSI Communication, Mumbai Vol.34, Issue 9, pp.30-33,

ISSN: 0970-647X, Dec 2010.

[29] L. Breiman, “Random Forests”, In Machine Learning, Vol.45, No.1, pp. 5-32,

Kluwer Academic Publishers, 2001.

[30] P. Lage et al., “Collecting Hidden Web Pages for Data Extraction” In Proceedings

of the 4th International workshop on Web Information and Data Management, pp. 69-75,

2002.

[31] J. Cope, N. Craswell, and D. Hawking, “Automated Discovery of Search Interfaces

on the web”, In Proceedings of the Fourteenth Australasian Database Conference

(ADC2003), Adelaide, Australia, 2003.

[32] Z. Zhang, B. He, and K. Chang, “Understanding Web Query Interfaces: Best-Effort

Parsing with Hidden Syntax”, In Proceedings of ACM International Conference on

Management of Data, pp.107-118, 2004.

[33] L. Barbosa and J. Freirel, “Siphoning Hidden-Web Data through Keyword-Based

Interface” , In Proceedings of SBBD, 2004.

[34] X. B. Deng, Y. M. Ye, H. B. Li, and J. Z. Huang, “An Improved Random Forest

Approach For Detection Of Hidden Web Search Interfaces”, In Proceedings of the 7th

International Conference on Machine Learning and Cybernetics, China. IEEE, 2008.

[35] Y. Ye et al., “Feature Weighting Random Forest for Detection of Hidden Web

Search Interfaces”, In Computational Linguistics and Chinese Language Processing , Vol.

13, No. 4, pp.387-404, 2009.

[36] P. Bai, and J. Li, “The Improved Naive Bayesian WEB Text Classification

Algorithm”, In International Symposium on Computer Network and Multimedia

Technology, IEEE Explorer, 2009.

[37] Anuradha and A.K. Sharma, “A Novel Approach For Automatic Detection and

Unification of Web Search Query Interfaces Using Domain Ontology”, In International

Journal of Information Technology and Knowledge Management , Vol. 2, No. 2, 2010.

[38] J. Madhavan, P. A. Bernstein, and E. Rahm, “Generic Schema Matching with

Cupid”, In the 27th VLBB Conference, Rome, 2001.


[39] H. H. Do, and E. Rahm, “COMA-a System for Flexible Combination of Schema

Matching Approaches”, In Proceedings of the 28th Intl. Conference on Very Large

Databases (VLDB), Hong Kong, 2002.

[40] A. H. Doan, P. Domingos, and A. Levy, “Learning source descriptions for data

integration”, In Proceedings of the WebDB Workshop, pp. 81-92, 2000.

[41] A. Doan, P. Domingos, and A. Halevy, “Reconciling schemas of disparate data

sources: A machine learning approach”, In Proceedings of the International Conference on

Management of Data (SIGMOD), Santa Barbara, CA, New York: ACM Press, 2001.

[42] S. Melnik, H. Garcia-Molina, and E. Rahm, “Similarity Flooding: A Versatile

Graph Matching Algorithm”, In Proceedings of the l8th International Conference on Data

Engineering (ICDE), San Jose, CA,2002.

[43] O. Kaljuvee, O. Buyukkokten, H. G. Molina, and A. Paepcke, “Efficient Web form

entry on PDAs”, In Proceedings of the 10th International Conference on World Wide Web,

pp. 663- 672, 2001.

[44] H. He, W. Meng, , C. Yu, and Z. Wu, “Constructing interface schemas for search

interfaces of Web databases”, In Proceedings of the 6th International Conference on Web

Information Systems Engineering (WISE’05),pp. 29-42, 2005.

[45] W. Wu, C. Yu, A. Doan, and W. Meng, “An interactive clustering-based approach

to integrating source query interfaces on the Deep Web”, In Proceedings of the ACM

SIGMOD International Conference on Management of Data, pp. 95-106, 2004.

[46] B. He, and K. C. C. Chang, “Automatic complex schema matching across web

query interfaces: A correlation mining approach”, In Proceedings of the ACM Transaction

on Database Systems, Vol.31, pp.1-45, 2006.

[47] B. He, and K. C. C. Chang, "Statistical schema matching across web query

interfaces”, In the SIGMOD Conference, 2003.

[48] X. Zhong, et al., “A Holistic Approach on Deep Web Schema Matching", In

Proceedings of the International Conference on Convergence Information Technology,

2007.

[49] B. Qiang, J. Xi, B. Qiang, and L. Zhang, “An Effective Schema Extraction

Algorithm on the Deep Web. Washington, DC: IEEE, 2008.

[50] K. K. Bhatia, and A. K. Sharma, “A Framework for Domain Specific Interface

Mapper (DSIM)”, International Journal of Computer Science and Network Security,8,12,

2008.


[51] M. Niepert, C. Buckner, and C. Allen, “A Dynamic Ontology for a Dynamic

Reference Work”, In Proceedings of the (JCDL’07), Vancouver, Canada, 2007.

[52] A. Deitel, C. Faron, and R. Dieng, “Learning ontologies from rdf annotations”, In

the IJCAI Workshop in Ontology Learning, 2001.

[53] L. Stojanovic, N. Stojanovic, and R. Volz, “Migrating data intensive web sites into

the semantic web”, In Proceedings of the 17th ACM Sym. on Applied Computing, 2002.

[54] Y. A. Tijerino, D. W. Embley, D. W. Lonsdale, Y. Ding, and G. Nagy, “Towards

ontology generation from tables” In World Wide Web Journal, 261-285, 2005.

[55] Z. Cui, P. Zhao, W. Fang, and C. Lin, “From Wrapping to Knowledge: Domain

Ontology Learning from Deep Web”, In Proceedings of the International Symposiums on

Information Processing, IEEE, 2008.

[56] P. Zhao, Z. Cui, Z., et al., “Vision-based deep web query interfaces automatic

extraction”, In Journal of Computational Information Systems, 1441-1448, 2007.

[57] X. Meng, S.Yin, and Z. Xiao, “A Framework of Web Data Integrated LBS

Middleware”, In Wuhan University Journal of Natural Sciences, 11(5), 1187-1191,2006.

[58] H. He, W. Meng, C.T. Yu,and Z. Wu, “WISE-Integrator: An Automatic Integrator

of Web search interfaces for e-commerce”, In Proceedings of the 29th International

Conference on Very Large Data Bases (VLDB’03), pp. 357-368, 2003.

[59] S. Chen, L. Wen, J. Hu, and S. Li, “Fuzzy Synthetic Evaluation on Form Mapping

in Deep Web Integration”, In Proceedings of the International Conference on Computer

Science and Software Engineering. IEEE, 2008.

[60] B. He, Z. Zhang, and K.C.C. Chang, “Meta Querier: Querying Structured Web

Sources On the-fly”, In SIGMOD, System Demonstration, Baltimore, MD, 2005.

[61] H. Liang, W. Zuo, F. Ren,C. Sun, “Accessing Deep Web Using Automatic Query

Translation Technique”, In Proceedings of the Fifth International Conference on Fuzzy

Systems and Knowledge Discovery, IEEE, 2008.

[62] F. Jiang et al., “A Cost Model for Range Query Translation in Deep Web Data

Integration”, In Proceedings of the Fourth International Conference on Semantics,

Knowledge and Grid. IEEE, 2008.

[63] V. Crescenzi, G. Mecca, and P. Merialdo, “RoadRunner: towards automatic data

extraction from large Web sites”, In Proceedings of the 27th International Conference on

Very Large Data Bases, Rome, Italy, pp. 109-118, 2001.


[64] L. Arlotta, , V. Crescenzi, et al., “Automatic annotation of data extracted from large

Web sites”, In Proceedings of the 6th International Workshop on Web and Databases, San

Diego, CA,pp. 7-12, 2003.

[65] H. Song, S. Giri, and F. Ma, “Data Extraction and Annotation for Dynamic Web

Pages”, In Proceedings of IEEE, pp. 499-502, 2004.

[66] J. Yang, G. Shi, Y. Zheng, and Q. Wang, “Data Extraction from Deep Web Pages”,

In Proceedings of the International Conference on Computational Intelligence and Security.

IEEE, 2007.

[67] A. Ibrahim, S.A. Fahmi, et al., “Addressing Effective Hidden Web Search Using

Iterative Deepening Search and Graph Theory”, In Proceedings of the 8th International

Conference on Computer and Information Technology Workshops. IEEE, 2008.

[68] M. Hernandez, and S. Stolfo, “The merge/purge problem for large databases”, In

Proceedings of the SIGMOD Conference, pp. 127-138, 1995.

[69] S.Tejada, C.A. Knoblock, and S. Minton, “Learning domain-independent string

transformation weights for high accuracy object identification”, In Proceedings of the

World Wide Web conference, pp. 350- 359, 2002.

[70] A. Doan, Y. Lu, Y. Lee, and J. Han, “Object matching for information integration:

A profiler-based approach”, In II Web, 53-58, 2003.

[71] Y.-Y. Ling, et al., “Entity identification for deep web data integration”, In Journal of

Computer Research and Development, pp.46-53, 2006.

[72] P. Zhao, C. Lin, W. Fang, and Z. Cui, “A Hybrid Object Matching Method for Deep

Web Information Integration”. In Proceedings of the International Conference on

Convergence Information Technology, IEEE, 2007.

[73] Z. Nie, Y. Ma, S. Shi, J. Wen, and W. Ma, “Web object retrieval”, In Proceedings of

the WWW Conference, 2007.

[74] T. Cheng, X. Yan, and K. C. C. Chang, “Entity Rank: searching entities directly and

holistically”, In Proceedings of the VLDB, 2007.

[75] Y. Kou, D. Shen, G. Yu, T. Nie, “LG-ERM: An Entity-level Ranking Mechanism

for Deep Web Query”, IEEE, 2008.

[76] Search/Retrieval via URL, Retrieved from http://www.loc.gov/standards/sru./

[77] ANSI/NISO Z39.50, “Information Retrieval: Application Service Definition and Protocol Specification”, 2003, http://www.niso.org/standards/standard_detail.cfm?std_id=465.


[78] The Open Archives Initiative Protocol for Metadata Harvesting. Protocol Version

2.0, 2003, Retrieved from http://www.openarchives.org/OAI/2.0/openarchivesprotocol.htm

[79] A. Campi, S. Ceri, E. Duvall, S. Guinea, D. Massart, and S. Ternier,

“Interoperability for searching Learning Object Repositories: The ProLearn Query

Language”, D-LIB Magazine, 2008.

[80] M. A. Khattab, Y. Fouad, and O.A. Rawash, “Proposed Protocol to Solve

Discovering Hidden Web Hosts Problem”, International Journal of Computer Science and

Network Security, 9(8), 2009.

[81] J. Grossnickle, T. Board, B. Pickens, and M. Bellmont, “RSS—Crossing into the

Mainstream”, Ipsos Insight, Yahoo, 2005.

[82] Sitemaps, “Sitemaps Protocol”, 2009. Retrieved from http://www.sitemaps.org

[83] A. Ntoulas, P. Zerfos, and J. Cho, “Downloading Textual Hidden Web Content

Through Keyword Queries”, In JCDL, IEEE, 2005.

[84] M. Alvarez, J. Raposo, F. Cacheda, and A. Pan, “A Task-specific Approach for

Crawling the Deep Web”, 2006, www.engineeringletters.com/issues_v13/issue_2/.

[85] J. Lu, Y. Wang, J. Liang, J. Chen, and J. Liu, “An Approach to Deep Web

Crawling by Sampling”, In Proceedings of IEEE/WIC/ACM Web Intelligence, Sydney,

Australia, pp. 718-724, 2008.

[86] Y.Wang, W. Zuo, T. Peng, and F. He, “Domain-Specific Deep Web Sources

Discovery”, In Fourth International Conference on Natural Computation, 978-0-7695-

3304-9, IEEE, 2008.

[87] J.Madhavan, D. Ko, L. Kot,V. Ganapathy, A. Rasmussen, and A. Halevy, “Google’s

Deep-Web Crawl”, In VLDB, pp.1241-1252, 2008.

[88] J. Liu, Z. Wu, L. Jiang, Q., Zheng, and X. Liu, “Crawling Deep Web Content

Through Query Forms”, In Proceedings of the Fifth International Conference on Web

Information Systems and Technologies, Lisbon, Portugal, pp. 634-642,2009.

[89] H. Zhao, “Study of Deep Web Query Interface Determining Technology”, In

CESCE, Vol. 1, pp.546-548, 2010.

[90] R. Madaan, A. Dixit, A.K. Sharma, and K.K. Bhatia, “A Framework for

Incremental Hidden Web Crawler”, In International Journal on Computer Science and

Engineering, Vol. 02, No. 03, pp. 753-758, 2010.

[91] L. Jiang, Z. Wu, Q. Feng, J. Liu, and Q. Zheng, “Efficient Deep Web Crawling

Using Reinforcement Learning”, In Advances in Knowledge Discovery and Data Mining,

LNCS, Vol. 6118, pp.428-439, 2010.


[92] Dilip Kumar Sharma and A.K. Sharma, “A Novel Architecture of Deep web

crawler” In International Journal of Information Technology & Web Engineering, USA,

Vol. 6, No. 1,pp. 25-48, 2011.

[93] Wenpu Xing and Ali Ghorbani, “Weighted PageRank Algorithm”, In proceedings

of the 2nd Annual Conference on Communication Networks & Services Research, pp. 305-

314, 2004.

[94] Neelam Duhan, A. K. Sharma, and K.K. Bhatia, “Page Ranking Algorithms: A

Survey”, In proceedings of the IEEE International Advanced Computing Conference, 2009.

[95] R. Kosala and H. Blockeel, “Web Mining Research: A survey”, In ACM SIGKDD

Explorations, 2(1), PP.1–15, 2000.

[96] S. Madria, S. S. Bhowmick, W. K. Ng, and E.-P. Lim, “Research Issues in Web

Data Mining”, In Proceedings of the Conference on Data Warehousing and Knowledge

Discovery, pp. 303–319, 1999.

[97] S. Pal, V. Talwar, and P. Mitra, “Web Mining in Soft Computing Framework :

Relevance, State of the Art and Future Directions”, In IEEE Trans. Neural Networks, 13(5),

pp.1163–1177, 2002.

[98] L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank Citation Ranking:

Bringing Order to the Web”, Technical Report, Stanford Digital Libraries SIDL-WP-1999-

0120, 1999.

[99] C. Ridings and M. Shishigin, “Pagerank Uncovered”, Technical Report, 2002.

[100] Jon Kleinberg, “Authoritative Sources in a Hyperlinked Environment”, In

Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 1998.

[101] S. Chakrabarti, B. E. Dom, S. R. Kumar, P. Raghavan, S. Rajagopalan, A. Tomkins,

D. Gibson, and J. Kleinberg, “Mining the Web’s Link Structure”, Computer, 32(8), pp.60–

67, 1999.

[102] D. Cohn and H. Chang, “Learning to Probabilistically Identify Authoritative

Documents”, In Proceedings of 17th International Conference on Machine Learning, pp.

167–174.Morgan Kaufmann, San Francisco, CA, 2000.

[103] Ricardo Baeza-Yates and Emilio Davis ,"Web page ranking using link attributes" ,

In proceedings of the 13th international World Wide Web conference on Alternate track

papers & posters, pp.328-329, 2004.

[104] Ko Fujimura, Takafumi Inoue and Masayuki Sugisaki, “The EigenRumor

Algorithm for Ranking Blogs”, In WWW 2005 2nd Annual Workshop on the Weblogging

Ecosystem, 2005.


[105] Ali Mohammad Zareh Bidoki and Nasser Yazdani, “DistanceRank: An Intelligent

Ranking Algorithm for Web Pages”, Information Processing and Management, 2007.

[106] H Jiang et al., "TIMERANK: A Method of Improving Ranking Scores by Visited

Time", In proceedings of the Seventh International Conference on Machine Learning and

Cybernetics, Kunming, 12-15 July 2008.

[107] Shen Jie,Chen Chen,Zhang Hui,Sun Rong-Shuang,Zhu Yan and He Kun,

"TagRank: A New Rank Algorithm for Webpage Based on Social Web" In proceedings of

the International Conference on Computer Science and Information Technology,2008.

[108] Fabrizio Lamberti, Andrea Sanna and Claudio Demartini , “A Relation-Based Page

Rank Algorithm for. Semantic Web Search Engines”, In IEEE Transaction of KDE, Vol.

21, No. 1, Jan 2009.

[109] Lian-Wang Lee, Jung-Yi Jiang, ChunDer Wu, Shie-Jue Lee, "A Query-Dependent

Ranking Approach for Search Engines", Second International Workshop on Computer

Science and Engineering, Vol. 1, PP. 259-263, 2009.

[110] Milan Vojnovic et al., “Ranking and Suggesting Popular Items”, In IEEE

Transaction of KDE, Vol. 21, No. 8, Aug 2009.

[111] NL Bhamidipati et al., "Comparing Scores Intended for Ranking", In IEEE

Transactions on Knowledge and Data Engineering, 2009.

[112] Xiang Lian and Lei Chen , “Ranked Query Processing in Uncertain databases”, In

IEEE KDE, Vol. 22, No. 3, March 2010.

[113] Su Cheng, Pan YunTao, Yuan JunPeng, Guo Hong, Yu et al., “PageRank, HITS and Impact Factor for Journal Ranking”, In proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, Vol. 06, pp. 285-290, 2009.

[114] Dilip Kumar Sharma and A. K. Sharma , “A Comparative Analysis of Web Page

Ranking Algorithms”, Published in International Journal on Computer Science &

Engineering, ISSN: 0975–3397, Vol. 2, No. 8, pp. 2670-2676 , 2010.

[115] Fabio Rizzotto, "Adding Intelligence to Web Analytics", White Paper,

www.idc.com/italy, June 2007.

[116] Ed H. Chi, "Improving web usability through visualization", In IEEE Internet

computing, March-April 2002.

[117] James E. Pitkow and Krishna A. Bharat, "Webviz: a tool for world-wide web access

log analysis", In proceedings of the first international www conference, 1994.


[118] Dilip Kumar Sharma and A. K. Sharma, “Technical Review of Website Analytics

Tools”, In Book Titled “Information Technology & Energy Management” Published by

Excel Publisher, New Delhi, ISBN: 978-93-80697-07-9, pp. 405-408, 2010.

[119] www.statcounter.com

[120] Dilip Kumar Sharma and A.K. Sharma, “A Novel Ranking Algorithm of Query

Words Stored in QIIIEP Server”, In International Journal of Engineering Science &

Technology, ISSN: 0975-5462, Vol. 2(11), pp. 6097-6107, 2010.

[121] K. Bharat, and A. Broder, “ A Technique for Measuring the Relative Size and

Overlap of Public Web Search Engines”, In WWW, 1998.

[122] H. Alani, and C. Brewster, “Ontology Ranking based on the Analysis of Concept

Structures”, In 3rd International Conference on Knowledge Capture (K-Cap), pp. 51–58,

Banff, Alberta, Canada, 2005.

[123] H. Alani, and C. Brewster, “Metrics for Ranking Ontologies”, WWW2006,

Edinburgh, UK, 2006.

[124] H. Alani,C. Brewster, and N. Shadbolt, “Ranking Ontologies with AKTiveRank”,

In: The 5th International Semantic Web Conference (ISWC), 2006.

[125] K. S. Jones, “A Statistical Interpretation of Term Specificity and its Application in

Retrieval”, Journal of Documentation, 28, pp.11–21, 1972.

[126] S.E. Robertson , S. Walker, and M. Beaulieu, “Okapi at TREC–7: Automatic Ad-

hoc, Filtering, VLC and Filtering Tracks”, In Proceedings of the Seventh Text Retrieval

Conference (TREC-7), NIST Special Publication, pp. 253–264, 1999.

[127] S. E. Robertson , and K. S. Jones, “Relevance Weighting of Search Terms”, Journal

of the American Society for Information Science, Vol.27, No.3, pp.129–146,1976.

[128] G. Salton, and C. Buckley, “Term-weighting Approaches in Automatic Text

Retrieval”, Information Processing and Management, Vol.24, No.5, pp.513–523, 1988.

[129] A. Singhal, “Modern Information Retrieval: A Brief Overview”, In IEEE Data

Engineering Bulletin, Vol. 24, No.4, pp. 35-43, 2001.

[130] A. Singhal, C. John, D. Hindle, D. Lewis, D.and F. Pereira, “AT&T at TREC-7”, In

Proceedings of the Seventh Text Retrieval Conference (TREC-7), NIST Special Publication

, PP. 239–252,1999.

[131] A. Ntoulas, P. Zerfos, and J. Cho, “Downloading Hidden Web Content”, Technical

Report, UCLA, 2004.

[132] G. D. Lorenzo, H. Hacid ,and Hye-young Paik, “Data Integration in Mashups”, In

SIGMOD Record, Vol. 38, No. 1, pp. 59-66, 2009.


[133] U. Marjit and S. Jana, “Mashups: An Emerging Content Aggregation Web2.0

Paradigm”, In 7th International CALIBER-2009, Pondicherry University, PP. 296-303.

2009.

[134] S. Kulkarni, “Enterprise Mashup: A Closer Look at Mashup and its Enterprise

Adoption”, In Indic Threads.com Conference on Java Technology, Pune, India, 2007.

[135] S. B. Ort and B. Mark, ,(Ed), “Mashups Styles, Part 2: Client-Side Mashups”

August, 2007, http://java.sun.com/developer/technicalArticles/J2EE/mashup_2/ index.html.

[136] F. D. Keukelaere , S. Bhola , M. Steiner, S. Chari, and S. Yoshihama, “SMash:

Secure Component Model for Cross-Domain Mashups on Unmodified Browsers”, In

WWW 2008, Beijing, ACM 978-1-60558-085-2/08/04, 2008.

[137] S. B. Ort and B. Mark,( Ed), “Mashup Styles, Part 1: Server-Side Mashups”, May,

2007, http://java.sun.com/developer/technicalArticles/J2EE/mashup_1/index.html.

[138] N. Jovanovic , E. Kirda and C. Kruegel, “Preventing Cross Site Request Forgery

Attacks”, In IEEE, 2006.

[139] HTML5 Cross Document Messaging, 2010, Retrieved from

http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.

[140] S. Crites, F. Hsu and H. Chen, “OMash: Enabling Secure Web Mashups via Object

Abstractions”, In CCS’08, Virginia, USA, 2008 ACM 978-1-59593-810-7/08/10.

[141] Window.nameTransport, www.sitepen.com/blog/2008/07/22/windowname-

transport/.

[142] J. Burns, “Cross Site Reference Forgery: An Introduction to a Common Web

Application Weakness”, Information Security Partners, LLC, 2005,

http://www.isecpartners.com.

[143] C. Jackson and H. J.Wang, “Subspace: Secure Cross Domain Communication for

Web Mashups”, In WWW 2007, Banff, Alberta, Canada. ACM 9781595936547/07/0005.

[144] J. Howell, C. Jackson, H.J. Wang and X. Fan, “Mashup OS: Operating System

Abstractions for Client Mashups”, In Proceedings of the 11th USENIX workshop on Hot

topics in operating systems, 2007.

[145] M. Albinola, L. Baresi, M. Carcano and S. Guinea, “Mashlight: A Lightweight

Mashup Framework for Everyone”, In WWW2009, Madrid, Spain.

[146] Flash Policy Files, adobe.com/devnet/articles/crossdomain_policy_file_spec.html.

[147] Flxhr, 2010, Retrieved from http://flxhr.flensed.com.

[148] Document.domain, https://developer.mozilla.org/en/DOM/document.domain.


[149] Microsoft Silverlight Policy Files, 2010, Retrieved from

http://msdn.microsoft.com/en-us/library/cc197955(VS.95).aspx.

[150] Seda Ozses and Salih Ergül, “Cross-domain communications with JSONP, Part 1:

Combine JSONP and jQuery to quickly build powerful mashups”, 2009. Retrieved from

http://www.ibm.com/developerworks/library/wa-aj-jsonp1/.

[151] M. Akin, “Solving Cross-Domain Issues When Building Mashups”, In Ajax World,

Six Conference and Expo, New York ,2009.

[152] Crockford Douglas, JSONRequest, http://www.json.org/JSONRequest.html.

[153] B. Adida , “FragToken: Secure Web Authentication using the Fragment Identifier”,

Harvard 33 Oxford Street, Cambridge, MA 02118, PP: 1-20, 2007.

[154] Fragment Identifier Messaging, 2010, Retrieved from http://softwareas.com/cross-

domain-communication-with-iframes

[155] Google Gears, 2010, Retrieved from http://gears.google.com/.

[156] XDomainRequest Object, 2010, Retrieved from http://msdn.microsoft.com/en-

us/library/cc288060(VS.85).aspx.

[157] CSSHttpRequest, 2010, Retrieved from http://ajaxian.com/archives/csshttprequest-

cross-domain-ajax-using-css-for-transport.

[158] Mozilla Signed Scripts, 2010, Retrieved from

www.mozilla.org/projects/security/components/signed-scripts.htm.

[159] Websockets, 2010, Retrieved from http://dev.w3.org/html5/websockets/.

[160] Dilip Kumar Sharma and A. K. Sharma, “Implementation of Secure Cross-Site

Communication on QIIIEP”, Published in International Journal of Advancements in

Technology, ISSN: 0976-4860, Vol. 2, No. 1, pp.129-141, 2011

[161] Buchta, “Oracle Ultra Search Architecture White Paper”, January 1994,

http://www.oracle.com/technology/products/ultrasearch/index.html.

[162] Lotus extended search, “Lotus extended search”, 2010, Obtained through the

Internet from www.ibm.com/developerworks/lotus/library/ls-IntroES/index.html.

[163] Nishitani, “Crawl complex form-based authentication Web sites using IBM

OmniFind Enterprise EditionCreating a web crawler plug-in to preprocess documents”,

05July2007,http://www.ibm.com/developerworks/data/library/techarticle/dm-707nishitani/.

[164] Neuman and Theodore, “Kerberos: An Authentication Service for Computer

Networks”, USC/ISI Technical Report number ISI/RS-94-399 Copyright © 1994 Institute

of Electrical and Electronics Engineers. Reprinted, with permission, from IEEE

Communications Magazine, Volume 32, Number 9, pages 33-38, September 1994.


[165] Dynamic URL tab, 2010, Obtained through the Internet from

http://help.yahoo.com/l/us/yahoo/search/siteexplorer/dynamic-01.html.

[166] M. Tetsuya, S. Kazuhiro and K. Hirotsugu , “A System for Search, Access

Restriction and Agents in the Clouds”. In IEEE, 2009.

[167] L. Donahue, 2009, Retrieved from http://deepwebtechblog.com/ranking-the-secret-

sauce-for-searching-the-deep-web/.

[168] D. Manning ,and H. Schutze, “Foundations of Statistical Natural Language

Processing”, The MIT Press, London, 2002.

[169] Y. Matsuo, T. Sakaki, K. Uchiyama, and M. Ishizuka, “Graph-based Word

Clustering using a Web Search Engine”, In Proceeding of the Conference on Empirical

Methods in Natural Language Processing, 2006.

[170] Dilip Kumar Sharma and A.K. Sharma, “Design of a Framework for Privatized

Information Retrieval in Deep Web Scenario”, In International Journal of Computing

Science and Communication Technologies, ISSN: 0974-3375, Vol.3, Issue 1, July-Dec,

2010.

[171] D. K. Sharma and A. K. Sharma, “A QIIIEP Based Domain Specific Hidden Web

Crawler”, In International Conference & Workshop on Emerging Trends and Technology

(ICWET 2011),Mumbai, ACM Digital Library,2011.


APPENDIX A

PHP, MYSQL & APACHE WEB SERVER

PHP

PHP stands for Hypertext Preprocessor; formerly it was known as Personal Home Pages. PHP is a server-side scripting language suited for web-based technologies, and it can be embedded into HTML. PHP comprises the following features.

1. Improved object-oriented programming:

Constructors, interfaces, final classes, final methods, static methods and abstract classes are a few of the new and improved object-oriented programming features of PHP.

2. Embedded SQLite:

PHP supports SQLite, an embedded database library that implements a large subset of the SQL-92 standard.

3. Exception handling using a try and catch structure:

PHP supports an exception model similar to other programming languages; an exception can be thrown and caught in PHP.

4. Integrated SOAP support:

We can create our own web services in PHP using SOAP (Simple Object Access Protocol).

5. The filter library:

Using the filter library (filter_var), data can be filtered and validated easily without a deep understanding of regular expressions.

6. Iterators (Object Iteration):

Objects can be iterated easily in PHP, for example with the foreach statement.

7. Better XML tools:

There are inbuilt PHP functions for handling XML (Extensible Markup Language) and JSON (JavaScript Object Notation).

A simple example of a PHP script is shown below.

<HTML>

<HEAD><TITLE>My First PHP Script </TITLE></HEAD>

<BODY>


<?php

phpinfo();

?>

</BODY>

</HTML>
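To complement features 3, 5 and 6 of the list above, the following short illustrative script combines exception handling, the filter library and iteration with foreach:

<?php
// Illustrative example of try/catch, filter_var() and foreach.
$emails = array('user@example.com', 'not-an-email');
foreach ($emails as $email) {                                      // iteration (feature 6)
    try {
        if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) { // filter library (feature 5)
            throw new Exception("Invalid e-mail address: $email"); // exception (feature 3)
        }
        echo "$email is valid\n";
    } catch (Exception $e) {
        echo 'Error: ' . $e->getMessage() . "\n";
    }
}
?>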

MySQL

MySQL is the most popular open-source database system. The data in MySQL is stored in

database objects called tables. It is a powerful Relational Database Management System

(RDBMS) using Structured Query Language (SQL) statements.

MySQL is a database server program installed on one machine, but it can 'serve' the database to a variety of locations. Fig. A1 depicts the request-response interaction with a MySQL database.

Fig. A1: Request-response diagram of a MySQL database

After installing MySQL Server, one can access it directly using various client interfaces, which send SQL statements to the server and then display the results to the user. Some of these are:

Local Client - A program on the same machine as the server.

Scripting Language – It can pass SQL queries to the server and display the result.

Remote Client - A program on a different machine that can connect to the server and run

SQL statements.

Remote Login - It is used for connecting to the server machine to run one of its local

clients.

Web Browser - A web browser runs the scripts that have been previously written on the

web server.

Database Structure

A database can have many tables holding data. A table contains rows (tuples) and columns (attributes) that define the data in a two-dimensional structure. An example is illustrated in Table A1.


Table A1: MySQL database structure

CarID | Manufacturer | Year | Car Model | AirCon | CDMulti

1112 | Maruti | 97 | Alto 1800 | FALSE | FALSE

1113 | Toyota | 98 | Corolla 1700 | FALSE | FALSE
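A table such as Table A1 can be queried from PHP; the minimal sketch below uses the mysqli extension, with the host, credentials, database name and column names assumed purely for illustration:

<?php
// Illustrative sketch: query a car table similar to Table A1 via mysqli.
// Host, credentials, database, table and column names are assumptions.
$db = new mysqli('localhost', 'root', '', 'cardb');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
$result = $db->query("SELECT CarID, Manufacturer, CarModel FROM cars WHERE Year = 97");
while ($row = $result->fetch_assoc()) {
    echo $row['CarID'] . ' ' . $row['Manufacturer'] . ' ' . $row['CarModel'] . "\n";
}
$db->close();
?>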

APACHE WEB SERVER

The Apache web server is an open-source web server for operating systems such as UNIX and Windows. It provides HTTP services in conformance with the current HTTP standards. As a result of its sophisticated features, excellent performance and low price (it is free), Apache has become the world's most popular web server. Because the source code is freely available, anyone can adapt the server for specific needs. The original version of Apache was written for UNIX, but there are now versions that run under OS/2, Windows and other platforms.

Hence we conclude:

The Apache HTTP Server is a powerful, flexible, HTTP/1.1 compliant web server.

It implements the latest protocols, including HTTP/1.1.

It is highly configurable and extensible with third-party modules.

It can be customised by writing 'modules' using the Apache module API.

It provides full source code and comes with an unrestrictive license.

It runs on Windows, Netware 5.x and above, OS/2, and most versions of Unix, as

well as several other operating systems.

WAMP Server

WAMP stands for Windows, Apache, MySQL, PHP. WampServer is a Windows web

development environment. It allows us to create web applications with Apache2, PHP and a

MySQL database. WampServer installs automatically, and with a left click on WampServer's icon, we can:

install and switch Apache, MySQL and PHP releases;

switch online/offline;

manage Apache and MySQL services and server settings;

create aliases;

access settings files and logs.


APPENDIX B

PUBLICATIONS RELATED TO THIS THESIS

[1] Dilip Kumar Sharma and A.K. Sharma, “A Novel Architecture of Deep web

crawler” In International Journal of Information Technology & Web Engineering, USA,

Vol. 6, No. 1,pp. 25-48, 2011(Journal is indexed in Scopus/DBLP).

[2] Dilip Kumar Sharma and A. K. Sharma, “Search Engine: A Backbone for

Information Extraction in ICT Scenario”, In International Journal of ICTHD, USA , Vol. 3,

No. 2, pp. 38-51, 2011(Journal is indexed in DBLP).

[3] Dilip Kumar Sharma and A. K. Sharma, “Deep Web Information Retrieval Process:

A Technical Survey”, In International Journal of Information Technology & Web

Engineering, USA, Vol 5, No. 1, pp.1-22, 2010 (Journal is indexed in Scopus/DBLP).

[4] D. K. Sharma and A. K. Sharma, “A QIIIEP Based Domain Specific Hidden Web

Crawler”, In International Conference & Workshop on Emerging Trends and Technology

(ICWET 2011),Mumbai, ACM Digital Library,2011 (Indexed in DBLP).

[5] Dilip Kumar Sharma and A.K. Sharma, “Design of a Framework for Privatized

Information Retrieval in Deep Web Scenario”, In International Journal of Computing

Science and Communication Technologies, ISSN: 0974-3375, Vol.3, Issue 1, July-Dec,

2010.

[6] Dilip Kumar Sharma and A. K. Sharma, “Implementation of Secure Cross-Site

Communication on QIIIEP”, Published in International Journal of Advancements in

Technology, ISSN: 0976-4860, Vol. 2, No. 1, pp.129-141, 2011

[7] Dilip Kumar Sharma and A.K. Sharma, “A Novel Ranking Algorithm of Query

Words Stored in QIIIEP Server”, In International Journal of Engineering Science &

Technology, ISSN: 0975-5462, Vol. 2(11), pp. 6097-6107, 2010.

[8] Dilip Kumar Sharma and A. K. Sharma , “A Comparative Analysis of Web Page

Ranking Algorithms”, Published in International Journal on Computer Science &

Engineering, ISSN: 0975–3397, Vol. 2, No. 8, pp. 2670-2676 , 2010.

[9] Dilip Kumar Sharma, and A.K. Sharma, “Analysis of Techniques for Detection of

Deep Web Search Interface”, In CSI Communication, Mumbai Vol.34, Issue 9, pp.30-33,

ISSN: 0970-647X, Dec 2010.


[10] Dilip Kumar Sharma, and A.K. Sharma, “Query Intensive Interface Information

Extraction Protocol for Deep Web”, In Proceedings of IEEE International Conference on

Intelligent Agent & Multi-Agent Systems, pp.1-5, IEEE Explorer, 2009.

[11] Dilip Kumar Sharma and A. K. Sharma, “Technical Review of Website Analytics

Tools”, In Book Titled “Information Technology & Energy Management” Published by

Excel Publisher, New Delhi, ISBN: 978-93-80697-07-9, pp. 405-408, 2010.