
Page 1

Creating a Web Crawler in 3 Steps

Issac Goldstand

isaac@cpan.org

Mirimar Networks

http://www.mirimar.net/

Page 2

The 3 steps

• Creating the User Agent

• Creating the content parser

• Tying it together

Page 3

Step 1 – Creating the User Agent

• libwww-perl (LWP)

• OO interface for creating user agents that interact with remote websites and web applications

• We will look at LWP::RobotUA

Page 4

Creating the LWP Object

• User agent

• Cookie jar

• Timeout (all three options are sketched below)
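The slides do not show these options in code; the following is a minimal sketch, assuming a plain LWP::UserAgent (LWP::RobotUA, used below, is a subclass and accepts the same options). The cookie file name and timeout value are illustrative only:

use LWP::UserAgent;
use HTTP::Cookies;

my $ua = LWP::UserAgent->new(
    agent      => 'MyBot/1.0',            # User agent string sent to servers
    cookie_jar => HTTP::Cookies->new(     # Keep cookies between requests
        file     => 'mybot_cookies.txt',  # Illustrative file name
        autosave => 1,                    # Save cookies back to the file
    ),
    timeout    => 30,                     # Give up on a request after 30 seconds
);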

Page 5

Robot UA extras

• Robot rules

• Delay

• use_sleep (see the sketch below)
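delay and use_sleep appear in the Step 1 code below; the robot rules object does not. As a rough sketch (the database file name is illustrative, and the contact address is the one from these slides), the robots.txt rules can be kept in a persistent store by passing a WWW::RobotRules object to the constructor:

use LWP::RobotUA;
use WWW::RobotRules::AnyDBM_File;

# Cache robots.txt rules on disk so they survive between runs
my $rules = WWW::RobotRules::AnyDBM_File->new('MyBot/1.0', 'robotrules.db');
my $ua    = LWP::RobotUA->new('MyBot/1.0', 'isaac@cpan.org', $rules);

$ua->delay(1);      # Wait at least 1 minute between requests to the same host
$ua->use_sleep(1);  # Sleep until allowed rather than failing the request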

Page 6

Implementation of Step 1

use LWP::RobotUA;

# First, create the user agent - MyBot/1.0
my $ua=LWP::RobotUA->new('MyBot/1.0', 'isaac@cpan.org');

$ua->delay(15/60);  # 15 seconds delay (delay is given in minutes)
$ua->use_sleep(1);  # Sleep if delayed

Page 7

Step 2 – Creating the content parser

• HTML::Parser

• Event-driven parser mechanism

• OO and function-oriented interfaces

• Hooks to functions at certain points (see the sketch below)
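Before the subclassing approach used in Step 2, here is a short sketch of the function-oriented, event-driven interface: a handler is hooked to the start-tag event, together with an argspec string telling the parser which values to pass it. The tag and attribute checked here are illustrative:

use HTML::Parser;

# Register a hook that fires on every start tag
my $p = HTML::Parser->new(
    api_version => 3,
    start_h     => [ \&on_start, 'tagname, attr' ],  # handler + argspec
);

sub on_start {
    my ($tagname, $attr) = @_;
    print "link: $attr->{href}\n" if $tagname eq 'a' && $attr->{href};
}

$p->parse('<a href="http://www.mirimar.net/">home</a>');
$p->eof;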

Page 8

Subclassing HTML::Parser

• Biggest issue is non-persistence

• CGI authors may be used to this, but it still introduces many caveats

• You must implement your own state preservation mechanism

Page 9

Implementation of Step 2

package My::LinkParser;        # Parser class
use base qw(HTML::Parser);

use constant START=>0;         # Define simple constants
use constant GOT_NAME=>1;

sub state {                    # Simple access methods
  return $_[0]->{STATE};
}

sub author {
  return $_[0]->{AUTHOR};
}

Page 10

Implementation of Step 2 (cont)

sub reset {                    # Clear parser state
  my $self=shift;
  undef $self->{AUTHOR};
  $self->{STATE}=START;
  return 0;
}

sub start {                    # Parser hook
  my($self, $tagname, $attr, $attrseq, $origtext) = @_;
  if ($tagname eq "meta" && lc($attr->{name}) eq "author") {
    $self->{STATE}=GOT_NAME;
    $self->{AUTHOR}=$attr->{content};
  }
}

Page 11

Shortcut – HTML::SimpleLinkExtor

• Simple package to extract links from HTML

• Handles many link types; we only want HREF links (see the sketch below)
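A minimal sketch of the shortcut, assuming $html already holds a fetched page: the a method returns only the links taken from anchor tags, while links would return every link the parser recognized.

use HTML::SimpleLinkExtor;

my $extor = HTML::SimpleLinkExtor->new;
$extor->parse($html);            # $html: page content fetched earlier

my @all     = $extor->links;     # every link-like attribute found
my @anchors = $extor->a;         # only the <a href="..."> links we want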

Page 12

Step 3 – Tying it together

• Simple application

• Instantiate objects

• Enter request loop

• Spit the data out somewhere

• Add parsed links to queue

Page 13

Implementation of Step 3

for (my $i=0;$i<10;$i++) {                # Parse loop
  my $response=$ua->get(pop @urls);       # Get HTTP response
  if ($response->is_success) {            # If response is OK
    $p->reset;
    $p->parse($response->content);        # Parse for author
    $p->eof;
    if ($p->state==1) {                   # If state is GOT_NAME
      $authors{$p->author}++;             # then add author count
    } else {
      $authors{'Not Specified'}++;        # otherwise add default count
    }
    $linkex->parse($response->content);   # parse for links
    unshift @urls,$linkex->a;             # and add links to queue
  }
}

Page 14

End result

#!/usr/bin/perl
use strict;

use LWP::RobotUA;
use HTML::Parser;
use HTML::SimpleLinkExtor;

my @urls;     # List of URLs to visit
my %authors;

# First, create & setup the user agent
my $ua=LWP::RobotUA->new('AuthorBot/1.0','isaac@cpan.org');
$ua->delay(15/60);  # 15 seconds delay
$ua->use_sleep(1);  # Sleep if delayed

my $p=My::LinkParser->new;              # Create parsers
my $linkex=HTML::SimpleLinkExtor->new;

$urls[0]="http://www.beamartyr.net/";   # Initialize list of URLs

Page 15

End result

for (my $i=0;$i<10;$i++) {                # Parse loop
  my $response=$ua->get(pop @urls);       # Get HTTP response
  if ($response->is_success) {            # If response is OK
    $p->reset;
    $p->parse($response->content);        # Parse for author
    $p->eof;
    if ($p->state==1) {                   # If state is GOT_NAME
      $authors{$p->author}++;             # then add author count
    } else {
      $authors{'Not Specified'}++;        # otherwise add default count
    }
    $linkex->parse($response->content);   # parse for links
    unshift @urls,$linkex->a;             # and add links to queue
  }
}

print "Results:\n";                       # Print results
map {print "$_\t$authors{$_}\n"} keys %authors;

Page 16

End result

package My::LinkParser;        # Parser class
use base qw(HTML::Parser);

use constant START=>0;         # Define simple constants
use constant GOT_NAME=>1;

sub state {                    # Simple access methods
  return $_[0]->{STATE};
}

sub author {
  return $_[0]->{AUTHOR};
}

sub reset {                    # Clear parser state
  my $self=shift;
  undef $self->{AUTHOR};
  $self->{STATE}=START;
  return 0;
}

Page 17

End result

sub start {                    # Parser hook
  my($self, $tagname, $attr, $attrseq, $origtext) = @_;
  if ($tagname eq "meta" && lc($attr->{name}) eq "author") {
    $self->{STATE}=GOT_NAME;
    $self->{AUTHOR}=$attr->{content};
  }
}

Page 18

What’s missing?

• Full URLs for relative links (see the sketch after this list)

• Non-HTTP links

• Queues & caches

• Persistent storage

• Link (and data) validation
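One possible way to handle the first two items, assuming the $response, $linkex and @urls variables from the program above: resolve each extracted link against the response's base URL with the URI module, then keep only HTTP(S) links before queueing them.

use URI;

my $base = $response->base;                     # Base URI of the fetched page
my @abs  = map { URI->new_abs($_, $base)->as_string } $linkex->a;
unshift @urls, grep { m/^https?:/ } @abs;       # Queue only HTTP(S) links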

Page 19

In review

• Create robot user agent to crawl websites nicely

• Create parsers to extract data from each site, and the links leading to the next ones

• Create a simple program to parse a queue of URLs

Page 20

Thank you!

For more information:

Issac Goldstand isaac@cpan.org

http://www.beamartyr.net/

http://www.mirimar.net/