
I have created the following Perl script to extract URLs from the web:

#!perl

use strict;
use warnings;

use Data::Dumper    qw( Dumper );
use List::MoreUtils qw( uniq );
use WWW::Mechanize  qw( );

my ($url) = @ARGV;
my $mech = WWW::Mechanize->new();


sub getUrl {
    my ($request) = @_;
    # Fetch the page and abort with the HTTP status line on failure.
    my $response = $mech->get($request);
    die( $response->status_line() . "\n" ) unless $response->is_success();
    return $response;
}

sub getLinks {
    getUrl($url);
    my @root= map { "$_\n" } sort { $a cmp $b } uniq 
        map { $_->url_abs() }
            $mech->links();
    return @root;
}
print Dumper(getLinks());

Is there a way to extract the unique URLs together with their associated link text from an HTML page?


2 Answers

my $urls;
my @result;

foreach my $link ( $mech->links() ) {
    next if exists $urls->{ $link->url_abs() };
    push @result, {
        url => $link->url_abs(),
        text => $link->text(),
    };
    $urls->{ $link->url_abs() } = 1;
}
#now @result holds every unique link as a hash with url and text keys,
#so you can sort the array however you like, e.g. as sketched below
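For example, a minimal sketch of sorting the collected entries alphabetically by URL and printing URL and text (the tab-separated output format is just an assumption):

for my $link ( sort { $a->{url} cmp $b->{url} } @result ) {
    printf "%s\t%s\n", $link->{url}, $link->{text};
}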
Answered on 2013-07-02T07:55:21.040

Have a look at HTML::LinkExtor - Extract links from an HTML document.

See the examples in the module's documentation; they should help.
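For instance, a minimal sketch along those lines (assuming LWP::Simple for fetching; note that HTML::LinkExtor reports link attributes such as href, but not the anchor text):

#!/usr/bin/perl
use strict;
use warnings;

use HTML::LinkExtor;
use LWP::Simple     qw( get );
use List::MoreUtils qw( uniq );

my ($url) = @ARGV;
my $html = get($url) or die("Could not fetch $url\n");

# Collect the href of every <a> tag; the second argument to new()
# is a base URL, so the extracted links come back absolute.
my @urls;
my $parser = HTML::LinkExtor->new(
    sub {
        my ($tag, %attrs) = @_;
        push @urls, "$attrs{href}" if $tag eq 'a' && defined $attrs{href};
    },
    $url,
);
$parser->parse($html);
$parser->eof();

print "$_\n" for sort( uniq(@urls) );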

Answered on 2013-07-02T07:29:28.500