
I'm using phpcrawl; the code is below. I want to crawl the link mentioned and get all the jobs listed there.

Right now I pass the crawler a link, but it crawls every link that appears in the page's source. I only want the source of the single URL I pass in, so I can extract the jobs from it with XPath.

    <?php

    // It may take a while to crawl a site ...
    set_time_limit(10000);

    // Include the phpcrawl-mainclass
    include("libs/PHPCrawler.class.php");

    // Extend the class and override the handleDocumentInfo()-method 
    class MyCrawler extends PHPCrawler 
    {
      function handleDocumentInfo($DocInfo) 
      {
        // Just detect linebreak for output ("\n" in CLI-mode, otherwise "<br>").
        if (PHP_SAPI == "cli") $lb = "\n";
         else $lb = "<br />";

        // Print the URL and the HTTP status code
        echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;

        // Print the referring URL
        echo "Referer-page: ".$DocInfo->referer_url.$lb;

        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
          echo $DocInfo->bytes_received;
        else
          echo "Content not received".$lb;

        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example

        echo $lb;

        flush();
      }
    }

    // Now, create an instance of your class, define the behaviour
    // of the crawler (see class-reference for more options and details)
    // and start the crawling-process. 

    $crawler = new MyCrawler();

    // URL to crawl
    $crawler->setURL("http://careers.republic.co.uk/pb3/corporate/Republic/search.php?page=1");

    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");

    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

    // Store and send cookie-data like a browser does
    $crawler->enableCookieHandling(true);

    // Set the traffic-limit to 1 MB (in bytes;
    // for testing we don't want to "suck" the whole site)
    $crawler->setTrafficLimit(1000 * 1024);

    // That's enough, now here we go
    $crawler->go();

    // At the end, after the process is finished, we print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();

    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb;
    ?>

1 Answer


Just set phpcrawl's page limit to 1: $crawler->setPageLimit(1); (http://phpcrawl.cuab.de/classreferences/PHPCrawler/method_detail_tpl_method_setPageLimit.htm)
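
For completeness, here is a minimal, untested sketch of how setPageLimit(1) and the XPath extraction could fit together. The class name SinglePageCrawler and the query "//a" are placeholders of my own; the real XPath expression depends on the markup of the search-results page:

    <?php
    set_time_limit(10000);
    include("libs/PHPCrawler.class.php");

    class SinglePageCrawler extends PHPCrawler
    {
      function handleDocumentInfo($DocInfo)
      {
        if (!$DocInfo->received) return;

        // Parse the raw HTML of the one page we requested
        $dom = new DOMDocument();
        @$dom->loadHTML($DocInfo->source); // suppress warnings from sloppy markup
        $xpath = new DOMXPath($dom);

        // Placeholder query -- adjust it to match the job-listing nodes
        foreach ($xpath->query("//a") as $node) {
          echo trim($node->textContent)."\n";
        }
      }
    }

    $crawler = new SinglePageCrawler();
    $crawler->setURL("http://careers.republic.co.uk/pb3/corporate/Republic/search.php?page=1");
    $crawler->setPageLimit(1); // only fetch the URL passed in, follow no links
    $crawler->go();
    ?>

With the page limit set to 1, the crawler never follows the links it finds, so handleDocumentInfo() runs exactly once, on the source of the URL given to setURL().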

answered 2012-11-23T09:24:01.430