
I am trying to set up crawler4j. I am building it from source in NetBeans. I am using version 3.5 of crawler4j, and the calling classes are the same as the ones that used to be given on the site (copied below for convenience):

import java.util.List;
import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g"
                                                           + "|png|tiff?|mid|mp2|mp3|mp4"
                                                           + "|wav|avi|mov|mpeg|ram|m4v|pdf"
                                                           + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    /**
     * You should implement this function to specify whether
     * the given URL should be crawled or not (based on your
     * crawling logic).
     */
    @Override
    public boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu/");
    }

    /**
     * This function is called when a page is fetched and ready
     * to be processed by your program.
     */
    @Override
    public void visit(Page page) {
        String url = page.getWebURL().getURL();
        System.out.println("URL: " + url);

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String html = htmlParseData.getHtml();
            List<WebURL> links = htmlParseData.getOutgoingUrls();

            System.out.println("Text length: " + text.length());
            System.out.println("Html length: " + html.length());
            System.out.println("Number of outgoing links: " + links.size());
        }
    }
}

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {
    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/data/crawl/root";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed URLs. These are the first
         * URLs that are fetched, and then the crawler starts following links
         * found in these pages.
         */
        controller.addSeed("http://www.ics.uci.edu/~welling/");
        controller.addSeed("http://www.ics.uci.edu/~lopes/");
        controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(MyCrawler.class, numberOfCrawlers);
    }
}

The code compiles successfully but throws the following runtime exception. Please advise.

Exception in thread "Crawler 1" java.lang.NoSuchMethodError: edu.uci.ics.crawler4j.parser.HtmlParseData.getOutgoingUrls()Ljava/util/Set;
    at MyCrawler.visit(MyCrawler.java:42)
    at edu.uci.ics.crawler4j.crawler.WebCrawler.processPage(WebCrawler.java:351)
    at edu.uci.ics.crawler4j.crawler.WebCrawler.run(WebCrawler.java:220)
    at java.lang.Thread.run(Thread.java:744)

I looked through the library code carefully and found a class with that exact name there, but the error persists.


1 Answer


Your code looks fine.

You have probably ended up in some kind of dependency/classpath hell: perhaps you have two different versions of the crawler4j library on your classpath?
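
One quick way to check this (a minimal sketch, not part of crawler4j itself; the WhichJar class name is purely illustrative) is to ask the JVM where it actually loaded the crawler4j classes from:

import edu.uci.ics.crawler4j.parser.HtmlParseData;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the jar file (or class directory) that HtmlParseData was
        // loaded from. If this is not the crawler4j 3.5 jar you built, a
        // second copy of the library is shadowing it on the classpath.
        System.out.println(
                HtmlParseData.class.getProtectionDomain().getCodeSource().getLocation());
    }
}

If a different jar shows up than the one you expect, remove the extra copy from your NetBeans project libraries.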

In any case, I suggest the following: take a look at the new crawler4j GitHub repository: https://github.com/yasserg/crawler4j

Use the Maven dependency system and all your troubles will go away:

<dependency>
    <groupId>edu.uci.ics</groupId>
    <artifactId>crawler4j</artifactId>
    <version>4.1</version>
</dependency>

You will get the latest version (now hosted on GitHub instead of Google Code), and with Maven you will automatically escape all the classpath hell...

In the latest version I have fixed many bugs anyway, so I really suggest moving to the latest and greatest.
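
One caveat when upgrading: the method descriptor in your stack trace (getOutgoingUrls()Ljava/util/Set;) shows that in the newer API HtmlParseData.getOutgoingUrls() returns a Set<WebURL> rather than the List<WebURL> of 3.5, so visit() needs a small adjustment after moving to 4.1. A minimal sketch of the changed part:

import java.util.Set;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    @Override
    public void visit(Page page) {
        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            // Returns Set<WebURL> in 4.x, not List<WebURL> as in 3.5
            Set<WebURL> links = htmlParseData.getOutgoingUrls();
            System.out.println("Number of outgoing links: " + links.size());
        }
    }
}

The shouldVisit signature also changed in 4.x (it takes the referring Page as an extra parameter), so check the examples in the GitHub README when migrating.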

answered 2015-08-24T13:56:40.743