
I have a few URLs that I need to crawl with StormCrawler. I followed all the steps in https://medium.com/analytics-vidhya/web-scraping-and-indexing-with-stormcrawler-and-elasticsearch-a105cb9c02ca and got the content scraped and loaded into my Elasticsearch.

In that blog, the author uses a Flux definition to run the topology that injects the URLs into ES:

spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.spout.FileSpout"
    constructorArgs:
      - "stormcrawlertest-master/"
      - "seeds.txt"
      - true
    parallelism: 1

streams:
  - from: "spout"
    to: "status"
    grouping:
      type: CUSTOM
      customClass:
        className: "com.digitalpebble.stormcrawler.util.URLStreamGrouping"
        constructorArgs:
          - "byHost"
      streamId: "status"
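
For context, the blog runs this definition through Storm's Flux runner; the command looks like this (the .flux file name is just whatever the definition above is saved as, es-injector.flux in my case):

storm jar target/stormcrawlertest-1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --local es-injector.flux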

This injects the URLs into ES. I took the same classes used in the Flux file and created a main class:

// called from the main method of com.my.sitescraper.main.SiteScraper
String[] argsa = new String[] { "-conf", "/crawler-conf.yaml", "-conf", "/es-conf.yaml", "-local" };
ConfigurableTopology.start(new InjectorTopology(), argsa);

import org.apache.storm.topology.TopologyBuilder;
import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.Constants;
import com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt;
import com.digitalpebble.stormcrawler.spout.FileSpout;
import com.digitalpebble.stormcrawler.util.URLStreamGrouping;
public class InjectorTopology extends ConfigurableTopology {
    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new FileSpout("stormcrawlertest-master/", "seeds.txt", true), 1);
        builder.setBolt("status", new StatusUpdaterBolt(), 1)
                .customGrouping("spout", new URLStreamGrouping(Constants.PARTITION_MODE_HOST));
        return submit("ESInjectorInstance", conf, builder);
    }
}
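
One thing I was unsure about while porting: in the Flux file the grouping carries streamId: status, so the bolt subscribes to the spout's status stream, whereas my customGrouping("spout", ...) call binds to the spout's default stream. If that matters, I believe the explicit equivalent would be something like this (a sketch; Constants.StatusStreamName is StormCrawler's name for the status stream):

builder.setBolt("status", new StatusUpdaterBolt(), 1)
        // subscribe to the spout's "status" stream, mirroring streamId: status in Flux
        .customGrouping("spout", Constants.StatusStreamName,
                new URLStreamGrouping(Constants.PARTITION_MODE_HOST));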

I then built the project with Maven (clean and package) and ran the resulting jar through the Storm launcher, but it does not inject any URLs into ES.
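
Spelled out, the build and run sequence was (storm.py is the launcher script shipped in Storm's bin/ directory):

mvn clean package
python storm.py jar target/stormcrawlertest-1.0-SNAPSHOT.jar com.my.sitescraper.main.SiteScraper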

What am I missing?

