
I am trying to set up a local MongoDB crawler for my Watson Discovery service. MongoDB is up and running. I downloaded the JDBC connector (mongodb-driver-3.4.2.jar) and placed it in /opt/ibm/crawler/connectorFramework/crawler-connector-framework-0.1.18/lib/java/database/

Here is how I modified the configuration files:

In crawler.conf, under the main "input_adapter" section, I changed the following values:

crawl_config_file = "connectors/database.conf",
crawl_seed_file = "seeds/database-seed.conf",
extra_jars_dir = "database",
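
For context, the input_adapter section of crawler.conf now reads roughly like this (a sketch only; every other key in that section is left as it ships in the stock config):

input_adapter {
  crawl_config_file = "connectors/database.conf",
  crawl_seed_file = "seeds/database-seed.conf",
  extra_jars_dir = "database",
  # ...remaining input_adapter settings unchanged from the defaults
}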

In seed/database-seed.conf, in the seed > attribute section, the url portion looks like this:

{
  name ="url",
  value="mongo://localhost:27017/local/tweets?per=1000"
},

(I also tried mongodb in place of mongo.)
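
For what it's worth, my reading of that URL — an assumption on my part, not something I have confirmed — is that local is the database, tweets is the collection, and per sets how many records are fetched per batch:

<protocol>://<host>:<port>/<database>/<collection>?per=<records per fetch>
mongo://localhost:27017/local/tweets?per=1000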

In connectors/database.conf, the first part of the file looks like this:

crawl_extender {
  attribute = [
    {
      name="protocol",
      value="mongo"
    },
    {
      name="collection",
      value="SomeCollection"
    }
  ],

(Here, too, I also tried mongodb in place of mongo.)

When I run the crawler command, this is my output:

pish@ubuntu-crawler:~$ crawler crawl --config ./crawler-config/config/crawler.conf 
2017-08-02 04:29:10,206 INFO: Connector Framework service will start and connect to crawler on port 35775
2017-08-02 04:29:10,460 INFO: This crawl is running in CrawlRun mode
2017-08-02 04:29:10,460 INFO: Running a crawl...
2017-08-02 04:29:10,465 INFO: URLs matching these patterns will be not be processed: (?i)\.(xlsx?|pptx?|jpe?g|gif|png|mp3|tiff)$
2017-08-02 04:29:10,500 INFO: HikariPool-1 - Starting...
2017-08-02 04:29:10,685 INFO: HikariPool-1 - Start completed.
2017-08-02 04:29:12,161 ERROR: There was a problem processing URL mongo://localhost:27017/local/tweets?per=1000: Couldn't load JDBC driver : 
2017-08-02 04:29:17,184 INFO: HikariPool-1 - Shutdown initiated...
2017-08-02 04:29:17,196 INFO: HikariPool-1 - Shutdown completed.
2017-08-02 04:29:17,198 INFO: The service for the Connector Framework Input Adapter was signaled to halt.
Attempting to shutdown the crawler cleanly.

What am I missing or doing wrong in my crawler setup?


1 Answer


In the end, it turned out I also had to specify the connection string in one of the configuration files. It works now.
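
To illustrate the kind of entry involved — and this is only a sketch, since the exact attribute name and which config file it belongs in depend on your connector framework version, so check what your version actually documents — it amounts to one more attribute carrying a full MongoDB connection string, shown here as if it sat alongside the protocol and collection attributes:

{
  # hypothetical attribute name; verify against your connector framework's documentation
  name="connection_string",
  value="mongodb://localhost:27017/local"
},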

Answered 2017-08-28T16:40:03.937