I can index pdf files into a local Elasticsearch using the Elasticsearch File System Crawler (FSCrawler). By default, the fscrawler settings include host, port, and scheme parameters, as shown below.
{
  "name" : "job_name2",
  "fs" : {
    "url" : "/tmp/es",
    "update_rate" : "15m",
    "excludes" : [ "~*" ],
    "json_support" : false,
    "filename_as_id" : false,
    "add_filesize" : true,
    "remove_deleted" : true,
    "add_as_inner_object" : false,
    "store_source" : false,
    "index_content" : true,
    "attributes_support" : false,
    "raw_metadata" : true,
    "xml_support" : false,
    "index_folders" : true,
    "lang_detect" : false,
    "continue_on_error" : false,
    "pdf_ocr" : true,
    "ocr" : {
      "language" : "eng"
    }
  },
  "elasticsearch" : {
    "nodes" : [ {
      "host" : "127.0.0.1",
      "port" : 9200,
      "scheme" : "HTTP"
    } ],
    "bulk_size" : 100,
    "flush_interval" : "5s"
  },
  "rest" : {
    "scheme" : "HTTP",
    "host" : "127.0.0.1",
    "port" : 8080,
    "endpoint" : "fscrawler"
  }
}
However, I am struggling to use it to index into the AWS Elasticsearch Service, because indexing into AWS Elasticsearch requires providing AWS_ACCESS_KEY, AWS_SECRET_KEY, region, and service, as described here. Any help on how to index pdf files into the AWS Elasticsearch Service would be highly appreciated.
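In case it helps to frame the problem: one workaround I have been considering, instead of going through FSCrawler, is to sign the requests myself and push the PDFs directly with a small Python script, using the third-party `requests-aws4auth` and `elasticsearch` packages together with the ingest `attachment` processor. This is only a sketch: the domain endpoint, region, index name, and credential placeholders below are hypothetical and would need to be replaced with real values.

```python
import base64
from pathlib import Path


def pdf_to_base64(path):
    # The ingest attachment processor expects the file content
    # as a base64-encoded string in the document body.
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")


def index_pdf(path, endpoint, region, access_key, secret_key):
    # Third-party imports kept local to the function so the helper
    # above stays usable without these packages installed.
    from elasticsearch import Elasticsearch, RequestsHttpConnection
    from requests_aws4auth import AWS4Auth

    # SigV4-sign every request with the IAM credentials;
    # "es" is the service name for the AWS Elasticsearch Service.
    awsauth = AWS4Auth(access_key, secret_key, region, "es")
    es = Elasticsearch(
        hosts=[{"host": endpoint, "port": 443}],
        http_auth=awsauth,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
    )

    # One-time setup: a pipeline that extracts text from the "data" field.
    es.ingest.put_pipeline(
        id="attachment",
        body={
            "description": "Extract PDF content",
            "processors": [{"attachment": {"field": "data"}}],
        },
    )

    # Index the PDF through the pipeline (index name is a placeholder).
    es.index(
        index="pdfs",
        pipeline="attachment",
        body={"filename": Path(path).name, "data": pdf_to_base64(path)},
    )


if __name__ == "__main__":
    # All values below are placeholders.
    index_pdf(
        "/tmp/es/sample.pdf",
        "my-domain.us-east-1.es.amazonaws.com",
        "us-east-1",
        "AWS_ACCESS_KEY",
        "AWS_SECRET_KEY",
    )
```

This obviously loses FSCrawler's scheduled crawling and file-change detection, so I would still prefer a way to point FSCrawler itself at the AWS endpoint if that is possible.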