
I am trying to launch a Druid supervisor to ingest Parquet data stored in Hadoop, but I get the following error and cannot find any information about it:

"error": "Could not resolve type id 'index_hadoop' as a subtype of [simple type, class io.druid.indexing.overlord.supervisor.SupervisorSpec]: known type ids = [NoopSupervisorSpec, kafka]\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP)

I tried to fix it by adding the Hadoop deep storage, Parquet, and Avro extensions to the extensions load list, but that did not work.
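For context, the extensions load list referred to above is the `druid.extensions.loadList` property in `common.runtime.properties`. A typical entry with those three extensions (extension names as published in Druid's extensions list; your Druid version may use slightly different names) looks like:

```properties
# common.runtime.properties (sketch; adjust names to your Druid version)
druid.extensions.loadList=["druid-hdfs-storage", "druid-parquet-extensions", "druid-avro-extensions"]
```

All services must be restarted for changes to the load list to take effect.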

Here is my supervisor JSON config:

{
  "type" : "index_hadoop",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "hadoop-batch-timeseries",
      "parser" : {
        "type": "parquet",
        "parseSpec" : {
            "format" : "parquet",
            "flattenSpec": {
                "useFieldDiscovery": true,
                "fields": [
                ]
            },
            "timestampSpec" : {
                "column" : "timestamp",
                "format" : "auto"
            },
            "dimensionsSpec" : {
                "dimensions": [ "installation", "var_id", "value" ],
                "dimensionExclusions" : [],
                "spatialDimensions" : []
            }
        }
      },
      "metricsSpec" : [
        {
          "type" : "count",
          "name" : "count"
        }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "DAY",
        "queryGranularity" : "NONE",
        "intervals" : [ "2018-10-01/2018-11-30" ]
      }
    },
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "granularity",
        "dataGranularity": "day",
        "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
        "inputPath": "/warehouse/tablespace/external/hive/demo.db/integers",
        "filePattern": "*.parquet",
        "pathFormat": "'year'=yyy/'month'=MM/'day'=dd"
      }
    },
    "tuningConfig" : {
      "type": "hadoop"
    }
  },
  "hadoopDependencyCoordinates": "3.1.0"
}

1 Answer


I ran into the same problem. I solved it by submitting the spec as a task instead of as a supervisor:

curl -X POST -H 'Content-Type: application/json' -d @my-spec.json http://my-druid-coordinator-url:8081/druid/indexer/v1/task
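This works because `index_hadoop` is a batch task type, not a supervisor type: the error message lists `kafka` and `NoopSupervisorSpec` as the only known supervisor specs. A minimal sketch of the routing logic, assuming the two overlord endpoints (`/druid/indexer/v1/task` from the answer above, and `/druid/indexer/v1/supervisor` for streaming supervisor specs):

```shell
#!/bin/sh
# Pick the overlord endpoint based on the spec's top-level "type".
# Assumption: only the types named in the error message are supervisor specs;
# everything else (index_hadoop, index, ...) must be POSTed as a task.
endpoint_for_type() {
  case "$1" in
    kafka|NoopSupervisorSpec) echo "/druid/indexer/v1/supervisor" ;;
    *)                        echo "/druid/indexer/v1/task" ;;
  esac
}

endpoint_for_type index_hadoop   # prints /druid/indexer/v1/task
```

So the same spec file is fine; only the URL it is POSTed to changes.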
answered 2020-09-09T15:38:34.663