I am trying to read a JSON file with Spark 1.4.1 DataFrames and navigate inside it. The inferred schema seems to be incorrect.
The JSON file is:
{
  "FILE": {
    "TUPLE_CLI": [{
      "ID_CLI": "C3-00000004",
      "TUPLE_ABO": [{
        "ID_ABO": "T0630000000000004",
        "TUPLE_CRA": {
          "CRA": "T070000550330",
          "EFF": "Success"
        },
        "TITRE_ABO": ["Mr", "OOESGUCKDO"],
        "DATNAISS": "1949-02-05"
      },
      {
        "ID_ABO": "T0630000000100004",
        "TUPLE_CRA": [{
          "CRA": "T070000080280",
          "EFF": "Success"
        },
        {
          "CRA": "T070010770366",
          "EFF": "Failed"
        }],
        "TITRE_ABO": ["Mrs", "NP"],
        "DATNAISS": "1970-02-05"
      }]
    },
    {
      "ID_CLI": "C3-00000005",
      "TUPLE_ABO": [{
        "ID_ABO": "T0630000000000005",
        "TUPLE_CRA": [{
          "CRA": "T070000200512",
          "EFF": "Success"
        },
        {
          "CRA": "T070010410078",
          "EFF": "Success"
        }],
        "TITRE_ABO": ["Miss", "OB"],
        "DATNAISS": "1926-11-22"
      }]
    }]
  }
}
The Spark code is:
val j = sqlContext.read.json("/user/arthur/test.json")
j.printSchema
The result is:
root
|-- FILE: struct (nullable = true)
| |-- TUPLE_CLI: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- ID_CLI: string (nullable = true)
| | | |-- TUPLE_ABO: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- DATNAISS: string (nullable = true)
| | | | | |-- ID_ABO: string (nullable = true)
| | | | | |-- TITRE_ABO: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- TUPLE_CRA: string (nullable = true)
TUPLE_CRA is clearly an array of structs, and I don't understand why it is not inferred that way. In my opinion, the inferred schema should be:
root
|-- FILE: struct (nullable = true)
| |-- TUPLE_CLI: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- ID_CLI: string (nullable = true)
| | | |-- TUPLE_ABO: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- DATNAISS: string (nullable = true)
| | | | | |-- ID_ABO: string (nullable = true)
| | | | | |-- TITRE_ABO: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- TUPLE_CRA: array (nullable = true)
| | | | | | |-- element: struct (containsNull = true)
| | | | | | | |-- CRA: string (nullable = true)
| | | | | | | |-- EFF: string (nullable = true)
Does anyone have an explanation for this? And when the JSON layout is more complex, is there a way to easily tell Spark what the actual schema is?
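The only workaround I can think of is to skip inference entirely and hand the reader an explicit schema, since DataFrameReader has a schema(...) method in Spark 1.4. Below is a minimal sketch of that approach; the types are just my own reading of the file above, and it assumes TUPLE_CRA is always an array of {CRA, EFF} structs, which the first record does not actually satisfy, so that record may not parse cleanly under it:

import org.apache.spark.sql.types._

// Hand-built schema mirroring the structure I expect.
// Assumption: TUPLE_CRA is always an array of {CRA, EFF} structs.
val craType = ArrayType(StructType(Seq(
  StructField("CRA", StringType),
  StructField("EFF", StringType))))

val aboType = ArrayType(StructType(Seq(
  StructField("DATNAISS", StringType),
  StructField("ID_ABO", StringType),
  StructField("TITRE_ABO", ArrayType(StringType)),
  StructField("TUPLE_CRA", craType))))

val cliType = ArrayType(StructType(Seq(
  StructField("ID_CLI", StringType),
  StructField("TUPLE_ABO", aboType))))

val schema = StructType(Seq(
  StructField("FILE", StructType(Seq(
    StructField("TUPLE_CLI", cliType))))))

// Passing a schema disables inference and uses the given types as-is.
val j2 = sqlContext.read.schema(schema).json("/user/arthur/test.json")
j2.printSchema

This works, but writing the whole StructType by hand is very verbose for a deeply nested document, hence my question about an easier way.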