 >>> from pyspark.sql import SQLContext
 >>> sqlContext = SQLContext(sc)
 >>> rdd = sqlContext.jsonFile("tmp.json")
 >>> rdd_new = rdd.map(lambda x: (x.name, x.age))

It works fine. But I have a list of values list1=["name","age","gene","xyz",.....], and when I pass

    for each_value in list1:
        rdd_new = rdd.map(lambda x: x.each_value)

I am getting an error.
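A likely cause of that error (a guess, since the traceback isn't shown): `x.each_value` looks up an attribute literally named `each_value`, not the field whose name is stored in the loop variable. In plain Python, `getattr` is how you look up an attribute by a name held in a variable (the `row` below is a hypothetical stand-in for a Spark Row):

```python
from types import SimpleNamespace

# Hypothetical row standing in for a Spark Row object.
row = SimpleNamespace(name="Alice", age=30)

field = "name"
# row.field would raise AttributeError: there is no attribute literally
# called "field". getattr looks up the name stored in the variable instead.
value = getattr(row, field)
print(value)  # Alice
```
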

1 Answer


I think you need to pass in the names of the fields you want to select. In that case, see the following:

    r1 = ssc.jsonFile("test.json")
    r1.printSchema()
    r1.show()

    l1 = ['number', 'string']
    s1 = r1.select(*l1)
    s1.printSchema()
    s1.show()

root
 |-- array: array (nullable = true)
 |    |-- element: long (containsNull = true)
 |-- boolean: boolean (nullable = true)
 |-- null: string (nullable = true)
 |-- number: long (nullable = true)
 |-- object: struct (nullable = true)
 |    |-- a: string (nullable = true)
 |    |-- c: string (nullable = true)
 |    |-- e: string (nullable = true)
 |-- string: string (nullable = true)

array                boolean null number object  string     
ArrayBuffer(1, 2, 3) true    null 123    [b,d,f] Hello World
root
 |-- number: long (nullable = true)
 |-- string: string (nullable = true)

number string     
123    Hello World

This is done with DataFrames. Note how the argument list is passed. For more information you can check this link.
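The `*l1` in the answer is plain Python argument unpacking, not anything Spark-specific; a minimal sketch of what it does (the `select` function below is a hypothetical stand-in, not the DataFrame method):

```python
def select(*cols):
    # Stand-in for DataFrame.select: just report which columns it received.
    return list(cols)

l1 = ['number', 'string']

# select(l1) would pass a single argument (the whole list), while
# select(*l1) unpacks it into select('number', 'string').
print(select(*l1))  # ['number', 'string']
print(select(l1))   # [['number', 'string']]
```
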

Answered 2015-05-27T13:29:51.993