
In PySpark (the Python API for Spark/Hadoop): I want to look up a keyword, e.g. "SJC", in a dataset and return the text from the second column of every row where the keyword "SJC" is found.

For example, the following dataset reads:

[Year] [Delay] [Destination] [Flight #]

|1987| |-5| |SJC| |500|
|1987| |-5| |SJC| |250|
|1987| |07| |SFO| |700|
|1987| |09| |SJC| |350|
|1987| |-5| |SJC| |650|

I'd like to be able to query for "SJC" and get the [Delay] values back as a list or as a string.
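For the sample above, the expected result for "SJC" would be the [Delay] values of the matching rows, i.e. something like:

['-5', '-5', '09', '-5']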

I've gotten this far, but with no luck:

import sys
from pyspark import SparkContext

logFile = "hdfs://<ec2 host address>:9000/<dataset folder (on ec2)>"
sc = SparkContext("local", "simple app")
logData = sc.textFile(logFile).cache()
numSJC = logData.filter(lambda line: 'SJC' in line).first()

print "Lines with SJC:" + ''.join(numSJC)

Thanks for your help!


1 Answer


You've almost got it yourself.

Suppose you have a pipe-delimited file `/tmp/demo.txt`:

Year|Delay|Dest|Flight #
1987|-5|SJC|500
1987|-5|SJC|250
1987|07|SFO|700
1987|09|SJC|350
1987|-5|SJC|650

In PySpark, you would do it like this:

# First, point Spark to the file
log = sc.textFile('file:///tmp/demo.txt')
# Second, replace each line with a list of its values, so the string
# '1987|-5|SJC|500' becomes ['1987', '-5', 'SJC', '500']
log = log.map(lambda line: line.split('|'))
# Now filter, keeping only the rows whose 3rd element equals 'SJC'
log = log.filter(lambda x: x[2] == 'SJC')
# Now keep only the second column, 'Delay'
log = log.map(lambda x: x[1])
# And here's the result
log.collect()
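And if you'd rather have the delays as a single string instead of a list, here is a minimal sketch building on the code above (the comma separator is just an arbitrary choice):

# collect() pulls the filtered Delay values back to the driver as a Python list
delays = log.collect()        # ['-5', '-5', '09', '-5']
# join them into one comma-separated string if that's the shape you need
print(','.join(delays))       # prints: -5,-5,09,-5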
answered 2015-01-13T15:39:52.467