
I have the following table as an RDD:

Key Value
1    y
1    y
1    y
1    n
1    n
2    y
2    n
2    n

I want to remove all the duplicates from Value.

Output should come like this:

Key Value
1    y
1    n
2    y
2    n

Working in pyspark, the output should come as a list of key-value pairs like this:

[(u'1',u'n'),(u'2',u'n')]

I don't know how to apply a for loop here. In a normal Python program it would have been very easy.

I wonder whether there is some function in pyspark for this.


3 Answers


I'm afraid I have no knowledge of Python, so all the references and code I provide in this answer are relative to Java. However, it should not be very difficult to translate it into Python code.

You should take a look at Spark's official documentation, which provides a list of all the transformations and actions supported by Spark.

If I'm not mistaken, the best approach (in your case) would be to use the distinct() transformation, which returns a new dataset containing the distinct elements of the source dataset (taken from the docs). In Java it would be something like this:

JavaPairRDD<Integer,String> myDataSet = //already obtained somewhere else
JavaPairRDD<Integer,String> distinctSet = myDataSet.distinct();
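
Since the question is about pyspark, a rough Python translation of the snippet above (my own sketch, assuming a SparkContext named sc already exists) could be:

# Sketch of a pyspark equivalent; `sc` is assumed to be an existing SparkContext.
myDataSet = sc.parallelize([(1, 'y'), (1, 'y'), (1, 'n'), (2, 'y'), (2, 'n')])

# distinct() returns a new RDD containing only the unique (key, value) pairs.
distinctSet = myDataSet.distinct()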

So, for instance:

Partition 1:

1-y | 1-y | 1-y | 2-y
2-y | 2-n | 1-n | 1-n

Partition 2:

2-g | 1-y | 2-y | 2-n
1-y | 2-n | 1-n | 1-n

Would be transformed into:

Partition 1:

1-y | 2-y
1-n | 2-n 

Partition 2:

1-y | 2-g | 2-y
1-n | 2-n |

Of course, you would still have multiple RDD datasets, each containing a list of distinct elements.
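
To see what actually ends up in each partition, here is a small sketch (my own illustration, not part of the original answer) that uses pyspark's glom(), again assuming an existing SparkContext sc:

# Inspect partition contents before and after distinct().
# Assumes an existing SparkContext `sc`; the data mirrors the example above.
pairs = sc.parallelize(
    [(1, 'y'), (1, 'y'), (1, 'y'), (1, 'n'), (1, 'n'), (2, 'y'), (2, 'n'), (2, 'n')],
    2)  # split across two partitions

print(pairs.glom().collect())             # per-partition contents, duplicates included
print(pairs.distinct().glom().collect())  # per-partition contents after duplicates are removed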

Answered 2014-09-18T14:10:05.133

This problem is simple to solve using the distinct operation of the pyspark library from Apache Spark.

from pyspark import SparkContext, SparkConf

if __name__ == "__main__":
    # Set up a SparkContext for local testing
    sc = SparkContext(appName="distinctTuples", conf=SparkConf().set("spark.driver.host", "localhost"))

    # Define the dataset
    dataset = [(u'1',u'y'),(u'1',u'y'),(u'1',u'y'),(u'1',u'n'),(u'1',u'n'),(u'2',u'y'),(u'2',u'n'),(u'2',u'n')]

    # Parallelize and partition the dataset
    # so that the partitions can be operated
    # upon via multiple worker processes.
    allTuplesRdd = sc.parallelize(dataset, 4)

    # Remove duplicates
    distinctTuplesRdd = allTuplesRdd.distinct()

    # Merge the results from all of the workers
    # into the driver process.
    distinctTuples = distinctTuplesRdd.collect()

    print('Output: %s' % distinctTuples)

This will print the following output:

Output: [(u'1',u'y'),(u'1',u'n'),(u'2',u'y'),(u'2',u'n')]
Answered 2015-06-23T16:18:16.423

If you want to remove all duplicates from a particular column or set of columns, i.e. perform a distinct on a set of columns, then pyspark has the function dropDuplicates, which accepts a specific set of columns to deduplicate on.

For example:

df.dropDuplicates(['value']).show()
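
For completeness, here is a minimal self-contained sketch of how this might look (the column names key and value and the SparkSession setup are my own assumptions; dropDuplicates operates on a DataFrame rather than an RDD):

from pyspark.sql import SparkSession

# Minimal sketch; assumes Spark 2.x+ where SparkSession is available.
spark = SparkSession.builder.appName("dropDuplicatesExample").getOrCreate()

# Column names 'key' and 'value' are assumptions for illustration.
df = spark.createDataFrame(
    [(u'1', u'y'), (u'1', u'y'), (u'1', u'n'), (u'2', u'y'), (u'2', u'n')],
    ['key', 'value'])

# Keep one row per distinct (key, value) combination.
df.dropDuplicates(['key', 'value']).show()

# Or, as in the answer above, deduplicate on the 'value' column only.
df.dropDuplicates(['value']).show()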
Answered 2015-08-17T22:30:27.497