
The situation is as follows: working on an enterprise cluster with Spark 2.3, I want to run pandas_udf, which requires pyarrow, which in turn requires NumPy 1.14 (AFAIK). I have been able to distribute pyarrow (I think; I have no way of verifying this 100%):

import pyspark

spark = pyspark.sql.SparkSession.builder.appName("pandas_udf_poc")\
    .config("spark.executor.instances", "2")\
    .config("spark.executor.memory", "8g")\
    .config("spark.driver.memory", "8g")\
    .config("spark.driver.maxResultSize", "8g")\
    .config("py-files", "pyarrow_depnd.zip")\
    .getOrCreate()

spark.sparkContext.addPyFile("pyarrow_depnd.zip")

The zip is the result of a pip install into a directory and then zipping that directory.
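One check I could run to at least see what the executors import (a rough sketch, assuming the zip was shipped via addPyFile as above):

def check_imports(_):
    # runs on an executor: report which pyarrow/numpy it can actually import
    try:
        import pyarrow
        import numpy
        return (pyarrow.__version__, numpy.__version__)
    except ImportError as e:
        return str(e)

spark.sparkContext.parallelize(range(2), 2).map(check_imports).collect()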

But pyarrow does not play along with the nodes' NumPy 1.13. I guess I could try to distribute a full environment to all nodes, but my question is: is there a way to avoid that and make the nodes use a different NumPy (one that is already included in the pyarrow zip)?

Thanks


1 Answer


Well, in the end I did not have to use a virtual environment, but there was no way to avoid distributing a full copy of Python (with the required dependencies) to all the nodes.

First, I built a full copy of Python (I did use a conda env, but you could probably do it some other way):

conda create --prefix /home/me/env_conda_for_pyarrow
source activate /home/me/env_conda_for_pyarrow
conda install numpy 
conda install pyarrow

In this specific case I had to enable the conda-forge channel before installing, in order to get the latest versions.
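A quick sanity check before zipping, run with the env's own interpreter, would be something like:

# run with /home/me/env_conda_for_pyarrow/bin/python
import numpy
import pyarrow

print(numpy.__version__)    # pyarrow needs a recent NumPy (1.14+ in my case)
print(pyarrow.__version__)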

Second, zip it up for distribution:

zip -r env_conda_for_pyarrow.zip env_conda_for_pyarrow

Then distribute the zip via spark.yarn.dist.archives and point the PYSPARK_PYTHON env var at it:

import os, sys
os.environ['PYSPARK_PYTHON']="dist_python/env_conda_for_pyarrow/bin/python"

import pyspark
spark = \
    pyspark.sql.SparkSession.builder.appName("pyspark_python")\
    .config("spark.yarn.dist.archives", "env_conda_for_pyarrow.zip#dist_python")\
    .getOrCreate()

print(spark.version, spark.sparkContext.master)

And that's it, done. Here are some of the scripts I used for testing:

def list_nodes_dir(x):  # hack to see the workers' file dirs
    import os
    return os.listdir('dist_python')

spark.sparkContext.parallelize(range(1), 1).map(list_nodes_dir).collect()    



def npv(x):  # hack to see the workers' numpy version
    import numpy as np
    return np.__version__

set(spark.sparkContext.parallelize(range(10), 10).map(npv).collect())
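In the same spirit, one could also check which interpreter the workers actually end up running (an extra check along the same lines, using sys.executable):

def worker_python(x):  # hack to see which python binary the workers run
    import sys
    return sys.executable

set(spark.sparkContext.parallelize(range(10), 10).map(worker_python).collect())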



# example from the Spark documentation
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import IntegerType, StringType
slen = pandas_udf(lambda s: s.str.len(), IntegerType())  

@pandas_udf(StringType())  
def to_upper(s):
    return s.str.upper()

@pandas_udf("integer", PandasUDFType.SCALAR)  
def add_one(x):
    return x + 1

df = spark.createDataFrame([(1, "John Doe", 21)], ("id", "name", "age"))  
df.select(slen("name").alias("slen(name)"), to_upper("name"),
          add_one("age")).show()
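For completeness, Spark 2.3 also supports grouped-map pandas UDFs; here is a sketch adapted from the Spark documentation (df2 and subtract_mean are just illustrative names):

from pyspark.sql.functions import pandas_udf, PandasUDFType

df2 = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0)], ("id", "v"))

@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    # pdf is a pandas.DataFrame containing all rows of one id group
    return pdf.assign(v=pdf.v - pdf.v.mean())

df2.groupby("id").apply(subtract_mean).show()
# id=1 rows become -0.5 and 0.5; id=2 rows become -1.0 and 1.0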
answered 2018-12-24T12:02:21.533