I am trying to run a Hadoop streaming job to process geospatial data. For this I am using Shapely functions, which require libgeos.
However, the job fails because libgeos is not installed on the cluster.
Is there a way to ship libgeos to the cluster and have Shapely read the .so files from a directory (possibly via -archives or -files)?
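For example, one idea I had is to preload the bundled library before importing Shapely. Here is a minimal sketch of what I mean, assuming anaconda.zip also bundles the GEOS C library under anaconda/lib (that path and filename are my guess, not something I have verified on the cluster):

import ctypes
import os

# Assumption: -archives unpacks anaconda.zip as ./anaconda in the task's
# working directory, and the zip carries the GEOS C library at this path.
libgeos = os.path.join(os.getcwd(), "anaconda", "anaconda", "lib", "libgeos_c.so.1")

# Load it with RTLD_GLOBAL; my understanding is that Shapely's later
# CDLL('libgeos_c.so.1') fallback should then resolve to this copy,
# since dlopen matches already-loaded libraries by SONAME.
ctypes.CDLL(libgeos, mode=ctypes.RTLD_GLOBAL)

import shapely  # should now import without the OSError shown below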
Example run command:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D stream.num.map.output.key.fields=2 -D mapred.text.key.partitioner.options=-k1,1 -archives hdfs://namenode:port/user/anaconda.zip#anaconda -files /some/other/stuff -input /path/to/input -output /user/geo_stuff -file /home/mr_files/mapper.py -mapper "mapper.py"
And mapper.py starts off like...
#!./anaconda/anaconda/bin/python
# Interpreter resolved from the unpacked -archives zip (anaconda.zip#anaconda)

import shapely
from cartopy.io import shapereader
from shapely.geometry import Point

# ...more stuff
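To see what the task container actually receives, I am also considering a quick debug dump at the top of the mapper (names here are just for illustration; output goes to stderr so it lands in the task logs):

import os
import sys

# Debug helper: confirm where the archive unpacked and whether any GEOS
# shared objects made it into the task's working directory.
sys.stderr.write("cwd: %s\n" % os.getcwd())
for root, dirs, files in os.walk("./anaconda"):
    for name in files:
        if "geos" in name:
            sys.stderr.write(os.path.join(root, name) + "\n")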
This produces the following error:
    from shapely.geos import lgeos
  File "./anaconda/anaconda/lib/python2.7/site-packages/shapely/geos.py", line 58, in <module>
    _lgeos = load_dll('geos_c', fallbacks=['libgeos_c.so.1', 'libgeos_c.so'])
  File "./anaconda/anaconda/lib/python2.7/site-packages/shapely/geos.py", line 54, in load_dll
    libname, fallbacks or []))
OSError: Could not find library geos_c or load any of its variants ['libgeos_c.so.1', 'libgeos_c.so']