
I am new to Spark and I write my code in Python.

Following my "Learning Spark" guide exactly, I see that "you don't need to have Hadoop installed to run Spark."

Yet when I simply try to count the lines in a file using PySpark, I get the following error. What am I missing?

>>> lines = sc.textFile("README.md")
15/02/01 13:27:12 INFO MemoryStore: ensureFreeSpace(32728) called with curMem=0, maxMem=278019440
15/02/01 13:27:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 265.1 MB)
>>> lines.count()
15/02/01 13:27:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/01 13:27:18 WARN LoadSnappy: Snappy native library not loaded
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 847, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 838, in sum
    return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 759, in reduce
    vals = self.mapPartitions(func).collect()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 723, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
        at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:305)
        at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)

>>> lines.first()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1167, in first
    return self.take(1)[0]
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1126, in take
    totalParts = self._jrdd.partitions().size()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:50)
        at org.apache.spark.api.java.JavaRDD.partitions(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)

>>>

4 Answers

I have not tried running Spark on a Windows system, but it seems to me the problem is:

py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect. : org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md

You have to reference the file you want to load correctly. If you run pyspark from the Spark folder (i.e. C:\spark), then lines = sc.textFile("README.md") is correct. However, if you run pyspark from bin (i.e. C:\spark\bin), you have to reference it as lines = sc.textFile("../README.md"), or use the absolute path to the file.
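
A minimal sketch of the three options, assuming Spark is installed at C:\spark (the install location is an assumption):

# Started pyspark from C:\spark: the relative path resolves against the working directory
lines = sc.textFile("README.md")

# Started pyspark from C:\spark\bin: go up one directory first
lines = sc.textFile("../README.md")

# Works from any working directory: an absolute path with an explicit scheme
lines = sc.textFile("file:///C:/spark/README.md")

lines.count()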

Answered 2015-02-01T10:04:48.527

Here is the solution for this error, which I hit on a Spark cluster hosted in Windows:

Load the raw HVAC.csv file and parse it using the function:

data = sc.textFile("wasb:///HdiSamples/SensorSampleData/hvac/HVAC.csv")

We use (wasb:///) to allow Hadoop to access the Azure blob storage file; the three slashes are a relative reference to the running node's container folder.

For example: if the path for your file in File Explorer in the Spark cluster dashboard is:

sflcc1\sflccspark1\HdiSamples\SensorSampleData\hvac

The path is described as follows: sflcc1 is the name of the storage account; sflccspark is the cluster node name.

So we refer to the current cluster node name with the relative three slashes.
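
As a sketch, the same file can be addressed either relative to the cluster's default container or with a fully qualified wasb URI (the container and storage account names below reuse this answer's names and are assumptions):

# Relative form: the three slashes point into the cluster's default container
data = sc.textFile("wasb:///HdiSamples/SensorSampleData/hvac/HVAC.csv")

# Fully qualified form: wasb://<container>@<storage-account>.blob.core.windows.net/<path>
data = sc.textFile("wasb://sflccspark1@sflcc1.blob.core.windows.net/HdiSamples/SensorSampleData/hvac/HVAC.csv")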

Hope this helps.

Answered 2016-02-18T23:59:00.587

I'm a little late to the party. I had a similar issue (EC2 Spark cluster). In my case, HDFS didn't have the file I was looking for, so I had to manually add the file I wanted using the following command

~/ephemeral-hdfs/bin/hadoop fs -put /dir/filename.txt filename.txt
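
Once the file is in HDFS it can be read back from PySpark; a sketch (the /user/root home directory is an assumption about the cluster's HDFS layout):

# A relative destination in "hadoop fs -put" lands in the HDFS home directory
lines = sc.textFile("hdfs:///user/root/filename.txt")
lines.count()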

Hope this helps.

Answered 2015-04-04T21:22:58.933

I ran into the same problem and solved it as follows:

scala> val textFile = spark.read.textFile("file:///usr/local/spark-3.1.2/README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]
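
For reference, a rough PySpark equivalent of this read (a sketch, not part of the original answer):

# spark.read.text returns a DataFrame with a single "value" column
lines = spark.read.text("file:///usr/local/spark-3.1.2/README.md")
lines.count()
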
Answered 2021-10-13T08:56:46.287