I am learning Hadoop, machine learning, and Spark. I have downloaded the Cloudera 5.7 QuickStart VM. I also downloaded the examples from https://github.com/apache/spark as a zip file and copied them into the Cloudera VM. I am having trouble running any of the machine-learning examples from that repository. I tried to run the simple word2vec example, but it failed. Below are my steps and the error I get.
[cloudera@quickstart.cloudera] cd /spark-master/examples/src/main/python/ml
[cloudera@quickstart.cloudera] spark-submit word2vec_example.py
Every example I try to run fails with the following error.
Traceback (most recent call last):
  File "/home/cloudera/training/spark-master/examples/src/main/python/ml/word2vec_example.py", line 23, in <module>
    from pyspark.sql import SparkSession
I searched for the file pyspark.sql, but the only match I could find was the following:

cd /spark-master
find . -name pyspark.sql
./python/docs/pyspark.sql.rst
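I also noticed that pyspark.sql is a Python package (a directory under python/pyspark/), not a single file, so searching for that exact name misses it. A search I could run from the spark-master checkout to locate the package itself (the path shown is what I would expect in the Spark source tree):

```shell
# pyspark.sql is a package directory, so look for its __init__.py
# rather than a file literally named "pyspark.sql".
find . -path "*pyspark/sql*" -name "__init__.py"
# e.g. ./python/pyspark/sql/__init__.py
```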
Please advise how I can resolve these errors so that I can run this example and make progress with machine learning and big data.
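I suspect a version mismatch, since SparkSession was only introduced in Spark 2.0 while CDH 5.7 ships Spark 1.6, whose DataFrame entry point is SQLContext instead. A small probe I could run (assuming pyspark is importable, e.g. via spark-submit) to confirm which entry point my installation actually provides:

```python
# Probe which DataFrame entry point the installed PySpark exposes.
# SparkSession exists only in Spark >= 2.0; Spark 1.x uses SQLContext.
try:
    from pyspark.sql import SparkSession  # Spark 2.x and later
    entry_point = "SparkSession"
except ImportError:
    entry_point = "SQLContext"  # Spark 1.x equivalent entry point

print(entry_point)
```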
The code for the word2vec example is below:
cat word2vec_example.py
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
# $example on$
from pyspark.ml.feature import Word2Vec
# $example off$
from pyspark.sql import SparkSession
if __name__ == "__main__":
    spark = SparkSession\
        .builder\
        .appName("Word2VecExample")\
        .getOrCreate()

    # $example on$
    # Input data: Each row is a bag of words from a sentence or document.
    documentDF = spark.createDataFrame([
        ("Hi I heard about Spark".split(" "), ),
        ("I wish Java could use case classes".split(" "), ),
        ("Logistic regression models are neat".split(" "), )
    ], ["text"])

    # Learn a mapping from words to Vectors.
    word2Vec = Word2Vec(vectorSize=3, minCount=0, inputCol="text", outputCol="result")
    model = word2Vec.fit(documentDF)
    result = model.transform(documentDF)
    for feature in result.select("result").take(3):
        print(feature)
    # $example off$

    spark.stop()