
After installing Anaconda on my Windows 10 machine, I followed this tutorial to set it up and run it with Jupyter: https://changhsinlee.com/install-pyspark-windows-jupyter/

  • The Spark version is 3.1.2 and Python is 3.8.8, so they are compatible. Now I am integrating Kafka with PySpark; this is my code:
import findspark

findspark.init()

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
import time

kafka_topic_name = "test_spark"
kafka_bootstrap_servers = '192.168.1.3:9092'

spark = SparkSession \
    .builder \
    .appName("PySpark Structured Streaming with Kafka and Message Format as JSON") \
    .master("local[*]") \
    .getOrCreate()

# Construct a streaming DataFrame that reads from the test_spark topic
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", kafka_bootstrap_servers) \
    .option("subscribe", kafka_topic_name) \
    .option("startingOffsets", "latest") \
    .load()

Here it shows me an error telling me that I need to deploy the connector:

AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".

I went to that page and found that the command to deploy it is: ./bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 ...

  • I navigated to where my Spark folder is located and ran the command in PowerShell as administrator, but I got the following error:
PS D:\Spark\spark-3.1.2-bin-hadoop3.2> .\bin\spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 ...
:: loading settings :: url = jar:file:/D:/Spark/spark-3.1.2-bin-hadoop3.2/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: C:\Users\T460S\.ivy2\cache
The jars for the packages stored in: C:\Users\T460S\.ivy2\jars
org.apache.spark#spark-sql-kafka-0-10_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-03401043-c6c7-40dd-8667-8001083bfb4c;1.0
        confs: [default]
        found org.apache.spark#spark-sql-kafka-0-10_2.12;3.1.2 in central
        found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.1.2 in central
        found org.apache.kafka#kafka-clients;2.6.0 in central
        found com.github.luben#zstd-jni;1.4.8-1 in central
        found org.lz4#lz4-java;1.7.1 in central
        found org.xerial.snappy#snappy-java;1.1.8.2 in central
        found org.slf4j#slf4j-api;1.7.30 in local-m2-cache
        found org.spark-project.spark#unused;1.0.0 in central
        found org.apache.commons#commons-pool2;2.6.2 in central
:: resolution report :: resolve 595ms :: artifacts dl 19ms
        :: modules in use:
        com.github.luben#zstd-jni;1.4.8-1 from central in [default]
        org.apache.commons#commons-pool2;2.6.2 from central in [default]
        org.apache.kafka#kafka-clients;2.6.0 from central in [default]
        org.apache.spark#spark-sql-kafka-0-10_2.12;3.1.2 from central in [default]
        org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.1.2 from central in [default]
        org.lz4#lz4-java;1.7.1 from central in [default]
        org.slf4j#slf4j-api;1.7.30 from local-m2-cache in [default]
        org.spark-project.spark#unused;1.0.0 from central in [default]
        org.xerial.snappy#snappy-java;1.1.8.2 from central in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
        ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-03401043-c6c7-40dd-8667-8001083bfb4c
        confs: [default]
        0 artifacts copied, 9 already retrieved (0kB/19ms)
21/06/23 19:32:08 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.spark.SparkException: Failed to get main class in JAR with error 'D:\Spark\spark-3.1.2-bin-hadoop3.2\... (Accès refusé)'.  Please specify one with --class.
        at org.apache.spark.deploy.SparkSubmit.error(SparkSubmit.scala:968)
        at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:486)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I tried looking for a solution but nothing worked. I don't know what to pass in the --class argument they tell me to add, and it says "Accès refusé", which means access denied. I don't understand this, so can anyone tell me what to do?

  • PS: the environment variables are all in place and working perfectly, so I don't think the problem comes from that.

1 Answer


Similar error (and the same answer) - Spark Kafka Data Consuming Package

Did you literally write ... after the --packages option?

The error is telling you to provide either a .py file or --class with a JAR that contains your application code.
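For context, a complete invocation puts the application after the --packages option; a minimal sketch, assuming a hypothetical script named my_streaming_app.py in the Spark folder:

.\bin\spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 my_streaming_app.py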

If you did provide one, then it seems the Spark user cannot access the D:\ drive path you gave, and you may need to use winutils chmod to change its permissions, as sketched below.
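A minimal sketch of that permission change, assuming winutils.exe from your Hadoop distribution is on the PATH and the application sits at a hypothetical path D:\apps\my_streaming_app.py:

winutils.exe chmod 777 D:\apps\my_streaming_app.py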


If you want to run the code in Jupyter, you can also add --packages there:

import os

SCALA_VERSION = '2.12'
SPARK_VERSION = '3.1.2'

os.environ['PYSPARK_SUBMIT_ARGS'] = f'--packages org.apache.spark:spark-sql-kafka-0-10_{SCALA_VERSION}:{SPARK_VERSION} pyspark-shell'

import findspark
import pyspark

findspark.init()

...
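Note that PYSPARK_SUBMIT_ARGS is read when the JVM is launched, so it must be set before findspark.init() runs and before the first SparkSession is created, as in the snippet above.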

Or use findspark.add_packages() - https://github.com/minrk/findspark/pull/11
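A minimal sketch of that alternative, assuming a findspark version that includes the add_packages() helper from the linked PR:

import findspark

# Assumption: add_packages() injects --packages into PYSPARK_SUBMIT_ARGS,
# so it has to run before findspark.init() starts the JVM.
findspark.add_packages('org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2')
findspark.init()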

answered 2021-06-23 19:29