
I am running a Kubernetes cluster, and on separate occasions I have enabled two service meshes, Istio and Linkerd.

When I try to deploy a Spark Standalone cluster, with the Spark master and each Spark worker running in separate pods, the workers cannot connect to the Spark master.

I can run a curl request from a worker to the master through the service (via the sidecars) and fetch the Spark Master UI. However, when I try to start a Spark worker that connects to the master, it fails.

Here is the service manifest:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: sparkmaster
  name: spark-submit2
  namespace: spark
spec:
  ports:
  - port: 7077
    protocol: TCP
    targetPort: 7077
  selector:
    app: sparkmaster
  type: ClusterIP

For example, when I run

sparkuser@sparkslave-6897c9cdd7-frcgs:~$ /opt/spark-2.4.4-bin-hadoop2.7/sbin/start-slave.sh spark://spark-submit2:7077

I get the error shown below.

What is the correct solution to this problem?

Note: if I go through exactly the same procedure in a namespace where no service mesh is enabled, it works.

20/02/13 15:05:55 INFO Worker: Connecting to master spark-submit2:7077...
20/02/13 15:05:55 INFO TransportClientFactory: Successfully created connection to spark-submit2/10.96.121.191:7077 after 259 ms (0 ms spent in bootstraps)
20/02/13 15:05:55 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from spark-submit2/10.96.121.191:7077 is closed
20/02/13 15:05:55 WARN Worker: Failed to connect to master spark-submit2:7077
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:101)
    at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:109)
    at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:253)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Connection from spark-submit2/10.96.121.191:7077 closed
    at org.apache.spark.network.client.TransportResponseHandler.channelInactive(TransportResponseHandler.java:146)
    at org.apache.spark.network.server.TransportChannelHandler.channelInactive(TransportChannelHandler.java:108)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
    at org.apache.spark.network.util.TransportFrameDecoder.channelInactive(TransportFrameDecoder.java:182)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1354)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
    at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:917)
    at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:822)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    ... 1 more

1 Answer


Service meshes like Istio require the bind address to be 0.0.0.0, so you cannot run a Spark application in cluster mode unless you add configuration that excludes the Spark inbound/outbound ports from the sidecar:

spark.kubernetes.executor.annotation.traffic.sidecar.istio.io/excludeOutboundPorts=7078,7079
spark.kubernetes.driver.annotation.traffic.sidecar.istio.io/excludeInboundPorts=7078,7079
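For a Spark Standalone worker deployed by hand (as in the question) rather than via spark-submit on Kubernetes, the same idea can be expressed directly as Istio sidecar annotations on the pod template. This is a hedged sketch, not a tested manifest; the deployment name, labels, and image are placeholders, and the port list assumes the master port 7077 from the question plus the 7078/7079 ports named above:

```yaml
# Sketch: bypass the Istio sidecar for Spark's RPC ports.
# Names and image are hypothetical; only the annotations matter here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sparkslave            # placeholder name
  namespace: spark
spec:
  selector:
    matchLabels:
      app: sparkslave
  template:
    metadata:
      labels:
        app: sparkslave
      annotations:
        # Traffic on these ports skips the sidecar proxy entirely
        traffic.sidecar.istio.io/excludeOutboundPorts: "7077,7078,7079"
        traffic.sidecar.istio.io/excludeInboundPorts: "7077,7078,7079"
    spec:
      containers:
      - name: spark-worker
        image: example/spark:2.4.4   # placeholder image
```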

The other option is to use Spark client mode and set spark.driver.bindAddress=0.0.0.0. Otherwise, wait for the service meshes to support binding to the pod IP.
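A minimal sketch of the client-mode alternative, assuming the Spark install path and master service name from the question (the example class and jar are just the bundled SparkPi sample):

```shell
# Client mode: the driver runs in the submitting pod, bound to 0.0.0.0
# so traffic forwarded by the sidecar can still reach it.
/opt/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
  --master spark://spark-submit2:7077 \
  --deploy-mode client \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark-2.4.4-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.4.4.jar 100
```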

answered 2021-03-22T15:05:42.853