
I am trying to use Spark 2.1.0 with Jupyter. The Apache Toree SparkR kernel loads correctly, but when I try to execute a cell, an error appears and repeats indefinitely.

Connecting to Spark with the Scala and Python kernels works perfectly. Connecting to Spark from R through RStudio also works perfectly; a minimal sketch of that working setup is shown below.
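For reference, this is roughly how the working RStudio session connects, as a minimal sketch assuming `SPARK_HOME` points at the same Spark 2.1.0 installation (the app name is arbitrary):

    # Load the SparkR package shipped with the Spark 2.1.0 distribution
    library(SparkR, lib.loc = file.path(Sys.getenv("SPARK_HOME"), "R", "lib"))

    # Start a local SparkR session and run a trivial query to verify it works
    sparkR.session(master = "local[*]", appName = "sparkr-check")
    df <- as.DataFrame(faithful)  # built-in R data frame -> Spark DataFrame
    head(df)

    sparkR.session.stop()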

Error log:

    Loading required package: methods

    Attaching package: ‘SparkR’

    The following objects are masked from ‘package:stats’:

        cov, filter, lag, na.omit, predict, sd, var, window

    The following objects are masked from ‘package:base’:

        as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
        rank, rbind, sample, startsWith, subset, summary, transform, union

    Warning message:
    In rm(".sparkRcon", envir = .sparkREnv) : object '.sparkRcon' not found
    [1] "ExistingPort:" "43101"
    Error in value[[3L]](cond) :
      Failed to connect JVM: Error in socketConnection(host = hostname, port = port, server = FALSE, : argument "timeout" is missing, with no default
    Calls: sparkR.connect ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
    Execution halted
    17/05/04 11:04:12 [ERROR] o.a.t.k.i.s.SparkRProcessHandler - null process exited: 1
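The message suggests that SparkR 2.1.0's internal backend connector, whose `timeout` argument has no default, is being called with only two arguments, presumably by the `sparkR.connect` bootstrap function that Toree runs. A minimal sketch that reproduces the same error from a plain R session (`SparkR:::connectBackend` is an internal API; the port number is taken from the log above):

    library(SparkR)

    # In SparkR 2.1.0 the internal helper is declared roughly as
    #   connectBackend(hostname, port, timeout)
    # with no default for `timeout`. Calling it the pre-2.1 way, with two
    # arguments, fails before any connection attempt is made:
    res <- tryCatch(
      SparkR:::connectBackend("localhost", 43101),
      error = function(e) conditionMessage(e)
    )
    print(res)  # the same 'argument "timeout" is missing' error as in the log

If Toree's SparkR bootstrap script was written against a pre-2.1 SparkR, a mismatch like this would explain the failure; a Toree build that passes the timeout explicitly, or a Spark/Toree pairing with matching versions, should avoid it.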
