
Is there a tool to convert Excel files to CSV using Spark 1.x? I ran into the following problem while working through this tutorial: https://github.com/ZuInnoTe/hadoopoffice/wiki/Read-Excel-document-using-Spark-1.x

Exception in thread "main" java.lang.NoClassDefFoundError: org/zuinnote/hadoop/office/format/mapreduce/ExcelFileInputFormat
        at org.zuinnote.spark.office.example.excel.SparkScalaExcelIn$.convertToCSV(SparkScalaExcelIn.scala:63)
        at org.zuinnote.spark.office.example.excel.SparkScalaExcelIn$.main(SparkScalaExcelIn.scala:56)
        at org.zuinnote.spark.office.example.excel.SparkScalaExcelIn.main(SparkScalaExcelIn.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.zuinnote.hadoop.office.format.mapreduce.ExcelFileInputFormat
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

2 Answers


Spark cannot find the file-format class org.zuinnote.hadoop.office.format.mapreduce.ExcelFileInputFormat on the classpath.

Supply the following dependency to spark-submit using the --jars parameter:

<!-- https://mvnrepository.com/artifact/com.github.zuinnote/hadoopoffice-fileformat -->
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>hadoopoffice-fileformat</artifactId>
    <version>1.0.4</version>
</dependency>

Command:

spark-submit --jars hadoopoffice-fileformat-1.0.4.jar  \
#rest of the command arguments
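If several jars are needed, --jars accepts a comma-separated list. Alternatively, spark-submit can resolve the artifact from Maven Central itself via --packages; a sketch, assuming the same coordinates as the POM snippet above (the application jar name and main class are placeholders):

```shell
# Fetch the HadoopOffice file-format artifact from Maven Central at submit time
# (app.jar and the --class value are placeholders for your own application)
spark-submit \
  --packages com.github.zuinnote:hadoopoffice-fileformat:1.0.4 \
  --class org.zuinnote.spark.office.example.excel.SparkScalaExcelIn \
  app.jar
```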
answered 2017-12-13T16:05:19.800

You have to build a fat jar that contains all the necessary dependencies. The example projects on the HadoopOffice page show how to build one. Once you have built the fat/uber jar, you simply use it with spark-submit.
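A minimal sketch of the fat-jar approach with the sbt-assembly plugin (the plugin, Scala, and Spark versions below are assumptions, not taken from the example project):

```scala
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt
name := "spark-excel-to-csv"
scalaVersion := "2.10.6"
// Spark itself is supplied by the cluster, so it is marked "provided"
// and left out of the fat jar; only hadoopoffice-fileformat is bundled.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.3" % "provided",
  "com.github.zuinnote" % "hadoopoffice-fileformat" % "1.0.4"
)
```

Running `sbt assembly` then produces a single jar (under target/scala-2.10/) that can be passed directly to spark-submit without any --jars flags.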

answered 2018-10-01T22:04:01.617