
Could you let me know how to resolve java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.io.orc.OrcStruct.<init>()?

Command used to launch pyspark:

pyspark --jars "hive-exec-0.13.1-cdh5.3.3.jar,hadoop-common-2.5.0-cdh5.3.3.jar,hadoop-mapreduce-client-app-2.5.0-cdh5.3.3.jar,hadoop-mapreduce-client-common-2.5.0-cdh5.3.3.jar,hadoop-mapreduce-client-core-2.5.0-cdh5.3.3.jar,hadoop-core-2.5.0-mr1-cdh5.3.3.jar,hive-metastore-0.13.1-cdh5.3.3.jar"

Command executed in the pyspark shell:

distFile = sc.newAPIHadoopFile(path="orcdatafolder/", inputFormatClass="org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat", keyClass="org.apache.hadoop.io.NullWritable", valueClass="org.apache.hadoop.hive.ql.io.orc.OrcStruct")
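
Note that newAPIHadoopFile is lazy, so this line alone succeeds; the exception below only surfaces once an action forces the OrcStruct values to be converted to Python objects, for example (hypothetical follow-up action):

# Any action that materializes the values triggers the Writable-to-Python
# conversion in which the exception below is raised.
distFile.first()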

Error:

16/07/31 19:49:53 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, sj1dra096.corp.adobe.com): java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.io.orc.OrcStruct.<init>()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.io.WritableUtils.clone(WritableUtils.java:217)
    at org.apache.spark.api.python.WritableToJavaConverter.org$apache$spark$api$python$WritableToJavaConverter$$convertWritable(PythonHadoopUtil.scala:96)
    at org.apache.spark.api.python.WritableToJavaConverter.convert(PythonHadoopUtil.scala:104)
    at org.apache.spark.api.python.PythonHadoopUtil$$anonfun$convertRDD$1.apply(PythonHadoopUtil.scala:183)
    at org.apache.spark.api.python.PythonHadoopUtil$$anonfun$convertRDD$1.apply(PythonHadoopUtil.scala:183)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$26.apply(RDD.scala:1081)
    at org.apache.spark.rdd.RDD$$anonfun$26.apply(RDD.scala:1081)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.io.orc.OrcStruct.<init>()
    at java.lang.Class.getConstructor0(Class.java:2849)
    at java.lang.Class.getDeclaredConstructor(Class.java:2053)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
    ... 28 more
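
The underlying problem is visible in the trace: ReflectionUtils.newInstance (reached via WritableUtils.clone) needs a no-argument constructor, and OrcStruct does not expose a public one, so Spark cannot clone the Writable values on their way into Python. One possible workaround, sketched below under the assumption that this Spark build has Hive support, is to read the ORC data through a HiveContext instead of newAPIHadoopFile; the table name, column list, and location are hypothetical placeholders.

from pyspark.sql import HiveContext

# `sc` is the SparkContext that the pyspark shell already provides.
sqlContext = HiveContext(sc)

# Register the ORC directory as an external Hive table. The table name,
# schema, and LOCATION are placeholders: substitute the real column list
# and the fully qualified HDFS path of orcdatafolder/.
sqlContext.sql(
    "CREATE EXTERNAL TABLE IF NOT EXISTS orc_data (col1 STRING) "
    "STORED AS ORC LOCATION 'hdfs:///path/to/orcdatafolder'"
)

# Spark SQL deserializes the ORC rows through Hive's SerDe, so this path
# should not hit the reflective OrcStruct.<init>() call that fails above.
rows = sqlContext.sql("SELECT * FROM orc_data").collect()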
