
I am using CDH 5.9.0, Spark 1.6, and Scala 2.10.0. I wrote a Scala/Spark program that creates a table and loads data from a file into Hive. It completes successfully when I run it with spark-submit, but when the same program is submitted through Oozie it throws the exception below.

Here is the exception:

    Log Type: stdout
Log Upload Time: Fri Oct 27 10:08:28 -0400 2017
Log Length: 172584
2017-10-27 10:08:20,652 INFO  [main] yarn.ApplicationMaster (SignalLogger.scala:register(47)) - Registered signal handlers for [TERM, HUP, INT]
2017-10-27 10:08:21,306 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - ApplicationAttemptId: appattempt_1507999204018_0292_000001
2017-10-27 10:08:21,952 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing view acls to: username
2017-10-27 10:08:21,953 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing modify acls to: username
2017-10-27 10:08:21,956 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(58)) - SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(username); users with modify permissions: Set(username)
2017-10-27 10:08:21,970 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Starting the user application in a separate Thread
2017-10-27 10:08:21,997 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Waiting for spark context initialization
2017-10-27 10:08:21,998 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Waiting for spark context initialization ... 
2017-10-27 10:08:22,308 WARN  [Driver] security.UserGroupInformation (UserGroupInformation.java:doAs(1701)) - PriviledgedActionException as:username (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2017-10-27 10:08:22,309 WARN  [Driver] ipc.Client (Client.java:run(682)) - Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2017-10-27 10:08:22,310 WARN  [Driver] security.UserGroupInformation (UserGroupInformation.java:doAs(1701)) - PriviledgedActionException as:username (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
2017-10-27 10:08:22,391 INFO  [Driver] spark.SparkContext (Logging.scala:logInfo(58)) - Running Spark version 1.6.0
2017-10-27 10:08:22,417 INFO  [Driver] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing view acls to: username
2017-10-27 10:08:22,418 INFO  [Driver] spark.SecurityManager (Logging.scala:logInfo(58)) - Changing modify acls to: username
2017-10-27 10:08:22,418 INFO  [Driver] spark.SecurityManager (Logging.scala:logInfo(58)) - SecurityManager: authentication enabled; ui acls disabled; users with view permissions: Set(username); users with modify permissions: Set(username)
2017-10-27 10:08:22,572 INFO  [Driver] util.Utils (Logging.scala:logInfo(58)) - Successfully started service 'sparkDriver' on port 44049.
2017-10-27 10:08:22,901 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-4] slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2017-10-27 10:08:22,936 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-4] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2017-10-27 10:08:23,062 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-4] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@a.b.c.d:38305]
2017-10-27 10:08:23,064 INFO  [sparkDriverActorSystem-akka.actor.default-dispatcher-4] Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting now listens on addresses: [akka.tcp://sparkDriverActorSystem@a.b.c.d:38305]
2017-10-27 10:08:23,174 INFO  [Driver] util.Utils (Logging.scala:logInfo(58)) - Successfully started service 'sparkDriverActorSystem' on port 38305.
2017-10-27 10:08:23,195 INFO  [Driver] spark.SparkEnv (Logging.scala:logInfo(58)) - Registering MapOutputTracker
2017-10-27 10:08:23,207 INFO  [Driver] spark.SparkEnv (Logging.scala:logInfo(58)) - Registering BlockManagerMaster
2017-10-27 10:08:23,216 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/01/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-ba42749b-3498-4c1d-ba8b-dc6720e815a0
2017-10-27 10:08:23,217 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/02/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-d9375d30-699d-4e40-8b42-559f79f27f85
2017-10-27 10:08:23,217 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-fc2caf3b-3fa0-4f1e-be01-b33b6f6d52d5
2017-10-27 10:08:23,217 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/04/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-450319a4-2d4f-4159-a633-3dd2a71bafe1
2017-10-27 10:08:23,217 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/05/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-c3dbf9b3-cb95-4104-b4bf-9e7b1987e210
2017-10-27 10:08:23,217 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/06/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-5d9c58a6-29bb-4e8e-a8fb-3720db0004d4
2017-10-27 10:08:23,218 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/07/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-999eecaf-f183-4ede-8845-eeb57a87276b
2017-10-27 10:08:23,218 INFO  [Driver] storage.DiskBlockManager (Logging.scala:logInfo(58)) - Created local directory at /data/08/yarn/nm/usercache/username/appcache/application_1507999204018_0292/blockmgr-216d2449-14b1-45aa-b6c6-d6271815f485
2017-10-27 10:08:23,221 INFO  [Driver] storage.MemoryStore (Logging.scala:logInfo(58)) - MemoryStore started with capacity 491.7 MB
2017-10-27 10:08:23,283 INFO  [Driver] spark.SparkEnv (Logging.scala:logInfo(58)) - Registering OutputCommitCoordinator
2017-10-27 10:08:23,394 INFO  [Driver] ui.JettyUtils (Logging.scala:logInfo(58)) - Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
2017-10-27 10:08:23,413 INFO  [Driver] server.Server (Server.java:doStart(272)) - jetty-8.y.z-SNAPSHOT
2017-10-27 10:08:23,448 INFO  [Driver] server.AbstractConnector (AbstractConnector.java:doStart(338)) - Started SelectChannelConnector@0.0.0.0:36123
2017-10-27 10:08:23,448 INFO  [Driver] util.Utils (Logging.scala:logInfo(58)) - Successfully started service 'SparkUI' on port 36123.
2017-10-27 10:08:23,449 INFO  [Driver] ui.SparkUI (Logging.scala:logInfo(58)) - Started SparkUI at http://a.b.c.d:36123
2017-10-27 10:08:23,498 INFO  [Driver] cluster.YarnClusterScheduler (Logging.scala:logInfo(58)) - Created YarnClusterScheduler
2017-10-27 10:08:23,524 INFO  [Driver] util.Utils (Logging.scala:logInfo(58)) - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44418.
2017-10-27 10:08:23,525 INFO  [Driver] netty.NettyBlockTransferService (Logging.scala:logInfo(58)) - Server created on 44418
2017-10-27 10:08:23,527 INFO  [Driver] storage.BlockManager (Logging.scala:logInfo(58)) - external shuffle service port = 7337
2017-10-27 10:08:23,527 INFO  [Driver] storage.BlockManagerMaster (Logging.scala:logInfo(58)) - Trying to register BlockManager
2017-10-27 10:08:23,530 INFO  [dispatcher-event-loop-11] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(58)) - Registering block manager a.b.c.d:44418 with 491.7 MB RAM, BlockManagerId(driver, a.b.c.d, 44418)
2017-10-27 10:08:23,533 INFO  [Driver] storage.BlockManagerMaster (Logging.scala:logInfo(58)) - Registered BlockManager
2017-10-27 10:08:24,106 INFO  [Driver] scheduler.EventLoggingListener (Logging.scala:logInfo(58)) - Logging events to hdfs://.../user/spark/applicationHistory/application_1507999204018_0292_1
2017-10-27 10:08:24,133 INFO  [Driver] cluster.YarnClusterSchedulerBackend (Logging.scala:logInfo(58)) - SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
2017-10-27 10:08:24,133 INFO  [Driver] cluster.YarnClusterScheduler (Logging.scala:logInfo(58)) - YarnClusterScheduler.postStartHook done
2017-10-27 10:08:24,140 INFO  [dispatcher-event-loop-13] cluster.YarnSchedulerBackend$YarnSchedulerEndpoint (Logging.scala:logInfo(58)) - ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@a.b.c.d:44049)
2017-10-27 10:08:24,191 INFO  [main] yarn.YarnRMClient (Logging.scala:logInfo(58)) - Registering the ApplicationMaster
2017-10-27 10:08:24,295 INFO  [main] yarn.ApplicationMaster (Logging.scala:logInfo(58)) - Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
2017-10-27 10:08:25,107 INFO  [Driver] hive.HiveContext (Logging.scala:logInfo(58)) - Initializing execution hive, version 1.1.0
2017-10-27 10:08:25,146 INFO  [Driver] client.ClientWrapper (Logging.scala:logInfo(58)) - Inspected Hadoop version: 2.6.0-cdh5.9.0
2017-10-27 10:08:25,147 INFO  [Driver] client.ClientWrapper (Logging.scala:logInfo(58)) - Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.9.0
2017-10-27 10:08:25,582 INFO  [Driver] metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(644)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2017-10-27 10:08:25,600 INFO  [Driver] metastore.ObjectStore (ObjectStore.java:initialize(333)) - ObjectStore, initialize called
2017-10-27 10:08:25,671 WARN  [Driver] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/datanucleus-core-3.2.2.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/data/05/yarn/nm/filecache/507/datanucleus-core-3.2.2.jar."
2017-10-27 10:08:25,687 WARN  [Driver] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/datanucleus-api-jdo-3.2.1.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/data/07/yarn/nm/filecache/582/datanucleus-api-jdo-3.2.1.jar."
2017-10-27 10:08:25,688 WARN  [Driver] DataNucleus.General (Log4JLogger.java:warn(96)) - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/data/08/yarn/nm/filecache/554/datanucleus-rdbms-3.2.1.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/datanucleus-rdbms-3.2.1.jar."
2017-10-27 10:08:25,709 INFO  [Driver] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
2017-10-27 10:08:25,710 INFO  [Driver] DataNucleus.Persistence (Log4JLogger.java:info(77)) - Property datanucleus.cache.level2 unknown - will be ignored
2017-10-27 10:08:26,178 WARN  [Driver] bonecp.BoneCPConfig (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
2017-10-27 10:08:26,180 ERROR [Driver] Datastore.Schema (Log4JLogger.java:error(125)) - Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/tmp/spark-633fb1f8-1f38-44ac-a54e-81465354bedc/metastore;create=true, username = APP. Terminating connection pool. Original Exception: ------
java.sql.SQLException: No suitable driver found for jdbc:derby:;databaseName=/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/tmp/spark-633fb1f8-1f38-44ac-a54e-81465354bedc/metastore;create=true
        at java.sql.DriverManager.getConnection(DriverManager.java:689)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:254)
        at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:305)
        at com.jolbox.bonecp.BoneCPDataSource.maybeInit(BoneCPDataSource.java:150)
        at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:112)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:479)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:304)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1069)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:359)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:768)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:326)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:195)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:411)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:440)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:335)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:291)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:648)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:626)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:675)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5999)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:203)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:238)
        at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:220)
        at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:210)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry$lzycompute(HiveContext.scala:464)
        at org.apache.spark.sql.hive.HiveContext.functionRegistry(HiveContext.scala:463)
        at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:330)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:90)
        at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
        at prfrx.externaltableerror$.main(externaltableerror.scala:28)
        at prfrx.externaltableerror.main(externaltableerror.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)

        at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:312)
        at com.jolbox.bonecp.BoneCPDataSource.maybeInit(BoneCPDataSource.java:150)
        at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:112)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:479)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:304)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1069)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:359)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:768)
        ... 62 more
Caused by: java.sql.SQLException: No suitable driver found for jdbc:derby:;databaseName=/data/03/yarn/nm/usercache/username/appcache/application_1507999204018_0292/container_e69_1507999204018_0292_01_000001/tmp/spark-633fb1f8-1f38-44ac-a54e-81465354bedc/metastore;create=true
        at java.sql.DriverManager.getConnection(DriverManager.java:689)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:254)
        at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:305)

Here is the code I am using:

    package prfrx

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.{SparkConf, SparkContext}

    object externaltableerror {
      def main(args: Array[String]) {

        // Build a Hadoop configuration pointing at the cluster (paths elided in this post).
        val conf = new Configuration()
        conf.set("fs.defaultFS", "hdfs://...")
        conf.addResource("hdfs://.../core-site.xml")
        conf.addResource("hdfs://.../hdfs-site.xml")
        conf.addResource("hdfs://.../hive-site.xml")

        // Open an HDFS file up front so any exception can be written out for debugging.
        val fs = FileSystem.get(conf)
        val os = fs.create(new Path("/.../Error.txt"))

        try {
          val sc = new SparkContext(new SparkConf().setAppName("withhive"))
          val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

          // Read the header line of the input file; each tab-separated field becomes a string column.
          val files = sc.textFile("hdfs://.../Example.txt").first()
          val rdd = sc.parallelize(List(files))
          val fm = rdd.flatMap(line => line.split("\t")).map(x => x.concat(" string"))
          val alternative = fm.reduce((s1, s2) => s1 + "," + s2)

          // Build and run the DDL for an external table over the source directory.
          val ddl = "Create external table table_name(" + alternative +
            ") ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCATION 'hdfs://.../' " +
            "tblproperties (\"skip.header.line.count\"=\"1\")"
          hiveContext.sql(ddl)

          sc.stop()
        } catch {
          case e: Exception => os.write(e.getStackTrace.mkString("\n").getBytes)
        } finally {
          os.close() // flush the error file so its contents are not lost
        }
      }
    }

Any suggestions on how to get this job running through Oozie would be a great help. Thanks!


1 Answer


I had the same problem - I fixed it by passing --files /etc/hive/conf/hive-site.xml to my spark-submit job. (I tried it first in the shell and then in Oozie, because I launch a .sh file that contains the spark-submit command.)
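
For reference, a minimal sketch of that invocation follows. The jar name and the master/deploy-mode settings are placeholders (only the class name comes from the stack trace above), so adjust them to the actual build:

    # Sketch only: the jar name is a placeholder; the class name is taken from the stack trace.
    # Shipping hive-site.xml with the job lets HiveContext reach the real metastore instead of
    # falling back to an embedded Derby database inside the YARN container (the error shown above).
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class prfrx.externaltableerror \
      --files /etc/hive/conf/hive-site.xml \
      externaltableerror.jar

If the job is launched through Oozie's native spark action rather than a wrapped .sh script, the same flag can usually be passed through the action's <spark-opts> element in workflow.xml.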

Answered 2018-02-06T13:09:08.317