
While trying to pull data from MySQL into Hadoop, I am running this command:

sudo sqoop import --connect jdbc:mysql://localhost/naresh --table marks --username root --password root

and I am getting this error:

13/09/04 17:00:43 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/09/04 17:00:43 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
13/09/04 17:00:43 INFO tool.CodeGenTool: Beginning code generation
13/09/04 17:00:43 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `marks` AS t LIMIT 1
13/09/04 17:00:43 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `marks` AS t LIMIT 1
13/09/04 17:00:43 INFO orm.CompilationManager: HADOOP_HOME is /usr/lib/hadoop
13/09/04 17:00:43 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop/hadoop-core.jar
13/09/04 17:00:44 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-nareshkumar/compile/b66caff07ef718bd6ff55ff7744d20a6/marks.jar
13/09/04 17:00:44 WARN manager.MySQLManager: It looks like you are importing from mysql.
13/09/04 17:00:44 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
13/09/04 17:00:44 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
13/09/04 17:00:44 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
13/09/04 17:00:44 INFO mapreduce.ImportJobBase: Beginning import of marks
13/09/04 17:00:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 0 time(s).
13/09/04 17:00:49 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 1 time(s).
13/09/04 17:00:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 2 time(s).
13/09/04 17:00:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 3 time(s).
13/09/04 17:00:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 4 time(s).
13/09/04 17:00:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 5 time(s).
13/09/04 17:00:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 6 time(s).
13/09/04 17:00:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 7 time(s).
13/09/04 17:00:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 8 time(s).
13/09/04 17:00:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 9 time(s).
13/09/04 17:00:57 ERROR security.UserGroupInformation: PriviledgedActionException as:nareshkumar (auth:SIMPLE) cause:java.net.ConnectException: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
13/09/04 17:00:57 ERROR tool.ImportTool: Encountered IOException running import job: java.net.ConnectException: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179)
    at org.apache.hadoop.ipc.Client.call(Client.java:1155)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at org.apache.hadoop.mapred.$Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:511)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:496)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:539)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:537)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:525)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:556)
    at com.cloudera.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:143)
    at com.cloudera.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:203)
    at com.cloudera.sqoop.manager.SqlManager.importTable(SqlManager.java:464)
    at com.cloudera.sqoop.manager.MySQLManager.importTable(MySQLManager.java:101)
    at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:382)
    at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:455)
    at com.cloudera.sqoop.Sqoop.run(Sqoop.java:146)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:182)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:221)
    at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
    at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575)
    at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
    at org.apache.hadoop.ipc.Client.call(Client.java:1121)
    ... 26 more

I have tried searching around and made changes in my core-site.xml and mapred-site.xml.
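For context, the stack trace shows the client being refused on localhost:9001, which on Hadoop 1.x is typically the JobTracker address set by the mapred.job.tracker property in mapred-site.xml. A quick way to see whether that daemon is actually up (a rough check, assuming a standard pseudo-distributed install with jps and netstat available):

    # List the running Hadoop JVMs; a healthy pseudo-distributed node should
    # show NameNode, DataNode, JobTracker and TaskTracker among them.
    $ jps
    # Check whether anything is listening on the JobTracker port from the trace.
    $ netstat -tln | grep 9001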

Please help me. Thanks in advance.


2 Answers


I think this is a problem with the port number. Hopefully the command below will help:

 sudo sqoop import --connect jdbc:mysql://localhost:3306/naresh --table marks --username root --password root
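As a sanity check before rerunning the import, you could verify the same host, port, database and credentials directly with the MySQL client (a minimal check, assuming the mysql command-line client is installed):

    # Connect with the same coordinates as the JDBC URL and touch the table;
    # -p prompts for the password instead of passing it on the command line.
    $ mysql -h 127.0.0.1 -P 3306 -u root -p naresh -e "SELECT COUNT(*) FROM marks;"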
answered 2013-09-04T11:49:51.800

I ran into a similar problem while learning Sqoop. In my case, I fixed it as follows.

Check whether the Hadoop NameNode is in safe mode with the following command:

$ bin/hadoop dfsadmin -safemode get

The output might look like this:

Safe mode is ON

During HDFS cluster startup, the NameNode stays in safe mode (read-only) until it has received the filesystem state from the fsimage and edit logs, plus the block reports from the DataNodes. After that, the NameNode leaves safe mode automatically. If that does not happen, you run into the problem above.

Turn safe mode off manually with the following command:

$ bin/hadoop dfsadmin -safemode leave

Now check the mode again with the same get command; you will find it has been turned off:

Safe mode is OFF
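As an alternative to forcing it off, dfsadmin can also block until the NameNode leaves safe mode on its own (same Hadoop 1.x syntax as the get and leave commands above):

    # Block until the NameNode exits safe mode by itself, then return.
    $ bin/hadoop dfsadmin -safemode wait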

Once safe mode is off, you can import (or incrementally import) data from your database into HDFS with the corresponding commands.
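For instance, a full import followed by an incremental one might look roughly like this (a sketch only: the --target-dir path, the id check column and the --last-value are illustrative, and -P prompts for the password as the warning in the log suggests):

    # Full import of the marks table into HDFS, prompting for the password.
    $ sqoop import --connect jdbc:mysql://localhost:3306/naresh \
        --table marks --username root -P \
        --target-dir /user/nareshkumar/marks
    # Later, append only rows whose (illustrative) id column exceeds the
    # last value already imported.
    $ sqoop import --connect jdbc:mysql://localhost:3306/naresh \
        --table marks --username root -P \
        --incremental append --check-column id --last-value 100 \
        --target-dir /user/nareshkumar/marks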

answered 2016-09-21T16:18:18.930