
I want to delete data in a BigQuery table from Spark running on a Dataproc cluster, but the Spark application fails with a SIGSEGV runtime error. This is the complete error when calling datasource.getConnection():

 A fatal error has been detected by the Java Runtime Environment:
 SIGSEGV (0xb) at pc=0x00007f218680fc80, pid=25451, tid=0x00007f21f8c20700

 JRE version: OpenJDK Runtime Environment (8.0_322-b06) (build 1.8.0_322-b06)
 Java VM: OpenJDK 64-Bit Server VM (25.322-b06 mixed mode linux-amd64 compressed oops)
 Problematic frame:
 C  0x00007f218680fc80
 Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
 An error report file with more information is saved as: /tmp/4f00321a-e1ae-4869-9540-ff789383b27c/hs_err_pid25451.log
 If you would like to submit a bug report, please visit: https://github.com/adoptium/adoptium-support/issues

Here is the sample code I am following:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ConnectServiceAuthentication {

    public static Connection connectViaDS() throws SQLException {
        // DataSource implementation shipped with the Simba BigQuery JDBC driver
        com.simba.googlebigquery.jdbc.DataSource ds = new com.simba.googlebigquery.jdbc.DataSource();
        ds.setURL("jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;\n" + "OAuthType=3;ProjectId=myProjectid;");
        ds.setProjectId("myProjectid");
        ds.setOAuthType(0); // Service Authentication
        ds.setOAuthServiceAcctEmail("myproject_bigquery@myproject.iam.gserviceaccount.com");
        ds.setOAuthPvtKey("/home/key.json"); // path to the service account key file
        return ds.getConnection();
    }

    public static void main(String[] args) throws SQLException {
        Connection connection = connectViaDS();
        String sql = "select count(1) from temp.tempn1;";

        PreparedStatement ps = connection.prepareStatement(sql);
        ResultSet rs = ps.executeQuery();
        if (rs.next())
            System.out.println(rs.getInt(1));

        connection.close();
    }
}
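
For context on the end goal: the post is about deleting rows in BigQuery, while the sample above only runs a count. Assuming the getConnection() crash were resolved, a delete could in principle be issued over the same JDBC connection as a standard DML statement. This is only a minimal sketch: the load_date filter column and cutoff value are hypothetical placeholders, and whether the Simba driver reports an accurate row count from executeUpdate is an assumption.

import java.sql.Connection;
import java.sql.Statement;

public class DeleteFromBigQuery {
    public static void main(String[] args) throws Exception {
        // Reuse the DataSource-based connection from the sample above.
        Connection connection = ConnectServiceAuthentication.connectViaDS();

        // Hypothetical DML delete; temp.tempn1 is the table from the sample,
        // load_date and the cutoff value are placeholders.
        String sql = "DELETE FROM temp.tempn1 WHERE load_date < '2022-01-01'";

        try (Statement stmt = connection.createStatement()) {
            int deleted = stmt.executeUpdate(sql);
            System.out.println("Deleted rows: " + deleted);
        } finally {
            connection.close();
        }
    }
}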

