I am running Cassandra 1.2.6 and Apache Spark 0.8.0.

I am using Spark's newAPIHadoopRDD to create an RDD from Cassandra.
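
Roughly, the setup looks like this (a minimal sketch modeled on the CassandraTest example bundled with Spark 0.8.0; the keyspace/column family names, host, and partitioner here are placeholders, not my actual values):

    import java.nio.ByteBuffer
    import java.util.SortedMap

    import org.apache.cassandra.db.IColumn
    import org.apache.cassandra.hadoop.{ColumnFamilyInputFormat, ConfigHelper}
    import org.apache.cassandra.thrift.{SlicePredicate, SliceRange}
    import org.apache.hadoop.mapreduce.Job
    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[2]", "CassandraRead")

    // Point the Hadoop input format at Cassandra's Thrift endpoint
    val job = new Job()
    ConfigHelper.setInputInitialAddress(job.getConfiguration, "localhost")
    ConfigHelper.setInputRpcPort(job.getConfiguration, "9160")
    ConfigHelper.setInputColumnFamily(job.getConfiguration, "casDemo", "Words")
    ConfigHelper.setInputPartitioner(job.getConfiguration, "Murmur3Partitioner")

    // An empty slice range reads every column of every row
    val predicate = new SlicePredicate()
    val sliceRange = new SliceRange()
    sliceRange.setStart(Array.empty[Byte])
    sliceRange.setFinish(Array.empty[Byte])
    predicate.setSlice_range(sliceRange)
    ConfigHelper.setInputSlicePredicate(job.getConfiguration, predicate)

    // Each record is (row key, sorted map of column name -> IColumn)
    val casRdd = sc.newAPIHadoopRDD(
      job.getConfiguration,
      classOf[ColumnFamilyInputFormat],
      classOf[ByteBuffer],
      classOf[SortedMap[ByteBuffer, IColumn]])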

When I run the job from my Spark application, Cassandra logs the following error messages:

INFO 21:36:28,629 Listening for thrift clients...
DEBUG 21:36:29,730 Disseminating load info ...
DEBUG 21:36:57,844 Started replayAllFailedBatches
DEBUG 21:36:57,845 forceFlush requested but everything is clean in batchlog
DEBUG 21:36:57,846 Finished replayAllFailedBatches
DEBUG 21:37:29,731 Disseminating load info ...
DEBUG 21:37:57,846 Started replayAllFailedBatches
DEBUG 21:37:57,847 forceFlush requested but everything is clean in batchlog
DEBUG 21:37:57,847 Finished replayAllFailedBatches
DEBUG 21:38:29,732 Disseminating load info ...
DEBUG 21:38:57,847 Started replayAllFailedBatches
DEBUG 21:38:57,849 forceFlush requested but everything is clean in batchlog
DEBUG 21:38:57,849 Finished replayAllFailedBatches
DEBUG 21:39:29,732 Disseminating load info ...
DEBUG 21:39:57,849 Started replayAllFailedBatches
DEBUG 21:39:57,850 forceFlush requested but everything is clean in batchlog
DEBUG 21:39:57,850 Finished replayAllFailedBatches
DEBUG 21:39:57,956 computing ranges for -3011659447910895493
DEBUG 21:40:00,043 Thrift transport error occurred during processing of message.
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
    at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:22)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
DEBUG 21:40:00,048 Thrift transport error occurred during processing of message.
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
    at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:22)
    at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
DEBUG 21:40:03,071 execute_cql3_query
DEBUG 21:40:03,090 request complete
DEBUG 21:40:03,253 prepare_cql3_query
DEBUG 21:40:03,267 execute_prepared_cql3_query
DEBUG 21:40:03,275 request complete
DEBUG 21:40:03,291 prepare_cql3_query
DEBUG 21:40:03,310 execute_prepared_cql3_query
DEBUG 21:40:03,312 request complete
DEBUG 21:40:03,314 prepare_cql3_query
DEBUG 21:40:03,326 execute_prepared_cql3_query
DEBUG 21:40:03,327 request complete

1 Answer

Try the new DataStax Cassandra driver for Spark (available here). It accesses Cassandra directly, without going through the Hadoop API.
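
A minimal sketch of a read through the connector (the keyspace/table names and contact host are placeholders; note the connector targets Spark releases newer than 0.8.0):

    import com.datastax.spark.connector._
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ConnectorRead")
      .set("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
    val sc = new SparkContext(conf)

    // cassandraTable comes from the connector's implicit enrichment of
    // SparkContext and returns an RDD of CassandraRow -- no Hadoop
    // InputFormat or Thrift slice predicates involved
    val rows = sc.cassandraTable("my_keyspace", "my_table")
    rows.take(5).foreach(println)

Because the connector speaks the native CQL protocol through the DataStax Java driver, the framed Thrift transport that produced the stack traces above is not involved at all.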

answered 2014-07-02T13:26:38.340