
I'm running a 4-node Cassandra 1.2.6 cluster with a single keyspace, a replication factor of 2 (originally 3, but dropped to 2), and 10 or so column families. It runs on an Oracle 1.7 JVM. The workload is a mix of reads and writes, with probably two to three times as many writes as reads.

Even under light load I'm seeing very large read latencies, and I get a lot of read timeouts (using the DataStax Java driver). Here is sample nodetool cfstats output for one of the column families:

    Column Family: user_scores
    SSTable count: 1
    SSTables in each level: [1, 0, 0, 0, 0, 0, 0, 0, 0]
    Space used (live): 7539098
    Space used (total): 7549091
    Number of Keys (estimate): 42112
    Memtable Columns Count: 2267
    Memtable Data Size: 1048576
    Memtable Switch Count: 2
    Read Count: 2101
    **Read Latency: 272334.202 ms.**
    Write Count: 24947
    Write Latency: NaN ms.
    Pending Tasks: 0
    Bloom Filter False Positives: 0
    Bloom Filter False Ratio: 0.00000
    Bloom Filter Space Used: 55376
    Compacted row minimum size: 447
    Compacted row maximum size: 219342
    Compacted row mean size: 1051
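For scale: cfstats reports read latency as a mean per read in milliseconds, so the figure above works out to more than four and a half minutes per read, averaged over the 2,101 reads recorded. A quick check of the arithmetic (values hard-coded from the output above; the class name is just for illustration):

```java
public class CfstatsCheck {
    // cfstats reports mean read latency in milliseconds; convert to seconds per read.
    static double secondsPerRead(double readLatencyMs) {
        return readLatencyMs / 1000.0;
    }

    public static void main(String[] args) {
        // "Read Latency" and "Read Count" copied from the cfstats output above
        System.out.printf("Average: %.1f seconds per read over %d reads%n",
                secondsPerRead(272334.202), 2101);
    }
}
```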

As you can see, I've tried the leveled compaction strategy to improve read latency, but as you can also see, the latency is huge. I'm a bit stumped. I have a Cassandra 1.1.6 cluster that runs beautifully, but so far no luck with 1.2.

The cluster runs on VMs with 4 CPUs and 7 GB of RAM. The data drive is set up as a striped RAID across 4 disks. The machines do not appear to be IO bound.

I'm running a very plain configuration with all the defaults.

I do see odd CPU behavior: the CPU spikes even under light load. Sometimes I see compactions running, but they finish quickly, so I don't think they're the culprit.

I'm trying to figure out where to go next. Any help is appreciated!

[Update with rpc_timeout trace] Still playing with this. Here is a sample trace. It looks like the merge step is taking way too long.

 activity                                                                | timestamp    | source        | source_elapsed
-------------------------------------------------------------------------+--------------+---------------+----------------
                                                      execute_cql3_query | 04:57:18,882 | 100.69.176.51 |              0
 Parsing select * from user_scores where user_id='26257166' LIMIT 10000; | 04:57:18,884 | 100.69.176.51 |           1981
                                                      Peparing statement | 04:57:18,885 | 100.69.176.51 |           2997
                         Executing single-partition query on user_scores | 04:57:18,885 | 100.69.176.51 |           3657
                                            Acquiring sstable references | 04:57:18,885 | 100.69.176.51 |           3724
                                             Merging memtable tombstones | 04:57:18,885 | 100.69.176.51 |           3779
                                            Key cache hit for sstable 32 | 04:57:18,886 | 100.69.176.51 |           3910
                             Seeking to partition beginning in data file | 04:57:18,886 | 100.69.176.51 |           3930
                              Merging data from memtables and 1 sstables | 04:57:18,886 | 100.69.176.51 |           4211
                                                        Request complete | 04:57:38,891 | 100.69.176.51 |       20009870
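The stall shows up directly in the `source_elapsed` column: the merge step is entered at 4,211 µs, but the request only completes at 20,009,870 µs, so roughly 20 seconds elapse inside it. A minimal sketch of that subtraction (values hard-coded from the trace above; the class name is just for illustration):

```java
public class TraceGap {
    // Milliseconds spent between entering the merge step and request completion.
    static long gapMillis(long mergeStartMicros, long completeMicros) {
        return (completeMicros - mergeStartMicros) / 1_000;
    }

    public static void main(String[] args) {
        // source_elapsed values copied from the trace above:
        // "Merging data from memtables and 1 sstables" = 4,211 us
        // "Request complete"                           = 20,009,870 us
        System.out.println(gapMillis(4_211L, 20_009_870L) + " ms spent in the merge step");
    }
}
```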

Older traces below:

[Newer trace] After resolving the issue mentioned in the logs by completely rebuilding the cluster's data store, I still ran into the problem, though it took quite a while to reappear. Here is a trace I grabbed while the cluster was in the bad state:

Tracing session: a6dbefc0-ea49-11e2-84bb-ef447a7d9a48

 activity                                                                                        | timestamp    | source         | source_elapsed
-------------------------------------------------------------------------------------------------+--------------+----------------+----------------
                                                                              execute_cql3_query | 16:48:02,755 | 100.69.196.124 |              0
                                                      Parsing select * from user_scores limit 1; | 16:48:02,756 | 100.69.196.124 |           1774
                                                                              Peparing statement | 16:48:02,759 | 100.69.196.124 |           4006
                                                                   Determining replicas to query | 16:48:02,759 | 100.69.196.124 |           4286
                                                             Enqueuing request to /100.69.176.51 | 16:48:02,763 | 100.69.196.124 |           8849
                                                         Sending message to cdb002/100.69.176.51 | 16:48:02,764 | 100.69.196.124 |           9456
                                                           Message received from /100.69.196.124 | 16:48:03,449 |  100.69.176.51 |            160
                                                            Message received from /100.69.176.51 | 16:48:09,646 | 100.69.196.124 |        6891860
                                                         Processing response from /100.69.176.51 | 16:48:09,647 | 100.69.196.124 |        6892426
 Executing seq scan across 1 sstables for [min(-9223372036854775808), min(-9223372036854775808)] | 16:48:10,288 |  100.69.176.51 |        6838754
                                                     Seeking to partition beginning in data file | 16:48:10,289 |  100.69.176.51 |        6839689
                                                              Read 1 live and 0 tombstoned cells | 16:48:10,289 |  100.69.176.51 |        6839927
                                                     Seeking to partition beginning in data file | 16:48:10,289 |  100.69.176.51 |        6839998
                                                              Read 1 live and 0 tombstoned cells | 16:48:10,289 |  100.69.176.51 |        6840082
                                                                    Scanned 1 rows and matched 1 | 16:48:10,289 |  100.69.176.51 |        6840162
                                                           Enqueuing response to /100.69.196.124 | 16:48:10,289 |  100.69.176.51 |        6840229
                                                              Sending message to /100.69.196.124 | 16:48:10,299 |  100.69.176.51 |        6850072
                                                                                Request complete | 16:48:09,648 | 100.69.196.124 |        6893029

[Update] I should add that everything works fine with a standalone Cassandra instance on my MacBook Pro. AKA works on my machine... :)

[Update with tracing data]

Here is some tracing data. This is from the Java driver. The downside is that I can only trace queries that succeed. I make it through a total of 67 queries before every query starts timing out. The odd thing is that these don't look that bad. At query 68 I stop getting responses, and two of the servers are running hot.

2013-07-11 02:15:45 STDIO [INFO] ***************************************
66:Host (queried): cdb003/100.69.198.47
66:Host (tried): cdb003/100.69.198.47
66:Trace id: c95e51c0-e9cf-11e2-b9a9-5b3c0946787b

66:-----------------------------------------------------+--------------+-----------------+--------------
66:            Enqueuing data request to /100.69.176.51 | 02:15:42.045 |  /100.69.198.47 |          200
66:          Enqueuing digest request to /100.69.176.51 | 02:15:42.045 |  /100.69.198.47 |          265
66:                  Sending message to /100.69.196.124 | 02:15:42.045 |  /100.69.198.47 |          570
66:                   Sending message to /100.69.176.51 | 02:15:42.045 |  /100.69.198.47 |          574
66:                Message received from /100.69.176.51 | 02:15:42.107 |  /100.69.198.47 |        62492
66:             Processing response from /100.69.176.51 | 02:15:42.107 |  /100.69.198.47 |        62780
66:                Message received from /100.69.198.47 | 02:15:42.508 | /100.69.196.124 |           31
66:     Executing single-partition query on user_scores | 02:15:42.508 | /100.69.196.124 |          406
66:                        Acquiring sstable references | 02:15:42.508 | /100.69.196.124 |          473
66:                         Merging memtable tombstones | 02:15:42.508 | /100.69.196.124 |          577
66:                        Key cache hit for sstable 11 | 02:15:42.508 | /100.69.196.124 |          807
66:         Seeking to partition beginning in data file | 02:15:42.508 | /100.69.196.124 |          849
66:          Merging data from memtables and 1 sstables | 02:15:42.509 | /100.69.196.124 |         1500
66:                Message received from /100.69.198.47 | 02:15:43.379 |  /100.69.176.51 |           60
66:     Executing single-partition query on user_scores | 02:15:43.379 |  /100.69.176.51 |          399
66:                        Acquiring sstable references | 02:15:43.379 |  /100.69.176.51 |          490
66:                         Merging memtable tombstones | 02:15:43.379 |  /100.69.176.51 |          593
66:                         Key cache hit for sstable 7 | 02:15:43.380 |  /100.69.176.51 |         1098
66:         Seeking to partition beginning in data file | 02:15:43.380 |  /100.69.176.51 |         1141
66:          Merging data from memtables and 1 sstables | 02:15:43.380 |  /100.69.176.51 |         1912
66:                  Read 1 live and 0 tombstoned cells | 02:15:43.438 |  /100.69.176.51 |        59094
66:                Enqueuing response to /100.69.198.47 | 02:15:43.438 |  /100.69.176.51 |        59225
66:                   Sending message to /100.69.198.47 | 02:15:43.438 |  /100.69.176.51 |        59373
66:Started at: 02:15:42.044
66:Elapsed time in micros: 63105
2013-07-11 02:15:45 STDIO [INFO] ***************************************
67:Host (queried): cdb004/100.69.184.134
67:Host (tried): cdb004/100.69.184.134
67:Trace id: c9f365d0-e9cf-11e2-a4e5-7f3170333ff5

67:-----------------------------------------------------+--------------+-----------------+--------------
67:               Message received from /100.69.184.134 | 02:15:42.536 |  /100.69.198.47 |           36
67:     Executing single-partition query on user_scores | 02:15:42.536 |  /100.69.198.47 |          273
67:                        Acquiring sstable references | 02:15:42.536 |  /100.69.198.47 |          311
67:                         Merging memtable tombstones | 02:15:42.536 |  /100.69.198.47 |          353
67:                         Key cache hit for sstable 8 | 02:15:42.536 |  /100.69.198.47 |          436
67:         Seeking to partition beginning in data file | 02:15:42.536 |  /100.69.198.47 |          455
67:          Merging data from memtables and 1 sstables | 02:15:42.537 |  /100.69.198.47 |          811
67:                  Read 1 live and 0 tombstoned cells | 02:15:42.550 |  /100.69.198.47 |        14242
67:               Enqueuing response to /100.69.184.134 | 02:15:42.550 |  /100.69.198.47 |        14456
67:                  Sending message to /100.69.184.134 | 02:15:42.551 |  /100.69.198.47 |        14694
67:            Enqueuing data request to /100.69.198.47 | 02:15:43.021 | /100.69.184.134 |          323
67:                   Sending message to /100.69.198.47 | 02:15:43.021 | /100.69.184.134 |          565
67:                Message received from /100.69.198.47 | 02:15:43.038 | /100.69.184.134 |        17029
67:             Processing response from /100.69.198.47 | 02:15:43.038 | /100.69.184.134 |        17230
67:Started at: 02:15:43.021
67:Elapsed time in micros: 17622
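In both successful traces, nearly all of the replica-side time falls between "Merging data from memtables and 1 sstables" and "Read 1 live and 0 tombstoned cells". Subtracting the `source_elapsed` values (hard-coded from traces 66 and 67 above; the class name is just for illustration) makes that concrete:

```java
public class ReadStepCost {
    // Microseconds spent between the merge step starting and the row being read.
    static long readStepMicros(long mergeMicros, long readMicros) {
        return readMicros - mergeMicros;
    }

    public static void main(String[] args) {
        // Trace 66, replica 100.69.176.51: merge at 1,912 us, read at 59,094 us
        System.out.println("query 66: " + readStepMicros(1_912L, 59_094L) + " us");
        // Trace 67, replica 100.69.198.47: merge at 811 us, read at 14,242 us
        System.out.println("query 67: " + readStepMicros(811L, 14_242L) + " us");
    }
}
```

That is 57 ms of a 63 ms query and 13 ms of an 18 ms query spent just reading one row.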

And here is a trace using cqlsh:

Tracing session: d0f845d0-e9cf-11e2-8882-ef447a7d9a48

 activity                                                                | timestamp    | source         | source_elapsed
-------------------------------------------------------------------------+--------------+----------------+----------------
                                                      execute_cql3_query | 19:15:54,833 | 100.69.196.124 |              0
 Parsing select * from user_scores where user_id='39333433' LIMIT 10000; | 19:15:54,833 | 100.69.196.124 |            103
                                                      Peparing statement | 19:15:54,833 | 100.69.196.124 |            455
                         Executing single-partition query on user_scores | 19:15:54,834 | 100.69.196.124 |           1400
                                            Acquiring sstable references | 19:15:54,834 | 100.69.196.124 |           1468
                                             Merging memtable tombstones | 19:15:54,835 | 100.69.196.124 |           1575
                                            Key cache hit for sstable 11 | 19:15:54,835 | 100.69.196.124 |           1780
                             Seeking to partition beginning in data file | 19:15:54,835 | 100.69.196.124 |           1822
                              Merging data from memtables and 1 sstables | 19:15:54,836 | 100.69.196.124 |           2562
                                      Read 1 live and 0 tombstoned cells | 19:15:54,838 | 100.69.196.124 |           4808
                                                        Request complete | 19:15:54,838 | 100.69.196.124 |           5810

2 Answers


The trace seems to show that most of the time is spent performing or waiting on network operations. Perhaps your network has a problem?

If only some operations fail, you may have a problem with just one node. When that node isn't needed everything works, but when it is needed things go bad. It might be worth looking at the log files on the other nodes.

answered 2013-07-11T07:36:11.257

It looks like I was hitting a performance issue in 1.2. Fortunately, a patch had just been applied to the 1.2 branch, and my problem went away once I built from source.

See https://issues.apache.org/jira/browse/CASSANDRA-5677 for a detailed explanation.

Thanks all!

answered 2013-07-12T20:20:58.583