
I just added a new node to my Cassandra DC. Before that, my topology was as follows:

  1. DC Cassandra: 1 node
  2. DC Solr: 5 nodes

While bootstrapping the second node for the Cassandra DC, I noticed that the total bytes to be streamed was almost as large as the load of the existing node (916 GB to stream; the existing Cassandra node's load is 956 GB). Nevertheless, I let the bootstrap continue. It finished a few hours ago, and my fear has now been confirmed: the Cassandra DC is completely unbalanced.

nodetool status shows the following:

Datacenter: Solr
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
UN  solr node4                                     322.9 GB   40.3%             30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
UN  solr node3                                     233.16 GB  39.7%             c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
UN  solr node5                                     252.42 GB  41.6%             2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
UN  solr node2                                     245.97 GB  40.5%             7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
UN  solr node1                                     402.33 GB  38.0%             12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns (effective)  Host ID                               Token                                    Rack
UN  cs node2                                       705.58 GB  99.4%             fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
UN  cs node1                                      1013.52 GB  0.6%              6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1

Note the "Owns" column in the Cassandra DC: node 2 owns 99.4% while node 1 owns 0.6% (even though node 2's "Load" is smaller than node 1's). I expected each of them to own 50%, but this is what I got. I have no idea what caused it. What I do remember is that when I started the bootstrap of the new node, I was running a full repair on Solr node 1. That repair is still running as of now (I think it actually restarted itself when the new node finished bootstrapping).

How do I fix this? (Repair?)

Is it safe to bulk-load new data while the Cassandra DC is in this state?

Some additional info:

  1. DSE 4.0.3 (Cassandra 2.0.7)
  2. NetworkTopologyStrategy
  3. RF 1 in the Cassandra DC; RF 2 in the Solr DC
  4. DCs auto-assigned by DSE
  5. Vnodes enabled (see the quick check after this list)
  6. The new node's configuration mirrors the existing node's, so it is more or less correct
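
A quick sanity check for point 5: with vnodes enabled and num_tokens: 256, every host should appear roughly 256 times in nodetool ring, while a single-token node appears only once. The address used below is just a placeholder for one of the nodes, not taken from the post.

# Count the tokens assigned to a single node; repeat for each node in turn.
# With vnodes (num_tokens: 256) the count should be close to 256; with a
# single token it will be 1.
nodetool ring | grep -c 'cs-node1-address'   # hypothetical placeholder address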

Edit:

It turns out I also cannot run cleanup on cs-node1. I get the following exception:

Exception in thread "main" java.lang.AssertionError: [SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18509-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-18512-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38320-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38325-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38329-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38322-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38330-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38331-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38321-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38323-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38344-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38345-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38349-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38348-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38346-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13913-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-13915-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38389-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-39845-Data.db'), SSTableReader(path='/home/cassandra/data/my_ks/my_cf/my_ks-my_cf-jb-38390-Data.db')]
    at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2115)
    at org.apache.cassandra.db.ColumnFamilyStore$13.call(ColumnFamilyStore.java:2112)
    at org.apache.cassandra.db.ColumnFamilyStore.runWithCompactionsDisabled(ColumnFamilyStore.java:2094)
    at org.apache.cassandra.db.ColumnFamilyStore.markAllCompacting(ColumnFamilyStore.java:2125)
    at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:214)
    at org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
    at org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1105)
    at org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2220)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
    at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
    at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
    at sun.rmi.transport.Transport$1.run(Transport.java:177)
    at sun.rmi.transport.Transport$1.run(Transport.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Edit:

nodetool status output (without a keyspace):

Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: Solr
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns   Host ID                               Token                                    Rack
UN  solr node4                                     323.78 GB  17.1%  30f411c3-7419-4786-97ad-395dfc379b40  -8998044611302986942                     rack1
UN  solr node3                                     236.69 GB  17.3%  c7db42c6-c5ae-439e-ab8d-c04b200fffc5  -9145710677669796544                     rack1
UN  solr node5                                     256.06 GB  16.2%  2d3dfa16-a294-48cc-ae3e-d4b99fbc947c  -9004172260145053237                     rack1
UN  solr node2                                     246.59 GB  18.3%  7dbbcc88-aabc-4cf4-a942-08e1aa325300  -9176431489687825236                     rack1
UN  solr node1                                     411.25 GB  13.9%  12976524-b834-473e-9bcc-5f9be74a5d2d  -9197342581446818188                     rack1
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address                                        Load       Owns   Host ID                               Token                                    Rack
UN  cs node2                                       709.64 GB  17.2%  fa55e0bb-e460-4dc1-ac7a-f71dd00f5380  -9114885310887105386                     rack1
UN  cs node1                                      1003.71 GB  0.1%   6ab7062e-47fe-45f7-98e8-3ee8e1f742a4  -3083852333946106000                     rack1

Cassandra yaml from node1: https://www.dropbox.com/s/ptgzp5lfmdaeq8d/cassandra.yaml (the only differences from node2 are listen_address and commitlog_directory)
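
A rough way to verify that (the file names below are just placeholders for local copies of each node's yaml), and to eyeball the vnode-related settings at the same time:

# Compare the two configs, ignoring comment lines; file names are placeholders.
diff <(grep -v '^#' node1-cassandra.yaml) <(grep -v '^#' node2-cassandra.yaml)

# Token settings that should match on both nodes if vnodes are truly enabled.
grep -E '^(num_tokens|initial_token)' node1-cassandra.yaml node2-cassandra.yaml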

Regarding CASSANDRA-6774: my case is a bit different, because I did not stop a previous cleanup. Although I think I have now taken the wrong route by starting a cleanup (still in progress) instead of restarting the node first, as their suggested workaround does.
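
For reference, my understanding of that workaround, expressed as commands (the keyspace name my_ks is taken from the sstable paths in the exception above), is roughly:

# CASSANDRA-6774 workaround as I understand it: restart the node first, then clean up.
nodetool drain                 # flush memtables and stop accepting writes
# ... restart DSE/Cassandra on cs-node1 via the usual service manager ...
nodetool cleanup my_ks         # re-run cleanup only after the restart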

Update (2014/04/19):

nodetool cleanup still fails with the assertion error after doing the following:

  1. A full scrub of the keyspace
  2. A full cluster restart

I am now running a full repair of the keyspace on cs-node1.
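
(For reference, a full repair of a single keyspace is essentially the following, with my_ks standing in for the actual keyspace name from the sstable paths above:)

nodetool repair my_ks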

Update (2014/04/20):

Any attempt to repair the main keyspace on cs-node1 fails with:

Lost notification. You should check server log for repair status of keyspace

I also just saw this (output of dsetool ring):

Note: Ownership information does not include topology, please specify a keyspace.
Address          DC           Rack         Workload         Status  State    Load             Owns                 VNodes
solr-node1       Solr         rack1        Search           Up      Normal   447 GB           13.86%               256
solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        18.30%               256
solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        17.29%               256
cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        17.21%               256
solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        16.21%               256
solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        17.07%               256
cd-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.06%                256
Warning:  Node cs-node2 is serving 270.56 times the token space of node cs-node1, which means it will be using 270.56 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management
Warning:  Node solr-node2 is serving 1.32 times the token space of node solr-node1, which means it will be using 1.32 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management

Keyspace-aware (with a keyspace specified):

Address          DC           Rack         Workload         Status  State    Load             Effective-Ownership  VNodes
solr-node1       Solr         rack1        Search           Up      Normal   447 GB           38.00%               256
solr-node2       Solr         rack1        Search           Up      Normal   267.52 GB        40.47%               256
solr-node3       Solr         rack1        Search           Up      Normal   262.16 GB        39.66%               256
cs-node2         Cassandra    rack1        Cassandra        Up      Normal   808.61 GB        99.39%               256
solr-node5       Solr         rack1        Search           Up      Normal   296.14 GB        41.59%               256
solr-node4       Solr         rack1        Search           Up      Normal   340.53 GB        40.28%               256
cs-node1         Cassandra    rack1        Cassandra        Up      Normal   896.68 GB        0.61%                256
Warning:  Node cd-node2 is serving 162.99 times the token space of node cs-node1, which means it will be using 162.99 times more disk space and network bandwidth. If this is unintentional, check out http://wiki.apache.org/cassandra/Operations#Ring_management

This is a strong indicator that something went wrong with the way cs-node2 bootstrapped (as I described at the start of this post).


1 Answer


It looks like your problem is that you most likely switched from single tokens to vnodes on the existing nodes, so all of their tokens sit in one contiguous run. This is effectively not something you can do in current versions of Cassandra, because it is too hard to get right.

The only real way to fix it and be able to add new nodes is to decommission the new node you just added, then follow the current documentation on switching from single tokens to vnodes, which basically means standing up a brand-new data center whose nodes use fresh vnodes, and then decommissioning the existing nodes. A rough command sequence is sketched below.
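
A rough sketch of that sequence in commands; the new DC name (NewCassandra), the keyspace name (my_ks) and the replication settings are assumptions for illustration, not taken from the question:

# 1. On the node that was just added (cs-node2): remove it from the ring.
nodetool decommission

# 2. Stand up brand-new nodes in a fresh data center (e.g. "NewCassandra") with
#    num_tokens: 256 and auto_bootstrap: false in cassandra.yaml, and let them join empty.

# 3. Point the keyspace at the new DC (run in cqlsh):
#    ALTER KEYSPACE my_ks WITH replication =
#      {'class': 'NetworkTopologyStrategy', 'NewCassandra': 1, 'Solr': 2};

# 4. On each node in the new DC, stream the existing data from the old Cassandra DC.
nodetool rebuild Cassandra

# 5. Once the new DC is serving traffic, decommission the node(s) in the old DC.
nodetool decommission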

Answered 2015-11-22T20:30:24.070