
So I followed the Slony tutorial and was able to replicate my database, but I noticed it only works when everything first starts up. If I leave replication running, no new data ever makes it to the slave. The only fix I have found is to uninstall the cluster/nodes and restore them, and then once again the copy only happens at startup.

I am following this tutorial here.

My current steps are:

  1. Start postgres on both the Master and the Slave

  2. Uninstall the cluster/nodes using this script (I have another one for the slave node, with that host as the node; a sketch of it is included after the step list).

    #!/bin/sh
    slonik <<_EOF_
    cluster name = $CLUSTERNAME;
    
    node 1 admin conninfo = 'dbname=$TEST_DB host=$MASTERHOST user=test';
    
    uninstall node ( id = 1 );
    _EOF_
    
  3. Set up the cluster

    #!/bin/sh
    slonik <<_EOF_
    cluster name = $CLUSTERNAME;
    node 1 admin conninfo = 'dbname=$TEST_DB host=$MASTERHOST user=test';
    node 2 admin conninfo = 'dbname=$TEST_DB host=$SLAVEHOST user=test';
    init cluster (id=1, comment = 'Master Node');
    
    create set (id=1, origin=1, comment='All test tables');
    set add table (set id=1, origin=1, id=1, fully qualified name = 'test.amqp_status', comment='amqp status');
    set add table (set id=1, origin=1, id=2, fully qualified name = 'test.corba_status', comment='corba status');
    set add table (set id=1, origin=1, id=3, fully qualified name = 'test.icmp_status', comment='ping status');
    set add table (set id=1, origin=1, id=4, fully qualified name = 'test.test_status', comment='teststatus');
    set add table (set id=1, origin=1, id=5, fully qualified name = 'test.ntp_status', comment='ntp status');
    set add table (set id=1, origin=1, id=6, fully qualified name = 'test.snmp_status', comment='snmp status');
    set add table (set id=1, origin=1, id=7, fully qualified name = 'test.subsystem_service_status', comment='subsystem_service status');
    set add table (set id=1, origin=1, id=8, fully qualified name = 'test.subsystem_status', comment='subsystem status');
    set add table (set id=1, origin=1, id=9, fully qualified name = 'test.switch_device_file', comment='switch_device_file');
    set add table (set id=1, origin=1, id=10, fully qualified name = 'test.host_status', comment='host status');
    
    store node (id=2, comment = 'Slave Node', event node=1);
    store path (server = 1, client = 2, conninfo='dbname=$TEST_DB host=$MASTERHOST user=test');
    store path (server = 2, client = 1, conninfo='dbname=$TEST_DB host=$SLAVEHOST user=test');
    _EOF_
    
  4. Run slon on each node with the following command (the slave-side invocation is included in the sketch after the step list):

    slon $CLUSTERNAME "dbname=$TEST_DB user=test host=$MASTERHOST"
    
  5. Run the replication (subscribe) script on the Master (I have tried forward = no and forward = yes; it makes no difference.)

    #!/bin/sh
    slonik <<_EOF_
    cluster name = $CLUSTERNAME;
    
    node 1 admin conninfo = 'dbname=$TEST_DB host=$MASTERHOST user=test';
    node 2 admin conninfo = 'dbname=$TEST_DB host=$SLAVEHOST user=test';
    
    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
    _EOF_
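
For reference, here is a rough sketch of the slave-side counterparts mentioned in steps 2 and 4, assuming the same environment variables; the only real differences are the host and the node id:

    #!/bin/sh
    # Counterpart to step 2: uninstall the Slony node on the slave (node 2).
    slonik <<_EOF_
    cluster name = $CLUSTERNAME;
    
    node 2 admin conninfo = 'dbname=$TEST_DB host=$SLAVEHOST user=test';
    
    uninstall node ( id = 2 );
    _EOF_

and the slon daemon on the slave:

    # Counterpart to step 4: slon pointed at the subscriber database.
    slon $CLUSTERNAME "dbname=$TEST_DB user=test host=$SLAVEHOST"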
    

Within a second of that last script running, my tables have been copied to the slave, and I can see SYNCs happening in the slon output on each host, but even though I see these sync messages I never see the tables being updated afterwards.

I have manually logged into PostgreSQL and inserted into the tables. I have also tried using a psql command instead, and inserting into postgres from Java. Slony doesn't seem to see anything after the initial copy.
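
For example, one quick way to see this is to compare row counts of one of the replicated tables on both hosts (test.host_status here is just an arbitrary pick):

    # Illustrative spot check: the counts should match if replication is flowing.
    psql -h $MASTERHOST -U test -d $TEST_DB -c "SELECT count(*) FROM test.host_status;"
    psql -h $SLAVEHOST -U test -d $TEST_DB -c "SELECT count(*) FROM test.host_status;"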

As for the postgres settings, I set replication to "replica", but otherwise I haven't changed much, since the Slony documentation doesn't suggest anything.

I think I'm missing something basic here; any help is appreciated, thanks.


1 Answer


A couple minor details...

  • I would suggest leaving the "replication" mode alone; Slony manages that itself, and mucking around with it is liable to cause things to break more confusingly.

  • FYI, having forwarding on or off is pretty irrelevant when you only have 2 nodes, so that setting won't be the problem.

My first thought was that perhaps you were only running a slon process against the subscriber node, and not against the origin; that would cause the perceived phenomenon. The slon running against the origin doesn't do any replication work, but it does mark SYNC events, so that subscribers know that they have data to pull. No slon against the master means subscribers think they have no work to do.
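
An easy way to rule that out is a plain process check on each host (nothing Slony-specific about this):

    # Run on both the master and the slave host; you want to see a slon process
    # pointed at each database. The [s]lon pattern just keeps grep out of its own output.
    ps -ef | grep '[s]lon'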

A next thought is to see if changes are being successfully captured on the origin. Update some data on the master and, on the master, look in the tables [SlonySchemaName].sl_log_1 and .sl_log_2. The data changes should be captured there; if they're not, then we can start looking into why not.
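
For example, assuming the Slony schema is named "_" plus the cluster name (the usual Slony-I convention), something along these lines on the master would do, using the question's variables:

    # Rough check that DML on the origin is landing in the Slony log tables.
    # "_$CLUSTERNAME" is the conventional Slony-I schema name; adjust if yours differs.
    psql -h $MASTERHOST -U test -d $TEST_DB -c "SELECT count(*) FROM _$CLUSTERNAME.sl_log_1;"
    psql -h $MASTERHOST -U test -d $TEST_DB -c "SELECT count(*) FROM _$CLUSTERNAME.sl_log_2;"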

A further thought... Turn the debugging level up a bit. Info level (log_level = 0) should be enough normally, but when something confusing is happening, head to log_level = 1 which is DEBUG1.
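
For instance, restarting the subscriber-side slon with a higher log level might look like this (this assumes slon's -d flag maps to that log_level setting; a slon.conf entry would work as well):

    # Subscriber-side slon at log_level 1 (DEBUG1); -d is assumed to set the log level here.
    slon -d 1 $CLUSTERNAME "dbname=$TEST_DB user=test host=$SLAVEHOST"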

On the origin, all you'll see, for the most part, is that, when busy, SYNCs get generated fairly frequently, and, if not busy, SYNCs get generated infrequently.

The action takes place on the subscriber, and, in the logs, at DEBUG1, you'll get a fair bit more indication of what replication work is going on.

The documentation about Log Analysis should be fairly helpful; see http://www.slony.info/documentation/2.2/loganalysis.html

answered 2017-12-12T00:01:03.460