
I am running a Hazelcast cluster with Docker Swarm. Even though the nodes establish a connection:

Members [1] {                                                                                                
        Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this                                   
}                                                                                                            

May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService                            
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Hazelcast will connect to Hazelcast Management Center on address: 
http://10.0.0.3:8080/mancenter                                                                               
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService                            
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to pull tasks from management center                       
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService                            
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to connect to:http://10.0.0.3:8080/mancenter/collector.do  
May 27, 2017 2:38:12 PM com.hazelcast.core.LifecycleService                                                  
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] [10.0.0.3]:5701 is STARTED                                        
May 27, 2017 2:38:12 PM com.hazelcast.internal.partition.impl.PartitionStateManager                          
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Initializing cluster partition table arrangement...               
May 27, 2017 2:38:19 PM com.hazelcast.internal.cluster.ClusterService                                        
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8]                                                                   

Members [2] {                                                                                                
        Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this                                   
        Member [10.0.0.4]:5701 - b3bd51d4-9366-45f0-bb66-78e67b13268c                                        
}                                                                                                            

May 27, 2017 2:38:19 PM com.hazelcast.internal.partition.impl.MigrationManager                               
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Re-partitioning cluster data... Migration queue size: 271         
May 27, 2017 2:38:21 PM com.hazelcast.internal.partition.InternalPartitionService                            

Afterwards I keep getting this error:

WARNING: [10.0.0.3]:5701 [kpts-cluster] [3.8] Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=18, /10.0.0.3:5701->/10.0.0.3:49575, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=17, /10.0.0.2:49575->/10.0.0.2:5701, endpoint=[10.0.0.2]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side

I suspect this has to do with the eth0 interface on each node. It is assigned two addresses: a "real" one and a "fake" one coming from the cluster manager... and for some reason the fake one is advertised as the endpoint...

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
82: eth0@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:3/64 scope link
       valid_lft forever preferred_lft forever
84: eth1@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link
       valid_lft forever preferred_lft forever
86: eth2@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:07 brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.7/16 scope global eth2
       valid_lft forever preferred_lft forever
    inet 10.255.0.6/32 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:feff:7/64 scope link
       valid_lft forever preferred_lft forever

And here is the network configuration as read from one of the nodes:

[                                                                                                  
    {                                                                                              
        "Name": "hazelcast-net",                                                                   
        "Id": "ly1p50ykwjhf68k88220gxih6",                                                         
        "Created": "2017-05-27T16:38:04.638580169+02:00",                                          
        "Scope": "swarm",                                                                          
        "Driver": "overlay",                                                                       
        "EnableIPv6": false,                                                                       
        "IPAM": {                                                                                  
            "Driver": "default",                                                                   
            "Options": null,                                                                       
            "Config": [                                                                            
                {                                                                                  
                    "Subnet": "10.0.0.0/24",                                                       
                    "Gateway": "10.0.0.1"                                                          
                }                                                                                  
            ]                                                                                      
        },                                                                                         
        "Internal": false,                                                                         
        "Attachable": true,                                                                        
        "Containers": {                                                                            
            "0fa2bd8f8e8e931e1140e2d4bee1b43ff1f7bd5e3049d95e9176c63fa9f47e4f": {                  
                "Name": "kpts.1zhprrumdjvenkl4cvsc7bt40.2ugiv46ubar8utnxc5hko1hdf",                
                "EndpointID": "0c5681aebbacd27672c300742077a460c07a081d113c2238f4c707def735ebec",  
                "MacAddress": "02:42:0a:00:00:03",                                                 
                "IPv4Address": "10.0.0.3/24",                                                      
                "IPv6Address": ""                                                                  
            }                                                                                      
        },                                                                                         
        "Options": {                                                                               
            "com.docker.network.driver.overlay.vxlanid_list": "4097"                               
        },                                                                                         
        "Labels": {},                                                                              
        "Peers": [                                                                                 
            {                                                                                      
                "Name": "c4-6f6cd87e898f",                                                         
                "IP": "10.6.225.34"                                                                
            },                                                                                     
            {                                                                                      
                "Name": "c5-77d9f542efe8",                                                         
                "IP": "10.6.225.35"                                                                
            }                                                                                      
        ]                                                                                          
    }                                                                                              
]  

2 Answers


Try the Docker Swarm Discovery SPI. It provides a custom AddressPicker implementation for Swarm and gets rid of this constant Hazelcast problem with interface picking and the "This node is not requested endpoint" errors entirely. I really wish they would fix this properly.

https://github.com/bitsofinfo/hazelcast-docker-swarm-discovery-spi

import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.instance.AddressPicker;
import com.hazelcast.instance.DefaultNodeContext;
import com.hazelcast.instance.HazelcastInstanceFactory;
import com.hazelcast.instance.Node;
import com.hazelcast.instance.NodeContext;
import org.bitsofinfo.hazelcast.discovery.docker.swarm.SwarmAddressPicker;
// SystemPrintLogger is a simple ILogger implementation shipped with the SPI
// (package assumed to match SwarmAddressPicker; check the project source)
import org.bitsofinfo.hazelcast.discovery.docker.swarm.SystemPrintLogger;

Config conf = new ClasspathXmlConfig("yourHzConfig.xml");

// SwarmAddressPicker takes an ILogger; either provide your own
// implementation or use the provided SystemPrintLogger
NodeContext nodeContext = new DefaultNodeContext() {
    @Override
    public AddressPicker createAddressPicker(Node node) {
        return new SwarmAddressPicker(new SystemPrintLogger());
    }
};

HazelcastInstance hazelcastInstance = HazelcastInstanceFactory
        .newHazelcastInstance(conf, "myAppName", nodeContext);
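
The SPI is also published under the org.bitsofinfo group (per the project README). A dependency declaration would look roughly like this; the version is only a placeholder, so check the project's README for the release matching your Hazelcast 3.8 line:

<dependency>
    <groupId>org.bitsofinfo</groupId>
    <artifactId>hazelcast-docker-swarm-discovery-spi</artifactId>
    <!-- placeholder version: pick the release for your Hazelcast line -->
    <version>1.0-RC5</version>
</dependency>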
answered 2017-07-07T13:57:35.740

You may find this previous question useful:

Docker networking - "This node is not requested endpoint" error #4537

Now, more importantly: you have connectivity working, which is why the nodes are able to join. However, you are most likely (a guess, since I don't have your hazelcast.xml) binding to all interfaces, so you will want to change the network binding so that it binds only to the desired address. We bind to * by default because we don't know which network you want to use. A sketch of such a binding configuration is shown below.
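
As a minimal hazelcast.xml sketch of that change: the 10.0.0.* pattern is an assumption based on your overlay subnet, and since a wildcard would also match the swarm service VIP (10.0.0.2 here), the member's exact address is safer if you can inject it per container:

<hazelcast>
    <properties>
        <!-- when interfaces are enabled, don't also bind the listener to 0.0.0.0 -->
        <property name="hazelcast.socket.bind.any">false</property>
    </properties>
    <network>
        <port auto-increment="true">5701</port>
        <interfaces enabled="true">
            <!-- assumption: the overlay subnet from your setup; prefer the
                 exact member address over a wildcard, since the wildcard
                 also matches the service VIP -->
            <interface>10.0.0.*</interface>
        </interfaces>
    </network>
</hazelcast>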

Hope this helps,

answered 2017-06-06T16:47:00.703