I need to set up a highly available Graylog2 cluster split across 2 datacenters. If the first datacenter goes completely down, the second one must keep working, and vice versa. (The load balancer in front of it is off-site.)
For example, each datacenter could run 1 Elasticsearch, 1 Graylog, and 2 MongoDB instances. In total that gives me 2 Elasticsearch, 2 Graylog, and 4 MongoDB instances.
From the MongoDB documentation I understand that I need an odd number of voting members. So let's say only 3 of them are voters (2 in the first datacenter and 1 in the second).
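Concretely, the voter layout I have in mind looks like this (10.0.0.3 and 10.0.0.4 are placeholder addresses for the two MongoDB instances I have not added yet):
Datacenter 1: 10.0.0.1 (1 vote) + 10.0.0.3 (1 vote) = 2 votes
Datacenter 2: 10.0.0.2 (1 vote) + 10.0.0.4 (0 votes, data only) = 1 vote
That gives 3 voters in total, so a majority election needs 2 votes.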
With some configuration, Elasticsearch works as expected. But MongoDB does not :(
So, is a highly available setup with 2 datacenters possible, where either datacenter can go completely down?
Finally, I'd like to share my configuration. Note: my current setup has only the 2 MongoDB instances.
Thanks.
Elasticsearch, first node:
cluster.name: graylog
node.name: graylog-1
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.2"]
discovery.zen.minimum_master_nodes: 1
index.number_of_replicas: 2
Elasticsearch, second node:
cluster.name: graylog
node.name: graylog-2
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1"]
discovery.zen.minimum_master_nodes: 1
MongoDB, first and second node (rs.conf()):
{
    "_id" : "rs0",
    "version" : 4,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "10.0.0.1:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "10.0.0.2:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("****")
    }
}
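For completeness, this is roughly how I plan to grow the set from the two members above to the full 4-node / 3-voter layout described earlier (just a sketch; 10.0.0.3 and 10.0.0.4 are placeholder addresses for the members I have not deployed yet):
cfg = rs.conf()
cfg.members.push({ "_id" : 2, "host" : "10.0.0.3:27017" })
cfg.members.push({ "_id" : 3, "host" : "10.0.0.4:27017", "priority" : 0, "votes" : 0 })
rs.reconfig(cfg)
(A non-voting member has to carry priority 0, which is why the last member gets both settings.)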
Graylog, first node:
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = ***
root_password_sha2 = ***
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.0.0.1:9000/api/
web_listen_uri = http://10.0.0.1:9000/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 2
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_discovery_zen_ping_unicast_hosts = 10.0.0.1:9300, 10.0.0.2:9300
elasticsearch_network_host = 0.0.0.0
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://10.0.0.1,10.0.0.2/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
Graylog, second node:
is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret = ***
root_password_sha2 = ***
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.0.0.2:9000/api/
web_listen_uri = http://10.0.0.2:9000/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 2
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_discovery_zen_ping_unicast_hosts = 10.0.0.1:9300, 10.0.0.2:9300
elasticsearch_transport_tcp_port = 9350
elasticsearch_network_host = 0.0.0.0
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://10.0.0.1,10.0.0.2/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
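For what it's worth, once all replica set members exist I expect the mongodb_uri on both Graylog nodes to list every member together with the replica set name (rs0 comes from the rs.conf() above; 10.0.0.3 and 10.0.0.4 are still the placeholder addresses):
mongodb_uri = mongodb://10.0.0.1:27017,10.0.0.2:27017,10.0.0.3:27017,10.0.0.4:27017/graylog?replicaSet=rs0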