I am using sharding (over replica sets) and am trying to dump a database. Sharding is enabled for mycms-prod.fs.chunks on the key files_id. Additional information: http://groups.google.com/group/mongodb-user/browse_thread/thread/a8f05cbf495d6487 I have read these instructions (for small clusters): http://www.mongodb.org/display/DOCS/Backing+Up+Sharded+Clusters
$ /opt/mongodb/bin/mongodump -h localhost:30000 -d mycms-prod
....
Collections that are not currently sharded dump okay.
mycms-prod.tracking_daystat to dump/mycms-prod/tracking_daystat.bson
370 objects
....
mycms-prod.fs.chunks to dump/mycms-prod/fs.chunks.bson
assertion: 11010 count fails:{ assertion: "setShardVersion failed host[server1.domain.com:28000] { errmsg: "not master...", assertionCode: 10429, errmsg: "db assertion failure", ok: 0 }
In mongos.log:
#########################
Tue Apr 12 01:20:14 [mongosMain] connection accepted from 127.0.0.1:42975 #27
Tue Apr 12 01:20:15 [conn27] setShardVersion failed host[server1.domain.com:28000] { errmsg: "not master", ok: 0.0 }
Tue Apr 12 01:20:15 [conn27] Assertion: 10429:setShardVersion failed host[server1.domain.com:28000] { errmsg: "not master", ok: 0.0 }
0x51f4a9 0x69b163 0x69acf2 0x69acf2 0x69acf2 0x576ba6 0x5774b6
0x575630 0x575b31 0x65f661 0x57bdcc 0x631062 0x66432c 0x6761c7
0x57ea3c 0x69ec30 0x3a9be0673d 0x3a9b6d40cd
/opt/mongodb/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x129) [0x51f4a9]
/opt/mongodb/bin/mongos [0x69b163]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos [0x69acf2]
/opt/mongodb/bin/mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi+0x16) [0x576ba6]
/opt/mongodb/bin/mongos(_ZN5mongo17ClientConnections13checkVersionsERKSs+0x1c6) [0x5774b6]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnection5_initEv+0x2d0) [0x575630]
/opt/mongodb/bin/mongos(_ZN5mongo15ShardConnectionC1ERKNS_5ShardERKSs+0xa1) [0x575b31]
/opt/mongodb/bin/mongos(_ZN5mongo15dbgrid_pub_cmds8CountCmd3runERKSsRNS_7BSONObjERSsRNS_14BSONObjBuilderEb+0x9e1) [0x65f661]
/opt/mongodb/bin/mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderE+0x67c) [0x57bdcc]
/opt/mongodb/bin/mongos(_ZN5mongo14SingleStrategy7queryOpERNS_7RequestE+0x262) [0x631062]
/opt/mongodb/bin/mongos(_ZN5mongo7Request7processEi+0x29c) [0x66432c]
/opt/mongodb/bin/mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x77) [0x6761c7]
/opt/mongodb/bin/mongos(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x34c) [0x57ea3c]
/opt/mongodb/bin/mongos(thread_proxy+0x80) [0x69ec30]
/lib64/libpthread.so.0 [0x3a9be0673d]
/lib64/libc.so.6(clone+0x6d) [0x3a9b6d40cd]
Tue Apr 12 01:20:15 [conn27] end connection 127.0.0.1:42975
#########################
Then I go to server1.domain.com.
[moskrc@server9 db]$ /opt/mongodb/bin/mongo server1.domain.com:28000
MongoDB shell version: 1.8.1
connecting to: server1.domain.com:28000/test
rs1:SECONDARY> use mycms-prod
switched to db mycms-prod
rs1:SECONDARY> db.fs.chunks.count()
Tue Apr 12 01:22:23 uncaught exception: count failed: { "errmsg" : "not master", "ok" : 0 }
rs1:SECONDARY>
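Incidentally, the count failing directly on the SECONDARY is expected: by default a replica-set member only answers reads while it is PRIMARY. To just verify the data on that member, the 1.8 shell can be told to allow secondary reads first. A sketch (the echo only prints the command line so nothing is executed here; drop it to actually connect; setSlaveOk() is the shell call I believe applies):

```shell
# Count fs.chunks directly on the secondary, after allowing secondary
# reads for this connection. echo only prints the command; remove it
# to actually run the check against server1.domain.com:28000.
echo mongo server1.domain.com:28000/mycms-prod --eval \
  'db.getMongo().setSlaveOk(); print(db.fs.chunks.count());'
```

That only addresses the manual check on the secondary; it does not explain why mongos contacted that member.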
I think mongos should not be contacting this machine at all, since it is a SECONDARY in the replica set.
Is this a bug?
All components are version 1.8.1.
There is a nuance: this database was restored with mongorestore. I thought the dump itself might be the problem, so I created a new database, shard-test, copied 500 files into it (via GridFS), and enabled sharding.
I waited until all the data had been distributed across the servers, then tried to dump the database. Surprisingly, it worked! It also works from a different mongos. What does this mean?
This is console output:
{ "_id" : "shard-test", "partitioned" : true, "primary" : "rs2" }
shard-test.fs.chunks chunks:
rs1 3
rs3 3
rs2 5
{ "files_id" : { $minKey : 1 } } -->> { "files_id" : ObjectId("4da48f64d8b9bb5239000000") } on : rs1 { "t" : 2000, "i" : 0 }
{ "files_id" : ObjectId("4da48f64d8b9bb5239000000") } -->> { "files_id" : ObjectId("4da49002d8b9bb527400005d") } on : rs3 { "t" : 3000, "i" : 0 }
{ "files_id" : ObjectId("4da49002d8b9bb527400005d") } -->> { "files_id" : ObjectId("4da49006d8b9bb5274000132") } on : rs1 { "t" : 4000, "i" : 0 }
{ "files_id" : ObjectId("4da49006d8b9bb5274000132") } -->> { "files_id" : ObjectId("4da49009d8b9bb527400028e") } on : rs3 { "t" : 5000, "i" : 0 }
{ "files_id" : ObjectId("4da49009d8b9bb527400028e") } -->> { "files_id" : ObjectId("4da4900ed8b9bb52740003d9") } on : rs1 { "t" : 6000, "i" : 0 }
{ "files_id" : ObjectId("4da4900ed8b9bb52740003d9") } -->> { "files_id" : ObjectId("4da4902ad8b9bb5274000530") } on : rs3 { "t" : 7000, "i" : 0 }
{ "files_id" : ObjectId("4da4902ad8b9bb5274000530") } -->> { "files_id" : ObjectId("4da49032d8b9bb52740005e1") } on : rs2 { "t" : 7000, "i" : 1 }
{ "files_id" : ObjectId("4da49032d8b9bb52740005e1") } -->> { "files_id" : ObjectId("4da49039d8b9bb5274000697") } on : rs2 { "t" : 2000, "i" : 2 }
{ "files_id" : ObjectId("4da49039d8b9bb5274000697") } -->> { "files_id" : ObjectId("4da4906ed8b9bb5274000749") } on : rs2 { "t" : 3000, "i" : 2 }
{ "files_id" : ObjectId("4da4906ed8b9bb5274000749") } -->> { "files_id" : ObjectId("4da490a1d8b9bb52be000007") } on : rs2 { "t" : 7000, "i" : 2 }
{ "files_id" : ObjectId("4da490a1d8b9bb52be000007") } -->> { "files_id" : { $maxKey : 1 } } on : rs2 { "t" : 7000, "i" : 3 }
> bye
(env)[moskrc@server2 tmp]$ /opt/mongodb/bin/mongodump -h localhost:30000 -d shard-test
connected to: localhost:30000
DATABASE: shard-test to dump/shard-test
shard-test.system.indexes to dump/shard-test/system.indexes.bson
4 objects
shard-test.fs.chunks to dump/shard-test/fs.chunks.bson
600/1496 40%
700/1496 46%
900/1496 60%
1100/1496 73%
1400/1496 93%
1496 objects
shard-test.fs.files to dump/shard-test/fs.files.bson
804 objects
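For completeness, the shard-test setup shown above was created with roughly the following commands (a sketch: the echo only prints the command line so nothing runs here; enablesharding and shardcollection are the lowercase command names used in 1.8):

```shell
# Enable sharding on the test database and shard fs.chunks on files_id.
# echo only prints the command line; drop it to run it against mongos.
echo mongo localhost:30000/admin --eval \
  'db.runCommand({enablesharding: "shard-test"});
   db.runCommand({shardcollection: "shard-test.fs.chunks", key: {files_id: 1}});'
```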
I noticed another nuance.
If I do this:
[moskrc@server9 mycms-prod]$ /opt/mongodb/bin/mongodump -h localhost:30000 -d mycms-prod
connected to: localhost:30000
DATABASE: mycms-prod to dump/mycms-prod
mycms-prod.cms_comment to dump/mycms-prod/cms_comment.bson
16 objects
mycms-prod.system.indexes to dump/mycms-prod/system.indexes.bson
67 objects
mycms-prod.cms_pdfcontent to dump/mycms-prod/cms_pdfcontent.bson
18 objects
mycms-prod.djangoratings_vote to dump/mycms-prod/djangoratings_vote.bson
25 objects
mycms-prod.auth_permission to dump/mycms-prod/auth_permission.bson
192 objects
mycms-prod.tracking_pagevisit to dump/mycms-prod/tracking_pagevisit.bson
assertion: 11010 count fails:{ assertion: "setShardVersion failed host[server2.domain.com:28000] { errmsg: "not maste...", assertionCode: 10429, errmsg: "db assertion failure", ok: 0 }
The error occurs on the tracking_pagevisit collection.
So... let's try dumping that collection by itself.
[moskrc@server9 mycms-prod]$ /opt/mongodb/bin/mongodump -h localhost:30000 -d mycms-prod -c tracking_pagevisit
connected to: localhost:30000
DATABASE: mycms-prod to dump/mycms-prod
mycms-prod.tracking_pagevisit to dump/mycms-prod/tracking_pagevisit.bson
14158 objects
It works!!! What is going on?
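Since the single-collection dump works while the whole-database dump fails, dumping collection by collection could serve as a stopgap. A sketch (the collection names below are examples taken from this post; in practice they would come from db.getCollectionNames() in the shell, and the echo only prints each command):

```shell
# Dump each collection separately, so one failing collection does not
# abort the entire database dump. echo only prints each command line;
# remove it to actually run mongodump.
HOST=localhost:30000
DB=mycms-prod
for coll in tracking_pagevisit tracking_daystat fs.files fs.chunks; do
  echo /opt/mongodb/bin/mongodump -h "$HOST" -d "$DB" -c "$coll"
done
```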
My system:
CentOS 5.5
Kernel: Linux server9.domain.com 2.6.18-194.el5xen #1 SMP Fri Apr 2 15:34:40 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
I have 6 mongos processes, but in practice only 2 are ever used, one per application; each application has its own mongos.
In total I have 9 servers. Each mongod runs with the parameters shardsvr = true and replSet = rs1 (or rs2 or rs3). There are three replica sets, each consisting of 3 mongod processes, plus three config servers (server4.domain.com:28001, server6.domain.com:28001, server1.domain.com:28001).
The mongos parameters are: bind_ip = 127.0.0.1,123.456.789.12, port = 30000, fork = true, configdb = server4.domain.com:28001,server6.domain.com:28001,server1.domain.com:28001
I restarted the mongos instances that were in use. That helped; the database now looks consistent. But the dump still does not work, failing with the errors described above.
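One thing still untried from the backup doc linked at the top: stopping the balancer before the dump, so chunk ownership cannot change mid-dump. A sketch (the echo only prints the command so nothing runs here; the config.settings upsert is, as far as I know, the 1.8-era way to stop the balancer):

```shell
# Stop the balancer by upserting { stopped: true } into config.settings
# via mongos. echo only prints the command line; drop it to run it.
echo mongo localhost:30000/config --eval \
  'db.settings.update({_id: "balancer"}, {$set: {stopped: true}}, true);'
```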
Thanks.