I'm running a simple sustained-write test against a single-node Cassandra instance (v1.1.10). I just want to see how it handles continuous writes and whether it can keep up.
import random
import string
import sys
import uuid

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('testdb')
test_cf = ColumnFamily(pool, 'test')
test2_cf = ColumnFamily(pool, 'test2')
test3_cf = ColumnFamily(pool, 'test3')

# mutations are buffered and sent automatically every queue_size inserts
test_batch = test_cf.batch(queue_size=1000)
test2_batch = test2_cf.batch(queue_size=1000)
test3_batch = test3_cf.batch(queue_size=1000)

chars = string.ascii_uppercase
counter = 0
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    test_batch.insert(uid, {'junk': junk})
    test2_batch.insert(uid, {'junk': junk})
    test3_batch.insert(uid, {'junk': junk})
    sys.stdout.write(str(counter) + '\n')
pool.dispose()  # never reached: the loop above runs forever
After a long stretch of writes (once the counter reaches roughly 10M+), the script keeps crashing with the following message:
pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was timeout: timed out
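One way to soften a transient timeout like this is to retry the failing write with exponential backoff instead of letting the exception kill the loop. Below is a minimal, hedged sketch: `insert_with_retry` and its parameters are illustrative names, not part of the pycassa API, and in real use the caught exception would be `pycassa.pool.AllServersUnavailable` rather than a bare `Exception`.

```python
import time

def insert_with_retry(insert_fn, retries=5, base_delay=0.5):
    """Call insert_fn(); on failure, retry with exponential backoff.

    insert_fn is any zero-argument callable performing the write,
    e.g. lambda: test_batch.insert(uid, {'junk': junk}).
    """
    for attempt in range(retries):
        try:
            return insert_fn()
        except Exception:  # with pycassa: pycassa.pool.AllServersUnavailable
            if attempt == retries - 1:
                raise  # give up after the last attempt
            # back off 0.5s, 1s, 2s, ... to let the node catch up
            time.sleep(base_delay * 2 ** attempt)
```

This only masks the symptom, of course; if the node falls permanently behind, the retries will eventually be exhausted too.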
Setting queue_size=100 didn't help. Also, after the script crashed, I started a console (cqlsh -3) to truncate the tables and got this error:
Unable to complete request: one or more nodes were unavailable.
Tailing /var/log/cassandra/system.log shows no sign of errors, only INFO lines about Compaction, FlushWriter, and so on. What exactly am I doing wrong?