I chose TreeDB as my Kyoto Cabinet backend, expecting it to scale to very large data sets. Unfortunately, there is a problem:
# ./kyotobench
Generated string length: 1024
1000 records, type t 74.008887ms throughput: 13511 /sec
2000 records, type t 145.390096ms throughput: 13756 /sec
4000 records, type t 290.13486ms throughput: 13786 /sec
8000 records, type t 584.46691ms throughput: 13687 /sec
16000 records, type t 1.150792756s throughput: 13903 /sec
32000 records, type t 2.134860729s throughput: 14989 /sec
64000 records, type t 4.378002268s throughput: 14618 /sec
128000 records, type t 9.41012632s throughput: 13602 /sec
256000 records, type t 20.457090225s throughput: 12513 /sec
512000 records, type t 45.934115353s throughput: 11146 /sec
1024000 records, type t 1m39.120917207s throughput: 10330 /sec
2048000 records, type t 3m41.720146906s throughput: 9236 /sec
4096000 records, type t 15m26.041653712s throughput: 4423 /sec
8192000 records, type t 5h5m31.431477812s throughput: 446 /sec
I open a TreeDB, generate two random strings of random length (0 < len < 1024), and use one as the key and the other as the value. The code:
What could be causing this?
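For context (the link to this benchmark's code is missing above), here is a minimal sketch of the kind of loop described, not the actual code: the Store interface is a placeholder for whatever Kyoto Cabinet binding is used, and the in-memory memStore exists only so the sketch compiles and runs on its own.

// Rough sketch of the write benchmark: for each record, build a random key and a
// random value of length 0 < len < 1024 and store them, then report throughput.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// Store is a placeholder for the Kyoto Cabinet binding's Set call.
type Store interface {
    Set(key, value []byte) error
}

// memStore is a throwaway in-memory stand-in so the sketch runs standalone.
type memStore map[string]string

func (m memStore) Set(key, value []byte) error {
    m[string(key)] = string(value)
    return nil
}

const letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// randString returns a random string whose length is uniform in [1, maxLen).
func randString(maxLen int) []byte {
    b := make([]byte, 1+rand.Intn(maxLen-1))
    for i := range b {
        b[i] = letters[rand.Intn(len(letters))]
    }
    return b
}

// benchWrites stores `records` random key/value pairs and prints the throughput.
func benchWrites(db Store, records int) {
    start := time.Now()
    for i := 0; i < records; i++ {
        if err := db.Set(randString(1024), randString(1024)); err != nil {
            panic(err)
        }
    }
    elapsed := time.Since(start)
    fmt.Printf("%d records, %v, throughput: %d /sec\n",
        records, elapsed, int(float64(records)/elapsed.Seconds()))
}

func main() {
    for n := 1000; n <= 64000; n *= 2 {
        benchWrites(memStore{}, n)
    }
}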
Update:
I should have clarified earlier that I am not after precise measurements of KyotoDB throughput; I am trying to test the scalability of KDB, i.e. how read/write throughput behaves as the number of keys in the database grows, in other words the amortized cost of adding/reading a record.
The amortized cost of creating one random string is O(1), and the amortized cost of creating N random strings is O(N). As long as the number of random strings created per DB operation is constant, the penalty it imposes per operation is constant, so it has no amortized effect on the rate of DB operations per second.
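To spell that out (the symbols t_rand and t_db,i are introduced here only for illustration: the constant per-record cost of generating the two random strings, and the cost of the i-th DB write):

T(N) = \sum_{i=1}^{N}\bigl(t_{\text{rand}} + t_{\text{db},i}\bigr)
     = N\,t_{\text{rand}} + \sum_{i=1}^{N} t_{\text{db},i},
\qquad
\text{throughput}(N) = \frac{N}{T(N)} = \frac{1}{\,t_{\text{rand}} + \overline{t_{\text{db}}}(N)\,}.

Since t_rand is constant, any decline in throughput as N grows has to come from the average per-write DB cost, not from the string generation.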
I measured the throughput of random string creation:
1000 strings, type t 65.380289ms throughput: 15295 /sec
2000 strings, type t 130.345234ms throughput: 15343 /sec
4000 strings, type t 259.886865ms throughput: 15391 /sec
8000 strings, type t 519.380392ms throughput: 15402 /sec
16000 strings, type t 1.040323537s throughput: 15379 /sec
32000 strings, type t 1.855234924s throughput: 17248 /sec
64000 strings, type t 3.709873467s throughput: 17251 /sec
128000 strings, type t 7.371360742s throughput: 17364 /sec
256000 strings, type t 14.705493792s throughput: 17408 /sec
512000 strings, type t 29.488131398s throughput: 17362 /sec
1024000 strings, type t 59.46313568s throughput: 17220 /sec
2048000 strings, type t 1m58.688153868s throughput: 17255 /sec
4096000 strings, type t 3m57.415585291s throughput: 17252 /sec
8192000 strings, type t 7m57.054025376s throughput: 17172 /sec
The code: http://pastebin.com/yfVXYbSt
As expected, the cost is O(n). Also compare the times: for example, creating 8,192,000 random strings takes about 8 minutes, while writing that many records to the database takes about 5 hours 5 minutes.
Update #2:
This seems to be related to unique vs. colliding keys. In this code: http://pastie.org/8906676 I use keys and values in a way similar to the approach used here: http://blog.creapptives.com/post/8330476086/leveldb-vs-kyoto-cabinet-my-findings (http://www.pastie.org/2295228), i.e. I generate keys with a linearly incrementing integer suffix ("key1", "key2", etc.).
(The updated code also wraps every 50,000 writes in a transaction, which seems to have some effect; see the sketch below.)
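Continuing the earlier sketch (same package, imports, and randString), roughly what that variant looks like. The TxStore interface and its transaction method names are assumptions standing in for whatever calls the binding exposes; Kyoto Cabinet itself does support wrapping writes in transactions.

// Sketch of the Update #2 variant: sequential keys ("key1", "key2", ...) and a
// transaction committed every txEvery writes (50,000 in the runs shown below).
type TxStore interface {
    Set(key, value []byte) error
    BeginTransaction() error            // assumed name, maps to the binding's API
    EndTransaction(commit bool) error   // assumed name, maps to the binding's API
}

func benchSequential(db TxStore, records, txEvery int) error {
    start := time.Now()
    if err := db.BeginTransaction(); err != nil {
        return err
    }
    for i := 1; i <= records; i++ {
        key := []byte(fmt.Sprintf("key%d", i)) // unique, monotonically increasing keys
        if err := db.Set(key, randString(1024)); err != nil {
            return err
        }
        if i%txEvery == 0 { // commit and open a fresh transaction every txEvery writes
            if err := db.EndTransaction(true); err != nil {
                return err
            }
            if err := db.BeginTransaction(); err != nil {
                return err
            }
        }
    }
    if err := db.EndTransaction(true); err != nil {
        return err
    }
    elapsed := time.Since(start)
    fmt.Printf("%d records, %v, throughput: %d /sec\n",
        records, elapsed, int(float64(records)/elapsed.Seconds()))
    return nil
}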
Now the throughput degrades slowly, if it degrades at all:
4000 records, type t 10.220836ms throughput: 391357 /sec
8000 records, type t 18.113652ms throughput: 441655 /sec
16000 records, type t 36.6948ms throughput: 436029 /sec
32000 records, type t 74.048029ms throughput: 432151 /sec
64000 records, type t 148.585114ms throughput: 430729 /sec
128000 records, type t 303.646709ms throughput: 421542 /sec
256000 records, type t 633.831383ms throughput: 403892 /sec
512000 records, type t 1.297555153s throughput: 394588 /sec
1024000 records, type t 2.471077696s throughput: 414394 /sec
2048000 records, type t 5.970116441s throughput: 343041 /sec
4096000 records, type t 11.449808222s throughput: 357735 /sec
8192000 records, type t 23.142591222s throughput: 353979 /sec
16384000 records, type t 46.90204795s throughput: 349323 /sec
Once again, look at the trend in the throughput, not at the absolute values.
In theory TreeDB is a B+ tree, so writing a record to it should be ~O(log n).
But that is not what happens here. It looks as if there are hash collisions somewhere.
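For reference, the rough back-of-the-envelope expectation for a B+ tree: if the i-th insert costs about c \log i, the total cost of N inserts is

\sum_{i=1}^{N} c\,\log i \;=\; c\,\log N! \;=\; \Theta(N \log N),

so the amortized per-record cost grows only logarithmically and throughput should fall roughly like 1/\log N. Going from 1,000 to 8,192,000 records that is a factor of about \log(8192000)/\log(1000) \approx 2.3, nowhere near the roughly 30x drop (13511/sec down to 446/sec) in the first table above.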