
I am testing some NoSQL solutions with a focus on read performance. Today is MongoDB day. The test machine is a VM with a quad-core Xeon @ 2.93 GHz and 8 GB of RAM.

I am testing against a single database with a single collection of about 100,000 documents. The BSON documents are about 20 KB each, more or less.

The managed object I am mapping to is:

private class Job
{
    // Serialized as the BSON _id field by the driver's default conventions.
    public int Id { get; set; }
    public string OrganizationName { get; set; }
    public List<string> Categories { get; set; }
    public List<string> Industries { get; set; }
    public int Identifier { get; set; }
    public string Description { get; set; }
}

The test procedure:

- Create 100 threads.

- Start all threads.

- Each thread reads 20 random documents from the collection (a sketch of this harness follows the list).
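
For reference, here is a minimal sketch of what that harness might look like. The resetEvent and ThreadsCount members are my reconstruction, matching the fields the select method below references (assumes using System, System.Threading, and MongoDB.Driver):

private static readonly ManualResetEvent resetEvent = new ManualResetEvent(false);
private static int ThreadsCount = 0;

private static void RunTest(MongoCollection jobs)
{
    // Queue 100 workers; each blocks on resetEvent inside TestSelectWithCursor.
    for (int i = 0; i != 100; ++i)
    {
        new Thread(TestSelectWithCursor).Start(jobs);
    }
    Console.WriteLine("100 threads created.");

    // Release all workers at once so the reads run concurrently.
    resetEvent.Set();

    // Wait until every worker has incremented ThreadsCount.
    while (ThreadsCount < 100)
    {
        Thread.Sleep(100);
    }
}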

Here is the select method I am using:

private static void TestSelectWithCursor(object state)
{
    // Block until the main thread releases all workers at once.
    resetEvent.WaitOne();

    MongoCollection jobs = (state as MongoCollection);
    var q = jobs.AsQueryable<Job>();
    // Note: the fixed seed means every thread requests the same 20 ids.
    Random r = new Random(938432094);
    List<int> ids = new List<int>();
    for (int i = 0; i != 20; ++i)
    {
        ids.Add(r.Next(1000, 100000));
    }
    Stopwatch sw = Stopwatch.StartNew();
    // Intended to translate to a $in query on _id (see the explain() trace below).
    var subset = from j in q
                 where j.Id.In(ids)
                 select j;

    int count = 0;
    // The query is deferred; this enumeration is what actually hits the server.
    foreach (Job job in subset)
    {
        count++;
    }
    Console.WriteLine("Retrieved {0} documents in {1} ms.", count, sw.ElapsedMilliseconds);
    // Not atomic; Interlocked.Increment would be safer across 100 threads.
    ThreadsCount++;
}

The `count++` is there just to pretend I am doing something after retrieving the cursor, so please ignore it.

Anyway, the point is that I am getting read times that seem very slow to me. Here is a typical test result:

> 100 threads created.
> 
> Retrieved 20 documents in 272 ms. Retrieved 20 documents in 522 ms.
> Retrieved 20 documents in 681 ms. Retrieved 20 documents in 732 ms.
> Retrieved 20 documents in 769 ms. Retrieved 20 documents in 843 ms.
> Retrieved 20 documents in 1038 ms. Retrieved 20 documents in 1139 ms.
> Retrieved 20 documents in 1163 ms. Retrieved 20 documents in 1170 ms.
> Retrieved 20 documents in 1206 ms. Retrieved 20 documents in 1243 ms.
> Retrieved 20 documents in 1322 ms. Retrieved 20 documents in 1378 ms.
> Retrieved 20 documents in 1463 ms. Retrieved 20 documents in 1507 ms.
> Retrieved 20 documents in 1530 ms. Retrieved 20 documents in 1557 ms.
> Retrieved 20 documents in 1567 ms. Retrieved 20 documents in 1617 ms.
> Retrieved 20 documents in 1626 ms. Retrieved 20 documents in 1659 ms.
> Retrieved 20 documents in 1666 ms. Retrieved 20 documents in 1687 ms.
> Retrieved 20 documents in 1711 ms. Retrieved 20 documents in 1731 ms.
> Retrieved 20 documents in 1763 ms. Retrieved 20 documents in 1839 ms.
> Retrieved 20 documents in 1854 ms. Retrieved 20 documents in 1887 ms.
> Retrieved 20 documents in 1906 ms. Retrieved 20 documents in 1946 ms.
> Retrieved 20 documents in 1962 ms. Retrieved 20 documents in 1967 ms.
> Retrieved 20 documents in 1969 ms. Retrieved 20 documents in 1977 ms.
> Retrieved 20 documents in 1996 ms. Retrieved 20 documents in 2005 ms.
> Retrieved 20 documents in 2009 ms. Retrieved 20 documents in 2025 ms.
> Retrieved 20 documents in 2035 ms. Retrieved 20 documents in 2066 ms.
> Retrieved 20 documents in 2093 ms. Retrieved 20 documents in 2111 ms.
> Retrieved 20 documents in 2133 ms. Retrieved 20 documents in 2147 ms.
> Retrieved 20 documents in 2150 ms. Retrieved 20 documents in 2152 ms.
> Retrieved 20 documents in 2155 ms. Retrieved 20 documents in 2160 ms.
> Retrieved 20 documents in 2166 ms. Retrieved 20 documents in 2196 ms.
> Retrieved 20 documents in 2202 ms. Retrieved 20 documents in 2254 ms.
> Retrieved 20 documents in 2256 ms. Retrieved 20 documents in 2262 ms.
> Retrieved 20 documents in 2263 ms. Retrieved 20 documents in 2285 ms.
> Retrieved 20 documents in 2326 ms. Retrieved 20 documents in 2336 ms.
> Retrieved 20 documents in 2337 ms. Retrieved 20 documents in 2350 ms.
> Retrieved 20 documents in 2372 ms. Retrieved 20 documents in 2384 ms.
> Retrieved 20 documents in 2412 ms. Retrieved 20 documents in 2426 ms.
> Retrieved 20 documents in 2457 ms. Retrieved 20 documents in 2473 ms.
> Retrieved 20 documents in 2521 ms. Retrieved 20 documents in 2528 ms.
> Retrieved 20 documents in 2604 ms. Retrieved 20 documents in 2659 ms.
> Retrieved 20 documents in 2670 ms. Retrieved 20 documents in 2687 ms.
> Retrieved 20 documents in 2961 ms. Retrieved 20 documents in 3234 ms.
> Retrieved 20 documents in 3434 ms. Retrieved 20 documents in 3440 ms.
> Retrieved 20 documents in 3452 ms. Retrieved 20 documents in 3466 ms.
> Retrieved 20 documents in 3502 ms. Retrieved 20 documents in 3524 ms.
> Retrieved 20 documents in 3561 ms. Retrieved 20 documents in 3611 ms.
> Retrieved 20 documents in 3652 ms. Retrieved 20 documents in 3655 ms.
> Retrieved 20 documents in 3666 ms. Retrieved 20 documents in 3711 ms.
> Retrieved 20 documents in 3742 ms. Retrieved 20 documents in 3821 ms.
> Retrieved 20 documents in 3850 ms. Retrieved 20 documents in 4020 ms.
> Retrieved 20 documents in 5143 ms. Retrieved 20 documents in 6607 ms.
> Retrieved 20 documents in 6630 ms. Retrieved 20 documents in 6633 ms.
> Retrieved 20 documents in 6637 ms. Retrieved 20 documents in 6639 ms.
> Retrieved 20 documents in 6801 ms. Retrieved 20 documents in 9302 ms.

The bottom line is that I expected much faster read times than these. I am still wondering whether I am doing something wrong. I am not sure what other information I can provide at this point, but if anything is missing, please let me know.

I am also including, in the hope that it helps, an explain() trace for one of the queries the test executes:

{
        "cursor" : "BtreeCursor _id_ multi",
        "nscanned" : 39,
        "nscannedObjects" : 20,
        "n" : 20,
        "millis" : 0,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "isMultiKey" : false,
        "indexOnly" : false,
        "indexBounds" : {
                "_id" : [
                        [
                                3276,
                                3276
                        ],
                        [
                                8257,
                                8257
                        ],
                        [
                                11189,
                                11189
                        ],
                        [
                                21779,
                                21779
                        ],
                        [
                                22293,
                                22293
                        ],
                        [
                                23376,
                                23376
                        ],
                        [
                                28656,
                                28656
                        ],
                        [
                                29557,
                                29557
                        ],
                        [
                                32160,
                                32160
                        ],
                        [
                                34833,
                                34833
                        ],
                        [
                                35922,
                                35922
                        ],
                        [
                                39141,
                                39141
                        ],
                        [
                                49094,
                                49094
                        ],
                        [
                                54554,
                                54554
                        ],
                        [
                                67684,
                                67684
                        ],
                        [
                                76384,
                                76384
                        ],
                        [
                                85612,
                                85612
                        ],
                        [
                                85838,
                                85838
                        ],
                        [
                                91634,
                                91634
                        ],
                        [
                                99891,
                                99891
                        ]
                ]
        }
}

If you have any ideas, I would be very eager to read them. Thanks in advance!

Marcel


1 Answer


I suspect that the `In` (generic modifier) is forcing a sequential scan, fully fetching each document to check the where clause and bypassing the efficiency of the _id index. Given that the random numbers are fairly well distributed, my guess is that each thread/query is essentially scanning the whole database.

I would suggest trying a couple of things: (1) query each of the 20 documents separately, each by its own single id; (2) consider using a MongoCursor and its Explain method to get information about the query's index usage. A rough sketch of both follows.
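
For what it's worth, here is a rough sketch of both experiments against the 1.x C# driver. The helper method names are mine, while Query.EQ, Query.In, FindOneAs, FindAs and Explain come from MongoDB.Driver, MongoDB.Driver.Builders and MongoDB.Bson:

// (1) Hypothetical helper: fetch the 20 documents one at a time, each by a single _id.
private static void SelectOneByOne(MongoCollection jobs, IEnumerable<int> ids)
{
    foreach (int id in ids)
    {
        // A single-document equality query on _id.
        Job job = jobs.FindOneAs<Job>(Query.EQ("_id", id));
    }
}

// (2) Hypothetical helper: build the $in query directly on a MongoCursor and print
// its plan, bypassing the LINQ provider's translation.
private static void ExplainInQuery(MongoCollection jobs, List<int> ids)
{
    MongoCursor<Job> cursor = jobs.FindAs<Job>(Query.In("_id", new BsonArray(ids)));
    Console.WriteLine(cursor.Explain().ToJson());
}

If (1) turns out to be dramatically faster, the In translation is the likely suspect; if not, the scheduling effects mentioned below deserve a closer look.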

Best,

-Gary

PS The thread times seem to indicate that there are some thread-scheduling effects at work as well.

Answered 2012-04-05T20:34:09.160